linux-kernel.vger.kernel.org archive mirror
* [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
@ 2012-03-12 21:28 Will Drewry
  2012-03-12 21:28 ` [PATCH v14 02/13] net/compat.c,linux/filter.h: share compat_sock_fprog Will Drewry
                   ` (13 more replies)
  0 siblings, 14 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Introduces a new BPF ancillary instruction that all LD calls will be
mapped through when sk_run_filter() is being used for seccomp BPF.  The
rewriting is done by a secondary chk_filter function that runs after
sk_chk_filter().

The code change is guarded by CONFIG_SECCOMP_FILTER, which is added,
along with the seccomp_bpf_load() function, later in this series.

This is based on http://lkml.org/lkml/2012/3/2/141
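
For illustration only, the kind of user-supplied instruction that ends up
remapped to BPF_S_ANC_SECCOMP_LD_W is an absolute word load over
struct seccomp_data (introduced later in this series); a minimal sketch:

  #include <stddef.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>  /* struct seccomp_data, added later in this series */

  /* Sketch: a classic BPF absolute load of the syscall number field. */
  struct sock_filter load_nr =
          BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr));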

v14: First cut using a single additional instruction
... v13: made bpf functions generic.


Suggested-by: Indan Zupancic <indan@nul.nu>
Signed-off-by: Will Drewry <wad@chromium.org>
---
 include/linux/filter.h |    1 +
 net/core/filter.c      |    5 +++++
 2 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8eeb205..aaa2e80 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -228,6 +228,7 @@ enum {
 	BPF_S_ANC_HATYPE,
 	BPF_S_ANC_RXHASH,
 	BPF_S_ANC_CPU,
+	BPF_S_ANC_SECCOMP_LD_W,
 };
 
 #endif /* __KERNEL__ */
diff --git a/net/core/filter.c b/net/core/filter.c
index 5dea452..3000931 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -350,6 +350,11 @@ load_b:
 				A = 0;
 			continue;
 		}
+#ifdef CONFIG_SECCOMP_FILTER
+		case BPF_S_ANC_SECCOMP_LD_W:
+			A = seccomp_bpf_load(fentry->k);
+			continue;
+#endif
 		default:
 			WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n",
 				       fentry->code, fentry->jt,
-- 
1.7.5.4



* [PATCH v14 02/13] net/compat.c,linux/filter.h: share compat_sock_fprog
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 03/13] seccomp: kill the seccomp_t typedef Will Drewry
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Any other users of bpf_*_filter that take a struct sock_fprog from
userspace will also need to be able to accept a compat_sock_fprog
if the arch supports compat calls.  This change lets the existing
compat_sock_fprog definition be shared.
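
As a rough sketch (illustrative names only, not part of this patch), a
compat-aware caller could then convert the user pointer like so:

  #include <linux/compat.h>
  #include <linux/errno.h>
  #include <linux/filter.h>
  #include <linux/uaccess.h>

  /* Sketch: accept a filter program from either a native or a compat task.
   * "user_filter" and "fprog" are illustrative names mirroring the seccomp
   * code added later in this series.
   */
  static int example_copy_fprog(char __user *user_filter,
                                struct sock_fprog *fprog)
  {
  #ifdef CONFIG_COMPAT
          if (is_compat_task()) {
                  struct compat_sock_fprog fprog32;

                  if (copy_from_user(&fprog32, user_filter, sizeof(fprog32)))
                          return -EFAULT;
                  fprog->len = fprog32.len;
                  fprog->filter = compat_ptr(fprog32.filter);
                  return 0;
          }
  #endif
          if (copy_from_user(fprog, user_filter, sizeof(*fprog)))
                  return -EFAULT;
          return 0;
  }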

Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Signed-off-by: Will Drewry <wad@chromium.org>

v14: rebase/nochanges
v13: rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: rebase on to linux-next
v11: introduction
---
 include/linux/filter.h |   11 +++++++++++
 net/compat.c           |    8 --------
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index aaa2e80..f2e5315 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -10,6 +10,7 @@
 
 #ifdef __KERNEL__
 #include <linux/atomic.h>
+#include <linux/compat.h>
 #endif
 
 /*
@@ -132,6 +133,16 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 
 #ifdef __KERNEL__
 
+#ifdef CONFIG_COMPAT
+/*
+ * A struct sock_filter is architecture independent.
+ */
+struct compat_sock_fprog {
+	u16		len;
+	compat_uptr_t	filter;		/* struct sock_filter * */
+};
+#endif
+
 struct sk_buff;
 struct sock;
 
diff --git a/net/compat.c b/net/compat.c
index 6def90e..c5c61c8 100644
--- a/net/compat.c
+++ b/net/compat.c
@@ -326,14 +326,6 @@ void scm_detach_fds_compat(struct msghdr *kmsg, struct scm_cookie *scm)
 	__scm_destroy(scm);
 }
 
-/*
- * A struct sock_filter is architecture independent.
- */
-struct compat_sock_fprog {
-	u16		len;
-	compat_uptr_t	filter;		/* struct sock_filter * */
-};
-
 static int do_set_attach_filter(struct socket *sock, int level, int optname,
 				char __user *optval, unsigned int optlen)
 {
-- 
1.7.5.4



* [PATCH v14 03/13] seccomp: kill the seccomp_t typedef
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
  2012-03-12 21:28 ` [PATCH v14 02/13] net/compat.c,linux/filter.h: share compat_sock_fprog Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 04/13] arch/x86: add syscall_get_arch to syscall.h Will Drewry
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Replaces the seccomp_t typedef with struct seccomp to match modern
kernel style.

v14: rebase/nochanges
v13: rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: rebase on to linux-next
v8-v11: no changes
v7: struct seccomp_struct -> struct seccomp
v6: original inclusion in this series.

Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Reviewed-by: James Morris <jmorris@namei.org>
Signed-off-by: Will Drewry <wad@chromium.org>
---
 include/linux/sched.h   |    2 +-
 include/linux/seccomp.h |   10 ++++++----
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 3bcde52..6311128 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1420,7 +1420,7 @@ struct task_struct {
 	uid_t loginuid;
 	unsigned int sessionid;
 #endif
-	seccomp_t seccomp;
+	struct seccomp seccomp;
 
 /* Thread group tracking */
    	u32 parent_exec_id;
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index cc7a4e9..d61f27f 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -7,7 +7,9 @@
 #include <linux/thread_info.h>
 #include <asm/seccomp.h>
 
-typedef struct { int mode; } seccomp_t;
+struct seccomp {
+	int mode;
+};
 
 extern void __secure_computing(int);
 static inline void secure_computing(int this_syscall)
@@ -19,7 +21,7 @@ static inline void secure_computing(int this_syscall)
 extern long prctl_get_seccomp(void);
 extern long prctl_set_seccomp(unsigned long);
 
-static inline int seccomp_mode(seccomp_t *s)
+static inline int seccomp_mode(struct seccomp *s)
 {
 	return s->mode;
 }
@@ -28,7 +30,7 @@ static inline int seccomp_mode(seccomp_t *s)
 
 #include <linux/errno.h>
 
-typedef struct { } seccomp_t;
+struct seccomp { };
 
 #define secure_computing(x) do { } while (0)
 
@@ -42,7 +44,7 @@ static inline long prctl_set_seccomp(unsigned long arg2)
 	return -EINVAL;
 }
 
-static inline int seccomp_mode(seccomp_t *s)
+static inline int seccomp_mode(struct seccomp *s)
 {
 	return 0;
 }
-- 
1.7.5.4



* [PATCH v14 04/13] arch/x86: add syscall_get_arch to syscall.h
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
  2012-03-12 21:28 ` [PATCH v14 02/13] net/compat.c,linux/filter.h: share compat_sock_fprog Will Drewry
  2012-03-12 21:28 ` [PATCH v14 03/13] seccomp: kill the seccomp_t typedef Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 05/13] asm/syscall.h: add syscall_get_arch Will Drewry
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Add syscall_get_arch() to export the current AUDIT_ARCH_* value based on
the system call entry path.

Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Signed-off-by: Will Drewry <wad@chromium.org>

v14: rebase/nochanges
v13: rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
---
 arch/x86/include/asm/syscall.h |   23 +++++++++++++++++++++++
 1 files changed, 23 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
index d962e56..1d713e4 100644
--- a/arch/x86/include/asm/syscall.h
+++ b/arch/x86/include/asm/syscall.h
@@ -13,6 +13,7 @@
 #ifndef _ASM_X86_SYSCALL_H
 #define _ASM_X86_SYSCALL_H
 
+#include <linux/audit.h>
 #include <linux/sched.h>
 #include <linux/err.h>
 #include <asm/asm-offsets.h>	/* For NR_syscalls */
@@ -87,6 +88,12 @@ static inline void syscall_set_arguments(struct task_struct *task,
 	memcpy(&regs->bx + i, args, n * sizeof(args[0]));
 }
 
+static inline int syscall_get_arch(struct task_struct *task,
+				   struct pt_regs *regs)
+{
+	return AUDIT_ARCH_I386;
+}
+
 #else	 /* CONFIG_X86_64 */
 
 static inline void syscall_get_arguments(struct task_struct *task,
@@ -211,6 +218,22 @@ static inline void syscall_set_arguments(struct task_struct *task,
 		}
 }
 
+static inline int syscall_get_arch(struct task_struct *task,
+				   struct pt_regs *regs)
+{
+#ifdef CONFIG_IA32_EMULATION
+	/*
+	 * TS_COMPAT is set for 32-bit syscall entries and then
+	 * remains set until we return to user mode.
+	 *
+	 * TIF_IA32 tasks should always have TS_COMPAT set at
+	 * system call time.
+	 */
+	if (task_thread_info(task)->status & TS_COMPAT)
+		return AUDIT_ARCH_I386;
+#endif
+	return AUDIT_ARCH_X86_64;
+}
 #endif	/* CONFIG_X86_32 */
 
 #endif	/* _ASM_X86_SYSCALL_H */
-- 
1.7.5.4



* [PATCH v14 05/13] asm/syscall.h: add syscall_get_arch
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (2 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 04/13] arch/x86: add syscall_get_arch to syscall.h Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 06/13] seccomp: add system call filtering using BPF Will Drewry
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Adds a stub for a function that will return the AUDIT_ARCH_*
value appropriate to the supplied task based on the system
call convention.

For audit's use, the value can generally be hard-coded at the
audit-site.  However, for other functionality not inlined into
syscall entry/exit, this makes that information available.
seccomp_filter is the first planned consumer and, as such,
the comment indicates a tie to HAVE_ARCH_SECCOMP_FILTER.  That
is probably an unneeded detail.

Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Suggested-by: Roland McGrath <mcgrathr@chromium.org>
Signed-off-by: Will Drewry <wad@chromium.org>

v14: rebase/nochanges
v13: rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: rebase on to linux-next
v11: fixed improper return type
v10: introduced
---
 include/asm-generic/syscall.h |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/include/asm-generic/syscall.h b/include/asm-generic/syscall.h
index 5c122ae..a2c13dc 100644
--- a/include/asm-generic/syscall.h
+++ b/include/asm-generic/syscall.h
@@ -142,4 +142,18 @@ void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
 			   unsigned int i, unsigned int n,
 			   const unsigned long *args);
 
+/**
+ * syscall_get_arch - return the AUDIT_ARCH for the current system call
+ * @task:	task of interest, must be in system call entry tracing
+ * @regs:	task_pt_regs() of @task
+ *
+ * Returns the AUDIT_ARCH_* based on the system call convention in use.
+ *
+ * It's only valid to call this when @task is stopped on entry to a system
+ * call, due to %TIF_SYSCALL_TRACE, %TIF_SYSCALL_AUDIT, or %TIF_SECCOMP.
+ *
+ * Note, at present this function is only required with
+ * CONFIG_HAVE_ARCH_SECCOMP_FILTER.
+ */
+int syscall_get_arch(struct task_struct *task, struct pt_regs *regs);
 #endif	/* _ASM_SYSCALL_H */
-- 
1.7.5.4



* [PATCH v14 06/13] seccomp: add system call filtering using BPF
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (3 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 05/13] asm/syscall.h: add syscall_get_arch Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-13  3:33   ` Indan Zupancic
  2012-03-12 21:28 ` [PATCH v14 07/13] signal, x86: add SIGSYS info and make it synchronous Will Drewry
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

[This patch depends on luto@mit.edu's no_new_privs patch:
   https://lkml.org/lkml/2012/1/30/264
 The whole series including Andrew's patches can be found here:
   https://github.com/redpig/linux/tree/seccomp
 Complete diff here:
   https://github.com/redpig/linux/compare/1dc65fed...seccomp
 A GPG signed tag 'seccomp/v14/posted' will be pushed shortly.
]

This patch adds support for seccomp mode 2.  Mode 2 introduces the
ability for unprivileged processes to install system call filtering
policy expressed in terms of a Berkeley Packet Filter (BPF) program.
The program is evaluated in the kernel for each system call the task
makes and computes a result based on data in the format of
struct seccomp_data.

A filter program may be installed by calling:
  struct sock_fprog fprog = { ... };
  ...
  prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &fprog);

The return value of the filter program determines whether the system
call is allowed to proceed or is denied.  If the first filter program
installed allows prctl(2) calls, then the above call may be made
repeatedly by a task to further reduce its access to the kernel.  All
attached programs must be evaluated before a system call will be
allowed to proceed.
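
For illustration only, a rough, untested userspace sketch of such a
filter is below.  It assumes x86-64 and the PR_SET_NO_NEW_PRIVS prctl
from the no_new_privs series referenced above (the value used here is an
assumption); only the SECCOMP_RET_KILL/SECCOMP_RET_ALLOW values defined
by this patch are used:

  #include <stddef.h>
  #include <sys/prctl.h>
  #include <linux/audit.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <linux/unistd.h>

  #ifndef PR_SET_NO_NEW_PRIVS
  #define PR_SET_NO_NEW_PRIVS 38  /* assumed value from the no_new_privs series */
  #endif

  /* Sketch: kill the task if it calls ptrace(2); allow everything else. */
  static int install_example_filter(void)
  {
          struct sock_filter insns[] = {
                  /* Bail out if the calling convention is not the expected one. */
                  BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                           offsetof(struct seccomp_data, arch)),
                  BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                  /* Load the syscall number and compare against ptrace(2). */
                  BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                           offsetof(struct seccomp_data, nr)),
                  BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_ptrace, 0, 1),
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
          };
          struct sock_fprog prog = {
                  .len = sizeof(insns) / sizeof(insns[0]),
                  .filter = insns,
          };

          /* Unprivileged callers must set no_new_privs first (see above). */
          if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                  return -1;
          return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
  }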

Filter programs will be inherited across fork/clone and execve.
However, if the task attaching the filter is unprivileged
(!CAP_SYS_ADMIN), the no_new_privs bit will be set on the task.  This
ensures that unprivileged tasks cannot attach filters that affect
privileged tasks (e.g., a setuid binary).

There are a number of benefits to this approach, a few of which are as
follows:
- BPF has been exposed to userland for a long time.
- BPF optimization (and JIT'ing) is well understood.
- Userland already knows its ABI: system call numbers and desired
  arguments.
- No time-of-check-time-of-use vulnerable data accesses are possible.
- System call arguments are loaded on access only, to minimize copying
  required for system call policy decisions.

Mode 2 support is restricted to architectures that enable
HAVE_ARCH_SECCOMP_FILTER.  In this patch, the primary dependency is on
syscall_get_arguments().  The full desired scope of this feature will
add a few minor additional requirements expressed later in this series.
Based on discussion, SECCOMP_RET_ERRNO and SECCOMP_RET_TRACE seem to be
the desired additional functionality.

No architectures are enabled in this patch.

v14: - put/get_seccomp_filter takes struct task_struct
       (indan@nul.nu,keescook@chromium.org)
     - adds seccomp_chk_filter and drops general bpf_run/chk_filter user
     - add seccomp_bpf_load for use by net/core/filter.c
     - lower max per-process/per-hierarchy: 1MB
     - moved nnp/capability check prior to allocation
       (all of the above: indan@nul.nu)
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: - added a maximum instruction count per path (indan@nul.nu,oleg@redhat.com)
     - removed copy_seccomp (keescook@chromium.org,indan@nul.nu)
     - reworded the prctl_set_seccomp comment (indan@nul.nu)
v11: - reorder struct seccomp_data to allow future args expansion (hpa@zytor.com)
     - style clean up, @compat dropped, compat_sock_fprog32 (indan@nul.nu)
     - do_exit(SIGSYS) (keescook@chromium.org, luto@mit.edu)
     - pare down Kconfig doc reference.
     - extra comment clean up
v10: - seccomp_data has changed again to be more aesthetically pleasing
       (hpa@zytor.com)
     - calling convention is noted in a new u32 field using syscall_get_arch.
       This allows for cross-calling convention tasks to use seccomp filters.
       (hpa@zytor.com)
     - lots of clean up (thanks, Indan!)
 v9: - n/a
 v8: - use bpf_chk_filter, bpf_run_filter. update load_fns
     - Lots of fixes courtesy of indan@nul.nu:
     -- fix up load behavior, compat fixups, and merge alloc code,
     -- renamed pc and dropped __packed, use bool compat.
     -- Added a hidden CONFIG_SECCOMP_FILTER to synthesize non-arch
        dependencies
 v7:  (massive overhaul thanks to Indan, others)
     - added CONFIG_HAVE_ARCH_SECCOMP_FILTER
     - merged into seccomp.c
     - minimal seccomp_filter.h
     - no config option (part of seccomp)
     - no new prctl
     - doesn't break seccomp on systems without asm/syscall.h
       (works but arg access always fails)
     - dropped seccomp_init_task, extra free functions, ...
     - dropped the no-asm/syscall.h code paths
     - merges with network sk_run_filter and sk_chk_filter
 v6: - fix memory leak on attach compat check failure
     - require no_new_privs || CAP_SYS_ADMIN prior to filter
       installation. (luto@mit.edu)
     - s/seccomp_struct_/seccomp_/ for macros/functions (amwang@redhat.com)
     - cleaned up Kconfig (amwang@redhat.com)
     - on block, note if the call was compat (so the # means something)
 v5: - uses syscall_get_arguments
       (indan@nul.nu,oleg@redhat.com, mcgrathr@chromium.org)
      - uses union-based arg storage with hi/lo struct to
        handle endianness.  Compromises between the two alternate
        proposals to minimize extra arg shuffling and account for
        endianness assuming userspace uses offsetof().
        (mcgrathr@chromium.org, indan@nul.nu)
      - update Kconfig description
      - add include/seccomp_filter.h and add its installation
      - (naive) on-demand syscall argument loading
      - drop seccomp_t (eparis@redhat.com)
 v4:  - adjusted prctl to make room for PR_[SG]ET_NO_NEW_PRIVS
      - now uses current->no_new_privs
        (luto@mit.edu,torvalds@linux-foundation.com)
      - assign names to seccomp modes (rdunlap@xenotime.net)
      - fix style issues (rdunlap@xenotime.net)
      - reworded Kconfig entry (rdunlap@xenotime.net)
 v3:  - macros to inline (oleg@redhat.com)
      - init_task behavior fixed (oleg@redhat.com)
      - drop creator entry and extra NULL check (oleg@redhat.com)
      - alloc returns -EINVAL on bad sizing (serge.hallyn@canonical.com)
      - adds tentative use of "always_unprivileged" as per
        torvalds@linux-foundation.org and luto@mit.edu
 v2:  - (patch 2 only)

Reviewed-by: Indan Zupancic <indan@nul.nu>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>

Signed-off-by: Will Drewry <wad@chromium.org>
---
 arch/Kconfig            |   17 +++
 include/linux/Kbuild    |    1 +
 include/linux/seccomp.h |   76 +++++++++-
 kernel/fork.c           |    3 +
 kernel/seccomp.c        |  366 ++++++++++++++++++++++++++++++++++++++++++++---
 kernel/sys.c            |    2 +-
 6 files changed, 442 insertions(+), 23 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 4f55c73..7c6bd48 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -199,4 +199,21 @@ config HAVE_CMPXCHG_LOCAL
 config HAVE_CMPXCHG_DOUBLE
 	bool
 
+config HAVE_ARCH_SECCOMP_FILTER
+	bool
+	help
+	  This symbol should be selected by an architecure if it provides
+	  asm/syscall.h, specifically syscall_get_arguments() and
+	  syscall_get_arch().
+
+config SECCOMP_FILTER
+	def_bool y
+	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
+	help
+	  Enable tasks to build secure computing environments defined
+	  in terms of Berkeley Packet Filter programs which implement
+	  task-defined system call filtering polices.
+
+	  See Documentation/prctl/seccomp_filter.txt for details.
+
 source "kernel/gcov/Kconfig"
diff --git a/include/linux/Kbuild b/include/linux/Kbuild
index c94e717..d41ba12 100644
--- a/include/linux/Kbuild
+++ b/include/linux/Kbuild
@@ -330,6 +330,7 @@ header-y += scc.h
 header-y += sched.h
 header-y += screen_info.h
 header-y += sdla.h
+header-y += seccomp.h
 header-y += securebits.h
 header-y += selinux_netlink.h
 header-y += sem.h
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index d61f27f..ce980a8 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -1,14 +1,67 @@
 #ifndef _LINUX_SECCOMP_H
 #define _LINUX_SECCOMP_H
 
+#include <linux/compiler.h>
+#include <linux/types.h>
+
+
+/* Valid values for seccomp.mode and prctl(PR_SET_SECCOMP, <mode>) */
+#define SECCOMP_MODE_DISABLED	0 /* seccomp is not in use. */
+#define SECCOMP_MODE_STRICT	1 /* uses hard-coded filter. */
+#define SECCOMP_MODE_FILTER	2 /* uses user-supplied filter. */
+
+/*
+ * All BPF programs must return a 32-bit value.
+ * The bottom 16-bits are reserved for future use.
+ * The upper 16-bits are ordered from least permissive values to most.
+ *
+ * The ordering ensures that a min_t() over composed return values always
+ * selects the least permissive choice.
+ */
+#define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_ALLOW	0x7fff0000U /* allow */
+
+/* Masks for the return value sections. */
+#define SECCOMP_RET_ACTION	0xffff0000U
+#define SECCOMP_RET_DATA	0x0000ffffU
+
+/**
+ * struct seccomp_data - the format the BPF program executes over.
+ * @nr: the system call number
+ * @arch: indicates system call convention as an AUDIT_ARCH_* value
+ *        as defined in <linux/audit.h>.
+ * @instruction_pointer: at the time of the system call.
+ * @args: up to 6 system call arguments always stored as 64-bit values
+ *        regardless of the architecture.
+ */
+struct seccomp_data {
+	int nr;
+	__u32 arch;
+	__u64 instruction_pointer;
+	__u64 args[6];
+};
 
+#ifdef __KERNEL__
 #ifdef CONFIG_SECCOMP
 
 #include <linux/thread_info.h>
 #include <asm/seccomp.h>
 
+struct seccomp_filter;
+/**
+ * struct seccomp - the state of a seccomp'ed process
+ *
+ * @mode:  indicates one of the valid values above for controlled
+ *         system calls available to a process.
+ * @filter: The metadata and ruleset for determining what system calls
+ *          are allowed for a task.
+ *
+ *          @filter must only be accessed from the context of current as there
+ *          is no locking.
+ */
 struct seccomp {
 	int mode;
+	struct seccomp_filter *filter;
 };
 
 extern void __secure_computing(int);
@@ -19,7 +72,7 @@ static inline void secure_computing(int this_syscall)
 }
 
 extern long prctl_get_seccomp(void);
-extern long prctl_set_seccomp(unsigned long);
+extern long prctl_set_seccomp(unsigned long, char __user *);
 
 static inline int seccomp_mode(struct seccomp *s)
 {
@@ -31,15 +84,16 @@ static inline int seccomp_mode(struct seccomp *s)
 #include <linux/errno.h>
 
 struct seccomp { };
+struct seccomp_filter { };
 
-#define secure_computing(x) do { } while (0)
+#define secure_computing(x) 0
 
 static inline long prctl_get_seccomp(void)
 {
 	return -EINVAL;
 }
 
-static inline long prctl_set_seccomp(unsigned long arg2)
+static inline long prctl_set_seccomp(unsigned long arg2, char __user *arg3)
 {
 	return -EINVAL;
 }
@@ -48,7 +102,21 @@ static inline int seccomp_mode(struct seccomp *s)
 {
 	return 0;
 }
-
 #endif /* CONFIG_SECCOMP */
 
+#ifdef CONFIG_SECCOMP_FILTER
+extern void put_seccomp_filter(struct task_struct *tsk);
+extern void get_seccomp_filter(struct task_struct *tsk);
+extern u32 seccomp_bpf_load(int off);
+#else  /* CONFIG_SECCOMP_FILTER */
+static inline void put_seccomp_filter(struct task_struct *tsk)
+{
+	return;
+}
+static inline void get_seccomp_filter(struct task_struct *tsk)
+{
+	return;
+}
+#endif /* CONFIG_SECCOMP_FILTER */
+#endif /* __KERNEL__ */
 #endif /* _LINUX_SECCOMP_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index 26a7a67..4f7a186 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -34,6 +34,7 @@
 #include <linux/cgroup.h>
 #include <linux/security.h>
 #include <linux/hugetlb.h>
+#include <linux/seccomp.h>
 #include <linux/swap.h>
 #include <linux/syscalls.h>
 #include <linux/jiffies.h>
@@ -170,6 +171,7 @@ void free_task(struct task_struct *tsk)
 	free_thread_info(tsk->stack);
 	rt_mutex_debug_task_free(tsk);
 	ftrace_graph_exit_task(tsk);
+	put_seccomp_filter(tsk);
 	free_task_struct(tsk);
 }
 EXPORT_SYMBOL(free_task);
@@ -1143,6 +1145,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 		goto fork_out;
 
 	ftrace_graph_init_task(p);
+	get_seccomp_filter(p);
 
 	rt_mutex_init_task(p);
 
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index e8d76c5..82f5a36 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -3,16 +3,314 @@
  *
  * Copyright 2004-2005  Andrea Arcangeli <andrea@cpushare.com>
  *
- * This defines a simple but solid secure-computing mode.
+ * Copyright (C) 2012 Google, Inc.
+ * Will Drewry <wad@chromium.org>
+ *
+ * This defines a simple but solid secure-computing facility.
+ *
+ * Mode 1 uses a fixed list of allowed system calls.
+ * Mode 2 allows user-defined system call filters in the form
+ *        of Berkeley Packet Filters/Linux Socket Filters.
  */
 
+#include <linux/atomic.h>
 #include <linux/audit.h>
-#include <linux/seccomp.h>
-#include <linux/sched.h>
 #include <linux/compat.h>
+#include <linux/filter.h>
+#include <linux/sched.h>
+#include <linux/seccomp.h>
+#include <linux/security.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <linux/tracehook.h>
+#include <asm/syscall.h>
 
 /* #define SECCOMP_DEBUG 1 */
-#define NR_SECCOMP_MODES 1
+
+#ifdef CONFIG_SECCOMP_FILTER
+/**
+ * struct seccomp_filter - container for seccomp BPF programs
+ *
+ * @usage: reference count to manage the object liftime.
+ *         get/put helpers should be used when accessing an instance
+ *         outside of a lifetime-guarded section.  In general, this
+ *         is only needed for handling filters shared across tasks.
+ * @prev: points to a previously installed, or inherited, filter
+ * @insns: the BPF program instructions to evaluate
+ * @len: the number of instructions in the program
+ *
+ * seccomp_filter objects are organized in a tree linked via the @prev
+ * pointer.  For any task, it appears to be a singly-linked list starting
+ * with current->seccomp.filter, the most recently attached or inherited filter.
+ * However, multiple filters may share a @prev node, by way of fork(), which
+ * results in a unidirectional tree existing in memory.  This is similar to
+ * how namespaces work.
+ *
+ * seccomp_filter objects should never be modified after being attached
+ * to a task_struct (other than @usage).
+ */
+struct seccomp_filter {
+	atomic_t usage;
+	struct seccomp_filter *prev;
+	unsigned short len;  /* Instruction count */
+	struct sock_filter insns[];
+};
+
+/* Limit any path through the tree to 1 megabytes worth of instructions. */
+#define MAX_INSNS_PER_PATH ((1 << 20) / sizeof(struct sock_filter))
+
+static void seccomp_filter_log_failure(int syscall)
+{
+	int compat = 0;
+#ifdef CONFIG_COMPAT
+	compat = is_compat_task();
+#endif
+	pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
+		current->comm, task_pid_nr(current),
+		(compat ? "compat " : ""),
+		syscall, KSTK_EIP(current));
+}
+
+/**
+ * get_u32 - returns a u32 offset into data
+ * @data: a unsigned 64 bit value
+ * @index: 0 or 1 to return the first or second 32-bits
+ *
+ * This inline exists to hide the length of unsigned long.
+ * If a 32-bit unsigned long is passed in, it will be extended
+ * and the top 32-bits will be 0. If it is a 64-bit unsigned
+ * long, then whatever data is resident will be properly returned.
+ */
+static inline u32 get_u32(u64 data, int index)
+{
+	return ((u32 *)&data)[index];
+}
+
+/* Helper for bpf_load below. */
+#define BPF_DATA(_name) offsetof(struct seccomp_data, _name)
+/**
+ * bpf_load: checks and returns a pointer to the requested offset
+ * @off: offset into struct seccomp_data to load from
+ *
+ * Returns the requested 32-bits of data.
+ * seccomp_chk_filter() should assure that @off is 32-bit aligned
+ * and not out of bounds.  Failure to do so is a BUG.
+ */
+u32 seccomp_bpf_load(int off)
+{
+	struct pt_regs *regs = task_pt_regs(current);
+
+	if (off == BPF_DATA(nr))
+		return syscall_get_nr(current, regs);
+	if (off == BPF_DATA(arch))
+		return syscall_get_arch(current, regs);
+	if (off == BPF_DATA(instruction_pointer))
+		return get_u32(KSTK_EIP(current), 0);
+	if (off == BPF_DATA(instruction_pointer) + sizeof(u32))
+		return get_u32(KSTK_EIP(current), 1);
+	if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
+		unsigned long value;
+		int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
+		int index = !!(off % sizeof(u64));
+		syscall_get_arguments(current, regs, arg, 1, &value);
+		return get_u32(value, index);
+	}
+	/* seccomp_chk_filter should make this impossible. */
+	BUG();
+}
+
+/**
+ *	seccomp_chk_filter - verify seccomp filter code
+ *	@filter: filter to verify
+ *	@flen: length of filter
+ *
+ * Takes a previously checked filter (by sk_chk_filter) and
+ * redirects all filter code that loads struct sk_buff data
+ * and related data through seccomp_bpf_load.  It also
+ * enforces length and alignment checking of those loads.
+ *
+ * Returns 0 if the rule set is legal or -EINVAL if not.
+ */
+static int seccomp_chk_filter(struct sock_filter *filter, unsigned int flen)
+{
+	int pc;
+	for (pc = 0; pc < flen; pc++) {
+		struct sock_filter *ftest = &filter[pc];
+		u16 code = ftest->code;
+		u32 k = ftest->k;
+		if (code <= BPF_S_ALU_NEG)
+			continue;
+		/* Do not allow ancillary codes (incl. BPF_S_ANC_SECCOMP_*) */
+		if (code >= BPF_S_ANC_PROTOCOL)
+			return -EINVAL;
+		if (code >= BPF_S_LDX_IMM)
+			continue;
+		switch (code) {
+		case BPF_S_LD_W_ABS:
+			ftest->code = BPF_S_ANC_SECCOMP_LD_W;
+			/* 32-bit aligned and not out of bounds. */
+			if (k >= sizeof(struct seccomp_data) || k & 3)
+				return -EINVAL;
+			continue;
+		case BPF_S_LD_W_LEN:
+			ftest->code = BPF_S_LD_IMM;
+			ftest->k = sizeof(struct seccomp_data);
+			continue;
+		case BPF_S_LD_IMM:
+			continue;
+		case BPF_S_LDX_W_LEN:
+			ftest->code = BPF_S_LDX_IMM;
+			ftest->k = sizeof(struct seccomp_data);
+			continue;
+		case BPF_S_LD_W_IND:
+			/* XXX: would runtime seccomp_bpf_load checking k */
+		default:
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
+/**
+ * seccomp_run_filters - evaluates all seccomp filters against @syscall
+ * @syscall: number of the current system call
+ *
+ * Returns valid seccomp BPF response codes.
+ */
+static u32 seccomp_run_filters(int syscall)
+{
+	struct seccomp_filter *f;
+	u32 ret = SECCOMP_RET_KILL;
+	/*
+	 * All filters are evaluated in order of youngest to oldest. The lowest
+	 * BPF return value always takes priority.
+	 */
+	for (f = current->seccomp.filter; f; f = f->prev) {
+		ret = sk_run_filter(NULL, f->insns);
+		if (ret != SECCOMP_RET_ALLOW)
+			break;
+	}
+	return ret;
+}
+
+/**
+ * seccomp_attach_filter: Attaches a seccomp filter to current.
+ * @fprog: BPF program to install
+ *
+ * Returns 0 on success or an errno on failure.
+ */
+static long seccomp_attach_filter(struct sock_fprog *fprog)
+{
+	struct seccomp_filter *filter;
+	unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
+	unsigned long total_insns = 0;
+	long ret;
+
+	if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
+		return -EINVAL;
+
+	for (filter = current->seccomp.filter; filter; filter = filter->prev)
+		total_insns += filter->len;
+	if (total_insns > MAX_INSNS_PER_PATH - fprog->len)
+		return -ENOSPC;
+
+	/*
+	 * Installing a seccomp filter requires that the task have
+	 * CAP_SYS_ADMIN in its namespace or be running with no_new_privs.
+	 * This avoids scenarios where unprivileged tasks can affect the
+	 * behavior of privileged children.
+	 */
+	if (!current->no_new_privs &&
+	    security_capable_noaudit(current_cred(), current_user_ns(),
+				     CAP_SYS_ADMIN) != 0)
+		return -EACCES;
+
+	/* Allocate a new seccomp_filter */
+	filter = kzalloc(sizeof(struct seccomp_filter) + fp_size, GFP_KERNEL);
+	if (!filter)
+		return -ENOMEM;
+	atomic_set(&filter->usage, 1);
+	filter->len = fprog->len;
+
+	/* Copy the instructions from fprog. */
+	ret = -EFAULT;
+	if (copy_from_user(filter->insns, fprog->filter, fp_size))
+		goto fail;
+
+	/* Check and rewrite the fprog via the skb checker */
+	ret = sk_chk_filter(filter->insns, filter->len);
+	if (ret)
+		goto fail;
+
+	/* Check and rewrite the fprog for seccomp use */
+	ret = seccomp_chk_filter(filter->insns, filter->len);
+	if (ret)
+		goto fail;
+
+	/*
+	 * If there is an existing filter, make it the prev and don't drop its
+	 * task reference.
+	 */
+	filter->prev = current->seccomp.filter;
+	current->seccomp.filter = filter;
+	return 0;
+fail:
+	kfree(filter);
+	return ret;
+}
+
+/**
+ * seccomp_attach_user_filter - attaches a user-supplied sock_fprog
+ * @user_filter: pointer to the user data containing a sock_fprog.
+ *
+ * Returns 0 on success and non-zero otherwise.
+ */
+long seccomp_attach_user_filter(char __user *user_filter)
+{
+	struct sock_fprog fprog;
+	long ret = -EFAULT;
+
+	if (!user_filter)
+		goto out;
+#ifdef CONFIG_COMPAT
+	if (is_compat_task()) {
+		struct compat_sock_fprog fprog32;
+		if (copy_from_user(&fprog32, user_filter, sizeof(fprog32)))
+			goto out;
+		fprog.len = fprog32.len;
+		fprog.filter = compat_ptr(fprog32.filter);
+	} else /* falls through to the if below. */
+#endif
+	if (copy_from_user(&fprog, user_filter, sizeof(fprog)))
+		goto out;
+	ret = seccomp_attach_filter(&fprog);
+out:
+	return ret;
+}
+
+/* get_seccomp_filter - increments the reference count of the filter on @tsk */
+void get_seccomp_filter(struct task_struct *tsk)
+{
+	struct seccomp_filter *orig = tsk->seccomp.filter;
+	if (!orig)
+		return;
+	/* Reference count is bounded by the number of total processes. */
+	atomic_inc(&orig->usage);
+}
+
+/* put_seccomp_filter - decrements the ref count of tsk->seccomp.filter */
+void put_seccomp_filter(struct task_struct *tsk)
+{
+	struct seccomp_filter *orig = tsk->seccomp.filter;
+	/* Clean up single-reference branches iteratively. */
+	while (orig && atomic_dec_and_test(&orig->usage)) {
+		struct seccomp_filter *freeme = orig;
+		orig = orig->prev;
+		kfree(freeme);
+	}
+}
+#endif	/* CONFIG_SECCOMP_FILTER */
 
 /*
  * Secure computing mode 1 allows only read/write/exit/sigreturn.
@@ -34,10 +332,11 @@ static int mode1_syscalls_32[] = {
 void __secure_computing(int this_syscall)
 {
 	int mode = current->seccomp.mode;
-	int * syscall;
+	int exit_code = SIGKILL;
+	int *syscall;
 
 	switch (mode) {
-	case 1:
+	case SECCOMP_MODE_STRICT:
 		syscall = mode1_syscalls;
 #ifdef CONFIG_COMPAT
 		if (is_compat_task())
@@ -48,6 +347,14 @@ void __secure_computing(int this_syscall)
 				return;
 		} while (*++syscall);
 		break;
+#ifdef CONFIG_SECCOMP_FILTER
+	case SECCOMP_MODE_FILTER:
+		if (seccomp_run_filters(this_syscall) == SECCOMP_RET_ALLOW)
+			return;
+		seccomp_filter_log_failure(this_syscall);
+		exit_code = SIGSYS;
+		break;
+#endif
 	default:
 		BUG();
 	}
@@ -56,7 +363,7 @@ void __secure_computing(int this_syscall)
 	dump_stack();
 #endif
 	audit_seccomp(this_syscall);
-	do_exit(SIGKILL);
+	do_exit(exit_code);
 }
 
 long prctl_get_seccomp(void)
@@ -64,25 +371,48 @@ long prctl_get_seccomp(void)
 	return current->seccomp.mode;
 }
 
-long prctl_set_seccomp(unsigned long seccomp_mode)
+/**
+ * prctl_set_seccomp: configures current->seccomp.mode
+ * @seccomp_mode: requested mode to use
+ * @filter: optional struct sock_fprog for use with SECCOMP_MODE_FILTER
+ *
+ * This function may be called repeatedly with a @seccomp_mode of
+ * SECCOMP_MODE_FILTER to install additional filters.  Every filter
+ * successfully installed will be evaluated (in reverse order) for each system
+ * call the task makes.
+ *
+ * Once current->seccomp.mode is non-zero, it may not be changed.
+ *
+ * Returns 0 on success or -EINVAL on failure.
+ */
+long prctl_set_seccomp(unsigned long seccomp_mode, char __user *filter)
 {
-	long ret;
+	long ret = -EINVAL;
 
-	/* can set it only once to be even more secure */
-	ret = -EPERM;
-	if (unlikely(current->seccomp.mode))
+	if (current->seccomp.mode &&
+	    current->seccomp.mode != seccomp_mode)
 		goto out;
 
-	ret = -EINVAL;
-	if (seccomp_mode && seccomp_mode <= NR_SECCOMP_MODES) {
-		current->seccomp.mode = seccomp_mode;
-		set_thread_flag(TIF_SECCOMP);
+	switch (seccomp_mode) {
+	case SECCOMP_MODE_STRICT:
+		ret = 0;
 #ifdef TIF_NOTSC
 		disable_TSC();
 #endif
-		ret = 0;
+		break;
+#ifdef CONFIG_SECCOMP_FILTER
+	case SECCOMP_MODE_FILTER:
+		ret = seccomp_attach_user_filter(filter);
+		if (ret)
+			goto out;
+		break;
+#endif
+	default:
+		goto out;
 	}
 
- out:
+	current->seccomp.mode = seccomp_mode;
+	set_thread_flag(TIF_SECCOMP);
+out:
 	return ret;
 }
diff --git a/kernel/sys.c b/kernel/sys.c
index 12e862a..038e6b7 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1899,7 +1899,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			error = prctl_get_seccomp();
 			break;
 		case PR_SET_SECCOMP:
-			error = prctl_set_seccomp(arg2);
+			error = prctl_set_seccomp(arg2, (char __user *)arg3);
 			break;
 		case PR_GET_TSC:
 			error = GET_TSC_CTL(arg2);
-- 
1.7.5.4



* [PATCH v14 07/13] signal, x86: add SIGSYS info and make it synchronous.
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (4 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 06/13] seccomp: add system call filtering using BPF Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 08/13] seccomp: add SECCOMP_RET_ERRNO Will Drewry
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

This change enables SIGSYS, defines _sifields._sigsys, and adds
x86 (compat) arch support.  _sigsys defines fields which allow
a signal handler to receive the triggering system call number,
the relevant AUDIT_ARCH_* value for that number, and the address
of the callsite.

SIGSYS is added to the SYNCHRONOUS_MASK because it is desirable for
setup_frame() to be called for it.  The goal is to ensure that
ucontext_t reflects the machine state from the time of the syscall and
not from another signal handler.

The first consumer of SIGSYS would be the seccomp filter.  In
particular, a filter program could specify a new return value,
SECCOMP_RET_TRAP, which would result in the system call being denied
and the calling thread signaled.  This also means that implementing
arch-specific support can be made dependent upon
HAVE_ARCH_SECCOMP_FILTER.
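
A rough userspace sketch of a handler consuming the new fields follows
(assuming the si_syscall/si_arch/si_call_addr accessors and SYS_SECCOMP
from this patch are visible in the installed headers):

  #include <signal.h>
  #include <stdio.h>
  #include <string.h>

  static void sigsys_handler(int sig, siginfo_t *info, void *ucontext)
  {
          if (info->si_code != SYS_SECCOMP)
                  return;
          /* Illustrative only; a real handler should stick to
           * async-signal-safe functions.
           */
          fprintf(stderr, "denied syscall %d (arch 0x%x) at %p\n",
                  info->si_syscall, info->si_arch, info->si_call_addr);
  }

  static int install_sigsys_handler(void)
  {
          struct sigaction act;

          memset(&act, 0, sizeof(act));
          act.sa_sigaction = sigsys_handler;
          act.sa_flags = SA_SIGINFO;
          return sigaction(SIGSYS, &act, NULL);
  }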

v14: - rebase/nochanges
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: - reworded changelog (oleg@redhat.com)
v11: - fix dropped words in the change description
     - added fallback copy_siginfo support.
     - added __ARCH_SIGSYS define to allow stepped arch support.
v10: - first version based on suggestion

Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Will Drewry <wad@chromium.org>
---
 arch/x86/ia32/ia32_signal.c   |    4 ++++
 arch/x86/include/asm/ia32.h   |    6 ++++++
 include/asm-generic/siginfo.h |   22 ++++++++++++++++++++++
 kernel/signal.c               |    9 ++++++++-
 4 files changed, 40 insertions(+), 1 deletions(-)

diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
index 6557769..c81d2c7 100644
--- a/arch/x86/ia32/ia32_signal.c
+++ b/arch/x86/ia32/ia32_signal.c
@@ -73,6 +73,10 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
 			switch (from->si_code >> 16) {
 			case __SI_FAULT >> 16:
 				break;
+			case __SI_SYS >> 16:
+				put_user_ex(from->si_syscall, &to->si_syscall);
+				put_user_ex(from->si_arch, &to->si_arch);
+				break;
 			case __SI_CHLD >> 16:
 				put_user_ex(from->si_utime, &to->si_utime);
 				put_user_ex(from->si_stime, &to->si_stime);
diff --git a/arch/x86/include/asm/ia32.h b/arch/x86/include/asm/ia32.h
index 1f7e625..541485f 100644
--- a/arch/x86/include/asm/ia32.h
+++ b/arch/x86/include/asm/ia32.h
@@ -126,6 +126,12 @@ typedef struct compat_siginfo {
 			int _band;	/* POLL_IN, POLL_OUT, POLL_MSG */
 			int _fd;
 		} _sigpoll;
+
+		struct {
+			unsigned int _call_addr; /* calling insn */
+			int _syscall;	/* triggering system call number */
+			unsigned int _arch;	/* AUDIT_ARCH_* of syscall */
+		} _sigsys;
 	} _sifields;
 } compat_siginfo_t;
 
diff --git a/include/asm-generic/siginfo.h b/include/asm-generic/siginfo.h
index 0dd4e87..31306f5 100644
--- a/include/asm-generic/siginfo.h
+++ b/include/asm-generic/siginfo.h
@@ -90,9 +90,18 @@ typedef struct siginfo {
 			__ARCH_SI_BAND_T _band;	/* POLL_IN, POLL_OUT, POLL_MSG */
 			int _fd;
 		} _sigpoll;
+
+		/* SIGSYS */
+		struct {
+			void __user *_call_addr; /* calling insn */
+			int _syscall;	/* triggering system call number */
+			unsigned int _arch;	/* AUDIT_ARCH_* of syscall */
+		} _sigsys;
 	} _sifields;
 } siginfo_t;
 
+/* If the arch shares siginfo, then it has SIGSYS. */
+#define __ARCH_SIGSYS
 #endif
 
 /*
@@ -116,6 +125,11 @@ typedef struct siginfo {
 #define si_addr_lsb	_sifields._sigfault._addr_lsb
 #define si_band		_sifields._sigpoll._band
 #define si_fd		_sifields._sigpoll._fd
+#ifdef __ARCH_SIGSYS
+#define si_call_addr	_sifields._sigsys._call_addr
+#define si_syscall	_sifields._sigsys._syscall
+#define si_arch		_sifields._sigsys._arch
+#endif
 
 #ifdef __KERNEL__
 #define __SI_MASK	0xffff0000u
@@ -126,6 +140,7 @@ typedef struct siginfo {
 #define __SI_CHLD	(4 << 16)
 #define __SI_RT		(5 << 16)
 #define __SI_MESGQ	(6 << 16)
+#define __SI_SYS	(7 << 16)
 #define __SI_CODE(T,N)	((T) | ((N) & 0xffff))
 #else
 #define __SI_KILL	0
@@ -135,6 +150,7 @@ typedef struct siginfo {
 #define __SI_CHLD	0
 #define __SI_RT		0
 #define __SI_MESGQ	0
+#define __SI_SYS	0
 #define __SI_CODE(T,N)	(N)
 #endif
 
@@ -232,6 +248,12 @@ typedef struct siginfo {
 #define NSIGPOLL	6
 
 /*
+ * SIGSYS si_codes
+ */
+#define SYS_SECCOMP		(__SI_SYS|1)	/* seccomp triggered */
+#define NSIGSYS	1
+
+/*
  * sigevent definitions
  * 
  * It seems likely that SIGEV_THREAD will have to be handled from 
diff --git a/kernel/signal.c b/kernel/signal.c
index c73c428..15a9e9b 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -160,7 +160,7 @@ void recalc_sigpending(void)
 
 #define SYNCHRONOUS_MASK \
 	(sigmask(SIGSEGV) | sigmask(SIGBUS) | sigmask(SIGILL) | \
-	 sigmask(SIGTRAP) | sigmask(SIGFPE))
+	 sigmask(SIGTRAP) | sigmask(SIGFPE) | sigmask(SIGSYS))
 
 int next_signal(struct sigpending *pending, sigset_t *mask)
 {
@@ -2686,6 +2686,13 @@ int copy_siginfo_to_user(siginfo_t __user *to, siginfo_t *from)
 		err |= __put_user(from->si_uid, &to->si_uid);
 		err |= __put_user(from->si_ptr, &to->si_ptr);
 		break;
+#ifdef __ARCH_SIGSYS
+	case __SI_SYS:
+		err |= __put_user(from->si_call_addr, &to->si_call_addr);
+		err |= __put_user(from->si_syscall, &to->si_syscall);
+		err |= __put_user(from->si_arch, &to->si_arch);
+		break;
+#endif
 	default: /* this is just in case for now ... */
 		err |= __put_user(from->si_pid, &to->si_pid);
 		err |= __put_user(from->si_uid, &to->si_uid);
-- 
1.7.5.4



* [PATCH v14 08/13] seccomp: add SECCOMP_RET_ERRNO
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (5 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 07/13] signal, x86: add SIGSYS info and make it synchronous Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 09/13] seccomp: Add SECCOMP_RET_TRAP Will Drewry
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

This change adds SECCOMP_RET_ERRNO as a valid return value from a
seccomp filter.  Additionally, it makes the first use of the lower
16-bits for storing a filter-supplied errno.  16-bits is more than
enough for the errno-base.h calls.

Returning errors instead of immediately terminating processes that
violate seccomp policy allows for broader use of this functionality
for kernel attack surface reduction.  For example, a Linux container
could maintain a whitelist of pre-existing system calls but drop
all new ones with errnos.  This would keep a logically static attack
surface while providing errnos that may allow for graceful failure
without the downside of do_exit() on a bad call.
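
A hedged example (x86-64 assumed) of a filter fragment using the new
return value; the errno is carried in the SECCOMP_RET_DATA bits:

  #include <errno.h>
  #include <stddef.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <linux/unistd.h>

  /* Sketch: deny uselib(2) with -EPERM instead of killing; allow the rest. */
  static struct sock_filter errno_example[] = {
          BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
          BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_uselib, 0, 1),
          BPF_STMT(BPF_RET | BPF_K,
                   SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
          BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
  };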

v14: - no change/rebase
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: - move to WARN_ON if filter is NULL
       (oleg@redhat.com, luto@mit.edu, keescook@chromium.org)
     - return immediately for filter==NULL (keescook@chromium.org)
     - change evaluation to only compare the ACTION so that layered
       errnos don't result in the lowest one being returned.
       (keeschook@chromium.org)
v11: - check for NULL filter (keescook@chromium.org)
v10: - change loaders to fn
 v9: - n/a
 v8: - update Kconfig to note new need for syscall_set_return_value.
     - reordered such that TRAP behavior follows on later.
     - made the for loop a little less indent-y
 v7: - introduced

Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Signed-off-by: Will Drewry <wad@chromium.org>
---
 arch/Kconfig            |    6 ++++--
 include/linux/seccomp.h |   15 +++++++++++----
 kernel/seccomp.c        |   44 +++++++++++++++++++++++++++++++++++---------
 3 files changed, 50 insertions(+), 15 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 7c6bd48..dd4e067 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -203,8 +203,10 @@ config HAVE_ARCH_SECCOMP_FILTER
 	bool
 	help
 	  This symbol should be selected by an architecure if it provides
-	  asm/syscall.h, specifically syscall_get_arguments() and
-	  syscall_get_arch().
+	  asm/syscall.h, specifically syscall_get_arguments(),
+	  syscall_get_arch(), and syscall_set_return_value().  Additionally,
+	  its system call entry path must respect a return value of -1 from
+	  __secure_computing_int() and/or secure_computing().
 
 config SECCOMP_FILTER
 	def_bool y
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index ce980a8..eb36185 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -12,13 +12,14 @@
 
 /*
  * All BPF programs must return a 32-bit value.
- * The bottom 16-bits are reserved for future use.
+ * The bottom 16-bits are for optional return data.
  * The upper 16-bits are ordered from least permissive values to most.
  *
  * The ordering ensures that a min_t() over composed return values always
  * selects the least permissive choice.
  */
 #define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_ERRNO	0x00030000U /* returns an errno */
 #define SECCOMP_RET_ALLOW	0x7fff0000U /* allow */
 
 /* Masks for the return value sections. */
@@ -64,11 +65,17 @@ struct seccomp {
 	struct seccomp_filter *filter;
 };
 
-extern void __secure_computing(int);
-static inline void secure_computing(int this_syscall)
+/*
+ * Direct callers to __secure_computing should be updated as
+ * CONFIG_HAVE_ARCH_SECCOMP_FILTER propagates.
+ */
+extern void __secure_computing(int) __deprecated;
+extern int __secure_computing_int(int);
+static inline int secure_computing(int this_syscall)
 {
 	if (unlikely(test_thread_flag(TIF_SECCOMP)))
-		__secure_computing(this_syscall);
+		return  __secure_computing_int(this_syscall);
+	return 0;
 }
 
 extern long prctl_get_seccomp(void);
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 82f5a36..8cb9b69 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -181,15 +181,20 @@ static int seccomp_chk_filter(struct sock_filter *filter, unsigned int flen)
 static u32 seccomp_run_filters(int syscall)
 {
 	struct seccomp_filter *f;
-	u32 ret = SECCOMP_RET_KILL;
+	u32 ret = SECCOMP_RET_ALLOW;
+
+	/* Ensure unexpected behavior doesn't result in failing open. */
+	if (WARN_ON(current->seccomp.filter == NULL))
+		return SECCOMP_RET_KILL;
+
 	/*
 	 * All filters are evaluated in order of youngest to oldest. The lowest
-	 * BPF return value always takes priority.
+	 * BPF return value (ignoring the DATA) always takes priority.
 	 */
 	for (f = current->seccomp.filter; f; f = f->prev) {
-		ret = sk_run_filter(NULL, f->insns);
-		if (ret != SECCOMP_RET_ALLOW)
-			break;
+		u32 cur_ret = sk_run_filter(NULL, f->insns);
+		if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
+			ret = cur_ret;
 	}
 	return ret;
 }
@@ -331,6 +336,13 @@ static int mode1_syscalls_32[] = {
 
 void __secure_computing(int this_syscall)
 {
+	/* Filter calls should never use this function. */
+	BUG_ON(current->seccomp.mode == SECCOMP_MODE_FILTER);
+	__secure_computing_int(this_syscall);
+}
+
+int __secure_computing_int(int this_syscall)
+{
 	int mode = current->seccomp.mode;
 	int exit_code = SIGKILL;
 	int *syscall;
@@ -344,16 +356,29 @@ void __secure_computing(int this_syscall)
 #endif
 		do {
 			if (*syscall == this_syscall)
-				return;
+				return 0;
 		} while (*++syscall);
 		break;
 #ifdef CONFIG_SECCOMP_FILTER
-	case SECCOMP_MODE_FILTER:
-		if (seccomp_run_filters(this_syscall) == SECCOMP_RET_ALLOW)
-			return;
+	case SECCOMP_MODE_FILTER: {
+		u32 action = seccomp_run_filters(this_syscall);
+		switch (action & SECCOMP_RET_ACTION) {
+		case SECCOMP_RET_ERRNO:
+			/* Set the low-order 16-bits as a errno. */
+			syscall_set_return_value(current, task_pt_regs(current),
+						 -(action & SECCOMP_RET_DATA),
+						 0);
+			return -1;
+		case SECCOMP_RET_ALLOW:
+			return 0;
+		case SECCOMP_RET_KILL:
+		default:
+			break;
+		}
 		seccomp_filter_log_failure(this_syscall);
 		exit_code = SIGSYS;
 		break;
+	}
 #endif
 	default:
 		BUG();
@@ -364,6 +389,7 @@ void __secure_computing(int this_syscall)
 #endif
 	audit_seccomp(this_syscall);
 	do_exit(exit_code);
+	return -1;	/* never reached */
 }
 
 long prctl_get_seccomp(void)
-- 
1.7.5.4



* [PATCH v14 09/13] seccomp: Add SECCOMP_RET_TRAP
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (6 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 08/13] seccomp: add SECCOMP_RET_ERRNO Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support Will Drewry
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Adds a new return value to seccomp filters that triggers a SIGSYS to be
delivered with the new SYS_SECCOMP si_code.

This allows in-process system call emulation, including specifying an
errno or cleanly dumping core, rather than the task simply being killed.
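
As an untested sketch (x86-64 assumed), a filter fragment that traps a
single call so the SIGSYS handler from the earlier patch can emulate it;
the low 16 bits (the reason code) arrive in the handler via si_errno:

  #include <stddef.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <linux/unistd.h>

  /* Sketch: trap openat(2) with reason code 1; allow everything else. */
  static struct sock_filter trap_example[] = {
          BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
          BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_openat, 0, 1),
          BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP | 1),
          BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
  };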

v14: - rebase/nochanges
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: - rebase on to linux-next
v11: - clarify the comment (indan@nul.nu)
     - s/sigtrap/sigsys
v10: - use SIGSYS, syscall_get_arch, updates arch/Kconfig
       note suggested-by (though original suggestion had other behaviors)
v9:  - changes to SIGILL
v8:  - clean up based on changes to dependent patches
v7:  - introduction

Suggested-by: Markus Gutschke <markus@chromium.org>
Suggested-by: Julien Tinnes <jln@chromium.org>
Signed-off-by: Will Drewry <wad@chromium.org>
---
 arch/Kconfig                  |   14 +++++++++-----
 include/asm-generic/siginfo.h |    2 +-
 include/linux/seccomp.h       |    1 +
 kernel/seccomp.c              |   28 ++++++++++++++++++++++++++++
 4 files changed, 39 insertions(+), 6 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index dd4e067..d92a78e 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -202,11 +202,15 @@ config HAVE_CMPXCHG_DOUBLE
 config HAVE_ARCH_SECCOMP_FILTER
 	bool
 	help
-	  This symbol should be selected by an architecure if it provides
-	  asm/syscall.h, specifically syscall_get_arguments(),
-	  syscall_get_arch(), and syscall_set_return_value().  Additionally,
-	  its system call entry path must respect a return value of -1 from
-	  __secure_computing_int() and/or secure_computing().
+	  This symbol should be selected by an architecure if it provides:
+	  asm/syscall.h:
+	  - syscall_get_arch()
+	  - syscall_get_arguments()
+	  - syscall_rollback()
+	  - syscall_set_return_value()
+	  SIGSYS siginfo_t support must be implemented.
+	  __secure_computing_int()/secure_computing()'s return value must be
+	  checked, with -1 resulting in the syscall being skipped.
 
 config SECCOMP_FILTER
 	def_bool y
diff --git a/include/asm-generic/siginfo.h b/include/asm-generic/siginfo.h
index 31306f5..af5d035 100644
--- a/include/asm-generic/siginfo.h
+++ b/include/asm-generic/siginfo.h
@@ -93,7 +93,7 @@ typedef struct siginfo {
 
 		/* SIGSYS */
 		struct {
-			void __user *_call_addr; /* calling insn */
+			void __user *_call_addr; /* calling user insn */
 			int _syscall;	/* triggering system call number */
 			unsigned int _arch;	/* AUDIT_ARCH_* of syscall */
 		} _sigsys;
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index eb36185..e6d4b56 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -19,6 +19,7 @@
  * selects the least permissive choice.
  */
 #define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_TRAP	0x00020000U /* disallow and force a SIGSYS */
 #define SECCOMP_RET_ERRNO	0x00030000U /* returns an errno */
 #define SECCOMP_RET_ALLOW	0x7fff0000U /* allow */
 
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 8cb9b69..140490a 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -315,6 +315,26 @@ void put_seccomp_filter(struct task_struct *tsk)
 		kfree(freeme);
 	}
 }
+
+/**
+ * seccomp_send_sigsys - signals the task to allow in-process syscall emulation
+ * @syscall: syscall number to send to userland
+ * @reason: filter-supplied reason code to send to userland (via si_errno)
+ *
+ * Forces a SIGSYS with a code of SYS_SECCOMP and related sigsys info.
+ */
+static void seccomp_send_sigsys(int syscall, int reason)
+{
+	struct siginfo info;
+	memset(&info, 0, sizeof(info));
+	info.si_signo = SIGSYS;
+	info.si_code = SYS_SECCOMP;
+	info.si_call_addr = (void __user *)KSTK_EIP(current);
+	info.si_errno = reason;
+	info.si_arch = syscall_get_arch(current, task_pt_regs(current));
+	info.si_syscall = syscall;
+	force_sig_info(SIGSYS, &info, current);
+}
 #endif	/* CONFIG_SECCOMP_FILTER */
 
 /*
@@ -369,6 +389,14 @@ int __secure_computing_int(int this_syscall)
 						 -(action & SECCOMP_RET_DATA),
 						 0);
 			return -1;
+		case SECCOMP_RET_TRAP: {
+			int reason_code = action & SECCOMP_RET_DATA;
+			/* Show the handler the original registers. */
+			syscall_rollback(current, task_pt_regs(current));
+			/* Let the filter pass back 16 bits of data. */
+			seccomp_send_sigsys(this_syscall, reason_code);
+			return -1;
+		}
 		case SECCOMP_RET_ALLOW:
 			return 0;
 		case SECCOMP_RET_KILL:
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (7 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 09/13] seccomp: Add SECCOMP_RET_TRAP Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-14  7:31   ` Indan Zupancic
  2012-03-12 21:28 ` [PATCH v14 11/13] x86: Enable HAVE_ARCH_SECCOMP_FILTER Will Drewry
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

This change adds support for a new ptrace option, PTRACE_O_TRACESECCOMP,
and a new return value for seccomp BPF programs, SECCOMP_RET_TRACE.

When a tracer specifies the PTRACE_O_TRACESECCOMP ptrace option, the
tracer will be notified for any syscall that results in a BPF program
returning SECCOMP_RET_TRACE.  The 16-bit SECCOMP_RET_DATA mask of the
BPF program return value will be passed as the ptrace_message and may be
retrieved using PTRACE_GETEVENTMSG.

If the subordinate process is not using seccomp filter, then no
system call notifications will occur even if the option is specified.

If there is no attached tracer when SECCOMP_RET_TRACE is returned,
the system call will not be executed and an -ENOSYS errno will be
returned to userspace.

This change adds a dependency on the system call slow path.  Any future
efforts to use the system call fast path for seccomp filter will need to
address this restriction.
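
For illustration only (not part of this patch), the tracer side might look
roughly like the sketch below.  It assumes an already-attached, stopped
tracee, assumes the new PTRACE_O_TRACESECCOMP/PTRACE_EVENT_SECCOMP values
are visible to userspace (e.g. via exported headers), and omits error
handling:

	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <signal.h>

	static void trace_seccomp_events(pid_t pid)
	{
		int status;
		unsigned long msg;

		ptrace(PTRACE_SETOPTIONS, pid, 0, PTRACE_O_TRACESECCOMP);
		for (;;) {
			ptrace(PTRACE_CONT, pid, 0, 0);
			if (waitpid(pid, &status, 0) < 0 || !WIFSTOPPED(status))
				break;
			if ((status >> 8) ==
			    (SIGTRAP | (PTRACE_EVENT_SECCOMP << 8))) {
				/* msg holds the filter's SECCOMP_RET_DATA bits */
				ptrace(PTRACE_GETEVENTMSG, pid, 0, &msg);
			}
		}
	}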

v14: - rebase/nochanges
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
       (Brings back a change to ptrace.c and the masks.)
v12: - rebase to linux-next
     - use ptrace_event and update arch/Kconfig to mention slow-path dependency
     - drop all tracehook changes and inclusion (oleg@redhat.com)
v11: - invert the logic to just make it a PTRACE_SYSCALL accelerator
       (indan@nul.nu)
v10: - moved to PTRACE_O_SECCOMP / PT_TRACE_SECCOMP
v9:  - n/a
v8:  - guarded PTRACE_SECCOMP use with an ifdef
v7:  - introduced

Signed-off-by: Will Drewry <wad@chromium.org>
---
 arch/Kconfig            |   11 ++++++-----
 include/linux/ptrace.h  |    7 +++++--
 include/linux/seccomp.h |    1 +
 kernel/ptrace.c         |    3 +++
 kernel/seccomp.c        |   20 +++++++++++++++-----
 5 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index d92a78e..3f8132c 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -202,15 +202,16 @@ config HAVE_CMPXCHG_DOUBLE
 config HAVE_ARCH_SECCOMP_FILTER
 	bool
 	help
-	  This symbol should be selected by an architecture if it provides:
-	  asm/syscall.h:
+	  An arch should select this symbol if it provides all of these things:
 	  - syscall_get_arch()
 	  - syscall_get_arguments()
 	  - syscall_rollback()
 	  - syscall_set_return_value()
-	  SIGSYS siginfo_t support must be implemented.
-	  __secure_computing_int()/secure_computing()'s return value must be
-	  checked, with -1 resulting in the syscall being skipped.
+	  - SIGSYS siginfo_t support
+	  - uses __secure_computing_int() or secure_computing()
+	  - secure_computing is called from a ptrace_event()-safe context
+	  - secure_computing return value is checked and a return value of -1
+	    results in the system call being skipped immediately.
 
 config SECCOMP_FILTER
 	def_bool y
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index c2f1f6a..84b3418 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -62,8 +62,9 @@
 #define PTRACE_O_TRACEEXEC	0x00000010
 #define PTRACE_O_TRACEVFORKDONE	0x00000020
 #define PTRACE_O_TRACEEXIT	0x00000040
+#define PTRACE_O_TRACESECCOMP	0x00000080
 
-#define PTRACE_O_MASK		0x0000007f
+#define PTRACE_O_MASK		0x000000ff
 
 /* Wait extended result codes for the above trace options.  */
 #define PTRACE_EVENT_FORK	1
@@ -73,6 +74,7 @@
 #define PTRACE_EVENT_VFORK_DONE	5
 #define PTRACE_EVENT_EXIT	6
 #define PTRACE_EVENT_STOP	7
+#define PTRACE_EVENT_SECCOMP	8
 
 #include <asm/ptrace.h>
 
@@ -101,8 +103,9 @@
 #define PT_TRACE_EXEC		PT_EVENT_FLAG(PTRACE_EVENT_EXEC)
 #define PT_TRACE_VFORK_DONE	PT_EVENT_FLAG(PTRACE_EVENT_VFORK_DONE)
 #define PT_TRACE_EXIT		PT_EVENT_FLAG(PTRACE_EVENT_EXIT)
+#define PT_TRACE_SECCOMP	PT_EVENT_FLAG(PTRACE_EVENT_SECCOMP)
 
-#define PT_TRACE_MASK	0x000003f4
+#define PT_TRACE_MASK	0x00000ff4
 
 /* single stepping state bits (used on ARM and PA-RISC) */
 #define PT_SINGLESTEP_BIT	31
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index e6d4b56..f4c1774 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -21,6 +21,7 @@
 #define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
 #define SECCOMP_RET_TRAP	0x00020000U /* disallow and force a SIGSYS */
 #define SECCOMP_RET_ERRNO	0x00030000U /* returns an errno */
+#define SECCOMP_RET_TRACE	0x7ffe0000U /* pass to a tracer or disallow */
 #define SECCOMP_RET_ALLOW	0x7fff0000U /* allow */
 
 /* Masks for the return value sections. */
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 00ab2ca..8cf6da1 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -551,6 +551,9 @@ static int ptrace_setoptions(struct task_struct *child, unsigned long data)
 	if (data & PTRACE_O_TRACEEXIT)
 		child->ptrace |= PT_TRACE_EXIT;
 
+	if (data & PTRACE_O_TRACESECCOMP)
+		child->ptrace |= PT_TRACE_SECCOMP;
+
 	return (data & ~PTRACE_O_MASK) ? -EINVAL : 0;
 }
 
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 140490a..ddacc68 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -17,13 +17,13 @@
 #include <linux/audit.h>
 #include <linux/compat.h>
 #include <linux/filter.h>
+#include <linux/ptrace.h>
 #include <linux/sched.h>
 #include <linux/seccomp.h>
 #include <linux/security.h>
 #include <linux/slab.h>
 #include <linux/uaccess.h>
 
-#include <linux/tracehook.h>
 #include <asm/syscall.h>
 
 /* #define SECCOMP_DEBUG 1 */
@@ -389,14 +389,24 @@ int __secure_computing_int(int this_syscall)
 						 -(action & SECCOMP_RET_DATA),
 						 0);
 			return -1;
-		case SECCOMP_RET_TRAP: {
-			int reason_code = action & SECCOMP_RET_DATA;
+		case SECCOMP_RET_TRAP:
 			/* Show the handler the original registers. */
 			syscall_rollback(current, task_pt_regs(current));
 			/* Let the filter pass back 16 bits of data. */
-			seccomp_send_sigsys(this_syscall, reason_code);
+			seccomp_send_sigsys(this_syscall,
+					    action & SECCOMP_RET_DATA);
 			return -1;
-		}
+		case SECCOMP_RET_TRACE:
+			/* Skip these calls if there is no tracer. */
+			if (!ptrace_event_enabled(current,
+						  PTRACE_EVENT_SECCOMP))
+				return -1;
+			/* Allow the BPF to provide the event message */
+			ptrace_event(PTRACE_EVENT_SECCOMP,
+				     action & SECCOMP_RET_DATA);
+			if (fatal_signal_pending(current))
+				break;
+			return 0;
 		case SECCOMP_RET_ALLOW:
 			return 0;
 		case SECCOMP_RET_KILL:
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v14 11/13] x86: Enable HAVE_ARCH_SECCOMP_FILTER
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (8 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 12/13] Documentation: prctl/seccomp_filter Will Drewry
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Enable support for seccomp filter on x86:
- asm/tracehook.h exists
- syscall_get_arguments() works
- syscall_rollback() works
- ptrace_report_syscall() works
- secure_computing() return value is honored (see below)

This also adds support for honoring the return
value from secure_computing().

SECCOMP_RET_TRACE and SECCOMP_RET_TRAP may result in seccomp needing to
skip a system call without killing the process.  This is done by
returning a non-zero (-1) value from secure_computing.  This change
makes x86 respect that return value.

To ensure that minimal kernel code is exposed, a non-zero return value
results in an immediate return to user space (with an invalid syscall
number).

Signed-off-by: Will Drewry <wad@chromium.org>
---
 arch/x86/Kconfig         |    1 +
 arch/x86/kernel/ptrace.c |    7 ++++++-
 2 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5bed94e..4c9012b 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -82,6 +82,7 @@ config X86
 	select CLKEVT_I8253
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select GENERIC_IOMAP
+	select HAVE_ARCH_SECCOMP_FILTER
 
 config INSTRUCTION_DECODER
 	def_bool (KPROBES || PERF_EVENTS)
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 5026738..90d465a 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -1380,7 +1380,11 @@ long syscall_trace_enter(struct pt_regs *regs)
 		regs->flags |= X86_EFLAGS_TF;
 
 	/* do the secure computing check first */
-	secure_computing(regs->orig_ax);
+	if (secure_computing(regs->orig_ax)) {
+		/* seccomp failures shouldn't expose any additional code. */
+		ret = -1L;
+		goto out;
+	}
 
 	if (unlikely(test_thread_flag(TIF_SYSCALL_EMU)))
 		ret = -1L;
@@ -1405,6 +1409,7 @@ long syscall_trace_enter(struct pt_regs *regs)
 				    regs->dx, regs->r10);
 #endif
 
+out:
 	return ret ?: regs->orig_ax;
 }
 
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v14 12/13] Documentation: prctl/seccomp_filter
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (9 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 11/13] x86: Enable HAVE_ARCH_SECCOMP_FILTER Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-12 21:28 ` [PATCH v14 13/13] seccomp: remove duplicated failure logging Will Drewry
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Documents how system call filtering using Berkeley Packet
Filter programs works and how it may be used.
Includes an example for x86 and a semi-generic
example using a macro-based code generator.

v14: - rebase/nochanges
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: - comment on the ptrace_event use
     - update arch support comment
     - note the behavior of SECCOMP_RET_DATA when there are multiple filters
       (keescook@chromium.org)
     - lots of samples/ clean up incl 64-bit bpf-direct support
       (markus@chromium.org)
     - rebase to linux-next
v11: - overhaul return value language, updates (keescook@chromium.org)
     - comment on do_exit(SIGSYS)
v10: - update for SIGSYS
     - update for new seccomp_data layout
     - update for ptrace option use
v9: - updated bpf-direct.c for SIGILL
v8: - add PR_SET_NO_NEW_PRIVS to the samples.
v7: - updated for all the new stuff in v7: TRAP, TRACE
    - only talk about PR_SET_SECCOMP now
    - fixed bad JLE32 check (coreyb@linux.vnet.ibm.com)
    - adds dropper.c: a simple system call disabler
v6: - tweak the language to note the requirement of
      PR_SET_NO_NEW_PRIVS being called prior to use. (luto@mit.edu)
v5: - update sample to use system call arguments
    - adds a "fancy" example using a macro-based generator
    - cleaned up bpf in the sample
    - update docs to mention arguments
    - fix prctl value (eparis@redhat.com)
    - language cleanup (rdunlap@xenotime.net)
v4: - update for no_new_privs use
    - minor tweaks
v3: - call out BPF <-> Berkeley Packet Filter (rdunlap@xenotime.net)
    - document use of tentative always-unprivileged
    - guard sample compilation for i386 and x86_64
v2: - move code to samples (corbet@lwn.net)

Signed-off-by: Will Drewry <wad@chromium.org>
---
 Documentation/prctl/seccomp_filter.txt |  156 +++++++++++++++++++++
 samples/Makefile                       |    2 +-
 samples/seccomp/Makefile               |   38 +++++
 samples/seccomp/bpf-direct.c           |  176 +++++++++++++++++++++++
 samples/seccomp/bpf-fancy.c            |  102 ++++++++++++++
 samples/seccomp/bpf-helper.c           |   89 ++++++++++++
 samples/seccomp/bpf-helper.h           |  238 ++++++++++++++++++++++++++++++++
 samples/seccomp/dropper.c              |   68 +++++++++
 8 files changed, 868 insertions(+), 1 deletions(-)
 create mode 100644 Documentation/prctl/seccomp_filter.txt
 create mode 100644 samples/seccomp/Makefile
 create mode 100644 samples/seccomp/bpf-direct.c
 create mode 100644 samples/seccomp/bpf-fancy.c
 create mode 100644 samples/seccomp/bpf-helper.c
 create mode 100644 samples/seccomp/bpf-helper.h
 create mode 100644 samples/seccomp/dropper.c

diff --git a/Documentation/prctl/seccomp_filter.txt b/Documentation/prctl/seccomp_filter.txt
new file mode 100644
index 0000000..4aa3e78
--- /dev/null
+++ b/Documentation/prctl/seccomp_filter.txt
@@ -0,0 +1,156 @@
+		SECure COMPuting with filters
+		=============================
+
+Introduction
+------------
+
+A large number of system calls are exposed to every userland process
+with many of them going unused for the entire lifetime of the process.
+As system calls change and mature, bugs are found and eradicated.  A
+certain subset of userland applications benefit by having a reduced set
+of available system calls.  The resulting set reduces the total kernel
+surface exposed to the application.  System call filtering is meant for
+use with those applications.
+
+Seccomp filtering provides a means for a process to specify a filter for
+incoming system calls.  The filter is expressed as a Berkeley Packet
+Filter (BPF) program, as with socket filters, except that the data
+operated on is related to the system call being made: system call
+number and the system call arguments.  This allows for expressive
+filtering of system calls using a filter program language with a long
+history of being exposed to userland and a straightforward data set.
+
+Additionally, BPF makes it impossible for users of seccomp to fall prey
+to time-of-check-time-of-use (TOCTOU) attacks that are common in system
+call interposition frameworks.  BPF programs may not dereference
+pointers which constrains all filters to solely evaluating the system
+call arguments directly.
+
+What it isn't
+-------------
+
+System call filtering isn't a sandbox.  It provides a clearly defined
+mechanism for minimizing the exposed kernel surface.  It is meant to be
+a tool for sandbox developers to use.  Beyond that, policy for logical
+behavior and information flow should be managed with a combination of
+other system hardening techniques and, potentially, an LSM of your
+choosing.  Expressive, dynamic filters provide further options down this
+path (avoiding pathological sizes or selecting which of the multiplexed
+system calls in socketcall() is allowed, for instance) which could be
+construed, incorrectly, as a more complete sandboxing solution.
+
+Usage
+-----
+
+An additional seccomp mode is added and is enabled using the same
+prctl(2) call as the strict seccomp.  If the architecture has
+CONFIG_HAVE_ARCH_SECCOMP_FILTER, then filters may be added as below:
+
+PR_SET_SECCOMP:
+	Now takes an additional argument which specifies a new filter
+	using a BPF program.
+	The BPF program will be executed over struct seccomp_data
+	reflecting the system call number, arguments, and other
+	metadata.  The BPF program must then return one of the
+	acceptable values to inform the kernel which action should be
+	taken.
+
+	Usage:
+		prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, prog);
+
+	The 'prog' argument is a pointer to a struct sock_fprog which
+	will contain the filter program.  If the program is invalid, the
+	call will return -1 and set errno to EINVAL.
+
+	Note, is_compat_task is also tracked for the @prog.  This means
+	that once set the calling task will have all of its system calls
+	blocked if it switches its system call ABI.
+
+	If fork/clone and execve are allowed by @prog, any child
+	processes will be constrained to the same filters and system
+	call ABI as the parent.
+
+	Prior to use, the task must call prctl(PR_SET_NO_NEW_PRIVS, 1) or
+	run with CAP_SYS_ADMIN privileges in its namespace.  If these are not
+	true, -EACCES will be returned.  This requirement ensures that filter
+	programs cannot be applied to child processes with greater privileges
+	than the task that installed them.
+
+	Additionally, if prctl(2) is allowed by the attached filter,
+	additional filters may be layered on which will increase evaluation
+	time, but allow for further decreasing the attack surface during
+	execution of a process.
+
+The above call returns 0 on success and non-zero on error.
+
+Return values
+-------------
+A seccomp filter may return any of the following values. If multiple
+filters exist, the return value for the evaluation of a given system
+call will always use the highest precedent value. (For example,
+SECCOMP_RET_KILL will always take precedence.)
+
+In precedence order, they are:
+
+SECCOMP_RET_KILL:
+	Results in the task exiting immediately without executing the
+	system call.  The exit status of the task (status & 0x7f) will
+	be SIGSYS, not SIGKILL.
+
+SECCOMP_RET_TRAP:
+	Results in the kernel sending a SIGSYS signal to the triggering
+	task without executing the system call.  The kernel will
+	rollback the register state to just before the system call
+	entry such that a signal handler in the task will be able to
+	inspect the ucontext_t->uc_mcontext registers and emulate
+	system call success or failure upon return from the signal
+	handler.
+
+	The SECCOMP_RET_DATA portion of the return value will be passed
+	as si_errno.
+
+	SIGSYS triggered by seccomp will have a si_code of SYS_SECCOMP.
+
+SECCOMP_RET_ERRNO:
+	Results in the lower 16-bits of the return value being passed
+	to userland as the errno without executing the system call.
+
+SECCOMP_RET_TRACE:
+	When returned, this value will cause the kernel to attempt to
+	notify a ptrace()-based tracer prior to executing the system
+	call.  If there is no tracer present, -ENOSYS is returned to
+	userland and the system call is not executed.
+
+	A tracer will be notified if it requests PTRACE_O_TRACESECCOMP
+	using ptrace(PTRACE_SETOPTIONS).  The tracer will be notified
+	of a PTRACE_EVENT_SECCOMP and the SECCOMP_RET_DATA portion of
+	the BPF program return value will be available to the tracer
+	via PTRACE_GETEVENTMSG.
+
+SECCOMP_RET_ALLOW:
+	Results in the system call being executed.
+
+If multiple filters exist, the return value for the evaluation of a
+given system call will always use the highest precedent value.
+
+Precedence is only determined using the SECCOMP_RET_ACTION mask.  When
+multiple filters return values of the same precedence, only the
+SECCOMP_RET_DATA from the most recently installed filter will be
+returned.
+
+
+Example
+-------
+
+The samples/seccomp/ directory contains both an x86-specific example
+and a more generic example of a higher level macro interface for BPF
+program generation.
+
+Adding architecture support
+---------------------------
+
+See arch/Kconfig for the authoritative requirements.  In general, if an
+architecture supports both ptrace_event and seccomp, it will be able to
+support seccomp filter with minor fixup: SIGSYS support and seccomp return
+value checking.  Then it must just add CONFIG_HAVE_ARCH_SECCOMP_FILTER
+to its arch-specific Kconfig.
diff --git a/samples/Makefile b/samples/Makefile
index 6280817..f29b19c 100644
--- a/samples/Makefile
+++ b/samples/Makefile
@@ -1,4 +1,4 @@
 # Makefile for Linux samples code
 
 obj-$(CONFIG_SAMPLES)	+= kobject/ kprobes/ tracepoints/ trace_events/ \
-			   hw_breakpoint/ kfifo/ kdb/ hidraw/
+			   hw_breakpoint/ kfifo/ kdb/ hidraw/ seccomp/
diff --git a/samples/seccomp/Makefile b/samples/seccomp/Makefile
new file mode 100644
index 0000000..e8fe0f5
--- /dev/null
+++ b/samples/seccomp/Makefile
@@ -0,0 +1,38 @@
+# kbuild trick to avoid linker error. Can be omitted if a module is built.
+obj- := dummy.o
+
+hostprogs-$(CONFIG_SECCOMP) := bpf-fancy dropper
+bpf-fancy-objs := bpf-fancy.o bpf-helper.o
+
+HOSTCFLAGS_bpf-fancy.o += -I$(objtree)/usr/include
+HOSTCFLAGS_bpf-fancy.o += -idirafter $(objtree)/include
+HOSTCFLAGS_bpf-helper.o += -I$(objtree)/usr/include
+HOSTCFLAGS_bpf-helper.o += -idirafter $(objtree)/include
+
+HOSTCFLAGS_dropper.o += -I$(objtree)/usr/include
+HOSTCFLAGS_dropper.o += -idirafter $(objtree)/include
+dropper-objs := dropper.o
+
+# bpf-direct.c is x86-only.
+ifeq ($(SRCARCH),x86)
+# List of programs to build
+hostprogs-$(CONFIG_SECCOMP) += bpf-direct
+bpf-direct-objs := bpf-direct.o
+endif
+
+HOSTCFLAGS_bpf-direct.o += -I$(objtree)/usr/include
+HOSTCFLAGS_bpf-direct.o += -idirafter $(objtree)/include
+
+# Try to match the kernel target.
+ifeq ($(CONFIG_64BIT),)
+HOSTCFLAGS_bpf-direct.o += -m32
+HOSTCFLAGS_dropper.o += -m32
+HOSTCFLAGS_bpf-helper.o += -m32
+HOSTCFLAGS_bpf-fancy.o += -m32
+HOSTLOADLIBES_bpf-direct += -m32
+HOSTLOADLIBES_bpf-fancy += -m32
+HOSTLOADLIBES_dropper += -m32
+endif
+
+# Tell kbuild to always build the programs
+always := $(hostprogs-y)
diff --git a/samples/seccomp/bpf-direct.c b/samples/seccomp/bpf-direct.c
new file mode 100644
index 0000000..f1567ad
--- /dev/null
+++ b/samples/seccomp/bpf-direct.c
@@ -0,0 +1,176 @@
+/*
+ * Seccomp filter example for x86 (32-bit and 64-bit) with BPF macros
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
+ * Author: Will Drewry <wad@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_SET_SECCOMP, 2, ...).
+ */
+#define __USE_GNU 1
+#define _GNU_SOURCE 1
+
+#include <linux/types.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <linux/unistd.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stddef.h>
+#include <string.h>
+#include <sys/prctl.h>
+#include <unistd.h>
+
+#define syscall_arg(_n) (offsetof(struct seccomp_data, args[_n]))
+#define syscall_nr (offsetof(struct seccomp_data, nr))
+
+#if defined(__i386__)
+#define REG_RESULT	REG_EAX
+#define REG_SYSCALL	REG_EAX
+#define REG_ARG0	REG_EBX
+#define REG_ARG1	REG_ECX
+#define REG_ARG2	REG_EDX
+#define REG_ARG3	REG_ESI
+#define REG_ARG4	REG_EDI
+#define REG_ARG5	REG_EBP
+#elif defined(__x86_64__)
+#define REG_RESULT	REG_RAX
+#define REG_SYSCALL	REG_RAX
+#define REG_ARG0	REG_RDI
+#define REG_ARG1	REG_RSI
+#define REG_ARG2	REG_RDX
+#define REG_ARG3	REG_R10
+#define REG_ARG4	REG_R8
+#define REG_ARG5	REG_R9
+#else
+#error Unsupported platform
+#endif
+
+#ifndef PR_SET_NO_NEW_PRIVS
+#define PR_SET_NO_NEW_PRIVS 36
+#endif
+
+#ifndef SYS_SECCOMP
+#define SYS_SECCOMP 1
+#endif
+
+static void emulator(int nr, siginfo_t *info, void *void_context)
+{
+	ucontext_t *ctx = (ucontext_t *)(void_context);
+	int syscall;
+	char *buf;
+	ssize_t bytes;
+	size_t len;
+	if (info->si_code != SYS_SECCOMP)
+		return;
+	if (!ctx)
+		return;
+	syscall = ctx->uc_mcontext.gregs[REG_SYSCALL];
+	buf = (char *) ctx->uc_mcontext.gregs[REG_ARG1];
+	len = (size_t) ctx->uc_mcontext.gregs[REG_ARG2];
+
+	if (syscall != __NR_write)
+		return;
+	if (ctx->uc_mcontext.gregs[REG_ARG0] != STDERR_FILENO)
+		return;
+	/* Redirect stderr messages to stdout. Doesn't handle EINTR, etc */
+	ctx->uc_mcontext.gregs[REG_RESULT] = -1;
+	if (write(STDOUT_FILENO, "[ERR] ", 6) > 0) {
+		bytes = write(STDOUT_FILENO, buf, len);
+		ctx->uc_mcontext.gregs[REG_RESULT] = bytes;
+	}
+	return;
+}
+
+static int install_emulator(void)
+{
+	struct sigaction act;
+	sigset_t mask;
+	memset(&act, 0, sizeof(act));
+	sigemptyset(&mask);
+	sigaddset(&mask, SIGSYS);
+
+	act.sa_sigaction = &emulator;
+	act.sa_flags = SA_SIGINFO;
+	if (sigaction(SIGSYS, &act, NULL) < 0) {
+		perror("sigaction");
+		return -1;
+	}
+	if (sigprocmask(SIG_UNBLOCK, &mask, NULL)) {
+		perror("sigprocmask");
+		return -1;
+	}
+	return 0;
+}
+
+static int install_filter(void)
+{
+	struct sock_filter filter[] = {
+		/* Grab the system call number */
+		BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_nr),
+		/* Jump table for the allowed syscalls */
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_rt_sigreturn, 0, 1),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+#ifdef __NR_sigreturn
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_sigreturn, 0, 1),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+#endif
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit_group, 0, 1),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit, 0, 1),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_read, 1, 0),
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_write, 3, 2),
+
+		/* Check that read is only using stdin. */
+		BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)),
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDIN_FILENO, 4, 0),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),
+
+		/* Check that write is only using stdout */
+		BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)),
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDOUT_FILENO, 1, 0),
+		/* Trap attempts to write to stderr */
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDERR_FILENO, 1, 2),
+
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRAP),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),
+	};
+	struct sock_fprog prog = {
+		.len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
+		.filter = filter,
+	};
+
+	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+		perror("prctl(NO_NEW_PRIVS)");
+		return 1;
+	}
+
+
+	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
+		perror("prctl");
+		return 1;
+	}
+	return 0;
+}
+
+#define payload(_c) (_c), sizeof((_c))
+int main(int argc, char **argv)
+{
+	char buf[4096];
+	ssize_t bytes = 0;
+	if (install_emulator())
+		return 1;
+	if (install_filter())
+		return 1;
+	syscall(__NR_write, STDOUT_FILENO,
+		payload("OHAI! WHAT IS YOUR NAME? "));
+	bytes = syscall(__NR_read, STDIN_FILENO, buf, sizeof(buf));
+	syscall(__NR_write, STDOUT_FILENO, payload("HELLO, "));
+	syscall(__NR_write, STDOUT_FILENO, buf, bytes);
+	syscall(__NR_write, STDERR_FILENO,
+		payload("Error message going to STDERR\n"));
+	return 0;
+}
diff --git a/samples/seccomp/bpf-fancy.c b/samples/seccomp/bpf-fancy.c
new file mode 100644
index 0000000..bf1f6b5
--- /dev/null
+++ b/samples/seccomp/bpf-fancy.c
@@ -0,0 +1,102 @@
+/*
+ * Seccomp BPF example using a macro-based generator.
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
+ * Author: Will Drewry <wad@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_ATTACH_SECCOMP_FILTER).
+ */
+
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <linux/unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/prctl.h>
+#include <unistd.h>
+
+#include "bpf-helper.h"
+
+#ifndef PR_SET_NO_NEW_PRIVS
+#define PR_SET_NO_NEW_PRIVS 36
+#endif
+
+int main(int argc, char **argv)
+{
+	struct bpf_labels l;
+	static const char msg1[] = "Please type something: ";
+	static const char msg2[] = "You typed: ";
+	char buf[256];
+	struct sock_filter filter[] = {
+		/* TODO: LOAD_SYSCALL_NR(arch) and enforce an arch */
+		LOAD_SYSCALL_NR,
+		SYSCALL(__NR_exit, ALLOW),
+		SYSCALL(__NR_exit_group, ALLOW),
+		SYSCALL(__NR_write, JUMP(&l, write_fd)),
+		SYSCALL(__NR_read, JUMP(&l, read)),
+		DENY,  /* Don't passthrough into a label */
+
+		LABEL(&l, read),
+		ARG(0),
+		JNE(STDIN_FILENO, DENY),
+		ARG(1),
+		JNE((unsigned long)buf, DENY),
+		ARG(2),
+		JGE(sizeof(buf), DENY),
+		ALLOW,
+
+		LABEL(&l, write_fd),
+		ARG(0),
+		JEQ(STDOUT_FILENO, JUMP(&l, write_buf)),
+		JEQ(STDERR_FILENO, JUMP(&l, write_buf)),
+		DENY,
+
+		LABEL(&l, write_buf),
+		ARG(1),
+		JEQ((unsigned long)msg1, JUMP(&l, msg1_len)),
+		JEQ((unsigned long)msg2, JUMP(&l, msg2_len)),
+		JEQ((unsigned long)buf, JUMP(&l, buf_len)),
+		DENY,
+
+		LABEL(&l, msg1_len),
+		ARG(2),
+		JLT(sizeof(msg1), ALLOW),
+		DENY,
+
+		LABEL(&l, msg2_len),
+		ARG(2),
+		JLT(sizeof(msg2), ALLOW),
+		DENY,
+
+		LABEL(&l, buf_len),
+		ARG(2),
+		JLT(sizeof(buf), ALLOW),
+		DENY,
+	};
+	struct sock_fprog prog = {
+		.filter = filter,
+		.len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
+	};
+	ssize_t bytes;
+	bpf_resolve_jumps(&l, filter, sizeof(filter)/sizeof(*filter));
+
+	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+		perror("prctl(NO_NEW_PRIVS)");
+		return 1;
+	}
+
+	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
+		perror("prctl(SECCOMP)");
+		return 1;
+	}
+	syscall(__NR_write, STDOUT_FILENO, msg1, strlen(msg1));
+	bytes = syscall(__NR_read, STDIN_FILENO, buf, sizeof(buf)-1);
+	bytes = (bytes > 0 ? bytes : 0);
+	syscall(__NR_write, STDERR_FILENO, msg2, strlen(msg2));
+	syscall(__NR_write, STDERR_FILENO, buf, bytes);
+	/* Now get killed */
+	syscall(__NR_write, STDERR_FILENO, msg2, strlen(msg2)+2);
+	return 0;
+}
diff --git a/samples/seccomp/bpf-helper.c b/samples/seccomp/bpf-helper.c
new file mode 100644
index 0000000..579cfe3
--- /dev/null
+++ b/samples/seccomp/bpf-helper.c
@@ -0,0 +1,89 @@
+/*
+ * Seccomp BPF helper functions
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
+ * Author: Will Drewry <wad@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_ATTACH_SECCOMP_FILTER).
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "bpf-helper.h"
+
+int bpf_resolve_jumps(struct bpf_labels *labels,
+		      struct sock_filter *filter, size_t count)
+{
+	struct sock_filter *begin = filter;
+	__u8 insn = count - 1;
+
+	if (count < 1)
+		return -1;
+	/*
+	* Walk it once, backwards, to build the label table and do fixups.
+	* Since backward jumps are disallowed by BPF, this is easy.
+	*/
+	filter += insn;
+	for (; filter >= begin; --insn, --filter) {
+		if (filter->code != (BPF_JMP+BPF_JA))
+			continue;
+		switch ((filter->jt<<8)|filter->jf) {
+		case (JUMP_JT<<8)|JUMP_JF:
+			if (labels->labels[filter->k].location == 0xffffffff) {
+				fprintf(stderr, "Unresolved label: '%s'\n",
+					labels->labels[filter->k].label);
+				return 1;
+			}
+			filter->k = labels->labels[filter->k].location -
+				    (insn + 1);
+			filter->jt = 0;
+			filter->jf = 0;
+			continue;
+		case (LABEL_JT<<8)|LABEL_JF:
+			if (labels->labels[filter->k].location != 0xffffffff) {
+				fprintf(stderr, "Duplicate label use: '%s'\n",
+					labels->labels[filter->k].label);
+				return 1;
+			}
+			labels->labels[filter->k].location = insn;
+			filter->k = 0; /* fall through */
+			filter->jt = 0;
+			filter->jf = 0;
+			continue;
+		}
+	}
+	return 0;
+}
+
+/* Simple lookup table for labels. */
+__u32 seccomp_bpf_label(struct bpf_labels *labels, const char *label)
+{
+	struct __bpf_label *begin = labels->labels, *end;
+	int id;
+	if (labels->count == 0) {
+		begin->label = label;
+		begin->location = 0xffffffff;
+		labels->count++;
+		return 0;
+	}
+	end = begin + labels->count;
+	for (id = 0; begin < end; ++begin, ++id) {
+		if (!strcmp(label, begin->label))
+			return id;
+	}
+	begin->label = label;
+	begin->location = 0xffffffff;
+	labels->count++;
+	return id;
+}
+
+void seccomp_bpf_print(struct sock_filter *filter, size_t count)
+{
+	struct sock_filter *end = filter + count;
+	for ( ; filter < end; ++filter)
+		printf("{ code=%u,jt=%u,jf=%u,k=%u },\n",
+			filter->code, filter->jt, filter->jf, filter->k);
+}
diff --git a/samples/seccomp/bpf-helper.h b/samples/seccomp/bpf-helper.h
new file mode 100644
index 0000000..643279d
--- /dev/null
+++ b/samples/seccomp/bpf-helper.h
@@ -0,0 +1,238 @@
+/*
+ * Example wrapper around BPF macros.
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
+ * Author: Will Drewry <wad@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_SET_SECCOMP, 2, ...).
+ *
+ * No guarantees are provided with respect to the correctness
+ * or functionality of this code.
+ */
+#ifndef __BPF_HELPER_H__
+#define __BPF_HELPER_H__
+
+#include <asm/bitsperlong.h>	/* for __BITS_PER_LONG */
+#include <endian.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h>	/* for seccomp_data */
+#include <linux/types.h>
+#include <linux/unistd.h>
+#include <stddef.h>
+
+#define BPF_LABELS_MAX 256
+struct bpf_labels {
+	int count;
+	struct __bpf_label {
+		const char *label;
+		__u32 location;
+	} labels[BPF_LABELS_MAX];
+};
+
+int bpf_resolve_jumps(struct bpf_labels *labels,
+		      struct sock_filter *filter, size_t count);
+__u32 seccomp_bpf_label(struct bpf_labels *labels, const char *label);
+void seccomp_bpf_print(struct sock_filter *filter, size_t count);
+
+#define JUMP_JT 0xff
+#define JUMP_JF 0xff
+#define LABEL_JT 0xfe
+#define LABEL_JF 0xfe
+
+#define ALLOW \
+	BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW)
+#define DENY \
+	BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL)
+#define JUMP(labels, label) \
+	BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \
+		 JUMP_JT, JUMP_JF)
+#define LABEL(labels, label) \
+	BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \
+		 LABEL_JT, LABEL_JF)
+#define SYSCALL(nr, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (nr), 0, 1), \
+	jt
+
+/* Lame, but just an example */
+#define FIND_LABEL(labels, label) seccomp_bpf_label((labels), #label)
+
+#define EXPAND(...) __VA_ARGS__
+/* Map all width-sensitive operations */
+#if __BITS_PER_LONG == 32
+
+#define JEQ(x, jt) JEQ32(x, EXPAND(jt))
+#define JNE(x, jt) JNE32(x, EXPAND(jt))
+#define JGT(x, jt) JGT32(x, EXPAND(jt))
+#define JLT(x, jt) JLT32(x, EXPAND(jt))
+#define JGE(x, jt) JGE32(x, EXPAND(jt))
+#define JLE(x, jt) JLE32(x, EXPAND(jt))
+#define JA(x, jt) JA32(x, EXPAND(jt))
+#define ARG(i) ARG_32(i)
+#define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)])
+
+#elif __BITS_PER_LONG == 64
+
+/* Ensure that we load the logically correct offset. */
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+#define ENDIAN(_lo, _hi) _lo, _hi
+#define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)])
+#define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32)
+#elif __BYTE_ORDER == __BIG_ENDIAN
+#define ENDIAN(_lo, _hi) _hi, _lo
+#define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32)
+#define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)])
+#else
+#error "Unknown endianness"
+#endif
+
+union arg64 {
+	struct {
+		__u32 ENDIAN(lo32, hi32);
+	};
+	__u64 u64;
+};
+
+#define JEQ(x, jt) \
+	JEQ64(((union arg64){.u64 = (x)}).lo32, \
+	      ((union arg64){.u64 = (x)}).hi32, \
+	      EXPAND(jt))
+#define JGT(x, jt) \
+	JGT64(((union arg64){.u64 = (x)}).lo32, \
+	      ((union arg64){.u64 = (x)}).hi32, \
+	      EXPAND(jt))
+#define JGE(x, jt) \
+	JGE64(((union arg64){.u64 = (x)}).lo32, \
+	      ((union arg64){.u64 = (x)}).hi32, \
+	      EXPAND(jt))
+#define JNE(x, jt) \
+	JNE64(((union arg64){.u64 = (x)}).lo32, \
+	      ((union arg64){.u64 = (x)}).hi32, \
+	      EXPAND(jt))
+#define JLT(x, jt) \
+	JLT64(((union arg64){.u64 = (x)}).lo32, \
+	      ((union arg64){.u64 = (x)}).hi32, \
+	      EXPAND(jt))
+#define JLE(x, jt) \
+	JLE64(((union arg64){.u64 = (x)}).lo32, \
+	      ((union arg64){.u64 = (x)}).hi32, \
+	      EXPAND(jt))
+
+#define JA(x, jt) \
+	JA64(((union arg64){.u64 = (x)}).lo32, \
+	       ((union arg64){.u64 = (x)}).hi32, \
+	       EXPAND(jt))
+#define ARG(i) ARG_64(i)
+
+#else
+#error __BITS_PER_LONG value unusable.
+#endif
+
+/* Loads the arg into A */
+#define ARG_32(idx) \
+	BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx))
+
+/* Loads hi into A and lo in X */
+#define ARG_64(idx) \
+	BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx)), \
+	BPF_STMT(BPF_ST, 0), /* lo -> M[0] */ \
+	BPF_STMT(BPF_LD+BPF_W+BPF_ABS, HI_ARG(idx)), \
+	BPF_STMT(BPF_ST, 1) /* hi -> M[1] */
+
+#define JEQ32(value, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 0, 1), \
+	jt
+
+#define JNE32(value, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 1, 0), \
+	jt
+
+/* Checks the lo, then swaps to check the hi. A=lo,X=hi */
+#define JEQ64(lo, hi, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 0, 2), \
+	BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+	jt, \
+	BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JNE64(lo, hi, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 5, 0), \
+	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 2, 0), \
+	BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+	jt, \
+	BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JA32(value, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (value), 0, 1), \
+	jt
+
+#define JA64(lo, hi, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (hi), 3, 0), \
+	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+	BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (lo), 0, 2), \
+	BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+	jt, \
+	BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JGE32(value, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 0, 1), \
+	jt
+
+#define JLT32(value, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 1, 0), \
+	jt
+
+/* Shortcut checking if hi > arg.hi. */
+#define JGE64(lo, hi, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (lo), 0, 2), \
+	BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+	jt, \
+	BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JLT64(lo, hi, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \
+	BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+	jt, \
+	BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JGT32(value, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 0, 1), \
+	jt
+
+#define JLE32(value, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 1, 0), \
+	jt
+
+/* Check hi > args.hi first, then do the GE checking */
+#define JGT64(lo, hi, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 0, 2), \
+	BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+	jt, \
+	BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JLE64(lo, hi, jt) \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 6, 0), \
+	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 3), \
+	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \
+	BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+	jt, \
+	BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define LOAD_SYSCALL_NR \
+	BPF_STMT(BPF_LD+BPF_W+BPF_ABS, \
+		 offsetof(struct seccomp_data, nr))
+
+#endif  /* __BPF_HELPER_H__ */
diff --git a/samples/seccomp/dropper.c b/samples/seccomp/dropper.c
new file mode 100644
index 0000000..c69c347
--- /dev/null
+++ b/samples/seccomp/dropper.c
@@ -0,0 +1,68 @@
+/*
+ * Naive system call dropper built on seccomp_filter.
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
+ * Author: Will Drewry <wad@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_SET_SECCOMP, 2, ...).
+ *
+ * When run, returns the specified errno for the specified
+ * system call number against the given architecture.
+ *
+ * Run this one as root as PR_SET_NO_NEW_PRIVS is not called.
+ */
+
+#include <errno.h>
+#include <linux/audit.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <linux/unistd.h>
+#include <stdio.h>
+#include <stddef.h>
+#include <stdlib.h>
+#include <sys/prctl.h>
+#include <unistd.h>
+
+static int install_filter(int nr, int arch, int error)
+{
+	struct sock_filter filter[] = {
+		BPF_STMT(BPF_LD+BPF_W+BPF_ABS,
+			 (offsetof(struct seccomp_data, arch))),
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, arch, 0, 3),
+		BPF_STMT(BPF_LD+BPF_W+BPF_ABS,
+			 (offsetof(struct seccomp_data, nr))),
+		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, nr, 0, 1),
+		BPF_STMT(BPF_RET+BPF_K,
+			 SECCOMP_RET_ERRNO|(error & SECCOMP_RET_DATA)),
+		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+	};
+	struct sock_fprog prog = {
+		.len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
+		.filter = filter,
+	};
+	if (prctl(PR_SET_SECCOMP, 2, &prog)) {
+		perror("prctl");
+		return 1;
+	}
+	return 0;
+}
+
+int main(int argc, char **argv)
+{
+	if (argc < 5) {
+		fprintf(stderr, "Usage:\n"
+			"dropper <syscall_nr> <arch> <errno> <prog> [<args>]\n"
+			"Hint:	AUDIT_ARCH_I386: 0x%X\n"
+			"	AUDIT_ARCH_X86_64: 0x%X\n"
+			"\n", AUDIT_ARCH_I386, AUDIT_ARCH_X86_64);
+		return 1;
+	}
+	if (install_filter(strtol(argv[1], NULL, 0), strtol(argv[2], NULL, 0),
+			   strtol(argv[3], NULL, 0)))
+		return 1;
+	execv(argv[4], &argv[4]);
+	printf("Failed to execv\n");
+	return 255;
+}
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v14 13/13] seccomp: remove duplicated failure logging
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (10 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 12/13] Documentation: prctl/seccomp_filter Will Drewry
@ 2012-03-12 21:28 ` Will Drewry
  2012-03-13  3:40 ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Indan Zupancic
  2012-03-13 10:04 ` Indan Zupancic
  13 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-12 21:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook

From: Kees Cook <keescook@chromium.org>

This consolidates the seccomp filter error logging path and adds more
details to the audit log.

Reviewed-by: Will Drewry <wad@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/linux/audit.h |    8 ++++----
 kernel/auditsc.c      |    9 +++++++--
 kernel/seccomp.c      |   15 +--------------
 3 files changed, 12 insertions(+), 20 deletions(-)

diff --git a/include/linux/audit.h b/include/linux/audit.h
index 9ff7a2c..5aac29b 100644
--- a/include/linux/audit.h
+++ b/include/linux/audit.h
@@ -463,7 +463,7 @@ extern void audit_putname(const char *name);
 extern void __audit_inode(const char *name, const struct dentry *dentry);
 extern void __audit_inode_child(const struct dentry *dentry,
 				const struct inode *parent);
-extern void __audit_seccomp(unsigned long syscall);
+extern void __audit_seccomp(unsigned long syscall, long signr);
 extern void __audit_ptrace(struct task_struct *t);
 
 static inline int audit_dummy_context(void)
@@ -508,10 +508,10 @@ static inline void audit_inode_child(const struct dentry *dentry,
 }
 void audit_core_dumps(long signr);
 
-static inline void audit_seccomp(unsigned long syscall)
+static inline void audit_seccomp(unsigned long syscall, long signr)
 {
 	if (unlikely(!audit_dummy_context()))
-		__audit_seccomp(syscall);
+		__audit_seccomp(syscall, signr);
 }
 
 static inline void audit_ptrace(struct task_struct *t)
@@ -634,7 +634,7 @@ extern int audit_signals;
 #define audit_inode(n,d) do { (void)(d); } while (0)
 #define audit_inode_child(i,p) do { ; } while (0)
 #define audit_core_dumps(i) do { ; } while (0)
-#define audit_seccomp(i) do { ; } while (0)
+#define audit_seccomp(i, s) do { ; } while (0)
 #define auditsc_get_stamp(c,t,s) (0)
 #define audit_get_loginuid(t) (-1)
 #define audit_get_sessionid(t) (-1)
diff --git a/kernel/auditsc.c b/kernel/auditsc.c
index af1de0f..74652fe 100644
--- a/kernel/auditsc.c
+++ b/kernel/auditsc.c
@@ -67,6 +67,7 @@
 #include <linux/syscalls.h>
 #include <linux/capability.h>
 #include <linux/fs_struct.h>
+#include <linux/compat.h>
 
 #include "audit.h"
 
@@ -2710,13 +2711,17 @@ void audit_core_dumps(long signr)
 	audit_log_end(ab);
 }
 
-void __audit_seccomp(unsigned long syscall)
+void __audit_seccomp(unsigned long syscall, long signr)
 {
 	struct audit_buffer *ab;
 
 	ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_ANOM_ABEND);
-	audit_log_abend(ab, "seccomp", SIGKILL);
+	audit_log_abend(ab, "seccomp", signr);
 	audit_log_format(ab, " syscall=%ld", syscall);
+#ifdef CONFIG_COMPAT
+	audit_log_format(ab, " compat=%d", is_compat_task());
+#endif
+	audit_log_format(ab, " ip=0x%lx", KSTK_EIP(current));
 	audit_log_end(ab);
 }
 
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index ddacc68..8a5b86c 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -60,18 +60,6 @@ struct seccomp_filter {
 /* Limit any path through the tree to 1 megabytes worth of instructions. */
 #define MAX_INSNS_PER_PATH ((1 << 20) / sizeof(struct sock_filter))
 
-static void seccomp_filter_log_failure(int syscall)
-{
-	int compat = 0;
-#ifdef CONFIG_COMPAT
-	compat = is_compat_task();
-#endif
-	pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
-		current->comm, task_pid_nr(current),
-		(compat ? "compat " : ""),
-		syscall, KSTK_EIP(current));
-}
-
 /**
  * get_u32 - returns a u32 offset into data
  * @data: a unsigned 64 bit value
@@ -413,7 +401,6 @@ int __secure_computing_int(int this_syscall)
 		default:
 			break;
 		}
-		seccomp_filter_log_failure(this_syscall);
 		exit_code = SIGSYS;
 		break;
 	}
@@ -425,7 +412,7 @@ int __secure_computing_int(int this_syscall)
 #ifdef SECCOMP_DEBUG
 	dump_stack();
 #endif
-	audit_seccomp(this_syscall);
+	audit_seccomp(this_syscall, exit_code);
 	do_exit(exit_code);
 	return -1;	/* never reached */
 }
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 06/13] seccomp: add system call filtering using BPF
  2012-03-12 21:28 ` [PATCH v14 06/13] seccomp: add system call filtering using BPF Will Drewry
@ 2012-03-13  3:33   ` Indan Zupancic
  2012-03-13 15:57     ` [kernel-hardening] " Will Drewry
  0 siblings, 1 reply; 40+ messages in thread
From: Indan Zupancic @ 2012-03-13  3:33 UTC (permalink / raw)
  To: Will Drewry
  Cc: linux-kernel, linux-arch, linux-doc, kernel-hardening, netdev,
	x86, arnd, davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr,
	tglx, luto, eparis, serge.hallyn, djm, scarybeasts, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Hello,

Could you send a 00/13 patch with a diffstat summary of all patches
combined and also some code size diffs? That would be very nice.

I assume the BPF performance for networking doesn't change with this
version.

> +/* Limit any path through the tree to 1 megabytes worth of instructions. */
> +#define MAX_INSNS_PER_PATH ((1 << 20) / sizeof(struct sock_filter))

This still makes me feel uneasy. It's also easier to increase the limit
in the future if necessary than to reduce it. So I would go for a lower
initial limit, like 256kB or 512kB. 1 megabyte is 32 max-size filters,
which seems like a lot. And in practice it's more likely that the more
filters there are, the smaller they usually are.
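
(For reference, the arithmetic behind that "32": sizeof(struct sock_filter)
is 8 bytes, so the limit is (1 << 20) / 8 = 131072 instructions, and
131072 / BPF_MAXINSNS (4096) = 32 maximum-size filters.)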

> +
> +static void seccomp_filter_log_failure(int syscall)
> +{
> +	int compat = 0;
> +#ifdef CONFIG_COMPAT
> +	compat = is_compat_task();
> +#endif
> +	pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
> +		current->comm, task_pid_nr(current),
> +		(compat ? "compat " : ""),
> +		syscall, KSTK_EIP(current));
> +}

If you keep this, at least make it rate limited?
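
Something like this, roughly (assuming pr_info_ratelimited() is available
in this context; otherwise printk_ratelimited(KERN_INFO ...) does the same):

	pr_info_ratelimited("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
			    current->comm, task_pid_nr(current),
			    (compat ? "compat " : ""),
			    syscall, KSTK_EIP(current));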

> +/**
> + * bpf_load: checks and returns a pointer to the requested offset
> + * @off: offset into struct seccomp_data to load from
> + *
> + * Returns the requested 32-bits of data.
> + * seccomp_chk_filter() should assure that @off is 32-bit aligned
> + * and not out of bounds.  Failure to do so is a BUG.
> + */
> +u32 seccomp_bpf_load(int off)
> +{
> +	struct pt_regs *regs = task_pt_regs(current);
> +
> +	if (off == BPF_DATA(nr))
> +		return syscall_get_nr(current, regs);
> +	if (off == BPF_DATA(arch))
> +		return syscall_get_arch(current, regs);
> +	if (off == BPF_DATA(instruction_pointer))
> +		return get_u32(KSTK_EIP(current), 0);
> +	if (off == BPF_DATA(instruction_pointer) + sizeof(u32))
> +		return get_u32(KSTK_EIP(current), 1);

I'd put the IP checks at the end, as it's the least likely case.
But maybe gcc is smart enough to detect a pattern.

> +	if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
> +		unsigned long value;
> +		int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
> +		int index = !!(off % sizeof(u64));
> +		syscall_get_arguments(current, regs, arg, 1, &value);
> +		return get_u32(value, index);
> +	}
> +	/* seccomp_chk_filter should make this impossible. */
> +	BUG();
> +}
> +
> +/**
> + *	seccomp_chk_filter - verify seccomp filter code
> + *	@filter: filter to verify
> + *	@flen: length of filter
> + *
> + * Takes a previously checked filter (by sk_chk_filter) and
> + * redirects all filter code that loads struct sk_buff data
> + * and related data through seccomp_bpf_load.  It also
> + * enforces length and alignment checking of those loads.
> + *
> + * Returns 0 if the rule set is legal or -EINVAL if not.
> + */
> +static int seccomp_chk_filter(struct sock_filter *filter, unsigned int flen)
> +{
> +	int pc;
> +	for (pc = 0; pc < flen; pc++) {
> +		struct sock_filter *ftest = &filter[pc];
> +		u16 code = ftest->code;
> +		u32 k = ftest->k;
> +		if (code <= BPF_S_ALU_NEG)
> +			continue;
> +		/* Do not allow ancillary codes (incl. BPF_S_ANC_SECCOMP_*) */
> +		if (code >= BPF_S_ANC_PROTOCOL)
> +			return -EINVAL;
> +		if (code >= BPF_S_LDX_IMM)
> +			continue;

I know I did the same in my code, but to avoid silly bugs when someone changes
the enum in filter.h without being aware of SECCOMP, it's much safer to check
all known good instructions explicitly in a big switch. And the compiler is
hopefully smart enough to generate pretty much the same code as now. Then if
new cases are added or existing ones swapped around, everything will still
work and we don't end up with a silent security hole.

The reason I didn't do that was laziness and because it looks ugly. But it's
more robust and I don't see a better way to get the same guarantees.
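
Roughly this shape, with nearly all of the case labels elided here for
brevity (the real thing would list every known-safe BPF_S_* code
explicitly):

	for (pc = 0; pc < flen; pc++) {
		struct sock_filter *ftest = &filter[pc];
		switch (ftest->code) {
		case BPF_S_RET_K:
		case BPF_S_RET_A:
		case BPF_S_ALU_ADD_K:
		case BPF_S_ALU_ADD_X:
		/* ... every other allowed ALU/LD/ST/JMP/MISC code ... */
		case BPF_S_JMP_JEQ_K:
		case BPF_S_JMP_JEQ_X:
			continue;
		case BPF_S_LD_W_ABS:
			/* rewrite to BPF_S_ANC_SECCOMP_LD_W as in the patch */
			continue;
		default:
			return -EINVAL;	/* anything unlisted is rejected */
		}
	}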

> +		switch (code) {
> +		case BPF_S_LD_W_ABS:
> +			ftest->code = BPF_S_ANC_SECCOMP_LD_W;
> +			/* 32-bit aligned and not out of bounds. */
> +			if (k >= sizeof(struct seccomp_data) || k & 3)
> +				return -EINVAL;
> +			continue;
> +		case BPF_S_LD_W_LEN:
> +			ftest->code = BPF_S_LD_IMM;
> +			ftest->k = sizeof(struct seccomp_data);
> +			continue;
> +		case BPF_S_LD_IMM:
> +			continue;
> +		case BPF_S_LDX_W_LEN:
> +			ftest->code = BPF_S_LDX_IMM;
> +			ftest->k = sizeof(struct seccomp_data);
> +			continue;
> +		case BPF_S_LD_W_IND:
> +			/* XXX: would runtime seccomp_bpf_load checking k */

Why? I checked your example code and haven't seen one case where you're
not using absolute loads. That said, I can think of situations where it's
nice to be able to store the syscall and argument numbers in MEM and use
an indirect load instead of changing the instructions themselves. But I
don't think it's worth the trouble supporting this, as it's only a minor
improvement while it adds more messy code just for this.

> +/**
> + * seccomp_attach_filter: Attaches a seccomp filter to current.
> + * @fprog: BPF program to install
> + *
> + * Returns 0 on success or an errno on failure.
> + */
> +static long seccomp_attach_filter(struct sock_fprog *fprog)
> +{
> +	struct seccomp_filter *filter;
> +	unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
> +	unsigned long total_insns = 0;
> +	long ret;
> +
> +	if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
> +		return -EINVAL;
> +
> +	for (filter = current->seccomp.filter; filter; filter = filter->prev)
> +		total_insns += filter->len;

More accurate and better in the case of one-instruction filters:

		total_insns += filter->len + 4;

This is to penalize tiny filters; otherwise a filter with one instruction will
use a lot more memory than just 8 bytes, so your effective max memory usage
would be much higher. It also depends on kzalloc's minimum allocation size
and rounding, so all in all 4 seems like a good number.

> +	if (total_insns > MAX_INSNS_PER_PATH - fprog->len)
> +		return -ENOSPC;

It seems more readable to me to initialize total_insns to fprog->len
instead of zero, and then drop the subtraction. Overflows aren't
possible either way.

"No space left on device" isn't suitable either, I think. What's wrong
with just using ENOMEM?

> +/**
> + * seccomp_attach_user_filter - attaches a user-supplied sock_fprog
> + * @user_filter: pointer to the user data containing a sock_fprog.
> + *
> + * Returns 0 on success and non-zero otherwise.
> + */
> +long seccomp_attach_user_filter(char __user *user_filter)
> +{
> +	struct sock_fprog fprog;
> +	long ret = -EFAULT;
> +
> +	if (!user_filter)
> +		goto out;

This case should be handled by copy_from_user() already, so adding this
check doesn't add anything at all. Zero is only one out of many invalid
pointers, why check for 0 but not for e.g. 1, 2, 3, 4, 5, 6, 7, 8 or 9?

Reviewed-by: Indan Zupancic <indan@nul.nu>

Greetings,

Indan



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (11 preceding siblings ...)
  2012-03-12 21:28 ` [PATCH v14 13/13] seccomp: remove duplicated failure logging Will Drewry
@ 2012-03-13  3:40 ` Indan Zupancic
  2012-03-13 15:40   ` Will Drewry
  2012-03-13 10:04 ` Indan Zupancic
  13 siblings, 1 reply; 40+ messages in thread
From: Indan Zupancic @ 2012-03-13  3:40 UTC (permalink / raw)
  To: Will Drewry
  Cc: linux-kernel, linux-arch, linux-doc, kernel-hardening, netdev,
	x86, arnd, davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr,
	tglx, luto, eparis, serge.hallyn, djm, scarybeasts, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Hello,

On Mon, March 12, 2012 22:28, Will Drewry wrote:
> Introduces a new BPF ancillary instruction that all LD calls will be
> mapped through when skb_run_filter() is being used for seccomp BPF.  The
> rewriting will be done using a secondary chk_filter function that is run
> after skb_chk_filter.
>
> The code change is guarded by CONFIG_SECCOMP_FILTER which is added,
> along with the seccomp_bpf_load() function later in this series.
>
> This is based on http://lkml.org/lkml/2012/3/2/141
>
> v14: First cut using a single additional instruction
> ... v13: made bpf functions generic.
>
>
> Suggested-by: Indan Zupancic <indan@nul.nu>
> Signed-off-by: Will Drewry <wad@chromium.org>
> ---
>  include/linux/filter.h |    1 +
>  net/core/filter.c      |    5 +++++
>  2 files changed, 6 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 8eeb205..aaa2e80 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -228,6 +228,7 @@ enum {
>  	BPF_S_ANC_HATYPE,
>  	BPF_S_ANC_RXHASH,
>  	BPF_S_ANC_CPU,
> +	BPF_S_ANC_SECCOMP_LD_W,
>  };
>
>  #endif /* __KERNEL__ */
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 5dea452..3000931 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -350,6 +350,11 @@ load_b:
>  				A = 0;
>  			continue;
>  		}
> +#ifdef CONFIG_SECCOMP_FILTER
> +		case BPF_S_ANC_SECCOMP_LD_W:
> +			A = seccomp_bpf_load(fentry->k);

I think you forgot to declare seccomp_bpf_load() anywhere filter.c can find.
That is, filter.c probably needs to include seccomp.h, or maybe better, add
"extern u32 seccomp_bpf_load(int off);" to filter.h instead.

> +			continue;
> +#endif
>  		default:
>  			WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n",
>  				       fentry->code, fentry->jt,

Reviewed-by: Indan Zupancic <indan@nul.nu>

Greetings,

Indan



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
                   ` (12 preceding siblings ...)
  2012-03-13  3:40 ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Indan Zupancic
@ 2012-03-13 10:04 ` Indan Zupancic
  2012-03-13 15:43   ` Will Drewry
  2012-03-13 17:13   ` Eric Dumazet
  13 siblings, 2 replies; 40+ messages in thread
From: Indan Zupancic @ 2012-03-13 10:04 UTC (permalink / raw)
  To: Will Drewry
  Cc: linux-kernel, linux-arch, linux-doc, kernel-hardening, netdev,
	x86, arnd, davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr,
	tglx, luto, eparis, serge.hallyn, djm, scarybeasts, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Hello,

I made a quick pseudo-patch for BPF JIT support. As far as I can tell,
the actual code itself is very simple, just:

case BPF_S_ANC_SECCOMP_LD_W:
	/* SECCOMP doesn't use SKB, no need to preserve %rdi. */
	t_offset = seccomp_bpf_load - (image + addrs[i]);
	EMIT1_off32(0xbf, K); /* mov imm32,%rdi */
	EMIT1_off32(0xe8, t_offset); /* call */
	break;

EAX is set directly as it's the return register, EBX is preserved by the
callee, RDI and other registers are unused by seccomp, so no need for
trampoline code AFAIK.

The rest of the patch just makes the JIT code suitable for sharing.
Only real change is that after this patch unused insns memory is freed.

The code is untested and even uncompiled, as I've only access to my 32-bit
laptop at the moment.

Would be interesting to know if this actually works and what the performance
difference is for seccomp.

Greetings,

Indan


 arch/x86/net/bpf_jit_comp.c |   47 ++++++++++++++++++++----------------------
 include/linux/filter.h      |   14 +++++++-----
 net/core/filter.c           |   27 ++++++++++++++++++++++--
 3 files changed, 54 insertions(+), 34 deletions(-)

---

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c1b765..3cd6626 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -118,7 +118,7 @@ static inline void bpf_flush_icache(void *start, void *end)
 }


-void bpf_jit_compile(struct sk_filter *fp)
+bpf_func_t bpf_jit_compile(const struct sock_filter* filter, int flen, int use_skb)
 {
 	u8 temp[64];
 	u8 *prog;
@@ -131,15 +131,13 @@ void bpf_jit_compile(struct sk_filter *fp)
 	int pc_ret0 = -1; /* bpf index of first RET #0 instruction (if any) */
 	unsigned int cleanup_addr; /* epilogue code offset */
 	unsigned int *addrs;
-	const struct sock_filter *filter = fp->insns;
-	int flen = fp->len;

 	if (!bpf_jit_enable)
-		return;
+		return NULL;

 	addrs = kmalloc(flen * sizeof(*addrs), GFP_KERNEL);
 	if (addrs == NULL)
-		return;
+		return NULL;

 	/* Before first pass, make a rough estimation of addrs[]
 	 * each bpf instruction is translated to less than 64 bytes
@@ -151,11 +149,16 @@ void bpf_jit_compile(struct sk_filter *fp)
 	cleanup_addr = proglen; /* epilogue address */

 	for (pass = 0; pass < 10; pass++) {
-		u8 seen_or_pass0 = (pass == 0) ? (SEEN_XREG | SEEN_DATAREF | SEEN_MEM) : seen;
+		u8 seen_or_pass0 = seen;
 		/* no prologue/epilogue for trivial filters (RET something) */
 		proglen = 0;
 		prog = temp;

+		if (pass == 0) {
+			seen_or_pass0 = SEEN_XREG | SEEN_MEM;
+			if (use_skb)
+				seen_or_pass0 |= SEEN_DATAREF;
+		}
 		if (seen_or_pass0) {
 			EMIT4(0x55, 0x48, 0x89, 0xe5); /* push %rbp; mov %rsp,%rbp */
 			EMIT4(0x48, 0x83, 0xec, 96);	/* subq  $96,%rsp	*/
@@ -472,6 +475,14 @@ void bpf_jit_compile(struct sk_filter *fp)
 				CLEAR_A();
 #endif
 				break;
+#ifdef CONFIG_SECCOMP_FILTER
+			case BPF_S_ANC_SECCOMP_LD_W:
+				/* SECCOMP doesn't use SKB, no need to preserve %rdi. */
+				t_offset = seccomp_bpf_load - (image + addrs[i]);
+				EMIT1_off32(0xbf, K); /* mov imm32,%rdi */
+				EMIT1_off32(0xe8, t_offset); /* call */
+				break;
+#endif
 			case BPF_S_LD_W_ABS:
 				func = sk_load_word;
 common_load:			seen |= SEEN_DATAREF;
@@ -588,13 +599,14 @@ cond_branch:			f_offset = addrs[i + filter[i].jf] - addrs[i];
 				/* hmm, too complex filter, give up with jit compiler */
 				goto out;
 			}
+			BUG_ON(!use_skb && (seen & SEEN_DATAREF));
 			ilen = prog - temp;
 			if (image) {
 				if (unlikely(proglen + ilen > oldproglen)) {
 					pr_err("bpb_jit_compile fatal error\n");
 					kfree(addrs);
 					module_free(NULL, image);
-					return;
+					return NULL;
 				}
 				memcpy(image + proglen, temp, ilen);
 			}
@@ -635,28 +647,13 @@ cond_branch:			f_offset = addrs[i + filter[i].jf] - addrs[i];
 				       16, 1, image, proglen, false);

 		bpf_flush_icache(image, image + proglen);
-
-		fp->bpf_func = (void *)image;
 	}
 out:
 	kfree(addrs);
-	return;
+	return (void *)image;
 }

-static void jit_free_defer(struct work_struct *arg)
+void bpf_jit_free(bpf_func_t image)
 {
-	module_free(NULL, arg);
-}
-
-/* run from softirq, we must use a work_struct to call
- * module_free() from process context
- */
-void bpf_jit_free(struct sk_filter *fp)
-{
-	if (fp->bpf_func != sk_run_filter) {
-		struct work_struct *work = (struct work_struct *)fp->bpf_func;
-
-		INIT_WORK(work, jit_free_defer);
-		schedule_work(work);
-	}
+	module_free(NULL, image);
 }
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8eeb205..292ccca 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -135,12 +135,13 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 struct sk_buff;
 struct sock;

+typedef unsigned int (*bpf_func_t)(const struct sk_buff*, const struct sock_filter*);
+
 struct sk_filter
 {
 	atomic_t		refcnt;
 	unsigned int         	len;	/* Number of filter blocks */
-	unsigned int		(*bpf_func)(const struct sk_buff *skb,
-					    const struct sock_filter *filter);
+	bpf_func_t		bpf_func;
 	struct rcu_head		rcu;
 	struct sock_filter     	insns[0];
 };
@@ -158,14 +159,15 @@ extern int sk_detach_filter(struct sock *sk);
 extern int sk_chk_filter(struct sock_filter *filter, unsigned int flen);

 #ifdef CONFIG_BPF_JIT
-extern void bpf_jit_compile(struct sk_filter *fp);
-extern void bpf_jit_free(struct sk_filter *fp);
+extern bpf_func_t bpf_jit_compile(const struct sock_filter*, int flen, int use_skb);
+extern void bpf_jit_free(bpf_func_t);
 #define SK_RUN_FILTER(FILTER, SKB) (*FILTER->bpf_func)(SKB, FILTER->insns)
 #else
-static inline void bpf_jit_compile(struct sk_filter *fp)
+static inline bpf_func_t bpf_jit_compile(const struct sock_filter*, int flen, int use_skb)
 {
+	return NULL;
 }
-static inline void bpf_jit_free(struct sk_filter *fp)
+static inline void bpf_jit_free(bpf_func_t)
 {
 }
 #define SK_RUN_FILTER(FILTER, SKB) sk_run_filter(SKB, FILTER->insns)
diff --git a/net/core/filter.c b/net/core/filter.c
index 5dea452..03e3ea3 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -574,6 +574,14 @@ int sk_chk_filter(struct sock_filter *filter, unsigned int flen)
 }
 EXPORT_SYMBOL(sk_chk_filter);

+/* run from softirq, we must use a work_struct to call
+ * bpf_jit_free() from process context
+ */
+static void jit_free_defer(struct work_struct *arg)
+{
+	bpf_jit_free((bpf_func_t)arg);
+}
+
 /**
  * 	sk_filter_release_rcu - Release a socket filter by rcu_head
  *	@rcu: rcu_head that contains the sk_filter to free
@@ -582,7 +590,12 @@ void sk_filter_release_rcu(struct rcu_head *rcu)
 {
 	struct sk_filter *fp = container_of(rcu, struct sk_filter, rcu);

-	bpf_jit_free(fp);
+	if (fp->bpf_func != sk_run_filter) {
+		struct work_struct *work = (struct work_struct *)fp->bpf_func;
+
+		INIT_WORK(work, jit_free_defer);
+		schedule_work(work);
+	}
 	kfree(fp);
 }
 EXPORT_SYMBOL(sk_filter_release_rcu);
@@ -599,9 +612,10 @@ EXPORT_SYMBOL(sk_filter_release_rcu);
  */
 int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
 {
-	struct sk_filter *fp, *old_fp;
+	struct sk_filter *fp, *old_fp, *new_fp;
 	unsigned int fsize = sizeof(struct sock_filter) * fprog->len;
 	int err;
+	bpf_func_t jit;

 	/* Make sure new filter is there and in the right amounts. */
 	if (fprog->filter == NULL)
@@ -625,7 +639,14 @@ int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
 		return err;
 	}

-	bpf_jit_compile(fp);
+	jit = bpf_jit_compile(fp->insns, fp->len, 1);
+	if (jit) {
+		fp->bpf_func = jit;
+		/* Free unused insns memory */
+		newfp = krealloc(fp, sizeof(*fp), GFP_KERNEL);
+		if (newfp)
+			fp = newfp;
+	}

 	old_fp = rcu_dereference_protected(sk->sk_filter,
 					   sock_owned_by_user(sk));




^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-13  3:40 ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Indan Zupancic
@ 2012-03-13 15:40   ` Will Drewry
  0 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-13 15:40 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: linux-kernel, linux-arch, linux-doc, kernel-hardening, netdev,
	x86, arnd, davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr,
	tglx, luto, eparis, serge.hallyn, djm, scarybeasts, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook

On Mon, Mar 12, 2012 at 10:40 PM, Indan Zupancic <indan@nul.nu> wrote:
> Hello,
>
> On Mon, March 12, 2012 22:28, Will Drewry wrote:
>> Introduces a new BPF ancillary instruction that all LD calls will be
>> mapped through when skb_run_filter() is being used for seccomp BPF.  The
>> rewriting will be done using a secondary chk_filter function that is run
>> after skb_chk_filter.
>>
>> The code change is guarded by CONFIG_SECCOMP_FILTER which is added,
>> along with the seccomp_bpf_load() function later in this series.
>>
>> This is based on http://lkml.org/lkml/2012/3/2/141
>>
>> v14: First cut using a single additional instruction
>> ... v13: made bpf functions generic.
>>
>>
>> Suggested-by: Indan Zupancic <indan@nul.nu>
>> Signed-off-by: Will Drewry <wad@chromium.org>
>> ---
>>  include/linux/filter.h |    1 +
>>  net/core/filter.c      |    5 +++++
>>  2 files changed, 6 insertions(+), 0 deletions(-)
>>
>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>> index 8eeb205..aaa2e80 100644
>> --- a/include/linux/filter.h
>> +++ b/include/linux/filter.h
>> @@ -228,6 +228,7 @@ enum {
>>       BPF_S_ANC_HATYPE,
>>       BPF_S_ANC_RXHASH,
>>       BPF_S_ANC_CPU,
>> +     BPF_S_ANC_SECCOMP_LD_W,
>>  };
>>
>>  #endif /* __KERNEL__ */
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index 5dea452..3000931 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -350,6 +350,11 @@ load_b:
>>                               A = 0;
>>                       continue;
>>               }
>> +#ifdef CONFIG_SECCOMP_FILTER
>> +             case BPF_S_ANC_SECCOMP_LD_W:
>> +                     A = seccomp_bpf_load(fentry->k);
>
> I think you forgot to declare seccomp_bpf_load() anywhere filter.c can find.
> That is, filter.c probably needs to include seccomp.h, or maybe better, add
> "extern u32 seccomp_bpf_load(int off);" to filter.h instead.

Doh, it should include seccomp.h.  Right now it gets that by accident
via sched.h.  Since at this point in the patch series, the function
doesn't exist, I'd prefer to just add seccomp.h explicitly.  I'll do
that in the next version unless there is a clear problem.  (In
practice, it is already pulled in.)


> Reviewed-by: Indan Zupancic <indan@nul.nu>

Thanks!
will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-13 10:04 ` Indan Zupancic
@ 2012-03-13 15:43   ` Will Drewry
  2012-03-13 17:13   ` Eric Dumazet
  1 sibling, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-13 15:43 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: linux-kernel, linux-arch, linux-doc, kernel-hardening, netdev,
	x86, arnd, davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr,
	tglx, luto, eparis, serge.hallyn, djm, scarybeasts, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook

On Tue, Mar 13, 2012 at 5:04 AM, Indan Zupancic <indan@nul.nu> wrote:
> Hello,
>
> I made a quick pseudo-patch for BPF JIT support. As far as I can tell,
> the actual code itself is very simple, just:

Awesome - yet another reason this approach is nicer.  When I'm done
working up v15, I'll pull in this patch and see what explodes and/or
runs really fast.

cheers!
will

> case BPF_S_ANC_SECCOMP_LD_W:
>        /* SECCOMP doesn't use SKB, no need to preserve %rdi. */
>        t_offset = seccomp_bpf_load - (image + addrs[i]);
>        EMIT1_off32(0xbf, K); /* mov imm32,%rdi */
>        EMIT1_off32(0xe8, t_offset); /* call */
>        break;
>
> EAX is set directly as it's the return register, EBX is preserved by the
> callee, RDI and other registers are unused by seccomp, so no need for
> trampoline code AFAIK.
>
> The rest of the patch just makes the JIT code suitable for sharing.
> Only real change is that after this patch unused insns memory is freed.
>
> The code is untested and even uncompiled, as I've only access to my 32-bit
> laptop at the moment.
>
> Would be interesting to know if this actually works and what the performance
> difference is for seccomp.
>
> Greetings,
>
> Indan
>
>
>  arch/x86/net/bpf_jit_comp.c |   47 ++++++++++++++++++++----------------------
>  include/linux/filter.h      |   14 +++++++-----
>  net/core/filter.c           |   27 ++++++++++++++++++++++--
>  3 files changed, 54 insertions(+), 34 deletions(-)
>
> ---
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 7c1b765..3cd6626 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -118,7 +118,7 @@ static inline void bpf_flush_icache(void *start, void *end)
>  }
>
>
> -void bpf_jit_compile(struct sk_filter *fp)
> +bpf_func_t bpf_jit_compile(const struct sock_filter* filter, int flen, int use_skb)
>  {
>        u8 temp[64];
>        u8 *prog;
> @@ -131,15 +131,13 @@ void bpf_jit_compile(struct sk_filter *fp)
>        int pc_ret0 = -1; /* bpf index of first RET #0 instruction (if any) */
>        unsigned int cleanup_addr; /* epilogue code offset */
>        unsigned int *addrs;
> -       const struct sock_filter *filter = fp->insns;
> -       int flen = fp->len;
>
>        if (!bpf_jit_enable)
> -               return;
> +               return NULL;
>
>        addrs = kmalloc(flen * sizeof(*addrs), GFP_KERNEL);
>        if (addrs == NULL)
> -               return;
> +               return NULL;
>
>        /* Before first pass, make a rough estimation of addrs[]
>         * each bpf instruction is translated to less than 64 bytes
> @@ -151,11 +149,16 @@ void bpf_jit_compile(struct sk_filter *fp)
>        cleanup_addr = proglen; /* epilogue address */
>
>        for (pass = 0; pass < 10; pass++) {
> -               u8 seen_or_pass0 = (pass == 0) ? (SEEN_XREG | SEEN_DATAREF | SEEN_MEM) : seen;
> +               u8 seen_or_pass0 = seen;
>                /* no prologue/epilogue for trivial filters (RET something) */
>                proglen = 0;
>                prog = temp;
>
> +               if (pass == 0) {
> +                       seen_or_pass0 = SEEN_XREG | SEEN_MEM;
> +                       if (use_skb)
> +                               seen_or_pass0 |= SEEN_DATAREF;
> +               }
>                if (seen_or_pass0) {
>                        EMIT4(0x55, 0x48, 0x89, 0xe5); /* push %rbp; mov %rsp,%rbp */
>                        EMIT4(0x48, 0x83, 0xec, 96);    /* subq  $96,%rsp       */
> @@ -472,6 +475,14 @@ void bpf_jit_compile(struct sk_filter *fp)
>                                CLEAR_A();
>  #endif
>                                break;
> +#ifdef CONFIG_SECCOMP_FILTER
> +                       case BPF_S_ANC_SECCOMP_LD_W:
> +                               /* SECCOMP doesn't use SKB, no need to preserve %rdi. */
> +                               t_offset = seccomp_bpf_load - (image + addrs[i]);
> +                               EMIT1_off32(0xbf, K); /* mov imm32,%rdi */
> +                               EMIT1_off32(0xe8, t_offset); /* call */
> +                               break;
> +#endif
>                        case BPF_S_LD_W_ABS:
>                                func = sk_load_word;
>  common_load:                   seen |= SEEN_DATAREF;
> @@ -588,13 +599,14 @@ cond_branch:                      f_offset = addrs[i + filter[i].jf] - addrs[i];
>                                /* hmm, too complex filter, give up with jit compiler */
>                                goto out;
>                        }
> +                       BUG_ON(!use_skb && (seen & SEEN_DATAREF));
>                        ilen = prog - temp;
>                        if (image) {
>                                if (unlikely(proglen + ilen > oldproglen)) {
>                                        pr_err("bpb_jit_compile fatal error\n");
>                                        kfree(addrs);
>                                        module_free(NULL, image);
> -                                       return;
> +                                       return NULL;
>                                }
>                                memcpy(image + proglen, temp, ilen);
>                        }
> @@ -635,28 +647,13 @@ cond_branch:                      f_offset = addrs[i + filter[i].jf] - addrs[i];
>                                       16, 1, image, proglen, false);
>
>                bpf_flush_icache(image, image + proglen);
> -
> -               fp->bpf_func = (void *)image;
>        }
>  out:
>        kfree(addrs);
> -       return;
> +       return (void *)image;
>  }
>
> -static void jit_free_defer(struct work_struct *arg)
> +void bpf_jit_free(bpf_func_t image)
>  {
> -       module_free(NULL, arg);
> -}
> -
> -/* run from softirq, we must use a work_struct to call
> - * module_free() from process context
> - */
> -void bpf_jit_free(struct sk_filter *fp)
> -{
> -       if (fp->bpf_func != sk_run_filter) {
> -               struct work_struct *work = (struct work_struct *)fp->bpf_func;
> -
> -               INIT_WORK(work, jit_free_defer);
> -               schedule_work(work);
> -       }
> +       module_free(NULL, image);
>  }
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 8eeb205..292ccca 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -135,12 +135,13 @@ struct sock_fprog {       /* Required for SO_ATTACH_FILTER. */
>  struct sk_buff;
>  struct sock;
>
> +typedef unsigned int (*bpf_func_t)(const struct sk_buff*, const struct sock_filter*);
> +
>  struct sk_filter
>  {
>        atomic_t                refcnt;
>        unsigned int            len;    /* Number of filter blocks */
> -       unsigned int            (*bpf_func)(const struct sk_buff *skb,
> -                                           const struct sock_filter *filter);
> +       bpf_func_t              bpf_func;
>        struct rcu_head         rcu;
>        struct sock_filter      insns[0];
>  };
> @@ -158,14 +159,15 @@ extern int sk_detach_filter(struct sock *sk);
>  extern int sk_chk_filter(struct sock_filter *filter, unsigned int flen);
>
>  #ifdef CONFIG_BPF_JIT
> -extern void bpf_jit_compile(struct sk_filter *fp);
> -extern void bpf_jit_free(struct sk_filter *fp);
> +extern bpf_func_t bpf_jit_compile(const struct sock_filter*, int flen, int use_skb);
> +extern void bpf_jit_free(bpf_func_t);
>  #define SK_RUN_FILTER(FILTER, SKB) (*FILTER->bpf_func)(SKB, FILTER->insns)
>  #else
> -static inline void bpf_jit_compile(struct sk_filter *fp)
> +static inline bpf_func_t bpf_jit_compile(const struct sock_filter*, int flen, int use_skb)
>  {
> +       return NULL;
>  }
> -static inline void bpf_jit_free(struct sk_filter *fp)
> +static inline void bpf_jit_free(bpf_func_t)
>  {
>  }
>  #define SK_RUN_FILTER(FILTER, SKB) sk_run_filter(SKB, FILTER->insns)
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 5dea452..03e3ea3 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -574,6 +574,14 @@ int sk_chk_filter(struct sock_filter *filter, unsigned int flen)
>  }
>  EXPORT_SYMBOL(sk_chk_filter);
>
> +/* run from softirq, we must use a work_struct to call
> + * bpf_jit_free() from process context
> + */
> +static void jit_free_defer(struct work_struct *arg)
> +{
> +       bpf_jit_free((bpf_func_t)arg);
> +}
> +
>  /**
>  *     sk_filter_release_rcu - Release a socket filter by rcu_head
>  *     @rcu: rcu_head that contains the sk_filter to free
> @@ -582,7 +590,12 @@ void sk_filter_release_rcu(struct rcu_head *rcu)
>  {
>        struct sk_filter *fp = container_of(rcu, struct sk_filter, rcu);
>
> -       bpf_jit_free(fp);
> +       if (fp->bpf_func != sk_run_filter) {
> +               struct work_struct *work = (struct work_struct *)fp->bpf_func;
> +
> +               INIT_WORK(work, jit_free_defer);
> +               schedule_work(work);
> +       }
>        kfree(fp);
>  }
>  EXPORT_SYMBOL(sk_filter_release_rcu);
> @@ -599,9 +612,10 @@ EXPORT_SYMBOL(sk_filter_release_rcu);
>  */
>  int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
>  {
> -       struct sk_filter *fp, *old_fp;
> +       struct sk_filter *fp, *old_fp, *new_fp;
>        unsigned int fsize = sizeof(struct sock_filter) * fprog->len;
>        int err;
> +       bpf_func_t jit;
>
>        /* Make sure new filter is there and in the right amounts. */
>        if (fprog->filter == NULL)
> @@ -625,7 +639,14 @@ int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
>                return err;
>        }
>
> -       bpf_jit_compile(fp);
> +       jit = bpf_jit_compile(fp->insns, fp->len, 1);
> +       if (jit) {
> +               fp->bpf_func = jit;
> +               /* Free unused insns memory */
> +               newfp = krealloc(fp, sizeof(*fp), GFP_KERNEL);
> +               if (newfp)
> +                       fp = newfp;
> +       }
>
>        old_fp = rcu_dereference_protected(sk->sk_filter,
>                                           sock_owned_by_user(sk));
>
>
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] Re: [PATCH v14 06/13] seccomp: add system call filtering using BPF
  2012-03-13  3:33   ` Indan Zupancic
@ 2012-03-13 15:57     ` Will Drewry
  0 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-13 15:57 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, linux-arch, linux-doc, netdev, x86, arnd, davem,
	hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto, eparis,
	serge.hallyn, djm, scarybeasts, pmoore, akpm, corbet,
	eric.dumazet, markus, coreyb, keescook

On Mon, Mar 12, 2012 at 10:33 PM, Indan Zupancic <indan@nul.nu> wrote:
> Hello,
>
> Could you send a 00/13 patch with a diffstat summary of all patches
> combined and also some code size diffs? That would be very nice.

No problem.  I'll do that for v15 integrating these tweaks.

> I assume the BPF performance for networking doesn't change with this
> version.

Not that I could see given normal jitter, but I can post numbers if
it's preferable.

>> +/* Limit any path through the tree to 1 megabytes worth of instructions. */
>> +#define MAX_INSNS_PER_PATH ((1 << 20) / sizeof(struct sock_filter))
>
> This still makes me feel uneasy. It's also easier to increase the limit
> in the future if necessary than to reduce it. So I would go for a lower
> initial limit, like 256 kB or 512 kB. 1 megabyte is 32 max-size filters;
> that seems like a lot. And in practice it's more likely that the more
> filters there are, the smaller they usually are.

I'll drop it, but next tweak will result in a sysctl :)
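
(For what it's worth, a purely hypothetical sketch of what such a knob
could look like if it ever materializes; the name and default are made
up, only the ctl_table shape is standard:)

	static unsigned long seccomp_filter_max_insns =
		(1 << 20) / sizeof(struct sock_filter);

	static struct ctl_table seccomp_sysctl_table[] = {
		{
			.procname	= "seccomp_filter_max_insns",
			.data		= &seccomp_filter_max_insns,
			.maxlen		= sizeof(unsigned long),
			.mode		= 0644,
			.proc_handler	= proc_doulongvec_minmax,
		},
		{ }
	};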

>> +
>> +static void seccomp_filter_log_failure(int syscall)
>> +{
>> +     int compat = 0;
>> +#ifdef CONFIG_COMPAT
>> +     compat = is_compat_task();
>> +#endif
>> +     pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
>> +             current->comm, task_pid_nr(current),
>> +             (compat ? "compat " : ""),
>> +             syscall, KSTK_EIP(current));
>> +}
>
> If you keep this, at least make it rate limited?

It goes away with the last patch in the series.  Maybe in the next
rev, I'll pull in Kees's changes so I can further expand
audit_seccomp().

>> +/**
>> + * bpf_load: checks and returns a pointer to the requested offset
>> + * @off: offset into struct seccomp_data to load from
>> + *
>> + * Returns the requested 32-bits of data.
>> + * seccomp_chk_filter() should assure that @off is 32-bit aligned
>> + * and not out of bounds.  Failure to do so is a BUG.
>> + */
>> +u32 seccomp_bpf_load(int off)
>> +{
>> +     struct pt_regs *regs = task_pt_regs(current);
>> +
>> +     if (off == BPF_DATA(nr))
>> +             return syscall_get_nr(current, regs);
>> +     if (off == BPF_DATA(arch))
>> +             return syscall_get_arch(current, regs);
>> +     if (off == BPF_DATA(instruction_pointer))
>> +             return get_u32(KSTK_EIP(current), 0);
>> +     if (off == BPF_DATA(instruction_pointer) + sizeof(u32))
>> +             return get_u32(KSTK_EIP(current), 1);
>
> I'd put the IP checks at the end, as it's the least likely case.
> But maybe gcc is smart enough to detect a pattern.

Will do

>> +     if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
>> +             unsigned long value;
>> +             int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
>> +             int index = !!(off % sizeof(u64));
>> +             syscall_get_arguments(current, regs, arg, 1, &value);
>> +             return get_u32(value, index);
>> +     }
>> +     /* seccomp_chk_filter should make this impossible. */
>> +     BUG();
>> +}
>> +
>> +/**
>> + *   seccomp_chk_filter - verify seccomp filter code
>> + *   @filter: filter to verify
>> + *   @flen: length of filter
>> + *
>> + * Takes a previously checked filter (by sk_chk_filter) and
>> + * redirects all filter code that loads struct sk_buff data
>> + * and related data through seccomp_bpf_load.  It also
>> + * enforces length and alignment checking of those loads.
>> + *
>> + * Returns 0 if the rule set is legal or -EINVAL if not.
>> + */
>> +static int seccomp_chk_filter(struct sock_filter *filter, unsigned int flen)
>> +{
>> +     int pc;
>> +     for (pc = 0; pc < flen; pc++) {
>> +             struct sock_filter *ftest = &filter[pc];
>> +             u16 code = ftest->code;
>> +             u32 k = ftest->k;
>> +             if (code <= BPF_S_ALU_NEG)
>> +                     continue;
>> +             /* Do not allow ancillary codes (incl. BPF_S_ANC_SECCOMP_*) */
>> +             if (code >= BPF_S_ANC_PROTOCOL)
>> +                     return -EINVAL;
>> +             if (code >= BPF_S_LDX_IMM)
>> +                     continue;
>
> I know I did the same in my code, but to avoid silly bugs when someone changes
> the enum in filter.h without being aware of SECCOMP, it's much safer to check
> all known good instructions explicitly in a big switch. And the compiler is
> hopefully smart enough to generate pretty much the same code as now. Then if
> new cases are added or existing ones swapped around, everything will still
> work and we don't end up with a silent security hole.

Anything before BPF_S_ANC_* is unlikely to change, and anything after
BPF_S_ANC_* is blacklisted.  However, you're right, if someone changes
the enum and array tables unexpectedly, this would be a problem.  Ew.
I'll push bits around and see what comes out.  I would almost prefer a
comment in filter.h rather than enumerating all the ALU instructions, etc.
here.  Ah well.
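
For illustration, a rough sketch of the explicit-whitelist shape being
discussed, with the existing LD_W_ABS rewrite folded in (case labels are
abbreviated; this is not the actual next revision):

	static int seccomp_chk_filter(struct sock_filter *filter, unsigned int flen)
	{
		int pc;

		for (pc = 0; pc < flen; pc++) {
			struct sock_filter *ftest = &filter[pc];

			switch (ftest->code) {
			case BPF_S_LD_W_ABS:
				/* Rewrite to the seccomp load; must be
				 * 32-bit aligned and inside seccomp_data. */
				if (ftest->k >= sizeof(struct seccomp_data) || ftest->k & 3)
					return -EINVAL;
				ftest->code = BPF_S_ANC_SECCOMP_LD_W;
				continue;
			case BPF_S_ALU_ADD_K:
			case BPF_S_ALU_ADD_X:
			/* ... every other explicitly allowed ALU/JMP/LD/
			 * LDX/ST/RET code is listed here ... */
			case BPF_S_RET_K:
			case BPF_S_RET_A:
				continue;
			default:
				/* Unknown or ancillary codes are rejected. */
				return -EINVAL;
			}
		}
		return 0;
	}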

> The reason I didn't do that was laziness and because it looks ugly. But it's
> more robust and I don't see a better way to get the same guarantees.

True, though the changes to make it a problem would be severe.

>> +             switch (code) {
>> +             case BPF_S_LD_W_ABS:
>> +                     ftest->code = BPF_S_ANC_SECCOMP_LD_W;
>> +                     /* 32-bit aligned and not out of bounds. */
>> +                     if (k >= sizeof(struct seccomp_data) || k & 3)
>> +                             return -EINVAL;
>> +                     continue;
>> +             case BPF_S_LD_W_LEN:
>> +                     ftest->code = BPF_S_LD_IMM;
>> +                     ftest->k = sizeof(struct seccomp_data);
>> +                     continue;
>> +             case BPF_S_LD_IMM:
>> +                     continue;
>> +             case BPF_S_LDX_W_LEN:
>> +                     ftest->code = BPF_S_LDX_IMM;
>> +                     ftest->k = sizeof(struct seccomp_data);
>> +                     continue;
>> +             case BPF_S_LD_W_IND:
>> +                     /* XXX: would runtime seccomp_bpf_load checking k */
>
> Why? I checked your example code and haven't seen one case where you're
> not using absolute loads. That said, I can think of situations where it's
> nice to be able to store the syscall and argument numbers in MEM and use
> an indirect load instead of changing the instructions themselves. But I
> don't think it's worth the trouble supporting this, as it's only a minor
> improvement while it adds more messy code just for this.

LD_W_IND is for using an index into the data stored in X.  It is pretty
unlikely that a system call argument will supply a value that
determines where to index into the data again.  (e.g., seccomp_data[X
+ k]).  If it turns out it is needed, then someone can change the
seccomp_bpf_load() prototype to allow returning an error and checking
if k is outside of sane bounds.
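
For reference, a user of this interface would normally stick to absolute
loads anyway; a hypothetical minimal filter (assuming the seccomp_data
layout and return values from this series) could look like:

	#include <stddef.h>
	#include <sys/syscall.h>
	#include <linux/filter.h>
	#include <linux/seccomp.h>

	/* Allow read(2), raise SIGSYS for everything else. */
	static struct sock_filter insns[] = {
		/* A = seccomp_data.nr: an absolute, 32-bit aligned load. */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
	};

	static struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};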

>> +/**
>> + * seccomp_attach_filter: Attaches a seccomp filter to current.
>> + * @fprog: BPF program to install
>> + *
>> + * Returns 0 on success or an errno on failure.
>> + */
>> +static long seccomp_attach_filter(struct sock_fprog *fprog)
>> +{
>> +     struct seccomp_filter *filter;
>> +     unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
>> +     unsigned long total_insns = 0;
>> +     long ret;
>> +
>> +     if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
>> +             return -EINVAL;
>> +
>> +     for (filter = current->seccomp.filter; filter; filter = filter->prev)
>> +             total_insns += filter->len;
>
> More accurate, and better in the case of one-instruction filters:
>
>                total_insns += filter->len + 4;
>
> This is to penalize tiny filters; otherwise a filter with one instruction will
> use a lot more memory than just 8 bytes, so your effective max memory usage
> would be much higher. It also depends on kzalloc's minimum allocation size
> and rounding, so all in all 4 seems like a good number.

Well I'd prefer sizeof(struct seccomp_filter), but then the size
bounding would vary based on arch pointer size.  Perhaps something
larger would be better: 16 or 32.  That or just further decrease the
total allowed instructions.

>> +     if (total_insns > MAX_INSNS_PER_PATH - fprog->len)
>> +             return -ENOSPC;
>
> It seems more readable to me to initialize total_insns to fprog->len
> instead of zero, and then drop the subtraction. Overflows aren't
> possible either way.

Ok

> "No space left on device" isn't suitable either, I think. What's wrong
> with just using ENOMEM?

It's not really ENOMEM either.  It's more like EPERM or EACCES given
that the upper bound has been hit and it is an administrative policy,
even if hard-coded.  It's just that we end up with overlapping errnos
for disparate failure modes, so I was hoping it would make sense to
use one logically applied (assuming BPF programs live in a bpf device
:).

>> +/**
>> + * seccomp_attach_user_filter - attaches a user-supplied sock_fprog
>> + * @user_filter: pointer to the user data containing a sock_fprog.
>> + *
>> + * Returns 0 on success and non-zero otherwise.
>> + */
>> +long seccomp_attach_user_filter(char __user *user_filter)
>> +{
>> +     struct sock_fprog fprog;
>> +     long ret = -EFAULT;
>> +
>> +     if (!user_filter)
>> +             goto out;
>
> This case should be handled by copy_from_user() already, so adding this
> check doesn't add anything at all. Zero is only one out of many invalid
> pointers, why check for 0 but not for e.g. 1, 2, 3, 4, 5, 6, 7, 8 or 9?

I guess so. That case then gets caught in the access_ok checks rather
than by a simple cmp.  I think this is cleaner, but I'll drop it.

> Reviewed-by: Indan Zupancic <indan@nul.nu>

Thanks!  Next rev underway soonish.
will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-13 10:04 ` Indan Zupancic
  2012-03-13 15:43   ` Will Drewry
@ 2012-03-13 17:13   ` Eric Dumazet
  2012-03-14  5:12     ` Indan Zupancic
  1 sibling, 1 reply; 40+ messages in thread
From: Eric Dumazet @ 2012-03-13 17:13 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Tue, 2012-03-13 at 11:04 +0100, Indan Zupancic wrote:
> Hello,
> 
> I made a quick pseudo-patch for BPF JIT support. As far as I can tell,
> the actual code itself is very simple, just:
> 

> -	bpf_jit_compile(fp);
> +	jit = bpf_jit_compile(fp->insns, fp->len, 1);
> +	if (jit) {
> +		fp->bpf_func = jit;
> +		/* Free unused insns memory */
> +		newfp = krealloc(fp, sizeof(*fp), GFP_KERNEL);
> +		if (newfp)
> +			fp = newfp;
> +	}
> 
>  	old_fp = rcu_dereference_protected(sk->sk_filter,
>  					   sock_owned_by_user(sk));

Dont mix unrelated changes in a single patch.

Current krealloc() doesnt alloc a new area if the current one is bigger
than 'new_size', but this might change in next kernel versions.

So this part of your patch does nothing.

If it did, this kind of 'optimization' can actually be not good, because
sizeof(*fp) is small enough (less than half cache line size) to trigger
a possible false sharing issue. (other part of the cache line could be
used to hold a often dirtied object)




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-13 17:13   ` Eric Dumazet
@ 2012-03-14  5:12     ` Indan Zupancic
  2012-03-14  5:55       ` Eric Dumazet
  0 siblings, 1 reply; 40+ messages in thread
From: Indan Zupancic @ 2012-03-14  5:12 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

Hello,

On Tue, March 13, 2012 18:13, Eric Dumazet wrote:
> On Tue, 2012-03-13 at 11:04 +0100, Indan Zupancic wrote:
>> -	bpf_jit_compile(fp);
>> +	jit = bpf_jit_compile(fp->insns, fp->len, 1);
>> +	if (jit) {
>> +		fp->bpf_func = jit;
>> +		/* Free unused insns memory */
>> +		newfp = krealloc(fp, sizeof(*fp), GFP_KERNEL);
>> +		if (newfp)
>> +			fp = newfp;
>> +	}
>>
>>  	old_fp = rcu_dereference_protected(sk->sk_filter,
>>  					   sock_owned_by_user(sk));
>
> Dont mix unrelated changes in a single patch.

I know, but it was just a quick proof-of-concept. It just showed
a possible advantage of the API change for the current networking
code. The real patch would be split into an API change, the memory
freeing change, and the seccomp support.

>
> Current krealloc() doesnt alloc a new area if the current one is bigger
> than 'new_size', but this might change in next kernel versions.
>
> So this part of your patch does nothing.

Problem is that 'old_size' can be up to 32kB in size and it would be nice
if that memory could be released. If it isn't, then using JIT increases
memory usage, while also not accounting it to the socket.

>
> If it did, this kind of 'optimization' can actually be not good, because
> sizeof(*fp) is small enough (less than half cache line size) to trigger
> a possible false sharing issue. (other part of the cache line could be
> used to hold a often dirtied object)

It could avoid this by allocating at least a cache size. But this is a
problem for all small kmalloc's, isn't it?

What about something like the below:

@@ -625,7 +639,18 @@ int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
 		return err;
 	}

-	bpf_jit_compile(fp);
+	jit = bpf_jit_compile(fp->insns, fp->len, 1);
+	if (jit) {
+		int size = max(cache_line_size(), sizeof(*fp));
+		fp->bpf_func = jit;
+		/* Free unused insns memory */
+		newfp = kmalloc(size, GFP_KERNEL);
+		if (newfp) {
+			memcpy(newfp, fp, size);
+			kfree(fp);
+			fp = newfp;
+		}
+	}

 	old_fp = rcu_dereference_protected(sk->sk_filter,
 					   sock_owned_by_user(sk));

Not sure whether to use cache_line_size() or just L1_CACHE_BYTES or
something else.

Greetings,

Indan



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-14  5:12     ` Indan Zupancic
@ 2012-03-14  5:55       ` Eric Dumazet
  2012-03-14  7:59         ` Indan Zupancic
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Dumazet @ 2012-03-14  5:55 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Wednesday 14 March 2012 at 06:12 +0100, Indan Zupancic wrote:

> Problem is that 'old_size' can be up to 32kB in size and it would be nice
> if that memory could be released. If it isn't, then using JIT increases
> memory usage, while also not accounting it to the socket.
> 

It is accounted for, since jit size is in relation with standard filter
size. Check sock_kmalloc()

The fact that we can have a little underestimation was already the case without
jit, since kmalloc() does a roundup to the next power of two.

I don't think this discussion has anything to do with SECCOMP anyway.

This accounting doesn't need to be 100% precise; we only want a limit to
prevent rogue users from using all kernel memory.

> >
> > If it did, this kind of 'optimization' can actually be not good, because
> > sizeof(*fp) is small enough (less than half cache line size) to trigger
> > a possible false sharing issue. (other part of the cache line could be
> > used to hold a often dirtied object)
> 
> It could avoid this by allocating at least a cache size. But this is a
> problem for all small kmalloc's, isn't it?

Its a problem that was already met on several critical paths :

# find net|xargs grep -n L1_CACHE_BYTES
net/core/dev_addr_lists.c:51:	if (alloc_size < L1_CACHE_BYTES)
net/core/dev_addr_lists.c:52:		alloc_size = L1_CACHE_BYTES;
net/core/net-sysfs.c:586:	    RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES),
net/core/net-sysfs.c:1111:	    XPS_DEV_MAPS_SIZE, L1_CACHE_BYTES), GFP_KERNEL);
net/ipv6/ip6_fib.c:1612:	size = max_t(size_t, size, L1_CACHE_BYTES);
net/ipv4/fib_frontend.c:1049:	size = max_t(size_t, size, L1_CACHE_BYTES);




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support
  2012-03-12 21:28 ` [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support Will Drewry
@ 2012-03-14  7:31   ` Indan Zupancic
  2012-03-14 15:03     ` [kernel-hardening] " Will Drewry
  2012-03-15 20:31     ` [PATCH v16 11/13] " Will Drewry
  0 siblings, 2 replies; 40+ messages in thread
From: Indan Zupancic @ 2012-03-14  7:31 UTC (permalink / raw)
  To: Will Drewry
  Cc: linux-kernel, linux-arch, linux-doc, kernel-hardening, netdev,
	x86, arnd, davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr,
	tglx, luto, eparis, serge.hallyn, djm, scarybeasts, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

Hello,

On Mon, March 12, 2012 22:28, Will Drewry wrote:
> This change adds support for a new ptrace option, PTRACE_O_TRACESECCOMP,
> and a new return value for seccomp BPF programs, SECCOMP_RET_TRACE.
>
> When a tracer specifies the PTRACE_O_TRACESECCOMP ptrace option, the
> tracer will be notified for any syscall that results in a BPF program
> returning SECCOMP_RET_TRACE.  The 16-bit SECCOMP_RET_DATA mask of the
> BPF program return value will be passed as the ptrace_message and may be
> retrieved using PTRACE_GETEVENTMSG.

Maybe good to mention that it gets notified with PTRACE_EVENT_SECCOMP.

>
> If the subordinate process is not using seccomp filter, then no
> system call notifications will occur even if the option is specified.
>
> If there is no attached tracer when SECCOMP_RET_TRACE is returned,
> the system call will not be executed and an -ENOSYS errno will be
> returned to userspace.

When no tracer with PTRACE_O_TRACESECCOMP set is attached?
(Because that's what the code is doing.)

>
> This change adds a dependency on the system call slow path.  Any future
> efforts to use the system call fast path for seccomp filter will need to
> address this restriction.
>
> v14: - rebase/nochanges
> v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
>       (Brings back a change to ptrace.c and the masks.)
> v12: - rebase to linux-next
>     - use ptrace_event and update arch/Kconfig to mention slow-path dependency
>     - drop all tracehook changes and inclusion (oleg@redhat.com)
> v11: - invert the logic to just make it a PTRACE_SYSCALL accelerator
>       (indan@nul.nu)
> v10: - moved to PTRACE_O_SECCOMP / PT_TRACE_SECCOMP
> v9:  - n/a
> v8:  - guarded PTRACE_SECCOMP use with an ifdef
> v7:  - introduced
>
> Signed-off-by: Will Drewry <wad@chromium.org>
> ---
> arch/Kconfig            |   11 ++++++-----
> include/linux/ptrace.h  |    7 +++++--
> include/linux/seccomp.h |    1 +
> kernel/ptrace.c         |    3 +++
> kernel/seccomp.c        |   20 +++++++++++++++-----
> 5 files changed, 30 insertions(+), 12 deletions(-)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index d92a78e..3f8132c 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -202,15 +202,16 @@ config HAVE_CMPXCHG_DOUBLE
> config HAVE_ARCH_SECCOMP_FILTER
> 	bool
> 	help
> -	  This symbol should be selected by an architecure if it provides:
> -	  asm/syscall.h:
> +	  An arch should select this symbol if it provides all of these things:
> 	  - syscall_get_arch()
> 	  - syscall_get_arguments()
> 	  - syscall_rollback()
> 	  - syscall_set_return_value()
> -	  SIGSYS siginfo_t support must be implemented.
> -	  __secure_computing_int()/secure_computing()'s return value must be
> -	  checked, with -1 resulting in the syscall being skipped.
> +	  - SIGSYS siginfo_t support
> +	  - uses __secure_computing_int() or secure_computing()
> +	  - secure_computing is called from a ptrace_event()-safe context
> +	  - secure_computing return value is checked and a return value of -1
> +	    results in the system call being skipped immediately.
>
> config SECCOMP_FILTER
> 	def_bool y
> diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
> index c2f1f6a..84b3418 100644
> --- a/include/linux/ptrace.h
> +++ b/include/linux/ptrace.h
> @@ -62,8 +62,9 @@
> #define PTRACE_O_TRACEEXEC	0x00000010
> #define PTRACE_O_TRACEVFORKDONE	0x00000020
> #define PTRACE_O_TRACEEXIT	0x00000040
> +#define PTRACE_O_TRACESECCOMP	0x00000080
>
> -#define PTRACE_O_MASK		0x0000007f
> +#define PTRACE_O_MASK		0x000000ff
>
> /* Wait extended result codes for the above trace options.  */
> #define PTRACE_EVENT_FORK	1
> @@ -73,6 +74,7 @@
> #define PTRACE_EVENT_VFORK_DONE	5
> #define PTRACE_EVENT_EXIT	6
> #define PTRACE_EVENT_STOP	7
> +#define PTRACE_EVENT_SECCOMP	8

I think PTRACE_EVENT_STOP is supposed to be hidden, it's never directly
seen by user space. Instead of doing the obvious thing, they went the
crazy PTRACE_INTERRUPT + PTRACE_LISTEN way.

So it's better to add PTRACE_EVENT_SECCOMP as 8 and bump STOP one up.
But if PTRACE_EVENT_STOP is really hidden then it shouldn't show up in
the user space header at all, it should be after the ifdef KERNEL.

>
> #include <asm/ptrace.h>
>
> @@ -101,8 +103,9 @@
> #define PT_TRACE_EXEC		PT_EVENT_FLAG(PTRACE_EVENT_EXEC)
> #define PT_TRACE_VFORK_DONE	PT_EVENT_FLAG(PTRACE_EVENT_VFORK_DONE)
> #define PT_TRACE_EXIT		PT_EVENT_FLAG(PTRACE_EVENT_EXIT)
> +#define PT_TRACE_SECCOMP	PT_EVENT_FLAG(PTRACE_EVENT_SECCOMP)
>
> -#define PT_TRACE_MASK	0x000003f4
> +#define PT_TRACE_MASK	0x00000ff4

This is wrong. Shouldn't it be 0xbf4? (0x7f4 if you bump STOP up.)

>
> /* single stepping state bits (used on ARM and PA-RISC) */
> #define PT_SINGLESTEP_BIT	31
> diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
> index e6d4b56..f4c1774 100644
> --- a/include/linux/seccomp.h
> +++ b/include/linux/seccomp.h
> @@ -21,6 +21,7 @@
> #define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
> #define SECCOMP_RET_TRAP	0x00020000U /* disallow and force a SIGSYS */
> #define SECCOMP_RET_ERRNO	0x00030000U /* returns an errno */
> +#define SECCOMP_RET_TRACE	0x7ffe0000U /* pass to a tracer or disallow */
> #define SECCOMP_RET_ALLOW	0x7fff0000U /* allow */

Maybe a good idea to leave more gaps between all the return codes, in
case new return codes are added in the future that fall between existing
ones? E.g:

#define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
#define SECCOMP_RET_TRAP	0x00100000U /* disallow and force a SIGSYS */
#define SECCOMP_RET_ERRNO	0x00200000U /* returns an errno */
#define SECCOMP_RET_TRACE	0x00300000U /* pass to a tracer or disallow */
#define SECCOMP_RET_ALLOW	0x00400000U /* allow */

>
> /* Masks for the return value sections. */
> diff --git a/kernel/ptrace.c b/kernel/ptrace.c
> index 00ab2ca..8cf6da1 100644
> --- a/kernel/ptrace.c
> +++ b/kernel/ptrace.c
> @@ -551,6 +551,9 @@ static int ptrace_setoptions(struct task_struct *child, unsigned
long data)
> 	if (data & PTRACE_O_TRACEEXIT)
> 		child->ptrace |= PT_TRACE_EXIT;
>
> +	if (data & PTRACE_O_TRACESECCOMP)
> +		child->ptrace |= PT_TRACE_SECCOMP;
> +
> 	return (data & ~PTRACE_O_MASK) ? -EINVAL : 0;
> }
>
> diff --git a/kernel/seccomp.c b/kernel/seccomp.c
> index 140490a..ddacc68 100644
> --- a/kernel/seccomp.c
> +++ b/kernel/seccomp.c
> @@ -17,13 +17,13 @@
> #include <linux/audit.h>
> #include <linux/compat.h>
> #include <linux/filter.h>
> +#include <linux/ptrace.h>
> #include <linux/sched.h>
> #include <linux/seccomp.h>
> #include <linux/security.h>
> #include <linux/slab.h>
> #include <linux/uaccess.h>
>
> -#include <linux/tracehook.h>
> #include <asm/syscall.h>
>
> /* #define SECCOMP_DEBUG 1 */
> @@ -389,14 +389,24 @@ int __secure_computing_int(int this_syscall)
> 						 -(action & SECCOMP_RET_DATA),
> 						 0);
> 			return -1;
> -		case SECCOMP_RET_TRAP: {
> -			int reason_code = action & SECCOMP_RET_DATA;
> +		case SECCOMP_RET_TRAP:
> 			/* Show the handler the original registers. */
> 			syscall_rollback(current, task_pt_regs(current));
> 			/* Let the filter pass back 16 bits of data. */
> -			seccomp_send_sigsys(this_syscall, reason_code);
> +			seccomp_send_sigsys(this_syscall,
> +					    action & SECCOMP_RET_DATA);
> 			return -1;
> -		}

These are unrelated changes and probably shouldn't be here. It just makes
it harder to review the code if you change it in a later patch for no
apparent reason.

> +		case SECCOMP_RET_TRACE:
> +			/* Skip these calls if there is no tracer. */
> +			if (!ptrace_event_enabled(current,
> +						  PTRACE_EVENT_SECCOMP))

One line please, it's 81 chars.

> +				return -1;
> +			/* Allow the BPF to provide the event message */
> +			ptrace_event(PTRACE_EVENT_SECCOMP,
> +				     action & SECCOMP_RET_DATA);

Why not move "int reason_code = action & SECCOMP_RET_DATA;" to the start
of the function out of the if checks, instead of duplicating the code?

> +			if (fatal_signal_pending(current))
> +				break;
> +			return 0;
> 		case SECCOMP_RET_ALLOW:
> 			return 0;
> 		case SECCOMP_RET_KILL:

Greetings,

Indan



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-14  5:55       ` Eric Dumazet
@ 2012-03-14  7:59         ` Indan Zupancic
  2012-03-14  8:05           ` Eric Dumazet
  0 siblings, 1 reply; 40+ messages in thread
From: Indan Zupancic @ 2012-03-14  7:59 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Wed, March 14, 2012 06:55, Eric Dumazet wrote:
> On Wednesday 14 March 2012 at 06:12 +0100, Indan Zupancic wrote:
>
>> Problem is that 'old_size' can be up to 32kB in size and it would be nice
>> if that memory could be released. If it isn't, then using JIT increases
>> memory usage, while also not accounting it to the socket.
>>
>
> It is accounted for, since jit size is in relation with standard filter
> size. Check sock_kmalloc()
>
> The fact that we can have a little underestimation was already the case without
> jit, since kmalloc() does a roundup to the next power of two.

OK.

> I don't think this discussion has anything to do with SECCOMP anyway.

Certainly true.

> This accounting doesn't need to be 100% precise; we only want a limit to
> prevent rogue users from using all kernel memory.

Fair enough.

The only remaining question is, is it worth the extra code to release
up to 32kB of unused memory? It seems a waste to not free it, but if
people think it's not worth it then let's just leave it around.

>> >
>> > If it did, this kind of 'optimization' can actually be not good, because
>> > sizeof(*fp) is small enough (less than half cache line size) to trigger
>> > a possible false sharing issue. (other part of the cache line could be
>> > used to hold a often dirtied object)
>>
>> It could avoid this by allocating at least a cache size. But this is a
>> problem for all small kmalloc's, isn't it?
>
> Its a problem that was already met on several critical paths :
>
> # find net|xargs grep -n L1_CACHE_BYTES
> net/core/dev_addr_lists.c:51:	if (alloc_size < L1_CACHE_BYTES)
> net/core/dev_addr_lists.c:52:		alloc_size = L1_CACHE_BYTES;
> net/core/net-sysfs.c:586:	    RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES),
> net/core/net-sysfs.c:1111:	    XPS_DEV_MAPS_SIZE, L1_CACHE_BYTES), GFP_KERNEL);
> net/ipv6/ip6_fib.c:1612:	size = max_t(size_t, size, L1_CACHE_BYTES);
> net/ipv4/fib_frontend.c:1049:	size = max_t(size_t, size, L1_CACHE_BYTES);

So using L1_CACHE_BYTES is the more common thing to do, I'll keep that in
mind for next time.

Thanks,

Indan



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-14  7:59         ` Indan Zupancic
@ 2012-03-14  8:05           ` Eric Dumazet
  2012-03-17 10:14             ` Indan Zupancic
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Dumazet @ 2012-03-14  8:05 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Wednesday 14 March 2012 at 08:59 +0100, Indan Zupancic wrote:

> The only remaining question is, is it worth the extra code to release
> up to 32kB of unused memory? It seems a waste to not free it, but if
> people think it's not worth it then let's just leave it around.

Quite frankly it's not an issue, given that JIT BPF is not yet enabled
by default.

I am not sure all bugs were found and fixed. I would warn users before
considering using it in production.

If you have time, I would appreciate it if you could double check and
find the last bugs in it.




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] Re: [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support
  2012-03-14  7:31   ` Indan Zupancic
@ 2012-03-14 15:03     ` Will Drewry
  2012-03-14 15:52       ` Will Drewry
  2012-03-15 20:31     ` [PATCH v16 11/13] " Will Drewry
  1 sibling, 1 reply; 40+ messages in thread
From: Will Drewry @ 2012-03-14 15:03 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, linux-arch, linux-doc, netdev, x86, arnd, davem,
	hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto, eparis,
	serge.hallyn, djm, scarybeasts, pmoore, akpm, corbet,
	eric.dumazet, markus, coreyb, keescook

On Wed, Mar 14, 2012 at 2:31 AM, Indan Zupancic <indan@nul.nu> wrote:
> Hello,
>
> On Mon, March 12, 2012 22:28, Will Drewry wrote:
>> This change adds support for a new ptrace option, PTRACE_O_TRACESECCOMP,
>> and a new return value for seccomp BPF programs, SECCOMP_RET_TRACE.
>>
>> When a tracer specifies the PTRACE_O_TRACESECCOMP ptrace option, the
>> tracer will be notified for any syscall that results in a BPF program
>> returning SECCOMP_RET_TRACE.  The 16-bit SECCOMP_RET_DATA mask of the
>> BPF program return value will be passed as the ptrace_message and may be
>> retrieved using PTRACE_GETEVENTMSG.
>
> Maybe good to mention that it gets notified with PTRACE_EVENT_SECCOMP.

Good call - I'll update it.

>>
>> If the subordinate process is not using seccomp filter, then no
>> system call notifications will occur even if the option is specified.
>>
>> If there is no attached tracer when SECCOMP_RET_TRACE is returned,
>> the system call will not be executed and an -ENOSYS errno will be
>> returned to userspace.
>
> When no tracer with PTRACE_O_TRACESECCOMP set is attached?
> (Because that's what the code is doing.)

That too :)

>>
>> This change adds a dependency on the system call slow path.  Any future
>> efforts to use the system call fast path for seccomp filter will need to
>> address this restriction.
>>
>> v14: - rebase/nochanges
>> v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
>>       (Brings back a change to ptrace.c and the masks.)
>> v12: - rebase to linux-next
>>     - use ptrace_event and update arch/Kconfig to mention slow-path dependency
>>     - drop all tracehook changes and inclusion (oleg@redhat.com)
>> v11: - invert the logic to just make it a PTRACE_SYSCALL accelerator
>>       (indan@nul.nu)
>> v10: - moved to PTRACE_O_SECCOMP / PT_TRACE_SECCOMP
>> v9:  - n/a
>> v8:  - guarded PTRACE_SECCOMP use with an ifdef
>> v7:  - introduced
>>
>> Signed-off-by: Will Drewry <wad@chromium.org>
>> ---
>> arch/Kconfig            |   11 ++++++-----
>> include/linux/ptrace.h  |    7 +++++--
>> include/linux/seccomp.h |    1 +
>> kernel/ptrace.c         |    3 +++
>> kernel/seccomp.c        |   20 +++++++++++++++-----
>> 5 files changed, 30 insertions(+), 12 deletions(-)
>>
>> diff --git a/arch/Kconfig b/arch/Kconfig
>> index d92a78e..3f8132c 100644
>> --- a/arch/Kconfig
>> +++ b/arch/Kconfig
>> @@ -202,15 +202,16 @@ config HAVE_CMPXCHG_DOUBLE
>> config HAVE_ARCH_SECCOMP_FILTER
>>       bool
>>       help
>> -       This symbol should be selected by an architecure if it provides:
>> -       asm/syscall.h:
>> +       An arch should select this symbol if it provides all of these things:
>>         - syscall_get_arch()
>>         - syscall_get_arguments()
>>         - syscall_rollback()
>>         - syscall_set_return_value()
>> -       SIGSYS siginfo_t support must be implemented.
>> -       __secure_computing_int()/secure_computing()'s return value must be
>> -       checked, with -1 resulting in the syscall being skipped.
>> +       - SIGSYS siginfo_t support
>> +       - uses __secure_computing_int() or secure_computing()
>> +       - secure_computing is called from a ptrace_event()-safe context
>> +       - secure_computing return value is checked and a return value of -1
>> +         results in the system call being skipped immediately.
>>
>> config SECCOMP_FILTER
>>       def_bool y
>> diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
>> index c2f1f6a..84b3418 100644
>> --- a/include/linux/ptrace.h
>> +++ b/include/linux/ptrace.h
>> @@ -62,8 +62,9 @@
>> #define PTRACE_O_TRACEEXEC    0x00000010
>> #define PTRACE_O_TRACEVFORKDONE       0x00000020
>> #define PTRACE_O_TRACEEXIT    0x00000040
>> +#define PTRACE_O_TRACESECCOMP        0x00000080
>>
>> -#define PTRACE_O_MASK                0x0000007f
>> +#define PTRACE_O_MASK                0x000000ff
>>
>> /* Wait extended result codes for the above trace options.  */
>> #define PTRACE_EVENT_FORK     1
>> @@ -73,6 +74,7 @@
>> #define PTRACE_EVENT_VFORK_DONE       5
>> #define PTRACE_EVENT_EXIT     6
>> #define PTRACE_EVENT_STOP     7
>> +#define PTRACE_EVENT_SECCOMP 8
>
> I think PTRACE_EVENT_STOP is supposed to be hidden, it's never directly
> seen by user space. Instead of doing the obvious thing, they went the
> crazy PTRACE_INTERRUPT + PTRACE_LISTEN way.
>
> So it's better to add PTRACE_EVENT_SECCOMP as 8 and bump STOP one up.
> But if PTRACE_EVENT_STOP is really hidden then it shouldn't show up in
> the user space header at all, it should be after the ifdef KERNEL.

yeah that's in the -next branch so it will be fixed on merge resolve.

>>
>> #include <asm/ptrace.h>
>>
>> @@ -101,8 +103,9 @@
>> #define PT_TRACE_EXEC         PT_EVENT_FLAG(PTRACE_EVENT_EXEC)
>> #define PT_TRACE_VFORK_DONE   PT_EVENT_FLAG(PTRACE_EVENT_VFORK_DONE)
>> #define PT_TRACE_EXIT         PT_EVENT_FLAG(PTRACE_EVENT_EXIT)
>> +#define PT_TRACE_SECCOMP     PT_EVENT_FLAG(PTRACE_EVENT_SECCOMP)
>>
>> -#define PT_TRACE_MASK        0x000003f4
>> +#define PT_TRACE_MASK        0x00000ff4
>
> This is wrong. Shouldn't it be 0xbf4? (0x7f4 if you bump STOP up.)

That would be if I bump STOP up, but in the resolved version the masks simplify.

>
>>
>> /* single stepping state bits (used on ARM and PA-RISC) */
>> #define PT_SINGLESTEP_BIT     31
>> diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
>> index e6d4b56..f4c1774 100644
>> --- a/include/linux/seccomp.h
>> +++ b/include/linux/seccomp.h
>> @@ -21,6 +21,7 @@
>> #define SECCOMP_RET_KILL      0x00000000U /* kill the task immediately */
>> #define SECCOMP_RET_TRAP      0x00020000U /* disallow and force a SIGSYS */
>> #define SECCOMP_RET_ERRNO     0x00030000U /* returns an errno */
>> +#define SECCOMP_RET_TRACE    0x7ffe0000U /* pass to a tracer or disallow */
>> #define SECCOMP_RET_ALLOW     0x7fff0000U /* allow */
>
> Maybe a good idea to leave more gaps between all the return codes, in
> case new return codes are added in the future that fall between existing
> ones? E.g:
>
> #define SECCOMP_RET_KILL        0x00000000U /* kill the task immediately */
> #define SECCOMP_RET_TRAP        0x00100000U /* disallow and force a SIGSYS */
> #define SECCOMP_RET_ERRNO       0x00200000U /* returns an errno */
> #define SECCOMP_RET_TRACE       0x00300000U /* pass to a tracer or disallow */
> #define SECCOMP_RET_ALLOW       0x00400000U /* allow */

The gaps are intentional.  Priority is from least permissive (0) to
most permissive (0x7fff).  So the gaps are
there for things between errno and trace. I could add more gaps on
those sides, but I don't think there is much need to given the scope
of what is possible in the middle.  I certainly wouldn't push ALLOW
down to the bottom.
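
To make that concrete, here is how a return value decomposes with the
v14 constants (just a sketch; SECCOMP_RET_ACTION and SECCOMP_RET_DATA
are the masks from the hunk below):

	u32 ret    = SECCOMP_RET_TRACE | 0x0042;  /* tag for the tracer   */
	u32 action = ret & SECCOMP_RET_ACTION;    /* == SECCOMP_RET_TRACE */
	u32 data   = ret & SECCOMP_RET_DATA;      /* == 0x0042, 16 bits   */

Lower action values are less permissive, so the ordering is
KILL < TRAP < ERRNO < TRACE < ALLOW.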

>>
>> /* Masks for the return value sections. */
>> diff --git a/kernel/ptrace.c b/kernel/ptrace.c
>> index 00ab2ca..8cf6da1 100644
>> --- a/kernel/ptrace.c
>> +++ b/kernel/ptrace.c
>> @@ -551,6 +551,9 @@ static int ptrace_setoptions(struct task_struct *child, unsigned
> long data)
>>       if (data & PTRACE_O_TRACEEXIT)
>>               child->ptrace |= PT_TRACE_EXIT;
>>
>> +     if (data & PTRACE_O_TRACESECCOMP)
>> +             child->ptrace |= PT_TRACE_SECCOMP;
>> +
>>       return (data & ~PTRACE_O_MASK) ? -EINVAL : 0;
>> }
>>
>> diff --git a/kernel/seccomp.c b/kernel/seccomp.c
>> index 140490a..ddacc68 100644
>> --- a/kernel/seccomp.c
>> +++ b/kernel/seccomp.c
>> @@ -17,13 +17,13 @@
>> #include <linux/audit.h>
>> #include <linux/compat.h>
>> #include <linux/filter.h>
>> +#include <linux/ptrace.h>
>> #include <linux/sched.h>
>> #include <linux/seccomp.h>
>> #include <linux/security.h>
>> #include <linux/slab.h>
>> #include <linux/uaccess.h>
>>
>> -#include <linux/tracehook.h>
>> #include <asm/syscall.h>
>>
>> /* #define SECCOMP_DEBUG 1 */
>> @@ -389,14 +389,24 @@ int __secure_computing_int(int this_syscall)
>>                                                -(action & SECCOMP_RET_DATA),
>>                                                0);
>>                       return -1;
>> -             case SECCOMP_RET_TRAP: {
>> -                     int reason_code = action & SECCOMP_RET_DATA;
>> +             case SECCOMP_RET_TRAP:
>>                       /* Show the handler the original registers. */
>>                       syscall_rollback(current, task_pt_regs(current));
>>                       /* Let the filter pass back 16 bits of data. */
>> -                     seccomp_send_sigsys(this_syscall, reason_code);
>> +                     seccomp_send_sigsys(this_syscall,
>> +                                         action & SECCOMP_RET_DATA);
>>                       return -1;
>> -             }
>
> These are unrelated changes and probably shouldn't be here. It just makes
> it harder to review the code if you change it in a later patch for no
> apparent reason.
>
>> +             case SECCOMP_RET_TRACE:
>> +                     /* Skip these calls if there is no tracer. */
>> +                     if (!ptrace_event_enabled(current,
>> +                                               PTRACE_EVENT_SECCOMP))
>
> One line please, it's 81 chars.

Ok :)

>> +                             return -1;
>> +                     /* Allow the BPF to provide the event message */
>> +                     ptrace_event(PTRACE_EVENT_SECCOMP,
>> +                                  action & SECCOMP_RET_DATA);
>
> Why not move "int reason_code = action & SECCOMP_RET_DATA;" to the start
> of the function out of the if checks, instead of duplicating the code?

Yeah I am cleaning this all up in the next rev. This was just ugly.

>> +                     if (fatal_signal_pending(current))
>> +                             break;
>> +                     return 0;
>>               case SECCOMP_RET_ALLOW:
>>                       return 0;
>>               case SECCOMP_RET_KILL:
>

Thanks!
will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] Re: [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support
  2012-03-14 15:03     ` [kernel-hardening] " Will Drewry
@ 2012-03-14 15:52       ` Will Drewry
  0 siblings, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-14 15:52 UTC (permalink / raw)
  To: kernel-hardening
  Cc: linux-kernel, linux-arch, linux-doc, netdev, x86, arnd, davem,
	hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto, eparis,
	serge.hallyn, djm, scarybeasts, pmoore, akpm, corbet,
	eric.dumazet, markus, coreyb, keescook

On Wed, Mar 14, 2012 at 10:03 AM, Will Drewry <wad@chromium.org> wrote:
> On Wed, Mar 14, 2012 at 2:31 AM, Indan Zupancic <indan@nul.nu> wrote:
>> Hello,
>>
>> On Mon, March 12, 2012 22:28, Will Drewry wrote:
>>> This change adds support for a new ptrace option, PTRACE_O_TRACESECCOMP,
>>> and a new return value for seccomp BPF programs, SECCOMP_RET_TRACE.
>>>
>>> When a tracer specifies the PTRACE_O_TRACESECCOMP ptrace option, the
>>> tracer will be notified for any syscall that results in a BPF program
>>> returning SECCOMP_RET_TRACE.  The 16-bit SECCOMP_RET_DATA mask of the
>>> BPF program return value will be passed as the ptrace_message and may be
>>> retrieved using PTRACE_GETEVENTMSG.
>>
>> Maybe good to tell it gets notified with PTRACE_EVENT_SECCOMP.
>
> Good call - I'll update it.
>
>>>
>>> If the subordinate process is not using seccomp filter, then no
>>> system call notifications will occur even if the option is specified.
>>>
>>> If there is no attached tracer when SECCOMP_RET_TRACE is returned,
>>> the system call will not be executed and an -ENOSYS errno will be
>>> returned to userspace.
>>
>> When no tracer with PTRACE_O_TRACESECCOMP set is attached?
>> (Because that's what the code is doing.)
>
> That too :)
>
>>>
>>> This change adds a dependency on the system call slow path.  Any future
>>> efforts to use the system call fast path for seccomp filter will need to
>>> address this restriction.
>>>
>>> v14: - rebase/nochanges
>>> v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
>>>       (Brings back a change to ptrace.c and the masks.)
>>> v12: - rebase to linux-next
>>>     - use ptrace_event and update arch/Kconfig to mention slow-path dependency
>>>     - drop all tracehook changes and inclusion (oleg@redhat.com)
>>> v11: - invert the logic to just make it a PTRACE_SYSCALL accelerator
>>>       (indan@nul.nu)
>>> v10: - moved to PTRACE_O_SECCOMP / PT_TRACE_SECCOMP
>>> v9:  - n/a
>>> v8:  - guarded PTRACE_SECCOMP use with an ifdef
>>> v7:  - introduced
>>>
>>> Signed-off-by: Will Drewry <wad@chromium.org>
>>> ---
>>> arch/Kconfig            |   11 ++++++-----
>>> include/linux/ptrace.h  |    7 +++++--
>>> include/linux/seccomp.h |    1 +
>>> kernel/ptrace.c         |    3 +++
>>> kernel/seccomp.c        |   20 +++++++++++++++-----
>>> 5 files changed, 30 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/arch/Kconfig b/arch/Kconfig
>>> index d92a78e..3f8132c 100644
>>> --- a/arch/Kconfig
>>> +++ b/arch/Kconfig
>>> @@ -202,15 +202,16 @@ config HAVE_CMPXCHG_DOUBLE
>>> config HAVE_ARCH_SECCOMP_FILTER
>>>       bool
>>>       help
>>> -       This symbol should be selected by an architecure if it provides:
>>> -       asm/syscall.h:
>>> +       An arch should select this symbol if it provides all of these things:
>>>         - syscall_get_arch()
>>>         - syscall_get_arguments()
>>>         - syscall_rollback()
>>>         - syscall_set_return_value()
>>> -       SIGSYS siginfo_t support must be implemented.
>>> -       __secure_computing_int()/secure_computing()'s return value must be
>>> -       checked, with -1 resulting in the syscall being skipped.
>>> +       - SIGSYS siginfo_t support
>>> +       - uses __secure_computing_int() or secure_computing()
>>> +       - secure_computing is called from a ptrace_event()-safe context
>>> +       - secure_computing return value is checked and a return value of -1
>>> +         results in the system call being skipped immediately.
>>>
>>> config SECCOMP_FILTER
>>>       def_bool y
>>> diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
>>> index c2f1f6a..84b3418 100644
>>> --- a/include/linux/ptrace.h
>>> +++ b/include/linux/ptrace.h
>>> @@ -62,8 +62,9 @@
>>> #define PTRACE_O_TRACEEXEC    0x00000010
>>> #define PTRACE_O_TRACEVFORKDONE       0x00000020
>>> #define PTRACE_O_TRACEEXIT    0x00000040
>>> +#define PTRACE_O_TRACESECCOMP        0x00000080
>>>
>>> -#define PTRACE_O_MASK                0x0000007f
>>> +#define PTRACE_O_MASK                0x000000ff
>>>
>>> /* Wait extended result codes for the above trace options.  */
>>> #define PTRACE_EVENT_FORK     1
>>> @@ -73,6 +74,7 @@
>>> #define PTRACE_EVENT_VFORK_DONE       5
>>> #define PTRACE_EVENT_EXIT     6
>>> #define PTRACE_EVENT_STOP     7
>>> +#define PTRACE_EVENT_SECCOMP 8
>>
>> I think PTRACE_EVENT_STOP is supposed to be hidden, it's never directly
>> seen by user space. Instead of doing the obvious thing, they went the
>> crazy PTRACE_INTERRUPT + PTRACE_LISTEN way.
>>
>> So it's better to add PTRACE_EVENT_SECCOMP as 8 and bump STOP one up.
>> But if PTRACE_EVENT_STOP is really hidden then it shouldn't show up in
>> the user space header at all, it should be after the ifdef KERNEL.
>
> yeah that's in the -next branch so it will be fixed on merge resolve.
>
>>>
>>> #include <asm/ptrace.h>
>>>
>>> @@ -101,8 +103,9 @@
>>> #define PT_TRACE_EXEC         PT_EVENT_FLAG(PTRACE_EVENT_EXEC)
>>> #define PT_TRACE_VFORK_DONE   PT_EVENT_FLAG(PTRACE_EVENT_VFORK_DONE)
>>> #define PT_TRACE_EXIT         PT_EVENT_FLAG(PTRACE_EVENT_EXIT)
>>> +#define PT_TRACE_SECCOMP     PT_EVENT_FLAG(PTRACE_EVENT_SECCOMP)
>>>
>>> -#define PT_TRACE_MASK        0x000003f4
>>> +#define PT_TRACE_MASK        0x00000ff4
>>
>> This is wrong. Shouldn't it be 0xbf4? (0x7f4 if you bump STOP up.)
>
> If I bump stop up, but in the resolved version the masks simplify.

Oops. To answer the question about 0xbf4 versus 0xff4: 0xbf4 would
make sense for filtering out STOP, but it seemed like overkill because
STOP has already been moved out of the way in -next:
http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=blob;f=include/linux/ptrace.h;h=5c719627c2aa7bc58431ba5aa66195f9f89113ee;hb=c3381bf07c37799a7893c39469ad892322f9ded2#l62

Merge should resolve this issue entirely.

>
>>
>>>
>>> /* single stepping state bits (used on ARM and PA-RISC) */
>>> #define PT_SINGLESTEP_BIT     31
>>> diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
>>> index e6d4b56..f4c1774 100644
>>> --- a/include/linux/seccomp.h
>>> +++ b/include/linux/seccomp.h
>>> @@ -21,6 +21,7 @@
>>> #define SECCOMP_RET_KILL      0x00000000U /* kill the task immediately */
>>> #define SECCOMP_RET_TRAP      0x00020000U /* disallow and force a SIGSYS */
>>> #define SECCOMP_RET_ERRNO     0x00030000U /* returns an errno */
>>> +#define SECCOMP_RET_TRACE    0x7ffe0000U /* pass to a tracer or disallow */
>>> #define SECCOMP_RET_ALLOW     0x7fff0000U /* allow */
>>
>> Maybe a good idea to leave more gaps between all the return codes, in
>> case new return codes are added in the future that fall between existing
>> ones? E.g:
>>
>> #define SECCOMP_RET_KILL        0x00000000U /* kill the task immediately */
>> #define SECCOMP_RET_TRAP        0x00100000U /* disallow and force a SIGSYS */
>> #define SECCOMP_RET_ERRNO       0x00200000U /* returns an errno */
>> #define SECCOMP_RET_TRACE       0x00300000U /* pass to a tracer or disallow */
>> #define SECCOMP_RET_ALLOW       0x00400000U /* allow */
>
> The gaps are intentional.  Priority is from least permissive (0) to
> most permissive (0x7fff).  So the gaps are
> there for things between errno and trace. I could add more gaps on
> those sides, but I don't think there is much need to given the scope
> of what is possible in the middle.  I certainly wouldn't push ALLOW
> down to the bottom.
>
>>>
>>> /* Masks for the return value sections. */
>>> diff --git a/kernel/ptrace.c b/kernel/ptrace.c
>>> index 00ab2ca..8cf6da1 100644
>>> --- a/kernel/ptrace.c
>>> +++ b/kernel/ptrace.c
>>> @@ -551,6 +551,9 @@ static int ptrace_setoptions(struct task_struct *child, unsigned
>> long data)
>>>       if (data & PTRACE_O_TRACEEXIT)
>>>               child->ptrace |= PT_TRACE_EXIT;
>>>
>>> +     if (data & PTRACE_O_TRACESECCOMP)
>>> +             child->ptrace |= PT_TRACE_SECCOMP;
>>> +
>>>       return (data & ~PTRACE_O_MASK) ? -EINVAL : 0;
>>> }
>>>
>>> diff --git a/kernel/seccomp.c b/kernel/seccomp.c
>>> index 140490a..ddacc68 100644
>>> --- a/kernel/seccomp.c
>>> +++ b/kernel/seccomp.c
>>> @@ -17,13 +17,13 @@
>>> #include <linux/audit.h>
>>> #include <linux/compat.h>
>>> #include <linux/filter.h>
>>> +#include <linux/ptrace.h>
>>> #include <linux/sched.h>
>>> #include <linux/seccomp.h>
>>> #include <linux/security.h>
>>> #include <linux/slab.h>
>>> #include <linux/uaccess.h>
>>>
>>> -#include <linux/tracehook.h>
>>> #include <asm/syscall.h>
>>>
>>> /* #define SECCOMP_DEBUG 1 */
>>> @@ -389,14 +389,24 @@ int __secure_computing_int(int this_syscall)
>>>                                                -(action & SECCOMP_RET_DATA),
>>>                                                0);
>>>                       return -1;
>>> -             case SECCOMP_RET_TRAP: {
>>> -                     int reason_code = action & SECCOMP_RET_DATA;
>>> +             case SECCOMP_RET_TRAP:
>>>                       /* Show the handler the original registers. */
>>>                       syscall_rollback(current, task_pt_regs(current));
>>>                       /* Let the filter pass back 16 bits of data. */
>>> -                     seccomp_send_sigsys(this_syscall, reason_code);
>>> +                     seccomp_send_sigsys(this_syscall,
>>> +                                         action & SECCOMP_RET_DATA);
>>>                       return -1;
>>> -             }
>>
>> These are unrelated changes and probably shouldn't be here. It just makes
>> it harder to review the code if you change it in a later patch for no
>> apparent reason.
>>
>>> +             case SECCOMP_RET_TRACE:
>>> +                     /* Skip these calls if there is no tracer. */
>>> +                     if (!ptrace_event_enabled(current,
>>> +                                               PTRACE_EVENT_SECCOMP))
>>
>> One line please, it's 81 chars.
>
> Ok :)
>
>>> +                             return -1;
>>> +                     /* Allow the BPF to provide the event message */
>>> +                     ptrace_event(PTRACE_EVENT_SECCOMP,
>>> +                                  action & SECCOMP_RET_DATA);
>>
>> Why not move "int reason_code = action & SECCOMP_RET_DATA;" to the start
>> of the function out of the if checks, instead of duplicating the code?
>
> Yeah I am cleaning this all up in the next rev. This was just ugly.
>
>>> +                     if (fatal_signal_pending(current))
>>> +                             break;
>>> +                     return 0;
>>>               case SECCOMP_RET_ALLOW:
>>>                       return 0;
>>>               case SECCOMP_RET_KILL:
>>
>
> Thanks!
> will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v16 11/13] ptrace,seccomp: Add PTRACE_SECCOMP support
  2012-03-14  7:31   ` Indan Zupancic
  2012-03-14 15:03     ` [kernel-hardening] " Will Drewry
@ 2012-03-15 20:31     ` Will Drewry
  1 sibling, 0 replies; 40+ messages in thread
From: Will Drewry @ 2012-03-15 20:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arch, linux-doc, kernel-hardening, netdev, x86, arnd,
	davem, hpa, mingo, oleg, peterz, rdunlap, mcgrathr, tglx, luto,
	eparis, serge.hallyn, djm, scarybeasts, indan, pmoore, akpm,
	corbet, eric.dumazet, markus, coreyb, keescook, Will Drewry

~~
Only this file changed from v15.  However, the whole v16 series may be
pulled from
  https://github.com/redpig/linux/tree/seccomp%2Fv16%2Fposted
The seccomp/v16/posted tag is GPG-signed.
~~

This change adds support for a new ptrace option, PTRACE_O_TRACESECCOMP,
and a new return value for seccomp BPF programs, SECCOMP_RET_TRACE.

When a tracer specifies the PTRACE_O_TRACESECCOMP ptrace option, the
tracer will be notified, via PTRACE_EVENT_SECCOMP, for any syscall that
results in a BPF program returning SECCOMP_RET_TRACE.  The 16-bit
SECCOMP_RET_DATA mask of the BPF program return value will be passed as
the ptrace_message and may be retrieved using PTRACE_GETEVENTMSG.
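
For illustration only (not part of this patch), a minimal tracer using
the new option could look roughly like the sketch below.  It assumes
the tracee is already attached and stopped, has installed a filter that
returns SECCOMP_RET_TRACE | 0x42 for some syscall, and that the
PTRACE_O_TRACESECCOMP/PTRACE_EVENT_SECCOMP values come from headers
patched by this series:

	#include <sys/types.h>
	#include <sys/ptrace.h>
	#include <sys/wait.h>
	#include <signal.h>
	#include <stdio.h>

	static void trace_loop(pid_t child)
	{
		int status;
		unsigned long msg;

		ptrace(PTRACE_SETOPTIONS, child, 0, PTRACE_O_TRACESECCOMP);
		ptrace(PTRACE_CONT, child, 0, 0);
		while (waitpid(child, &status, 0) == child && !WIFEXITED(status)) {
			if (WIFSTOPPED(status) &&
			    (status >> 8) == (SIGTRAP | (PTRACE_EVENT_SECCOMP << 8))) {
				/* the 16-bit SECCOMP_RET_DATA of the filter */
				ptrace(PTRACE_GETEVENTMSG, child, 0, &msg);
				printf("SECCOMP_RET_DATA = 0x%lx\n", msg); /* 0x42 */
			}
			ptrace(PTRACE_CONT, child, 0, 0);
		}
	}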

If the subordinate process is not using seccomp filter, then no
system call notifications will occur even if the option is specified.

If there is no tracer with PTRACE_O_TRACESECCOMP when SECCOMP_RET_TRACE
is returned, the system call will not be executed and an -ENOSYS errno
will be returned to userspace.

This change adds a dependency on the system call slow path.  Any future
efforts to use the system call fast path for seccomp filter will need to
address this restriction.

v16: - update PT_TRACE_MASK to 0xbf4 so that STOP isn't cleared on a
       SETOPTIONS call (indan@nul.nu)
       [Note, PT_TRACE_MASK disappears in linux-next]
v15: - add audit support for non-zero return codes
     - clean up style (indan@nul.nu)
v14: - rebase/nochanges
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
       (Brings back a change to ptrace.c and the masks.)
v12: - rebase to linux-next
     - use ptrace_event and update arch/Kconfig to mention slow-path dependency
     - drop all tracehook changes and inclusion (oleg@redhat.com)
v11: - invert the logic to just make it a PTRACE_SYSCALL accelerator
       (indan@nul.nu)
v10: - moved to PTRACE_O_SECCOMP / PT_TRACE_SECCOMP
v9:  - n/a
v8:  - guarded PTRACE_SECCOMP use with an ifdef
v7:  - introduced

Signed-off-by: Will Drewry <wad@chromium.org>
---
 arch/Kconfig            |   11 ++++++-----
 include/linux/ptrace.h  |    7 +++++--
 include/linux/seccomp.h |    1 +
 kernel/ptrace.c         |    3 +++
 kernel/seccomp.c        |   13 +++++++++++--
 5 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index d92a78e..3f8132c 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -202,15 +202,16 @@ config HAVE_CMPXCHG_DOUBLE
 config HAVE_ARCH_SECCOMP_FILTER
 	bool
 	help
-	  This symbol should be selected by an architecure if it provides:
-	  asm/syscall.h:
+	  An arch should select this symbol if it provides all of these things:
 	  - syscall_get_arch()
 	  - syscall_get_arguments()
 	  - syscall_rollback()
 	  - syscall_set_return_value()
-	  SIGSYS siginfo_t support must be implemented.
-	  __secure_computing_int()/secure_computing()'s return value must be
-	  checked, with -1 resulting in the syscall being skipped.
+	  - SIGSYS siginfo_t support
+	  - uses __secure_computing_int() or secure_computing()
+	  - secure_computing is called from a ptrace_event()-safe context
+	  - secure_computing return value is checked and a return value of -1
+	    results in the system call being skipped immediately.
 
 config SECCOMP_FILTER
 	def_bool y
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index c2f1f6a..3bf7ee9 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -62,8 +62,9 @@
 #define PTRACE_O_TRACEEXEC	0x00000010
 #define PTRACE_O_TRACEVFORKDONE	0x00000020
 #define PTRACE_O_TRACEEXIT	0x00000040
+#define PTRACE_O_TRACESECCOMP	0x00000080
 
-#define PTRACE_O_MASK		0x0000007f
+#define PTRACE_O_MASK		0x000000ff
 
 /* Wait extended result codes for the above trace options.  */
 #define PTRACE_EVENT_FORK	1
@@ -73,6 +74,7 @@
 #define PTRACE_EVENT_VFORK_DONE	5
 #define PTRACE_EVENT_EXIT	6
 #define PTRACE_EVENT_STOP	7
+#define PTRACE_EVENT_SECCOMP	8
 
 #include <asm/ptrace.h>
 
@@ -101,8 +103,9 @@
 #define PT_TRACE_EXEC		PT_EVENT_FLAG(PTRACE_EVENT_EXEC)
 #define PT_TRACE_VFORK_DONE	PT_EVENT_FLAG(PTRACE_EVENT_VFORK_DONE)
 #define PT_TRACE_EXIT		PT_EVENT_FLAG(PTRACE_EVENT_EXIT)
+#define PT_TRACE_SECCOMP	PT_EVENT_FLAG(PTRACE_EVENT_SECCOMP)
 
-#define PT_TRACE_MASK	0x000003f4
+#define PT_TRACE_MASK	0x00000bf4
 
 /* single stepping state bits (used on ARM and PA-RISC) */
 #define PT_SINGLESTEP_BIT	31
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index a5078fb..306733e 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -21,6 +21,7 @@
 #define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
 #define SECCOMP_RET_TRAP	0x00030000U /* disallow and force a SIGSYS */
 #define SECCOMP_RET_ERRNO	0x00050000U /* returns an errno */
+#define SECCOMP_RET_TRACE	0x7ff00000U /* pass to a tracer or disallow */
 #define SECCOMP_RET_ALLOW	0x7fff0000U /* allow */
 
 /* Masks for the return value sections. */
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 00ab2ca..8cf6da1 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -551,6 +551,9 @@ static int ptrace_setoptions(struct task_struct *child, unsigned long data)
 	if (data & PTRACE_O_TRACEEXIT)
 		child->ptrace |= PT_TRACE_EXIT;
 
+	if (data & PTRACE_O_TRACESECCOMP)
+		child->ptrace |= PT_TRACE_SECCOMP;
+
 	return (data & ~PTRACE_O_MASK) ? -EINVAL : 0;
 }
 
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index fb3b993..725e83e 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -17,13 +17,13 @@
 #include <linux/audit.h>
 #include <linux/compat.h>
 #include <linux/filter.h>
+#include <linux/ptrace.h>
 #include <linux/sched.h>
 #include <linux/seccomp.h>
 #include <linux/security.h>
 #include <linux/slab.h>
 #include <linux/uaccess.h>
 
-#include <linux/tracehook.h>
 #include <asm/syscall.h>
 
 /* #define SECCOMP_DEBUG 1 */
@@ -398,7 +398,7 @@ int __secure_computing_int(int this_syscall)
 	case SECCOMP_MODE_FILTER:
 		ret = seccomp_run_filters(this_syscall);
 		data = ret & SECCOMP_RET_DATA;
-		switch (code & SECCOMP_RET_ACTION) {
+		switch (ret & SECCOMP_RET_ACTION) {
 		case SECCOMP_RET_ERRNO:
 			/* Set the low-order 16-bits as a errno. */
 			syscall_set_return_value(current, task_pt_regs(current),
@@ -410,6 +410,15 @@ int __secure_computing_int(int this_syscall)
 			/* Let the filter pass back 16 bits of data. */
 			seccomp_send_sigsys(this_syscall, data);
 			goto skip;
+		case SECCOMP_RET_TRACE:
+			/* Skip these calls if there is no tracer. */
+			if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP))
+				goto skip;
+			/* Allow the BPF to provide the event message */
+			ptrace_event(PTRACE_EVENT_SECCOMP, data);
+			if (fatal_signal_pending(current))
+				break;
+			return 0;
 		case SECCOMP_RET_ALLOW:
 			return 0;
 		case SECCOMP_RET_KILL:
-- 
1.7.5.4


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-14  8:05           ` Eric Dumazet
@ 2012-03-17 10:14             ` Indan Zupancic
  2012-03-17 13:49               ` Eric Dumazet
  0 siblings, 1 reply; 40+ messages in thread
From: Indan Zupancic @ 2012-03-17 10:14 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Wed, March 14, 2012 19:05, Eric Dumazet wrote:
> Le mercredi 14 mars 2012 à 08:59 +0100, Indan Zupancic a écrit :
>
>> The only remaining question is, is it worth the extra code to release
>> up to 32kB of unused memory? It seems a waste to not free it, but if
>> people think it's not worth it then let's just leave it around.
>
> Quite frankly its not an issue, given JIT BPF is not yet default
> enabled.

And what if JIT BPF were enabled by default?

> I am not sure all bugs were found and fixed. I would warn users before
> considering using it in production.
>
> If you have time, I would appreciate if you could double check and find
> last bugs in it.

I'll do my best.

The current JIT doesn't handle negative offsets: The stuff that's handled
by __load_pointer(). Easiest solution would be to make it non-static and
call it instead of doing bpf_error. I guess __load_pointer was added later
and the JIT code didn't get updated.

But gcc refuses to inline load_pointer; instead it inlines __load_pointer
and does the important checks first. Considering the current assembly code
does a call too, it could just as well call load_pointer() directly. That
would save a lot of assembly code, handle all negative cases too and be
pretty much the same speed. The only question is whether this would slow
down archs other than x86. What do you think?

The EMIT_COND_JMP(f_op, f_offset); should be in an else case, otherwise
it's superfluous. It's a harmless bug though. I haven't spotted anything
else yet.

You can get rid of all the "if (is_imm8(offsetof(struct sk_buff, len)))"
code by making sure everything is near: Somewhere at the start, just
add 127 to %rdi and a BUILD_BUG_ON(sizeof(struct sk_buff) > 255).

I'll write some more patches tomorrow.

Greetings,

Indan

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c1b765..7e0f575 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -581,8 +581,9 @@ cond_branch:			f_offset = addrs[i + filter[i].jf] - addrs[i];
 					if (filter[i].jf)
 						EMIT_JMP(f_offset);
 					break;
+				} else {
+					EMIT_COND_JMP(f_op, f_offset);
 				}
-				EMIT_COND_JMP(f_op, f_offset);
 				break;
 			default:
 				/* hmm, too complex filter, give up with jit compiler */




^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-17 10:14             ` Indan Zupancic
@ 2012-03-17 13:49               ` Eric Dumazet
  2012-03-18  8:35                 ` Indan Zupancic
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Dumazet @ 2012-03-17 13:49 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

Le samedi 17 mars 2012 à 21:14 +1100, Indan Zupancic a écrit :
> On Wed, March 14, 2012 19:05, Eric Dumazet wrote:
> > Le mercredi 14 mars 2012 à 08:59 +0100, Indan Zupancic a écrit :
> >
> >> The only remaining question is, is it worth the extra code to release
> >> up to 32kB of unused memory? It seems a waste to not free it, but if
> >> people think it's not worth it then let's just leave it around.
> >
> > Quite frankly its not an issue, given JIT BPF is not yet default
> > enabled.
> 
> And what if assuming JIT BPF would be default enabled?
> 

OK, so here are the reasons why I chose not doing this :
---------------------------------------------------------

1) When I wrote this code, I _wanted_ to keep the original BPF around
for post-mortem analysis. When we are 100% confident the code is bug
free, we might remove the "BPF source code", but I am not convinced.

2) Most filters are less than 1 Kbyte, and who runs thousands of BPF
network filters on a machine? Do you have real cases? Because in those
cases, the vmalloc() PAGE granularity might be a problem anyway.


Some filters are set up for a very short period of time...
(tcpdump, for example, sets up a "ret 0" filter at the very beginning
of a capture). Doing the extra kmalloc()/copy/kfree() is a loss.

tcpdump -n -s 0 -c 1000 arp

[29211.083449] JIT code: ffffffffa0cbe000: 31 c0 c3
[29211.083481] flen=4 proglen=55 pass=3 image=ffffffffa0cc0000
[29211.083487] JIT code: ffffffffa0cc0000: 55 48 89 e5 48 83 ec 60 48 89 5d f8 44 8b 4f 68
[29211.083494] JIT code: ffffffffa0cc0010: 44 2b 4f 6c 4c 8b 87 e0 00 00 00 be 0c 00 00 00
[29211.083500] JIT code: ffffffffa0cc0020: e8 04 32 38 e0 3d 06 08 00 00 75 07 b8 ff ff 00
[29211.083506] JIT code: ffffffffa0cc0030: 00 eb 02 31 c0 c9 c3



> The current JIT doesn't handle negative offsets: The stuff that's handled
> by __load_pointer(). Easiest solution would be to make it non-static and
> call it instead of doing bpf_error. I guess __load_pointer was added later
> and the JIT code didn't get updated.

I dont think so, check git history if you want :)

> 
> But gcc refuses to inline load_pointer, instead it inlines __load_pointer
> and does the important checks first. Considering the current assembly code
> does a call too, it could as well call load_pointer() directly. That would
> save a lot of assembly code, handle all negative cases too and be pretty
> much the same speed. The only question is if this slow down some other
> archs than x86. What do you think?

You miss the point : 99.999 % of offsets are positive in filters.

Best is to not call load_pointer() and only call skb_copy_bits() if the
data is not in skb head, but in some fragment.

I dont know, I never had to use negative offsets in my own filters.
So in the BPF JIT I said : If we have a negative offset in a filter,
just disable JIT code completely for this filter (lines 478-479).

Same for fancy instructions like BPF_S_ANC_NLATTR /
BPF_S_ANC_NLATTR_NEST

Show me a real use first.

I am pragmatic : I spend time coding stuff if there is a real need.

> 
> The EMIT_COND_JMP(f_op, f_offset); should be in an else case, otherwise
> it's superfluous. It's a harmless bug though. I haven't spotted anything
> else yet.

Its not superflous, see my comment at the end of this mail.

> 
> You can get rid of all the "if (is_imm8(offsetof(struct sk_buff, len)))"
> code by making sure everything is near: Somewhere at the start, just
> add 127 to %rdi and a BUILD_BUG_ON(sizeof(struct sk_buff) > 255).
> 

This code is optimized away by the compiler, you know that ?

Adding "add 127 to rdi" is one more instruction, adding dependencies and
making out slow path code more complex (calls to skb_copy_bits() in
bpf_jit.S ...). Thats a bad idea.


> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 7c1b765..7e0f575 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -581,8 +581,9 @@ cond_branch:			f_offset = addrs[i + filter[i].jf] - addrs[i];
>  					if (filter[i].jf)
>  						EMIT_JMP(f_offset);
>  					break;
> +				} else {
> +					EMIT_COND_JMP(f_op, f_offset);
>  				}
> -				EMIT_COND_JMP(f_op, f_offset);
>  				break;
>  			default:
>  				/* hmm, too complex filter, give up with jit compiler */
> 
> 
> 

I see no change in your patch in the code generation.

if (filter[i].jt == 0), we want to EMIT_COND_JMP(f_op, f_offset);
because we know at this point that filter[i].jf != 0) [ line 536 ]

if (filter[i].jt != 0), the break; in line 583 prevents the
EMIT_COND_JMP(f_op, f_offset);

Thanks !



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-17 13:49               ` Eric Dumazet
@ 2012-03-18  8:35                 ` Indan Zupancic
  2012-03-18 12:40                   ` [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation Eric Dumazet
  2012-03-18 12:52                   ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Eric Dumazet
  0 siblings, 2 replies; 40+ messages in thread
From: Indan Zupancic @ 2012-03-18  8:35 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Sun, March 18, 2012 00:49, Eric Dumazet wrote:
> Le samedi 17 mars 2012 à 21:14 +1100, Indan Zupancic a écrit :
>> On Wed, March 14, 2012 19:05, Eric Dumazet wrote:
>> > Le mercredi 14 mars 2012 à 08:59 +0100, Indan Zupancic a écrit :
>> >
>> >> The only remaining question is, is it worth the extra code to release
>> >> up to 32kB of unused memory? It seems a waste to not free it, but if
>> >> people think it's not worth it then let's just leave it around.
>> >
>> > Quite frankly its not an issue, given JIT BPF is not yet default
>> > enabled.
>>
>> And what if assuming JIT BPF would be default enabled?
>>
>
> OK, so here are the reasons why I chose not doing this :
> ---------------------------------------------------------
>
> 1) When I wrote this code, I _wanted_ keeping the original BPF around
> for post morterm analysis. When we are 100% confident code is bug free,
> we might remove the "BPF source code", but I am not convinced.
>
> 2) Most filters are less than 1 Kbytes, and who run thousands of BPF
> network filters on a machine ? Do you have real cases ? Because in these
> cases, the vmalloc() PAGE granularity might be a problem anyway.

Those are good reasons.

>
> Some filters are setup for a very short period of time...
> (tcpdump for example setup a "ret 0" at the very beginning of a capture
> ). Doing the extra kmalloc()/copy/kfree() is a loss.

Doing the JITing at all is a loss then. It would be better to only JIT
a filter if it's run at least a certain number of times. This could be
done from a work queue when the filter is run often enough, and only
change bpf_func when the compilation is done, so it doesn't interfere
with incoming packets. Not sure if it's worth the trouble.
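
Something along these lines, as a rough and untested sketch; run_cnt
(an atomic_t), jit_work and JIT_THRESHOLD are made-up additions to
struct sk_filter, nothing like this exists today:

	/* Deferred compilation: count interpreter runs and JIT the filter
	 * from a work item once it proves to be hot. */
	static void sk_filter_jit_work(struct work_struct *work)
	{
		struct sk_filter *fp = container_of(work, struct sk_filter, jit_work);

		/* bpf_jit_compile() only switches fp->bpf_func on success, so
		 * packets keep going through sk_run_filter() until the image
		 * is ready. */
		bpf_jit_compile(fp);
	}

	static inline unsigned int sk_filter_run(struct sk_filter *fp,
						 const struct sk_buff *skb)
	{
		if (fp->bpf_func == sk_run_filter &&
		    atomic_inc_return(&fp->run_cnt) == JIT_THRESHOLD)
			schedule_work(&fp->jit_work);
		return SK_RUN_FILTER(fp, skb);
	}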

>> The current JIT doesn't handle negative offsets: The stuff that's handled
>> by __load_pointer(). Easiest solution would be to make it non-static and
>> call it instead of doing bpf_error. I guess __load_pointer was added later
>> and the JIT code didn't get updated.
>
> I dont think so, check git history if you want :)

Hm, apparently not.

>
>>
>> But gcc refuses to inline load_pointer, instead it inlines __load_pointer
>> and does the important checks first. Considering the current assembly code
>> does a call too, it could as well call load_pointer() directly. That would
>> save a lot of assembly code, handle all negative cases too and be pretty
>> much the same speed. The only question is if this slow down some other
>> archs than x86. What do you think?
>
> You miss the point : 99.999 % of offsets are positive in filters.

And in the 00.00001% case that the filter uses a computed negative
offset the BPF JIT fails at runtime. So to not be buggy you need at
least a call to __load_pointer() for the negative case.

>
> Best is to not call load_pointer() and only call skb_copy_bits() if the
> data is not in skb head, but in some fragment.

Of course, but it would be nice if you didn't need to do any call at
all. Once you do, not calling load_pointer() makes less of a difference.

>
> I dont know, I never had to use negative offsets in my own filters.
> So in the BPF JIT I said : If we have a negative offset in a filter,
> just disable JIT code completely for this filter (lines 478-479).

That only works for absolute loads, not indirect ones.

Having negative indirect loads may be unlikely, but they are still
possible.

>
> Same for fancy instructions like BPF_S_ANC_NLATTR /
> BPF_S_ANC_NLATTR_NEST

Such instructions could be handled by a separate function called from
both sk_run_filter and the JIT code. That way you would be able to JIT
all BPF programs.
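
For example (untested sketch, the helper name is made up), the NLATTR
case from sk_run_filter() could be pulled out into something both the
interpreter and the JIT can call:

	static u32 bpf_anc_nlattr(const struct sk_buff *skb, u32 a, u32 x)
	{
		struct nlattr *nla;

		if (skb_is_nonlinear(skb))
			return 0;
		if (a > skb->len - sizeof(struct nlattr))
			return 0;
		nla = nla_find((struct nlattr *)&skb->data[a], skb->len - a, x);
		return nla ? (void *)nla - (void *)skb->data : 0;
	}

The JIT would then only have to emit a call with A and X in the right
registers instead of bailing out.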

>
> Show me a real use first.

Those features were added for a reason; if there is no real use, they
shouldn't have been added. Supporting them isn't very hard, though it
requires some changes in filter.c.

>> You can get rid of all the "if (is_imm8(offsetof(struct sk_buff, len)))"
>> code by making sure everything is near: Somewhere at the start, just
>> add 127 to %rdi and a BUILD_BUG_ON(sizeof(struct sk_buff) > 255).
>>
>
> This code is optimized away by the compiler, you know that ?

Yes. The main difference would be that the JIT could always generate imm8
offsets, saving 4 bytes per long offset while also simplifying the compiler
code. The %rdi + 127 is 4 bytes, and if the rest become slightly faster
because they're imm8 then it's worth the extra instruction.

I first thought the +127 could be done in two bytes, but 4 bytes are
needed, so maybe it's not worth it.

>
> Adding "add 127 to rdi" is one more instruction, adding dependencies and
> making out slow path code more complex (calls to skb_copy_bits() in
> bpf_jit.S ...). Thats a bad idea.

The add 127 would be at the start, the first instruction using it would
be a couple of instructions later, so I don't think the dependency is a
problem.

You're right about skb_copy_bits(), I did a quick search for rdi usage
but missed it was the first parameter too. It would need one extra
sub 127 or add -127 in the slow path, after the push. But it's the slow
path already, one extra instruction won't make much difference.

> I see no change in your patch in the code generation.
>
> if (filter[i].jt == 0), we want to EMIT_COND_JMP(f_op, f_offset);
> because we know at this point that filter[i].jf != 0) [ line 536 ]
>
> if (filter[i].jt != 0), the break; in line 583 prevents the
> EMIT_COND_JMP(f_op, f_offset);

I indeed missed the break, should have double checked.

Greetings,

Indan

A patch implementing the skb + 127 trick below. It has upsides and it has
the downsides you mentioned.

 arch/x86/net/bpf_jit.S      |    1 +
 arch/x86/net/bpf_jit_comp.c |  111 ++++++++++++++-----------------------------
 2 files changed, 37 insertions(+), 75 deletions(-)
diff --git a/arch/x86/net/bpf_jit.S b/arch/x86/net/bpf_jit.S
index 6687022..fdfacfa 100644
--- a/arch/x86/net/bpf_jit.S
+++ b/arch/x86/net/bpf_jit.S
@@ -98,6 +98,7 @@ bpf_error:
 	push	%rdi;    /* save skb */		\
 	push	%r9;				\
 	push	SKBDATA;			\
+	subq	$127,%rdi;	/* fix skb */	\
 /* rsi already has offset */			\
 	mov	$LEN,%ecx;	/* len */	\
 	lea	-12(%rbp),%rdx;			\
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c1b765..d950765 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -56,6 +56,8 @@ static inline u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
 #define CLEAR_A() EMIT2(0x31, 0xc0) /* xor %eax,%eax */
 #define CLEAR_X() EMIT2(0x31, 0xdb) /* xor %ebx,%ebx */

+#define SKB_OFF(field)	(offsetof(struct sk_buff, field) - 127)
+
 static inline bool is_imm8(int value)
 {
 	return value <= 127 && value >= -128;
@@ -106,6 +108,8 @@ do {								\
 #define SEEN_DATAREF 1 /* might call external helpers */
 #define SEEN_XREG    2 /* ebx is used */
 #define SEEN_MEM     4 /* use mem[] for temporary storage */
+#define SEEN_SKB     8 /* rdi is used */
+#define SEEN_ALL     (SEEN_DATAREF|SEEN_XREG|SEEN_MEM|SEEN_SKB)

 static inline void bpf_flush_icache(void *start, void *end)
 {
@@ -151,7 +155,7 @@ void bpf_jit_compile(struct sk_filter *fp)
 	cleanup_addr = proglen; /* epilogue address */

 	for (pass = 0; pass < 10; pass++) {
-		u8 seen_or_pass0 = (pass == 0) ? (SEEN_XREG | SEEN_DATAREF | SEEN_MEM) : seen;
+		u8 seen_or_pass0 = (pass == 0) ? (SEEN_ALL) : seen;
 		/* no prologue/epilogue for trivial filters (RET something) */
 		proglen = 0;
 		prog = temp;
@@ -159,6 +163,10 @@ void bpf_jit_compile(struct sk_filter *fp)
 		if (seen_or_pass0) {
 			EMIT4(0x55, 0x48, 0x89, 0xe5); /* push %rbp; mov %rsp,%rbp */
 			EMIT4(0x48, 0x83, 0xec, 96);	/* subq  $96,%rsp	*/
+			/* Make sure all offsets are imm8 */
+			BUILD_BUG_ON(sizeof(struct sk_buff) > 255);
+			if (seen_or_pass0 & (SEEN_SKB | SEEN_DATAREF))
+				EMIT4(0x48, 0x83, 0xc7, 127); /* addq $127,%rdi */
 			/* note : must save %rbx in case bpf_error is hit */
 			if (seen_or_pass0 & (SEEN_XREG | SEEN_DATAREF))
 				EMIT4(0x48, 0x89, 0x5d, 0xf8); /* mov %rbx, -8(%rbp) */
@@ -172,30 +180,12 @@ void bpf_jit_compile(struct sk_filter *fp)
 			 *  r8 = skb->data
 			 */
 			if (seen_or_pass0 & SEEN_DATAREF) {
-				if (offsetof(struct sk_buff, len) <= 127)
-					/* mov    off8(%rdi),%r9d */
-					EMIT4(0x44, 0x8b, 0x4f, offsetof(struct sk_buff, len));
-				else {
-					/* mov    off32(%rdi),%r9d */
-					EMIT3(0x44, 0x8b, 0x8f);
-					EMIT(offsetof(struct sk_buff, len), 4);
-				}
-				if (is_imm8(offsetof(struct sk_buff, data_len)))
-					/* sub    off8(%rdi),%r9d */
-					EMIT4(0x44, 0x2b, 0x4f, offsetof(struct sk_buff, data_len));
-				else {
-					EMIT3(0x44, 0x2b, 0x8f);
-					EMIT(offsetof(struct sk_buff, data_len), 4);
-				}
-
-				if (is_imm8(offsetof(struct sk_buff, data)))
-					/* mov off8(%rdi),%r8 */
-					EMIT4(0x4c, 0x8b, 0x47, offsetof(struct sk_buff, data));
-				else {
-					/* mov off32(%rdi),%r8 */
-					EMIT3(0x4c, 0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, data), 4);
-				}
+				/* mov    off8(%rdi),%r9d */
+				EMIT4(0x44, 0x8b, 0x4f, SKB_OFF(len));
+				/* sub    off8(%rdi),%r9d */
+				EMIT4(0x44, 0x2b, 0x4f, SKB_OFF(data_len));
+				/* mov off8(%rdi),%r8 */
+				EMIT4(0x4c, 0x8b, 0x47, SKB_OFF(data));
 			}
 		}

@@ -390,44 +380,27 @@ void bpf_jit_compile(struct sk_filter *fp)
 				EMIT3(0x89, 0x5d, 0xf0 - K*4);
 				break;
 			case BPF_S_LD_W_LEN: /*	A = skb->len; */
+				seen |= SEEN_SKB;
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, len) != 4);
-				if (is_imm8(offsetof(struct sk_buff, len)))
-					/* mov    off8(%rdi),%eax */
-					EMIT3(0x8b, 0x47, offsetof(struct sk_buff, len));
-				else {
-					EMIT2(0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, len), 4);
-				}
+				/* mov    off8(%rdi),%eax */
+				EMIT3(0x8b, 0x47, SKB_OFF(len));
 				break;
 			case BPF_S_LDX_W_LEN: /* X = skb->len; */
-				seen |= SEEN_XREG;
-				if (is_imm8(offsetof(struct sk_buff, len)))
-					/* mov off8(%rdi),%ebx */
-					EMIT3(0x8b, 0x5f, offsetof(struct sk_buff, len));
-				else {
-					EMIT2(0x8b, 0x9f);
-					EMIT(offsetof(struct sk_buff, len), 4);
-				}
+				seen |= SEEN_XREG | SEEN_SKB;
+				/* mov off8(%rdi),%ebx */
+				EMIT3(0x8b, 0x5f, SKB_OFF(len));
 				break;
 			case BPF_S_ANC_PROTOCOL: /* A = ntohs(skb->protocol); */
+				seen |= SEEN_SKB;
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, protocol) != 2);
-				if (is_imm8(offsetof(struct sk_buff, protocol))) {
-					/* movzwl off8(%rdi),%eax */
-					EMIT4(0x0f, 0xb7, 0x47, offsetof(struct sk_buff, protocol));
-				} else {
-					EMIT3(0x0f, 0xb7, 0x87); /* movzwl off32(%rdi),%eax */
-					EMIT(offsetof(struct sk_buff, protocol), 4);
-				}
+				/* movzwl off8(%rdi),%eax */
+				EMIT4(0x0f, 0xb7, 0x47, SKB_OFF(protocol));
 				EMIT2(0x86, 0xc4); /* ntohs() : xchg   %al,%ah */
 				break;
 			case BPF_S_ANC_IFINDEX:
-				if (is_imm8(offsetof(struct sk_buff, dev))) {
-					/* movq off8(%rdi),%rax */
-					EMIT4(0x48, 0x8b, 0x47, offsetof(struct sk_buff, dev));
-				} else {
-					EMIT3(0x48, 0x8b, 0x87); /* movq off32(%rdi),%rax */
-					EMIT(offsetof(struct sk_buff, dev), 4);
-				}
+				seen |= SEEN_SKB;
+				/* movq off8(%rdi),%rax */
+				EMIT4(0x48, 0x8b, 0x47, SKB_OFF(dev));
 				EMIT3(0x48, 0x85, 0xc0);	/* test %rax,%rax */
 				EMIT_COND_JMP(X86_JE, cleanup_addr - (addrs[i] - 6));
 				BUILD_BUG_ON(FIELD_SIZEOF(struct net_device, ifindex) != 4);
@@ -435,34 +408,22 @@ void bpf_jit_compile(struct sk_filter *fp)
 				EMIT(offsetof(struct net_device, ifindex), 4);
 				break;
 			case BPF_S_ANC_MARK:
+				seen |= SEEN_SKB;
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4);
-				if (is_imm8(offsetof(struct sk_buff, mark))) {
-					/* mov off8(%rdi),%eax */
-					EMIT3(0x8b, 0x47, offsetof(struct sk_buff, mark));
-				} else {
-					EMIT2(0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, mark), 4);
-				}
+				/* mov off8(%rdi),%eax */
+				EMIT3(0x8b, 0x47, SKB_OFF(mark));
 				break;
 			case BPF_S_ANC_RXHASH:
+				seen |= SEEN_SKB;
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, rxhash) != 4);
-				if (is_imm8(offsetof(struct sk_buff, rxhash))) {
-					/* mov off8(%rdi),%eax */
-					EMIT3(0x8b, 0x47, offsetof(struct sk_buff, rxhash));
-				} else {
-					EMIT2(0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, rxhash), 4);
-				}
+				/* mov off8(%rdi),%eax */
+				EMIT3(0x8b, 0x47, SKB_OFF(rxhash));
 				break;
 			case BPF_S_ANC_QUEUE:
+				seen |= SEEN_SKB;
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, queue_mapping) != 2);
-				if (is_imm8(offsetof(struct sk_buff, queue_mapping))) {
-					/* movzwl off8(%rdi),%eax */
-					EMIT4(0x0f, 0xb7, 0x47, offsetof(struct sk_buff, queue_mapping));
-				} else {
-					EMIT3(0x0f, 0xb7, 0x87); /* movzwl off32(%rdi),%eax */
-					EMIT(offsetof(struct sk_buff, queue_mapping), 4);
-				}
+				/* movzwl off8(%rdi),%eax */
+				EMIT4(0x0f, 0xb7, 0x47, SKB_OFF(queue_mapping));
 				break;
 			case BPF_S_ANC_CPU:
 #ifdef CONFIG_SMP



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation
  2012-03-18  8:35                 ` Indan Zupancic
@ 2012-03-18 12:40                   ` Eric Dumazet
  2012-03-19 21:42                     ` David Miller
  2012-03-20  0:16                     ` [PATCH] net: bpf_jit: Document evilness of negative indirect loads Indan Zupancic
  2012-03-18 12:52                   ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Eric Dumazet
  1 sibling, 2 replies; 40+ messages in thread
From: Eric Dumazet @ 2012-03-18 12:40 UTC (permalink / raw)
  To: Indan Zupancic, David Miller
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook,
	Matt Evans

Le dimanche 18 mars 2012 à 19:35 +1100, Indan Zupancic a écrit :

> And in the 00.00001% case that the filter uses a computed negative
> offset the BPF JIT fails at runtime. So to not be buggy you need at
> least a call to __load_pointer() for the negative case.

Please show me how and why a real (I mean useful one...) filter could
generate a dynamic negative value, and I'll change the code.

Negative values are there to allow access to network/mac header
components. I can't see how BPF code could have a valid use of dynamic
indexes into these headers.

Right now we consider such code evil and the filter does "return 0"
to say so.
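
The supported constant case, for reference (a sketch: a filter on a UDP
socket, where skb->data already points past the IP header, so the TOS
byte has to be reached through SKF_NET_OFF):

	struct sock_filter tos_filter[] = {
		/* A = ip->tos; K < 0 is a compile-time constant, so the JIT
		 * simply refuses the filter and the interpreter handles it
		 * via load_pointer(). */
		BPF_STMT(BPF_LD | BPF_B | BPF_ABS, SKF_NET_OFF + 1),
		BPF_STMT(BPF_RET | BPF_K, 0xffff),	/* accept */
	};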

If I remember well, a more practical issue was mentioned by Matt Evans
and I forgot about it.

Its even _documented_ in his code.

Thanks

[PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation

Matt Evans spotted that x86 bpf_jit was incorrectly handling negative
constant offsets in BPF_S_LDX_B_MSH instruction.

We need to abort JIT compilation like we do in common_load so that
filter uses the interpreter code and can call __load_pointer()

Reference: http://lists.openwall.net/netdev/2011/07/19/11

Thanks to Indan Zupancic to bring back this issue.

Reported-by: Matt Evans <matt@ozlabs.org>
Reported-by: Indan Zupancic <indan@nul.nu>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
David, libpcap doesnt generate such pattern, so feel to add this patch
to net-next. Thanks !

 arch/x86/net/bpf_jit_comp.c |   14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c1b765..5671752 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -475,8 +475,10 @@ void bpf_jit_compile(struct sk_filter *fp)
 			case BPF_S_LD_W_ABS:
 				func = sk_load_word;
 common_load:			seen |= SEEN_DATAREF;
-				if ((int)K < 0)
+				if ((int)K < 0) {
+					/* Abort the JIT because __load_pointer() is needed. */
 					goto out;
+				}
 				t_offset = func - (image + addrs[i]);
 				EMIT1_off32(0xbe, K); /* mov imm32,%esi */
 				EMIT1_off32(0xe8, t_offset); /* call */
@@ -489,14 +491,8 @@ common_load:			seen |= SEEN_DATAREF;
 				goto common_load;
 			case BPF_S_LDX_B_MSH:
 				if ((int)K < 0) {
-					if (pc_ret0 > 0) {
-						/* addrs[pc_ret0 - 1] is the start address */
-						EMIT_JMP(addrs[pc_ret0 - 1] - addrs[i]);
-						break;
-					}
-					CLEAR_A();
-					EMIT_JMP(cleanup_addr - addrs[i]);
-					break;
+					/* Abort the JIT because __load_pointer() is needed. */
+					goto out;
 				}
 				seen |= SEEN_DATAREF | SEEN_XREG;
 				t_offset = sk_load_byte_msh - (image + addrs[i]);



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W
  2012-03-18  8:35                 ` Indan Zupancic
  2012-03-18 12:40                   ` [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation Eric Dumazet
@ 2012-03-18 12:52                   ` Eric Dumazet
  2012-03-20  2:24                     ` [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32 Indan Zupancic
  1 sibling, 1 reply; 40+ messages in thread
From: Eric Dumazet @ 2012-03-18 12:52 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

Le dimanche 18 mars 2012 à 19:35 +1100, Indan Zupancic a écrit :

> Yes. The main difference would be that the JIT could always generate imm8
> offsets, saving 4 bytes per long offset while also simplifying the compiler
> code. The %rdi + 127 is 4 bytes, and if the rest become slightly faster
> because they're imm8 then it's worth the extra instruction.
> 

Do you understand you try to save 3 bytes in the function prolog, but
your single EMIT4(0x48, 0x83, 0xc7, 127); /* addq $127,%rdi */
defeats this ?

Ancillary instructions are rarely used, libpcap for example doesnt have
support for them.

> I first thought the +127 could be done in two bytes, but 4 bytes are
> needed, so maybe it's not worth it.
...
> The add 127 would be at the start, the first instruction using it would
> be a couple of instructions later, so I don't think the dependency is a
> problem.
> 
> You're right about skb_copy_bits(), I did a quick search for rdi usage
> but missed it was the first parameter too. It would need one extra
> sub 127 or add -127 in the slow path, after the push. But it's the slow
> path already, one extra instruction won't make much difference.

It will, because new NIC drivers tend to provide skbs with fragments.

Using libpcap filter like "udp[100]" calls the skb_copy_bits() helper in
this case.

There is no difference in instruction timing using offset32 or offset8,
so the code you add will slow the filter anyway.

Please dont obfuscate this code, I'd like to keep it maintainable.



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation
  2012-03-18 12:40                   ` [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation Eric Dumazet
@ 2012-03-19 21:42                     ` David Miller
  2012-03-20  0:16                     ` [PATCH] net: bpf_jit: Document evilness of negative indirect loads Indan Zupancic
  1 sibling, 0 replies; 40+ messages in thread
From: David Miller @ 2012-03-19 21:42 UTC (permalink / raw)
  To: eric.dumazet
  Cc: indan, wad, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, hpa, mingo, oleg, peterz,
	rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook,
	matt

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Sun, 18 Mar 2012 05:40:48 -0700

> [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation
> 
> Matt Evans spotted that x86 bpf_jit was incorrectly handling negative
> constant offsets in BPF_S_LDX_B_MSH instruction.
> 
> We need to abort JIT compilation like we do in common_load so that
> filter uses the interpreter code and can call __load_pointer()
> 
> Reference: http://lists.openwall.net/netdev/2011/07/19/11
> 
> Thanks to Indan Zupancic to bring back this issue.
> 
> Reported-by: Matt Evans <matt@ozlabs.org>
> Reported-by: Indan Zupancic <indan@nul.nu>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>

Applied and even though libpcap won't generate this I'll queue it up
to -stable anyways.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH] net: bpf_jit: Document evilness of negative indirect loads
  2012-03-18 12:40                   ` [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation Eric Dumazet
  2012-03-19 21:42                     ` David Miller
@ 2012-03-20  0:16                     ` Indan Zupancic
  1 sibling, 0 replies; 40+ messages in thread
From: Indan Zupancic @ 2012-03-20  0:16 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David Miller, Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, hpa, mingo, oleg, peterz,
	rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook,
	Matt Evans

On Sun, March 18, 2012 23:40, Eric Dumazet wrote:
> Le dimanche 18 mars 2012 à 19:35 +1100, Indan Zupancic a écrit :
>
>> And in the 00.00001% case that the filter uses a computed negative
>> offset the BPF JIT fails at runtime. So to not be buggy you need at
>> least a call to __load_pointer() for the negative case.
>
> Please show me how and why a real (I mean useful one...) filter could
> generate a dynamic negative value, and I'll change the code.
>
>
> Negative values are there to allow access to network/mac header
> components. I cant see how a BPF code could have a valid use of dynamic
> indexes in these headers.

E.g. when poking into a variable-length IP header with a filter
attached to a TCP/UDP socket. Still a bit far-fetched though.
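
Roughly like this, as an untested sketch:

	struct sock_filter last_opt_byte[] = {
		/* X = 4 * ip->ihl, read relative to the network header */
		BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, SKF_NET_OFF),
		/* A = last byte of the (variable length) IP header, i.e.
		 * data[X + SKF_NET_OFF - 1]: a negative offset only known at
		 * run time.  The interpreter resolves it via __load_pointer(),
		 * JIT'd code fails it. */
		BPF_STMT(BPF_LD | BPF_B | BPF_IND, SKF_NET_OFF - 1),
		BPF_STMT(BPF_RET | BPF_K, 0xffff),
	};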

>
> Right now we consider such code is evil and filter does "return 0"
> saying so.

I'm fine with that, but this should be documented somewhere I think.

Greetings,

Indan


[PATCH] net: bpf_jit: Document evilness of negative indirect loads

Negative offsets are used to access ancillary data. In the case of
SKF_NET_OFF and SKF_LL_OFF users may expect negative indirect loads
to work. If BPF JIT is used then such loads will fail. In any case,
negative indirect loads are considered evil and are not supported.

---

Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Indan Zupancic <indan@nul.nu>

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8eeb205..2bd4bbb 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -114,6 +114,9 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
    We use them to reference ancillary data.
    Unlike introduction new instructions, it does not break
    existing compilers/optimizers.
+
+   Do not expect negative indirect loads to work, they are
+   considered evil.
  */
 #define SKF_AD_OFF    (-0x1000)
 #define SKF_AD_PROTOCOL 0



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32.
  2012-03-18 12:52                   ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Eric Dumazet
@ 2012-03-20  2:24                     ` Indan Zupancic
  2012-03-20  2:59                       ` Eric Dumazet
  0 siblings, 1 reply; 40+ messages in thread
From: Indan Zupancic @ 2012-03-20  2:24 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Sun, March 18, 2012 23:52, Eric Dumazet wrote:
> On Sunday, 18 March 2012 at 19:35 +1100, Indan Zupancic wrote:
>
>> Yes. The main difference would be that the JIT could always generate imm8
>> offsets, saving 4 bytes per long offset while also simplifying the compiler
>> code. The %rdi + 127 is 4 bytes, and if the rest become slightly faster
>> because they're imm8 then it's worth the extra instruction.
>>
>
> Do you understand that you try to save 3 bytes in the function prologue, but
> your single EMIT4(0x48, 0x83, 0xc7, 127); /* addq $127,%rdi */
> defeats this?

That's not what I'm trying to do; I'm trying to simplify the code
so it's easier to verify and less cluttered. I admit the fixup in
bpf_slow_path_common makes it a bad idea overall. But if that one
extra addq had been all that was needed, then it would have been
worth it, I think, even if it makes the prologue one byte longer.
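
Spelled out with the byte sequences already quoted in this thread, the
idea is roughly this (a sketch only, not a tested patch):

    /* Prologue: bias %rdi once so that sk_buff fields below offset 255
     * can be reached with a signed imm8 displacement (4 bytes, once).
     */
    EMIT4(0x48, 0x83, 0xc7, 127);          /* addq   $127,%rdi */

    /* Field loads can then always use the 4-byte disp8 form instead of
     * the 7-byte disp32 form, with the displacement rebased by -127:
     */
    EMIT4(0x44, 0x8b, 0x4f, offsetof(struct sk_buff, len) - 127);      /* mov off8(%rdi),%r9d */
    EMIT4(0x44, 0x2b, 0x4f, offsetof(struct sk_buff, data_len) - 127); /* sub off8(%rdi),%r9d */

    /* The catch: bpf_slow_path_common hands %rdi to skb_copy_bits(), so
     * every slow path would first have to undo the bias (addq $-127,%rdi),
     * which is what makes this a wash overall.
     */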

>
> Ancillary instructions are rarely used; libpcap, for example, doesn't have
> support for them.
>
>> I first thought the +127 could be done in two bytes, but 4 bytes are
>> needed, so maybe it's not worth it.
> ...
>> The add 127 would be at the start, the first instruction using it would
>> be a couple of instructions later, so I don't think the dependency is a
>> problem.
>>
>> You're right about skb_copy_bits(), I did a quick search for rdi usage
>> but missed it was the first parameter too. It would need one extra
>> sub 127 or add -127 in the slow path, after the push. But it's the slow
>> path already, one extra instruction won't make much difference.
>
> It will, because new NIC drivers tend to provide skbs with fragments.

If it does then perhaps the fast path should be made faster by inlining
the code instead of calling a function which may not be cached.

>
> Using a libpcap filter like "udp[100]" calls the skb_copy_bits() helper in
> this case.
>
> There is no difference in instruction timing using offset32 or offset8,
> so the code you add will slow the filter anyway.

I assumed optimising for imm8 was worthwhile, but if it's the same speed,
saving a few bytes here and there while wasting kilobytes of memory
anyway doesn't make much sense.

Greetings,

Indan


[PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32.

Instruction timing of offset32 and offset8 is the same, so do not bother saving
a few bytes of code here and there. Only use offset8 for skb.len, skb.data_len
and skb.dev: it is very unlikely that those fields are ever moved to the end
of sk_buff. The other fields are used in ancillary instructions; for those,
always use offset32.

Signed-off-by: Indan Zupancic <indan@nul.nu>

 arch/x86/net/bpf_jit_comp.c |  104 +++++++++++++------------------------------
 1 files changed, 31 insertions(+), 73 deletions(-)
---
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c1b765..5ddb82b 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -102,6 +102,12 @@ do {								\
 		f_op = FOP;		\
 		goto cond_branch

+#define SKB_OFF8(field) ({				\
+	int _off = offsetof(struct sk_buff, field);	\
+	BUILD_BUG_ON(_off > 127);			\
+	_off;						\
+	})
+

 #define SEEN_DATAREF 1 /* might call external helpers */
 #define SEEN_XREG    2 /* ebx is used */
@@ -172,30 +178,13 @@ void bpf_jit_compile(struct sk_filter *fp)
 			 *  r8 = skb->data
 			 */
 			if (seen_or_pass0 & SEEN_DATAREF) {
-				if (offsetof(struct sk_buff, len) <= 127)
-					/* mov    off8(%rdi),%r9d */
-					EMIT4(0x44, 0x8b, 0x4f, offsetof(struct sk_buff, len));
-				else {
-					/* mov    off32(%rdi),%r9d */
-					EMIT3(0x44, 0x8b, 0x8f);
-					EMIT(offsetof(struct sk_buff, len), 4);
-				}
-				if (is_imm8(offsetof(struct sk_buff, data_len)))
-					/* sub    off8(%rdi),%r9d */
-					EMIT4(0x44, 0x2b, 0x4f, offsetof(struct sk_buff, data_len));
-				else {
-					EMIT3(0x44, 0x2b, 0x8f);
-					EMIT(offsetof(struct sk_buff, data_len), 4);
-				}
-
-				if (is_imm8(offsetof(struct sk_buff, data)))
-					/* mov off8(%rdi),%r8 */
-					EMIT4(0x4c, 0x8b, 0x47, offsetof(struct sk_buff, data));
-				else {
-					/* mov off32(%rdi),%r8 */
-					EMIT3(0x4c, 0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, data), 4);
-				}
+				/* mov    off8(%rdi),%r9d */
+				EMIT4(0x44, 0x8b, 0x4f, SKB_OFF8(len));
+				/* sub    off8(%rdi),%r9d */
+				EMIT4(0x44, 0x2b, 0x4f, SKB_OFF8(data_len));
+				/* mov off32(%rdi),%r8 */
+				EMIT3(0x4c, 0x8b, 0x87);
+				EMIT(offsetof(struct sk_buff, data), 4);
 			}
 		}

@@ -391,43 +380,24 @@ void bpf_jit_compile(struct sk_filter *fp)
 				break;
 			case BPF_S_LD_W_LEN: /*	A = skb->len; */
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, len) != 4);
-				if (is_imm8(offsetof(struct sk_buff, len)))
-					/* mov    off8(%rdi),%eax */
-					EMIT3(0x8b, 0x47, offsetof(struct sk_buff, len));
-				else {
-					EMIT2(0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, len), 4);
-				}
+				/* mov    off8(%rdi),%eax */
+				EMIT3(0x8b, 0x47, SKB_OFF8(len));
 				break;
 			case BPF_S_LDX_W_LEN: /* X = skb->len; */
 				seen |= SEEN_XREG;
-				if (is_imm8(offsetof(struct sk_buff, len)))
-					/* mov off8(%rdi),%ebx */
-					EMIT3(0x8b, 0x5f, offsetof(struct sk_buff, len));
-				else {
-					EMIT2(0x8b, 0x9f);
-					EMIT(offsetof(struct sk_buff, len), 4);
-				}
+				/* mov off8(%rdi),%ebx */
+				EMIT3(0x8b, 0x5f, SKB_OFF8(len));
 				break;
 			case BPF_S_ANC_PROTOCOL: /* A = ntohs(skb->protocol); */
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, protocol) != 2);
-				if (is_imm8(offsetof(struct sk_buff, protocol))) {
-					/* movzwl off8(%rdi),%eax */
-					EMIT4(0x0f, 0xb7, 0x47, offsetof(struct sk_buff, protocol));
-				} else {
-					EMIT3(0x0f, 0xb7, 0x87); /* movzwl off32(%rdi),%eax */
-					EMIT(offsetof(struct sk_buff, protocol), 4);
-				}
+				/* movzwl off32(%rdi),%eax */
+				EMIT3(0x0f, 0xb7, 0x87);
+				EMIT(offsetof(struct sk_buff, protocol), 4);
 				EMIT2(0x86, 0xc4); /* ntohs() : xchg   %al,%ah */
 				break;
 			case BPF_S_ANC_IFINDEX:
-				if (is_imm8(offsetof(struct sk_buff, dev))) {
-					/* movq off8(%rdi),%rax */
-					EMIT4(0x48, 0x8b, 0x47, offsetof(struct sk_buff, dev));
-				} else {
-					EMIT3(0x48, 0x8b, 0x87); /* movq off32(%rdi),%rax */
-					EMIT(offsetof(struct sk_buff, dev), 4);
-				}
+				/* movq off8(%rdi),%rax */
+				EMIT4(0x48, 0x8b, 0x47, SKB_OFF8(dev));
 				EMIT3(0x48, 0x85, 0xc0);	/* test %rax,%rax */
 				EMIT_COND_JMP(X86_JE, cleanup_addr - (addrs[i] - 6));
 				BUILD_BUG_ON(FIELD_SIZEOF(struct net_device, ifindex) != 4);
@@ -436,33 +406,21 @@ void bpf_jit_compile(struct sk_filter *fp)
 				break;
 			case BPF_S_ANC_MARK:
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4);
-				if (is_imm8(offsetof(struct sk_buff, mark))) {
-					/* mov off8(%rdi),%eax */
-					EMIT3(0x8b, 0x47, offsetof(struct sk_buff, mark));
-				} else {
-					EMIT2(0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, mark), 4);
-				}
+				/* mov off32(%rdi),%eax */
+				EMIT2(0x8b, 0x87);
+				EMIT(offsetof(struct sk_buff, mark), 4);
 				break;
 			case BPF_S_ANC_RXHASH:
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, rxhash) != 4);
-				if (is_imm8(offsetof(struct sk_buff, rxhash))) {
-					/* mov off8(%rdi),%eax */
-					EMIT3(0x8b, 0x47, offsetof(struct sk_buff, rxhash));
-				} else {
-					EMIT2(0x8b, 0x87);
-					EMIT(offsetof(struct sk_buff, rxhash), 4);
-				}
+				/* mov off32(%rdi),%eax */
+				EMIT2(0x8b, 0x87);
+				EMIT(offsetof(struct sk_buff, rxhash), 4);
 				break;
 			case BPF_S_ANC_QUEUE:
 				BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, queue_mapping) != 2);
-				if (is_imm8(offsetof(struct sk_buff, queue_mapping))) {
-					/* movzwl off8(%rdi),%eax */
-					EMIT4(0x0f, 0xb7, 0x47, offsetof(struct sk_buff, queue_mapping));
-				} else {
-					EMIT3(0x0f, 0xb7, 0x87); /* movzwl off32(%rdi),%eax */
-					EMIT(offsetof(struct sk_buff, queue_mapping), 4);
-				}
+				/* movzwl off32(%rdi),%eax */
+				EMIT3(0x0f, 0xb7, 0x87);
+				EMIT(offsetof(struct sk_buff, queue_mapping), 4);
 				break;
 			case BPF_S_ANC_CPU:
 #ifdef CONFIG_SMP



^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32.
  2012-03-20  2:24                     ` [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32 Indan Zupancic
@ 2012-03-20  2:59                       ` Eric Dumazet
  2012-03-20 11:33                         ` Indan Zupancic
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Dumazet @ 2012-03-20  2:59 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Tue, 2012-03-20 at 13:24 +1100, Indan Zupancic wrote:

> If it does then perhaps the fast path should be made faster by inlining
> the code instead of calling a function which may not be cached.
> 

inlining a sequence of code 400 times is a waste of icache, you probably
missed this.

I spent a lot of time on working on this implementation, tried many
different strategies before choosing the one in place.

Listen, I am tired of this thread; it seems you want to push changes
that have almost no value but still need a lot of review.

Unless you run benchmarks and can show at least a 5% improvement in
speed, or improve the maintainability of this code, I am not interested.

We certainly _can_ one day have sizeof(struct sk_buff) > 256, and the current
code is ready for this. You want to break this for absolutely no valid
reason.

We _can_ change the field order in struct sk_buff at any time, even if you
state "it is very unlikely that those fields are ever moved to the end
of sk_buff".




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32.
  2012-03-20  2:59                       ` Eric Dumazet
@ 2012-03-20 11:33                         ` Indan Zupancic
  2012-03-20 11:41                           ` David Laight
  2012-03-20 13:56                           ` Eric Dumazet
  0 siblings, 2 replies; 40+ messages in thread
From: Indan Zupancic @ 2012-03-20 11:33 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Tue, March 20, 2012 13:59, Eric Dumazet wrote:
> On Tue, 2012-03-20 at 13:24 +1100, Indan Zupancic wrote:
>
>> If it does then perhaps the fast path should be made faster by inlining
>> the code instead of calling a function which may not be cached.
>>
>
> inlining a sequence of code 400 times is a waste of icache, you probably
> missed this.

Well, according to you most filters are small; inlining 26 bytes a few
times should be faster than calling an external function. Not all calls
need to be inlined either.

>
> I spent a lot of time on working on this implementation, tried many
> different strategies before choosing the one in place.
>
> Listen, I am tired of this thread, it seems you want to push changes
> that have almost no value but still need lot of review.

The latest patch didn't change generated code except for a few ancillary
instructions. The one before that just added documentation. The first
patch was indeed bad.

>
> Unless you make benchmarks and can make at least 5 % improvement of the
> speed, or improve maintainability of this code, I am not interested.

My next patch would have changed the compiler to always compile in two
passes instead of looping till the result is stable. But never mind.
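
For what it's worth, the difference is roughly this (a schematic sketch;
emit_program(), addrs and image are placeholders, not the actual
bpf_jit_compile() control flow):

    unsigned int proglen, oldproglen = 0;
    int pass;

    /* Current approach: regenerate until the emitted length stops
     * changing, because jump displacements shrink as addresses settle.
     */
    for (pass = 0; pass < 10; pass++) {
            proglen = emit_program(NULL, addrs);    /* sizing pass */
            if (proglen == oldproglen)
                    break;                          /* converged */
            oldproglen = proglen;
    }
    emit_program(image, addrs);                     /* write final code */

    /* Two-pass variant (presumably): size everything with worst-case
     * disp32/near-jump encodings in pass one, keep those encodings when
     * emitting in pass two, and skip the convergence loop entirely.
     */
    proglen = emit_program_worst_case(NULL, addrs);
    emit_program_worst_case(image, addrs);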

>
> We certainly _can_ one day have sizeof(struct sk_buff) > 256, and actual
> code is ready for this. You want to break this for absolutely no valid
> reason.

I have the feeling you didn't read the latest patch; it doesn't assume
sizeof(struct sk_buff) < 256, nor that fields aren't reordered.

>
> We _can_ change fields order anytime in struct sk_buff, even if you
> state "its very unlikely that those fields are ever moved to the end
> of sk_buff".

And if the dev, len or data_len fields are really moved past the first
127 bytes the JIT code can be changed too. The JIT code already depends
on some of struct sk_buff's field properties anyway.

Greetings,

Indan



^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32.
  2012-03-20 11:33                         ` Indan Zupancic
@ 2012-03-20 11:41                           ` David Laight
  2012-03-20 13:56                           ` Eric Dumazet
  1 sibling, 0 replies; 40+ messages in thread
From: David Laight @ 2012-03-20 11:41 UTC (permalink / raw)
  To: Indan Zupancic, Eric Dumazet
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

 
> On Tue, March 20, 2012 13:59, Eric Dumazet wrote:
> > On Tue, 2012-03-20 at 13:24 +1100, Indan Zupancic wrote:
> >
> >> If it does then perhaps the fast path should be made faster by inlining
> >> the code instead of calling a function which may not be cached.
> >>
> >
> > inlining a sequence of code 400 times is a waste of icache, you
> > probably missed this.
> 
> Well, according to you most filters are small; inlining 26 bytes a few
> times should be faster than calling an external function. Not
> all calls need to be inlined either.

I wouldn't bet on it.
If the number of arguments is small enough to fit in the registers,
the called function doesn't need to save any registers, and the call
doesn't mean the calling code runs out of registers,
the actual cost of the call will be minimal.

OTOH the benefit of only having to fetch the code once,
and the higher likelihood that it will be in the i-cache
from some other use, will make the version with the calls
faster.

You need to do real benchmarks on a real system running
a real workload to find out which is better.
Oh, and beware that changes in which code shares cache
lines can have a measurable effect (typified by unrelated
changes affecting measured performance).

	David



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32.
  2012-03-20 11:33                         ` Indan Zupancic
  2012-03-20 11:41                           ` David Laight
@ 2012-03-20 13:56                           ` Eric Dumazet
  1 sibling, 0 replies; 40+ messages in thread
From: Eric Dumazet @ 2012-03-20 13:56 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Will Drewry, linux-kernel, linux-arch, linux-doc,
	kernel-hardening, netdev, x86, arnd, davem, hpa, mingo, oleg,
	peterz, rdunlap, mcgrathr, tglx, luto, eparis, serge.hallyn, djm,
	scarybeasts, pmoore, akpm, corbet, markus, coreyb, keescook

On Tue, 2012-03-20 at 22:33 +1100, Indan Zupancic wrote:

> And if the dev, len or data_len fields are really moved past the first
> 127 bytes the JIT code can be changed too. The JIT code already depends
> on some of struct sk_buff's field properties anyway.

It seems you didn't understand, but I said NO to your patch.

This time I am really tired.





^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2012-03-20 13:56 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-03-12 21:28 [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Will Drewry
2012-03-12 21:28 ` [PATCH v14 02/13] net/compat.c,linux/filter.h: share compat_sock_fprog Will Drewry
2012-03-12 21:28 ` [PATCH v14 03/13] seccomp: kill the seccomp_t typedef Will Drewry
2012-03-12 21:28 ` [PATCH v14 04/13] arch/x86: add syscall_get_arch to syscall.h Will Drewry
2012-03-12 21:28 ` [PATCH v14 05/13] asm/syscall.h: add syscall_get_arch Will Drewry
2012-03-12 21:28 ` [PATCH v14 06/13] seccomp: add system call filtering using BPF Will Drewry
2012-03-13  3:33   ` Indan Zupancic
2012-03-13 15:57     ` [kernel-hardening] " Will Drewry
2012-03-12 21:28 ` [PATCH v14 07/13] signal, x86: add SIGSYS info and make it synchronous Will Drewry
2012-03-12 21:28 ` [PATCH v14 08/13] seccomp: add SECCOMP_RET_ERRNO Will Drewry
2012-03-12 21:28 ` [PATCH v14 09/13] seccomp: Add SECCOMP_RET_TRAP Will Drewry
2012-03-12 21:28 ` [PATCH v14 10/13] ptrace,seccomp: Add PTRACE_SECCOMP support Will Drewry
2012-03-14  7:31   ` Indan Zupancic
2012-03-14 15:03     ` [kernel-hardening] " Will Drewry
2012-03-14 15:52       ` Will Drewry
2012-03-15 20:31     ` [PATCH v16 11/13] " Will Drewry
2012-03-12 21:28 ` [PATCH v14 11/13] x86: Enable HAVE_ARCH_SECCOMP_FILTER Will Drewry
2012-03-12 21:28 ` [PATCH v14 12/13] Documentation: prctl/seccomp_filter Will Drewry
2012-03-12 21:28 ` [PATCH v14 13/13] seccomp: remove duplicated failure logging Will Drewry
2012-03-13  3:40 ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Indan Zupancic
2012-03-13 15:40   ` Will Drewry
2012-03-13 10:04 ` Indan Zupancic
2012-03-13 15:43   ` Will Drewry
2012-03-13 17:13   ` Eric Dumazet
2012-03-14  5:12     ` Indan Zupancic
2012-03-14  5:55       ` Eric Dumazet
2012-03-14  7:59         ` Indan Zupancic
2012-03-14  8:05           ` Eric Dumazet
2012-03-17 10:14             ` Indan Zupancic
2012-03-17 13:49               ` Eric Dumazet
2012-03-18  8:35                 ` Indan Zupancic
2012-03-18 12:40                   ` [PATCH] net: bpf_jit: fix BPF_S_LDX_B_MSH compilation Eric Dumazet
2012-03-19 21:42                     ` David Miller
2012-03-20  0:16                     ` [PATCH] net: bpf_jit: Document evilness of negative indirect loads Indan Zupancic
2012-03-18 12:52                   ` [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W Eric Dumazet
2012-03-20  2:24                     ` [PATCH] net: bpf_jit: Simplify code by always using offset8 or offset32 Indan Zupancic
2012-03-20  2:59                       ` Eric Dumazet
2012-03-20 11:33                         ` Indan Zupancic
2012-03-20 11:41                           ` David Laight
2012-03-20 13:56                           ` Eric Dumazet

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).