* [PATCH net-next 0/3] BPF updates
@ 2014-09-06  9:42 Daniel Borkmann
  2014-09-06  9:42 ` [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator Daniel Borkmann
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Daniel Borkmann @ 2014-09-06  9:42 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev

[ Set applies on top of current net-next but also on top of
  Alexei's latest patches. Please see individual patches for
  more details. ]

Daniel Borkmann (3):
  net: bpf: consolidate JIT binary allocator
  net: bpf: arm: address randomize and write protect JIT code
  net: bpf: be friendly to kmemcheck

 arch/arm/net/bpf_jit_32.c       | 34 +++++++++++++++++++++------
 arch/mips/net/bpf_jit.c         |  2 +-
 arch/powerpc/net/bpf_jit_comp.c |  2 +-
 arch/s390/net/bpf_jit_comp.c    | 47 ++++++++-----------------------------
 arch/sparc/net/bpf_jit_comp.c   |  2 +-
 arch/x86/net/bpf_jit_comp.c     | 52 ++++++++++-------------------------------
 include/linux/filter.h          | 19 ++++++++++++---
 kernel/bpf/core.c               | 39 +++++++++++++++++++++++++++++++
 net/core/filter.c               |  2 +-
 9 files changed, 108 insertions(+), 91 deletions(-)

-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator
  2014-09-06  9:42 [PATCH net-next 0/3] BPF updates Daniel Borkmann
@ 2014-09-06  9:42 ` Daniel Borkmann
  2014-09-07 23:15   ` David Miller
  2014-09-08  6:17   ` Heiko Carstens
  2014-09-06  9:42 ` [PATCH net-next 2/3] net: bpf: arm: address randomize and write protect JIT code Daniel Borkmann
  2014-09-06  9:42 ` [PATCH net-next 3/3] net: bpf: be friendly to kmemcheck Daniel Borkmann
  2 siblings, 2 replies; 11+ messages in thread
From: Daniel Borkmann @ 2014-09-06  9:42 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev, Eric Dumazet, Heiko Carstens, Martin Schwidefsky

Introduced in commit 314beb9bcabf ("x86: bpf_jit_comp: secure bpf jit
against spraying attacks") and later replicated in aa2d2c73c21f
("s390/bpf,jit: address randomize and write protect jit code") for
the s390 architecture, write protection for BPF JIT images was added,
along with a random start address for the JIT code, so that it no
longer sits on a page boundary.

Since both use a very similar allocator for the BPF binary header,
we can consolidate this code into the BPF core, as it is mostly JIT
independent anyway.

This will also allow future archs that support DEBUG_SET_MODULE_RONX
to simply reuse this code instead of reimplementing it.

While reviewing the code, I think the alignment masking on s390 is
not correct in its current form: we make sure the first instruction
starts at an even address, as stated by commit aa2d2c73c21f, but mask
the start with '& -2', whereas 2-byte alignment should rather be
'& ~1'.

JIT tested on x86_64 and s390x with BPF test suite.

Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
---
 arch/s390/net/bpf_jit_comp.c | 45 ++++++++-------------------------------
 arch/x86/net/bpf_jit_comp.c  | 50 ++++++++++----------------------------------
 include/linux/filter.h       | 13 ++++++++++++
 kernel/bpf/core.c            | 39 ++++++++++++++++++++++++++++++++++
 4 files changed, 72 insertions(+), 75 deletions(-)

diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index f2833c5..b734f97 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -5,11 +5,9 @@
  *
  * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
  */
-#include <linux/moduleloader.h>
 #include <linux/netdevice.h>
 #include <linux/if_vlan.h>
 #include <linux/filter.h>
-#include <linux/random.h>
 #include <linux/init.h>
 #include <asm/cacheflush.h>
 #include <asm/facility.h>
@@ -148,6 +146,12 @@ struct bpf_jit {
 	ret;						\
 })
 
+static void bpf_jit_fill_hole(void *area, unsigned int size)
+{
+	/* Fill whole space with illegal instructions */
+	memset(area, 0, size);
+}
+
 static void bpf_jit_prologue(struct bpf_jit *jit)
 {
 	/* Save registers and create stack frame if necessary */
@@ -780,38 +784,6 @@ out:
 	return -1;
 }
 
-/*
- * Note: for security reasons, bpf code will follow a randomly
- *	 sized amount of illegal instructions.
- */
-struct bpf_binary_header {
-	unsigned int pages;
-	u8 image[];
-};
-
-static struct bpf_binary_header *bpf_alloc_binary(unsigned int bpfsize,
-						  u8 **image_ptr)
-{
-	struct bpf_binary_header *header;
-	unsigned int sz, hole;
-
-	/* Most BPF filters are really small, but if some of them fill a page,
-	 * allow at least 128 extra bytes for illegal instructions.
-	 */
-	sz = round_up(bpfsize + sizeof(*header) + 128, PAGE_SIZE);
-	header = module_alloc(sz);
-	if (!header)
-		return NULL;
-	memset(header, 0, sz);
-	header->pages = sz / PAGE_SIZE;
-	hole = min(sz - (bpfsize + sizeof(*header)), PAGE_SIZE - sizeof(*header));
-	/* Insert random number of illegal instructions before BPF code
-	 * and make sure the first instruction starts at an even address.
-	 */
-	*image_ptr = &header->image[(prandom_u32() % hole) & -2];
-	return header;
-}
-
 void bpf_jit_compile(struct bpf_prog *fp)
 {
 	struct bpf_binary_header *header = NULL;
@@ -850,7 +822,8 @@ void bpf_jit_compile(struct bpf_prog *fp)
 			size = prg_len + lit_len;
 			if (size >= BPF_SIZE_MAX)
 				goto out;
-			header = bpf_alloc_binary(size, &jit.start);
+			header = bpf_jit_binary_alloc(size, &jit.start,
+						      2, bpf_jit_fill_hole);
 			if (!header)
 				goto out;
 			jit.prg = jit.mid = jit.start + prg_len;
@@ -884,7 +857,7 @@ void bpf_jit_free(struct bpf_prog *fp)
 		goto free_filter;
 
 	set_memory_rw(addr, header->pages);
-	module_free(NULL, header);
+	bpf_jit_binary_free(header);
 
 free_filter:
 	bpf_prog_unlock_free(fp);
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 06f8c17..9de0b54 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -8,12 +8,10 @@
  * as published by the Free Software Foundation; version 2
  * of the License.
  */
-#include <linux/moduleloader.h>
-#include <asm/cacheflush.h>
 #include <linux/netdevice.h>
 #include <linux/filter.h>
 #include <linux/if_vlan.h>
-#include <linux/random.h>
+#include <asm/cacheflush.h>
 
 int bpf_jit_enable __read_mostly;
 
@@ -109,39 +107,6 @@ static inline void bpf_flush_icache(void *start, void *end)
 #define CHOOSE_LOAD_FUNC(K, func) \
 	((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
 
-struct bpf_binary_header {
-	unsigned int	pages;
-	/* Note : for security reasons, bpf code will follow a randomly
-	 * sized amount of int3 instructions
-	 */
-	u8		image[];
-};
-
-static struct bpf_binary_header *bpf_alloc_binary(unsigned int proglen,
-						  u8 **image_ptr)
-{
-	unsigned int sz, hole;
-	struct bpf_binary_header *header;
-
-	/* Most of BPF filters are really small,
-	 * but if some of them fill a page, allow at least
-	 * 128 extra bytes to insert a random section of int3
-	 */
-	sz = round_up(proglen + sizeof(*header) + 128, PAGE_SIZE);
-	header = module_alloc(sz);
-	if (!header)
-		return NULL;
-
-	memset(header, 0xcc, sz); /* fill whole space with int3 instructions */
-
-	header->pages = sz / PAGE_SIZE;
-	hole = min(sz - (proglen + sizeof(*header)), PAGE_SIZE - sizeof(*header));
-
-	/* insert a random number of int3 instructions before BPF code */
-	*image_ptr = &header->image[prandom_u32() % hole];
-	return header;
-}
-
 /* pick a register outside of BPF range for JIT internal work */
 #define AUX_REG (MAX_BPF_REG + 1)
 
@@ -206,6 +171,12 @@ static inline u8 add_2reg(u8 byte, u32 dst_reg, u32 src_reg)
 	return byte + reg2hex[dst_reg] + (reg2hex[src_reg] << 3);
 }
 
+static void jit_fill_hole(void *area, unsigned int size)
+{
+	/* fill whole space with int3 instructions */
+	memset(area, 0xcc, size);
+}
+
 struct jit_context {
 	unsigned int cleanup_addr; /* epilogue code offset */
 	bool seen_ld_abs;
@@ -959,7 +930,7 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
 		if (proglen <= 0) {
 			image = NULL;
 			if (header)
-				module_free(NULL, header);
+				bpf_jit_binary_free(header);
 			goto out;
 		}
 		if (image) {
@@ -969,7 +940,8 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
 			break;
 		}
 		if (proglen == oldproglen) {
-			header = bpf_alloc_binary(proglen, &image);
+			header = bpf_jit_binary_alloc(proglen, &image,
+						      1, jit_fill_hole);
 			if (!header)
 				goto out;
 		}
@@ -998,7 +970,7 @@ void bpf_jit_free(struct bpf_prog *fp)
 		goto free_filter;
 
 	set_memory_rw(addr, header->pages);
-	module_free(NULL, header);
+	bpf_jit_binary_free(header);
 
 free_filter:
 	bpf_prog_unlock_free(fp);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8f82ef3..868764f 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -289,6 +289,11 @@ struct sock_fprog_kern {
 	struct sock_filter	*filter;
 };
 
+struct bpf_binary_header {
+	unsigned int pages;
+	u8 image[];
+};
+
 struct bpf_work_struct {
 	struct bpf_prog *prog;
 	struct work_struct work;
@@ -358,6 +363,14 @@ struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
 				  gfp_t gfp_extra_flags);
 void __bpf_prog_free(struct bpf_prog *fp);
 
+typedef void (*bpf_jit_fill_hole_t)(void *area, unsigned int size);
+
+struct bpf_binary_header *
+bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
+		     unsigned int alignment,
+		     bpf_jit_fill_hole_t bpf_fill_ill_insns);
+void bpf_jit_binary_free(struct bpf_binary_header *hdr);
+
 static inline void bpf_prog_unlock_free(struct bpf_prog *fp)
 {
 	bpf_prog_unlock_ro(fp);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 2c2bfaa..8ee520f 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -20,9 +20,12 @@
  * Andi Kleen - Fix a few bad bugs and races.
  * Kris Katterjohn - Added many additional checks in bpf_check_classic()
  */
+
 #include <linux/filter.h>
 #include <linux/skbuff.h>
 #include <linux/vmalloc.h>
+#include <linux/random.h>
+#include <linux/moduleloader.h>
 #include <asm/unaligned.h>
 
 /* Registers */
@@ -125,6 +128,42 @@ void __bpf_prog_free(struct bpf_prog *fp)
 }
 EXPORT_SYMBOL_GPL(__bpf_prog_free);
 
+struct bpf_binary_header *
+bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
+		     unsigned int alignment,
+		     bpf_jit_fill_hole_t bpf_fill_ill_insns)
+{
+	struct bpf_binary_header *hdr;
+	unsigned int size, hole, start;
+
+	/* Most of BPF filters are really small, but if some of them
+	 * fill a page, allow at least 128 extra bytes to insert a
+	 * random section of illegal instructions.
+	 */
+	size = round_up(proglen + sizeof(*hdr) + 128, PAGE_SIZE);
+	hdr = module_alloc(size);
+	if (hdr == NULL)
+		return NULL;
+
+	/* Fill space with illegal/arch-dep instructions. */
+	bpf_fill_ill_insns(hdr, size);
+
+	hdr->pages = size / PAGE_SIZE;
+	hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
+		     PAGE_SIZE - sizeof(*hdr));
+	start = (prandom_u32() % hole) & ~(alignment - 1);
+
+	/* Leave a random number of instructions before BPF code. */
+	*image_ptr = &hdr->image[start];
+
+	return hdr;
+}
+
+void bpf_jit_binary_free(struct bpf_binary_header *hdr)
+{
+	module_free(NULL, hdr);
+}
+
 /* Base function for offset calculation. Needs to go into .text section,
  * therefore keeping it non-static as well; will also be used by JITs
  * anyway later on, so do not let the compiler omit it.
-- 
1.7.11.7

* [PATCH net-next 2/3] net: bpf: arm: address randomize and write protect JIT code
  2014-09-06  9:42 [PATCH net-next 0/3] BPF updates Daniel Borkmann
  2014-09-06  9:42 ` [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator Daniel Borkmann
@ 2014-09-06  9:42 ` Daniel Borkmann
  2014-09-06 17:36   ` Mircea Gherzan
  2014-09-06  9:42 ` [PATCH net-next 3/3] net: bpf: be friendly to kmemcheck Daniel Borkmann
  2 siblings, 1 reply; 11+ messages in thread
From: Daniel Borkmann @ 2014-09-06  9:42 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev, Mircea Gherzan

This is the ARM variant for 314beb9bcab ("x86: bpf_jit_comp: secure bpf
jit against spraying attacks").

It is now possible to implement it due to commits 75374ad47c64 ("ARM: mm:
Define set_memory_* functions for ARM") and dca9aa92fc7c ("ARM: add
DEBUG_SET_MODULE_RONX option to Kconfig") which added infrastructure for
this facility.

Thus, this patch makes sure the generated BPF JIT code is marked RO,
like other kernel text sections, and also lets the generated JIT code
start at a pseudo-random offset instead of on a page boundary. The
holes are filled with illegal instructions.

JIT tested on armv7hl with BPF test suite.

Reference: http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Mircea Gherzan <mgherzan@gmail.com>
---
 arch/arm/net/bpf_jit_32.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index a76623b..2d1a5b9 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -12,7 +12,6 @@
 #include <linux/compiler.h>
 #include <linux/errno.h>
 #include <linux/filter.h>
-#include <linux/moduleloader.h>
 #include <linux/netdevice.h>
 #include <linux/string.h>
 #include <linux/slab.h>
@@ -174,6 +173,15 @@ static inline bool is_load_to_a(u16 inst)
 	}
 }
 
+static void jit_fill_hole(void *area, unsigned int size)
+{
+	/* Insert illegal UND instructions. */
+	u32 *ptr, fill_ins = 0xe7ffffff;
+	/* We are guaranteed to have aligned memory. */
+	for (ptr = area; size >= sizeof(u32); size -= sizeof(u32))
+		*ptr++ = fill_ins;
+}
+
 static void build_prologue(struct jit_ctx *ctx)
 {
 	u16 reg_set = saved_regs(ctx);
@@ -859,9 +867,11 @@ b_epilogue:
 
 void bpf_jit_compile(struct bpf_prog *fp)
 {
+	struct bpf_binary_header *header;
 	struct jit_ctx ctx;
 	unsigned tmp_idx;
 	unsigned alloc_size;
+	u8 *target_ptr;
 
 	if (!bpf_jit_enable)
 		return;
@@ -897,13 +907,15 @@ void bpf_jit_compile(struct bpf_prog *fp)
 	/* there's nothing after the epilogue on ARMv7 */
 	build_epilogue(&ctx);
 #endif
-
 	alloc_size = 4 * ctx.idx;
-	ctx.target = module_alloc(alloc_size);
-	if (unlikely(ctx.target == NULL))
+	header = bpf_jit_binary_alloc(alloc_size, &target_ptr,
+				      4, jit_fill_hole);
+	if (header == NULL)
 		goto out;
 
+	ctx.target = (u32 *) target_ptr;
 	ctx.idx = 0;
+
 	build_prologue(&ctx);
 	build_body(&ctx);
 	build_epilogue(&ctx);
@@ -919,6 +931,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
 		/* there are 2 passes here */
 		bpf_jit_dump(fp->len, alloc_size, 2, ctx.target);
 
+	set_memory_ro((unsigned long)header, header->pages);
 	fp->bpf_func = (void *)ctx.target;
 	fp->jited = 1;
 out:
@@ -928,8 +941,15 @@ out:
 
 void bpf_jit_free(struct bpf_prog *fp)
 {
-	if (fp->jited)
-		module_free(NULL, fp->bpf_func);
+	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
+	struct bpf_binary_header *header = (void *)addr;
+
+	if (!fp->jited)
+		goto free_filter;
+
+	set_memory_rw(addr, header->pages);
+	bpf_jit_binary_free(header);
 
+free_filter:
 	bpf_prog_unlock_free(fp);
 }
-- 
1.7.11.7

* [PATCH net-next 3/3] net: bpf: be friendly to kmemcheck
  2014-09-06  9:42 [PATCH net-next 0/3] BPF updates Daniel Borkmann
  2014-09-06  9:42 ` [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator Daniel Borkmann
  2014-09-06  9:42 ` [PATCH net-next 2/3] net: bpf: arm: address randomize and write protect JIT code Daniel Borkmann
@ 2014-09-06  9:42 ` Daniel Borkmann
  2014-09-06 16:09   ` Alexei Starovoitov
  2 siblings, 1 reply; 11+ messages in thread
From: Daniel Borkmann @ 2014-09-06  9:42 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev, Mikulas Patocka

As reported by Mikulas Patocka, kmemcheck currently barks out a
false positive, since we don't have a special kmemcheck annotation
for the bitfields used in the bpf_prog structure.

We currently have jited:1, len:31, and thus when accessing len with
CONFIG_KMEMCHECK enabled, kmemcheck throws a warning that we're
reading uninitialized memory.

As we don't need the whole bit universe for the pages member, we can
just shrink it to a u16 and use a bool flag for jited instead of a
bitfield.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
---
 arch/arm/net/bpf_jit_32.c       | 2 +-
 arch/mips/net/bpf_jit.c         | 2 +-
 arch/powerpc/net/bpf_jit_comp.c | 2 +-
 arch/s390/net/bpf_jit_comp.c    | 2 +-
 arch/sparc/net/bpf_jit_comp.c   | 2 +-
 arch/x86/net/bpf_jit_comp.c     | 2 +-
 include/linux/filter.h          | 6 +++---
 net/core/filter.c               | 2 +-
 8 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 2d1a5b9..6b45f64 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -933,7 +933,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
 
 	set_memory_ro((unsigned long)header, header->pages);
 	fp->bpf_func = (void *)ctx.target;
-	fp->jited = 1;
+	fp->jited = true;
 out:
 	kfree(ctx.offsets);
 	return;
diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c
index cfa83cf..0e97ccd 100644
--- a/arch/mips/net/bpf_jit.c
+++ b/arch/mips/net/bpf_jit.c
@@ -1417,7 +1417,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
 		bpf_jit_dump(fp->len, alloc_size, 2, ctx.target);
 
 	fp->bpf_func = (void *)ctx.target;
-	fp->jited = 1;
+	fp->jited = true;
 
 out:
 	kfree(ctx.offsets);
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 40c53ff..cbae2df 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -686,7 +686,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
 		((u64 *)image)[0] = (u64)code_base;
 		((u64 *)image)[1] = local_paca->kernel_toc;
 		fp->bpf_func = (void *)image;
-		fp->jited = 1;
+		fp->jited = true;
 	}
 out:
 	kfree(addrs);
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index b734f97..555f5c7 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -842,7 +842,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
 	if (jit.start) {
 		set_memory_ro((unsigned long)header, header->pages);
 		fp->bpf_func = (void *) jit.start;
-		fp->jited = 1;
+		fp->jited = true;
 	}
 out:
 	kfree(addrs);
diff --git a/arch/sparc/net/bpf_jit_comp.c b/arch/sparc/net/bpf_jit_comp.c
index f7a736b..b2ad9dc 100644
--- a/arch/sparc/net/bpf_jit_comp.c
+++ b/arch/sparc/net/bpf_jit_comp.c
@@ -801,7 +801,7 @@ cond_branch:			f_offset = addrs[i + filter[i].jf];
 	if (image) {
 		bpf_flush_icache(image, image + proglen);
 		fp->bpf_func = (void *)image;
-		fp->jited = 1;
+		fp->jited = true;
 	}
 out:
 	kfree(addrs);
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 9de0b54..d56cd1f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -955,7 +955,7 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
 		bpf_flush_icache(header, image + proglen);
 		set_memory_ro((unsigned long)header, header->pages);
 		prog->bpf_func = (void *)image;
-		prog->jited = 1;
+		prog->jited = true;
 	}
 out:
 	kfree(addrs);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 868764f..4b59ede 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -300,9 +300,9 @@ struct bpf_work_struct {
 };
 
 struct bpf_prog {
-	u32			pages;		/* Number of allocated pages */
-	u32			jited:1,	/* Is our filter JIT'ed? */
-				len:31;		/* Number of filter blocks */
+	u16			pages;		/* Number of allocated pages */
+	bool			jited;		/* Is our filter JIT'ed? */
+	u32			len;		/* Number of filter blocks */
 	struct sock_fprog_kern	*orig_prog;	/* Original BPF program */
 	struct bpf_work_struct	*work;		/* Deferred free work struct */
 	unsigned int		(*bpf_func)(const struct sk_buff *skb,
diff --git a/net/core/filter.c b/net/core/filter.c
index fa5b7d0..dfc716f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -972,7 +972,7 @@ static struct bpf_prog *bpf_prepare_filter(struct bpf_prog *fp)
 	int err;
 
 	fp->bpf_func = NULL;
-	fp->jited = 0;
+	fp->jited = false;
 
 	err = bpf_check_classic(fp->insns, fp->len);
 	if (err) {
-- 
1.7.11.7

* Re: [PATCH net-next 3/3] net: bpf: be friendly to kmemcheck
  2014-09-06  9:42 ` [PATCH net-next 3/3] net: bpf: be friendly to kmemcheck Daniel Borkmann
@ 2014-09-06 16:09   ` Alexei Starovoitov
  0 siblings, 0 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2014-09-06 16:09 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: David S. Miller, Network Development, Mikulas Patocka

On Sat, Sep 6, 2014 at 2:42 AM, Daniel Borkmann <dborkman@redhat.com> wrote:
> Reported by Mikulas Patocka, kmemcheck currently barks out a
> false positive since we don't have special kmemcheck annotation
> for bitfields used in bpf_prog structure.
>
> We currently have jited:1, len:31 and thus when accessing len
> while CONFIG_KMEMCHECK enabled, kmemcheck throws a warning that
> we're reading uninitialized memory.
>
> As we don't need the whole bit universe for pages member, we
> can just split it to u16 and use a bool flag for jited instead
> of a bitfield.
>
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
> Cc: Alexei Starovoitov <ast@plumgrid.com>
> ---
>  arch/arm/net/bpf_jit_32.c       | 2 +-
>  arch/mips/net/bpf_jit.c         | 2 +-
>  arch/powerpc/net/bpf_jit_comp.c | 2 +-
>  arch/s390/net/bpf_jit_comp.c    | 2 +-
>  arch/sparc/net/bpf_jit_comp.c   | 2 +-
>  arch/x86/net/bpf_jit_comp.c     | 2 +-
>  include/linux/filter.h          | 6 +++---
>  net/core/filter.c               | 2 +-
>  8 files changed, 10 insertions(+), 10 deletions(-)

This one also looks good. Thanks!
Acked-by: Alexei Starovoitov <ast@plumgrid.com>

* Re: [PATCH net-next 2/3] net: bpf: arm: address randomize and write protect JIT code
  2014-09-06  9:42 ` [PATCH net-next 2/3] net: bpf: arm: address randomize and write protect JIT code Daniel Borkmann
@ 2014-09-06 17:36   ` Mircea Gherzan
  0 siblings, 0 replies; 11+ messages in thread
From: Mircea Gherzan @ 2014-09-06 17:36 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: davem, ast, netdev

2014-09-06 11:42 GMT+02:00 Daniel Borkmann <dborkman@redhat.com>:
> This is the ARM variant for 314beb9bcab ("x86: bpf_jit_comp: secure bpf
> jit against spraying attacks").
>
> It is now possible to implement it due to commits 75374ad47c64 ("ARM: mm:
> Define set_memory_* functions for ARM") and dca9aa92fc7c ("ARM: add
> DEBUG_SET_MODULE_RONX option to Kconfig") which added infrastructure for
> this facility.
>
> Thus, this patch makes sure the BPF generated JIT code is marked RO, as
> other kernel text sections, and also lets the generated JIT code start
> at a pseudo random offset instead on a page boundary. The holes are filled
> with illegal instructions.
>
> JIT tested on armv7hl with BPF test suite.
>
> Reference: http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html
> Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
> Cc: Mircea Gherzan <mgherzan@gmail.com>
> ---
>  arch/arm/net/bpf_jit_32.c | 32 ++++++++++++++++++++++++++------
>  1 file changed, 26 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
> index a76623b..2d1a5b9 100644
> --- a/arch/arm/net/bpf_jit_32.c
> +++ b/arch/arm/net/bpf_jit_32.c
> @@ -12,7 +12,6 @@
>  #include <linux/compiler.h>
>  #include <linux/errno.h>
>  #include <linux/filter.h>
> -#include <linux/moduleloader.h>
>  #include <linux/netdevice.h>
>  #include <linux/string.h>
>  #include <linux/slab.h>
> @@ -174,6 +173,15 @@ static inline bool is_load_to_a(u16 inst)
>         }
>  }
>
> +static void jit_fill_hole(void *area, unsigned int size)
> +{
> +       /* Insert illegal UND instructions. */
> +       u32 *ptr, fill_ins = 0xe7ffffff;
> +       /* We are guaranteed to have aligned memory. */
> +       for (ptr = area; size >= sizeof(u32); size -= sizeof(u32))
> +               *ptr++ = fill_ins;
> +}
> +
>  static void build_prologue(struct jit_ctx *ctx)
>  {
>         u16 reg_set = saved_regs(ctx);
> @@ -859,9 +867,11 @@ b_epilogue:
>
>  void bpf_jit_compile(struct bpf_prog *fp)
>  {
> +       struct bpf_binary_header *header;
>         struct jit_ctx ctx;
>         unsigned tmp_idx;
>         unsigned alloc_size;
> +       u8 *target_ptr;
>
>         if (!bpf_jit_enable)
>                 return;
> @@ -897,13 +907,15 @@ void bpf_jit_compile(struct bpf_prog *fp)
>         /* there's nothing after the epilogue on ARMv7 */
>         build_epilogue(&ctx);
>  #endif
> -
>         alloc_size = 4 * ctx.idx;
> -       ctx.target = module_alloc(alloc_size);
> -       if (unlikely(ctx.target == NULL))
> +       header = bpf_jit_binary_alloc(alloc_size, &target_ptr,
> +                                     4, jit_fill_hole);
> +       if (header == NULL)
>                 goto out;
>
> +       ctx.target = (u32 *) target_ptr;
>         ctx.idx = 0;
> +
>         build_prologue(&ctx);
>         build_body(&ctx);
>         build_epilogue(&ctx);
> @@ -919,6 +931,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
>                 /* there are 2 passes here */
>                 bpf_jit_dump(fp->len, alloc_size, 2, ctx.target);
>
> +       set_memory_ro((unsigned long)header, header->pages);
>         fp->bpf_func = (void *)ctx.target;
>         fp->jited = 1;
>  out:
> @@ -928,8 +941,15 @@ out:
>
>  void bpf_jit_free(struct bpf_prog *fp)
>  {
> -       if (fp->jited)
> -               module_free(NULL, fp->bpf_func);
> +       unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
> +       struct bpf_binary_header *header = (void *)addr;
> +
> +       if (!fp->jited)
> +               goto free_filter;
> +
> +       set_memory_rw(addr, header->pages);
> +       bpf_jit_binary_free(header);
>
> +free_filter:
>         bpf_prog_unlock_free(fp);
>  }

Acked-by: Mircea Gherzan <mgherzan@gmail.com>

* Re: [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator
  2014-09-06  9:42 ` [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator Daniel Borkmann
@ 2014-09-07 23:15   ` David Miller
  2014-09-08  0:17     ` Alexei Starovoitov
  2014-09-08  6:09     ` Daniel Borkmann
  2014-09-08  6:17   ` Heiko Carstens
  1 sibling, 2 replies; 11+ messages in thread
From: David Miller @ 2014-09-07 23:15 UTC (permalink / raw)
  To: dborkman; +Cc: ast, netdev, edumazet, heiko.carstens, schwidefsky

From: Daniel Borkmann <dborkman@redhat.com>
Date: Sat,  6 Sep 2014 11:42:45 +0200

> While reviewing the code, I think on s390, the alignment masking
> seems not to be correct in it's current form, that is, we make sure
> the first instruction starts at an even address as stated by commit
> aa2d2c73c21f but masks the start with '& -2' while 2 byte-alignment
> should rather be '& ~1'.

Both -2 and ~1 are the same value.

* Re: [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator
  2014-09-07 23:15   ` David Miller
@ 2014-09-08  0:17     ` Alexei Starovoitov
  2014-09-08  6:09     ` Daniel Borkmann
  1 sibling, 0 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2014-09-08  0:17 UTC (permalink / raw)
  To: David Miller
  Cc: Daniel Borkmann, Network Development, Eric Dumazet,
	Heiko Carstens, Martin Schwidefsky

On Sun, Sep 7, 2014 at 4:15 PM, David Miller <davem@davemloft.net> wrote:
> From: Daniel Borkmann <dborkman@redhat.com>
> Date: Sat,  6 Sep 2014 11:42:45 +0200
>
>> While reviewing the code, I think on s390, the alignment masking
>> seems not to be correct in it's current form, that is, we make sure
>> the first instruction starts at an even address as stated by commit
>> aa2d2c73c21f but masks the start with '& -2' while 2 byte-alignment
>> should rather be '& ~1'.
>
> Both -2 and ~1 are the same value.

oops. you're right. commit log is incorrect.

The new code makes the logic more clear:
in s390:
- *image_ptr = &header->image[(prandom_u32() % hole) & -2];
+ header = bpf_jit_binary_alloc(size, &jit.start, 2, bpf_jit_fill_hole);

and in bpf/core.c:
+struct bpf_binary_header *
+bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
+                    unsigned int alignment,
+                    bpf_jit_fill_hole_t bpf_fill_ill_insns)
+{
...
+       start = (prandom_u32() % hole) & ~(alignment - 1);

we'll fix the commit log and resubmit.
Thx

* Re: [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator
  2014-09-07 23:15   ` David Miller
  2014-09-08  0:17     ` Alexei Starovoitov
@ 2014-09-08  6:09     ` Daniel Borkmann
  1 sibling, 0 replies; 11+ messages in thread
From: Daniel Borkmann @ 2014-09-08  6:09 UTC (permalink / raw)
  To: David Miller; +Cc: ast, netdev, edumazet, heiko.carstens, schwidefsky

On 09/08/2014 01:15 AM, David Miller wrote:
...
> Both -2 and ~1 are the same value.

Argh, you are right, sorry. I have removed that paragraph
from the commit message and resent the set. Thanks!

* Re: [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator
  2014-09-06  9:42 ` [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator Daniel Borkmann
  2014-09-07 23:15   ` David Miller
@ 2014-09-08  6:17   ` Heiko Carstens
  2014-09-08  8:12     ` Daniel Borkmann
  1 sibling, 1 reply; 11+ messages in thread
From: Heiko Carstens @ 2014-09-08  6:17 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: davem, ast, netdev, Eric Dumazet, Martin Schwidefsky

On Sat, Sep 06, 2014 at 11:42:45AM +0200, Daniel Borkmann wrote:
> Introduced in commit 314beb9bcabf ("x86: bpf_jit_comp: secure bpf jit
> against spraying attacks") and later on replicated in aa2d2c73c21f
> ("s390/bpf,jit: address randomize and write protect jit code") for
> s390 architecture, write protection for BPF JIT images got added and
> a random start address of the JIT code, so that it's not on a page
> boundary anymore.
> 
> Since both use a very similar allocator for the BPF binary header,
> we can consolidate this code into the BPF core as it's mostly JIT
> independant anyway.
> 
> This will also allow for future archs that support DEBUG_SET_MODULE_RONX
> to just reuse instead of reimplementing it.
> 
> While reviewing the code, I think on s390, the alignment masking
> seems not to be correct in it's current form, that is, we make sure
> the first instruction starts at an even address as stated by commit
> aa2d2c73c21f but masks the start with '& -2' while 2 byte-alignment
> should rather be '& ~1'.
> 
> JIT tested on x86_64 and s390x with BPF test suite.
> 
> Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
> Acked-by: Alexei Starovoitov <ast@plumgrid.com>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> ---
>  arch/s390/net/bpf_jit_comp.c | 45 ++++++++-------------------------------
>  arch/x86/net/bpf_jit_comp.c  | 50 ++++++++++----------------------------------
>  include/linux/filter.h       | 13 ++++++++++++
>  kernel/bpf/core.c            | 39 ++++++++++++++++++++++++++++++++++
>  4 files changed, 72 insertions(+), 75 deletions(-)

Looks good to me (except for the comment about s390 ;).

Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>

* Re: [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator
  2014-09-08  6:17   ` Heiko Carstens
@ 2014-09-08  8:12     ` Daniel Borkmann
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Borkmann @ 2014-09-08  8:12 UTC (permalink / raw)
  To: Heiko Carstens; +Cc: davem, ast, netdev, Eric Dumazet, Martin Schwidefsky

On 09/08/2014 08:17 AM, Heiko Carstens wrote:
> On Sat, Sep 06, 2014 at 11:42:45AM +0200, Daniel Borkmann wrote:
>> Introduced in commit 314beb9bcabf ("x86: bpf_jit_comp: secure bpf jit
>> against spraying attacks") and later on replicated in aa2d2c73c21f
>> ("s390/bpf,jit: address randomize and write protect jit code") for
>> s390 architecture, write protection for BPF JIT images got added and
>> a random start address of the JIT code, so that it's not on a page
>> boundary anymore.
>>
>> Since both use a very similar allocator for the BPF binary header,
>> we can consolidate this code into the BPF core as it's mostly JIT
>> independant anyway.
>>
>> This will also allow for future archs that support DEBUG_SET_MODULE_RONX
>> to just reuse instead of reimplementing it.
>>
>> While reviewing the code, I think on s390, the alignment masking
>> seems not to be correct in it's current form, that is, we make sure
>> the first instruction starts at an even address as stated by commit
>> aa2d2c73c21f but masks the start with '& -2' while 2 byte-alignment
>> should rather be '& ~1'.
>>
>> JIT tested on x86_64 and s390x with BPF test suite.
>>
>> Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
>> Acked-by: Alexei Starovoitov <ast@plumgrid.com>
>> Cc: Eric Dumazet <edumazet@google.com>
>> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
>> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
>> ---
>>   arch/s390/net/bpf_jit_comp.c | 45 ++++++++-------------------------------
>>   arch/x86/net/bpf_jit_comp.c  | 50 ++++++++++----------------------------------
>>   include/linux/filter.h       | 13 ++++++++++++
>>   kernel/bpf/core.c            | 39 ++++++++++++++++++++++++++++++++++
>>   4 files changed, 72 insertions(+), 75 deletions(-)
>
> Looks good to me (except for the comment about s390 ;).

Yes, sorry for that. I guess I had too much coffee. :) I have already
updated the commit message and resent the set.

> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>

Thanks a lot,
Daniel

end of thread, other threads:[~2014-09-08  8:12 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-06  9:42 [PATCH net-next 0/3] BPF updates Daniel Borkmann
2014-09-06  9:42 ` [PATCH net-next 1/3] net: bpf: consolidate JIT binary allocator Daniel Borkmann
2014-09-07 23:15   ` David Miller
2014-09-08  0:17     ` Alexei Starovoitov
2014-09-08  6:09     ` Daniel Borkmann
2014-09-08  6:17   ` Heiko Carstens
2014-09-08  8:12     ` Daniel Borkmann
2014-09-06  9:42 ` [PATCH net-next 2/3] net: bpf: arm: address randomize and write protect JIT code Daniel Borkmann
2014-09-06 17:36   ` Mircea Gherzan
2014-09-06  9:42 ` [PATCH net-next 3/3] net: bpf: be friendly to kmemcheck Daniel Borkmann
2014-09-06 16:09   ` Alexei Starovoitov
