* [PATCH net-next 0/9] bpf: hash map pre-alloc
@ 2016-03-07  1:58 Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 1/9] bpf: prevent kprobe+bpf deadlocks Alexei Starovoitov
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

Hi,

this patch set switches the bpf hash map to pre-allocation by default
and introduces the BPF_F_NO_PREALLOC flag to keep the old behavior for
cases where full map pre-allocation is too memory expensive.

Some time back Daniel Wagner reported crashes when a bpf hash map is
used to compute time intervals between preempt_disable->preempt_enable,
and recently Tom Zanussi reported a deadlock in the iovisor/bcc/funccount
tool when it's used to count the number of invocations of kernel
'*spin*' functions. Both problems are due to the recursive use of
slub and can only be solved by pre-allocating all map elements.

A lot of different solutions were considered. Many were implemented,
but in the end pre-allocation turned out to be the only feasible answer.
Pre-allocation itself was implemented in 4 different ways:
- simple free-list with a single lock
- percpu_ida with optimizations
- blk-mq-tag variant customized for the bpf use case
- percpu_freelist
For the bpf style of alloc/free patterns percpu_freelist is the best
and is the one implemented in this patch set.
Patch 1 fixes simple deadlocks due to missing recursion checks.
Patch 2 introduces percpu_freelist.
Patch 3 switches the hash map to pre-allocation; detailed performance
numbers are in that patch.
Patches 4-7 prepare the test infra.
Patch 8 adds a stress test for the hash map infra. It attaches to spin_lock
functions, and bpf_map_update/delete are called from different contexts
(except NMI, which is still unsupported by bpf).
Patch 9 adds a map performance test.

Reported-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Reported-by: Tom Zanussi <tom.zanussi@linux.intel.com>

Alexei Starovoitov (9):
  bpf: prevent kprobe+bpf deadlocks
  bpf: introduce percpu_freelist
  bpf: pre-allocate hash map elements
  samples/bpf: make map creation more verbose
  samples/bpf: move ksym_search() into library
  samples/bpf: add map_flags to bpf loader
  samples/bpf: test both pre-alloc and normal maps
  samples/bpf: add bpf map stress test
  samples/bpf: add map performance test

 include/linux/bpf.h              |   4 +
 include/uapi/linux/bpf.h         |   3 +
 kernel/bpf/Makefile              |   2 +-
 kernel/bpf/hashtab.c             | 264 ++++++++++++++++++++++++++++-----------
 kernel/bpf/percpu_freelist.c     |  81 ++++++++++++
 kernel/bpf/percpu_freelist.h     |  31 +++++
 kernel/bpf/syscall.c             |  15 ++-
 kernel/trace/bpf_trace.c         |   2 -
 samples/bpf/Makefile             |   8 ++
 samples/bpf/bpf_helpers.h        |   1 +
 samples/bpf/bpf_load.c           |  70 ++++++++++-
 samples/bpf/bpf_load.h           |   6 +
 samples/bpf/fds_example.c        |   2 +-
 samples/bpf/libbpf.c             |   5 +-
 samples/bpf/libbpf.h             |   2 +-
 samples/bpf/map_perf_test_kern.c | 100 +++++++++++++++
 samples/bpf/map_perf_test_user.c | 155 +++++++++++++++++++++++
 samples/bpf/offwaketime_user.c   |  67 +---------
 samples/bpf/sock_example.c       |   2 +-
 samples/bpf/spintest_kern.c      |  59 +++++++++
 samples/bpf/spintest_user.c      |  50 ++++++++
 samples/bpf/test_maps.c          |  29 +++--
 samples/bpf/test_verifier.c      |   4 +-
 23 files changed, 802 insertions(+), 160 deletions(-)
 create mode 100644 kernel/bpf/percpu_freelist.c
 create mode 100644 kernel/bpf/percpu_freelist.h
 create mode 100644 samples/bpf/map_perf_test_kern.c
 create mode 100644 samples/bpf/map_perf_test_user.c
 create mode 100644 samples/bpf/spintest_kern.c
 create mode 100644 samples/bpf/spintest_user.c

-- 
2.6.5

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH net-next 1/9] bpf: prevent kprobe+bpf deadlocks
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07 10:07   ` Daniel Borkmann
  2016-03-07  1:58 ` [PATCH net-next 2/9] bpf: introduce percpu_freelist Alexei Starovoitov
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

If a kprobe is placed within the update or delete hash map helpers
that hold the bucket spin lock, and the triggered bpf program tries to
grab the spinlock for the same bucket on the same cpu, it will
deadlock.
Fix it by extending the existing recursion prevention mechanism.

Note that map_lookup and other tracing helpers don't have this problem,
since they don't hold any locks and don't modify global data.
bpf_trace_printk has its own recursion check and is ok as well.
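
For reference, the existing recursion prevention being extended here is the
per-cpu bpf_prog_active counter checked in trace_call_bpf(); roughly (sketch
from memory, not part of this diff):

	preempt_disable();
	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		/* some bpf program is already running on this cpu,
		 * don't call into another one recursively
		 */
		ret = 0;
		goto out;
	}
	rcu_read_lock();
	ret = BPF_PROG_RUN(prog, ctx);
	rcu_read_unlock();
 out:
	__this_cpu_dec(bpf_prog_active);
	preempt_enable();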

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h      |  3 +++
 kernel/bpf/syscall.c     | 13 +++++++++++++
 kernel/trace/bpf_trace.c |  2 --
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 51e498e5470e..4b070827200d 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -10,6 +10,7 @@
 #include <uapi/linux/bpf.h>
 #include <linux/workqueue.h>
 #include <linux/file.h>
+#include <linux/percpu.h>
 
 struct bpf_map;
 
@@ -163,6 +164,8 @@ bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *f
 const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
 
 #ifdef CONFIG_BPF_SYSCALL
+DECLARE_PER_CPU(int, bpf_prog_active);
+
 void bpf_register_prog_type(struct bpf_prog_type_list *tl);
 void bpf_register_map_type(struct bpf_map_type_list *tl);
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index c95a753c2007..dc99f6a000f5 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -18,6 +18,8 @@
 #include <linux/filter.h>
 #include <linux/version.h>
 
+DEFINE_PER_CPU(int, bpf_prog_active);
+
 int sysctl_unprivileged_bpf_disabled __read_mostly;
 
 static LIST_HEAD(bpf_map_types);
@@ -347,6 +349,11 @@ static int map_update_elem(union bpf_attr *attr)
 	if (copy_from_user(value, uvalue, value_size) != 0)
 		goto free_value;
 
+	/* must increment bpf_prog_active to avoid kprobe+bpf triggering from
+	 * inside bpf map update or delete otherwise deadlocks are possible
+	 */
+	preempt_disable();
+	__this_cpu_inc(bpf_prog_active);
 	if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH) {
 		err = bpf_percpu_hash_update(map, key, value, attr->flags);
 	} else if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
@@ -356,6 +363,8 @@ static int map_update_elem(union bpf_attr *attr)
 		err = map->ops->map_update_elem(map, key, value, attr->flags);
 		rcu_read_unlock();
 	}
+	__this_cpu_dec(bpf_prog_active);
+	preempt_enable();
 
 free_value:
 	kfree(value);
@@ -394,9 +403,13 @@ static int map_delete_elem(union bpf_attr *attr)
 	if (copy_from_user(key, ukey, map->key_size) != 0)
 		goto free_key;
 
+	preempt_disable();
+	__this_cpu_inc(bpf_prog_active);
 	rcu_read_lock();
 	err = map->ops->map_delete_elem(map, key);
 	rcu_read_unlock();
+	__this_cpu_dec(bpf_prog_active);
+	preempt_enable();
 
 free_key:
 	kfree(key);
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4b8caa392b86..3e4ffb3ace5f 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -13,8 +13,6 @@
 #include <linux/ctype.h>
 #include "trace.h"
 
-static DEFINE_PER_CPU(int, bpf_prog_active);
-
 /**
  * trace_call_bpf - invoke BPF program
  * @prog: BPF program
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 2/9] bpf: introduce percpu_freelist
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 1/9] bpf: prevent kprobe+bpf deadlocks Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07 10:33   ` Daniel Borkmann
  2016-03-07  1:58 ` [PATCH net-next 3/9] bpf: pre-allocate hash map elements Alexei Starovoitov
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

Introduce a simple percpu_freelist to keep a single logical list of
elements spread across per-cpu singly linked lists.

/* push element into the list */
void pcpu_freelist_push(struct pcpu_freelist *, struct pcpu_freelist_node *);

/* pop element from the list */
struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *);

The object is pushed onto the current cpu's list.
Pop first tries to get an object from the current cpu's list;
if it's empty, it moves on to the neighbour cpus' lists.

For the bpf program usage pattern the collision rate is very low,
since programs typically push and pop objects on the same cpu.
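
A minimal usage sketch (the element type here is hypothetical, not part of
this patch); callers embed the node in their own struct and must disable
preemption around push/pop:

	struct my_elem {			/* hypothetical */
		struct pcpu_freelist_node fnode;
		char payload[32];
	};

	struct pcpu_freelist_node *n;
	struct my_elem *e;

	preempt_disable();
	n = pcpu_freelist_pop(&s);	/* NULL when all per-cpu lists are empty */
	if (n) {
		e = container_of(n, struct my_elem, fnode);
		/* ... use e ..., then return it to the freelist */
		pcpu_freelist_push(&s, &e->fnode);
	}
	preempt_enable();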

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/Makefile          |  2 +-
 kernel/bpf/percpu_freelist.c | 81 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/percpu_freelist.h | 31 +++++++++++++++++
 3 files changed, 113 insertions(+), 1 deletion(-)
 create mode 100644 kernel/bpf/percpu_freelist.c
 create mode 100644 kernel/bpf/percpu_freelist.h

diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 8a932d079c24..eed911d091da 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -1,7 +1,7 @@
 obj-y := core.o
 
 obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o
-obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o
+obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o
 ifeq ($(CONFIG_PERF_EVENTS),y)
 obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
 endif
diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
new file mode 100644
index 000000000000..250e45c223a1
--- /dev/null
+++ b/kernel/bpf/percpu_freelist.c
@@ -0,0 +1,81 @@
+/* Copyright (c) 2016 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include "percpu_freelist.h"
+
+int pcpu_freelist_init(struct pcpu_freelist *s)
+{
+	int cpu;
+
+	s->freelist = alloc_percpu(struct pcpu_freelist_head);
+	if (!s->freelist)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu) {
+		struct pcpu_freelist_head *head = per_cpu_ptr(s->freelist, cpu);
+
+		raw_spin_lock_init(&head->lock);
+		head->first = NULL;
+	}
+	return 0;
+}
+
+void pcpu_freelist_destroy(struct pcpu_freelist *s)
+{
+	free_percpu(s->freelist);
+}
+
+static inline void __pcpu_freelist_push(struct pcpu_freelist_head *head,
+					struct pcpu_freelist_node *node)
+{
+	raw_spin_lock(&head->lock);
+	node->next = head->first;
+	head->first = node;
+	raw_spin_unlock(&head->lock);
+}
+
+/* must be called with preemption disabled */
+void pcpu_freelist_push(struct pcpu_freelist *s,
+			struct pcpu_freelist_node *node)
+{
+	struct pcpu_freelist_head *head = this_cpu_ptr(s->freelist);
+
+	__pcpu_freelist_push(head, node);
+}
+
+void pcpu_freelist_push_cpu(struct pcpu_freelist *s,
+			    struct pcpu_freelist_node *node, int cpu)
+{
+	struct pcpu_freelist_head *head = per_cpu_ptr(s->freelist, cpu);
+
+	__pcpu_freelist_push(head, node);
+}
+
+/* must be called with preemption disabled */
+struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *s)
+{
+	struct pcpu_freelist_head *head;
+	struct pcpu_freelist_node *node;
+	int orig_cpu, cpu;
+
+	orig_cpu = cpu = raw_smp_processor_id();
+	while (1) {
+		head = per_cpu_ptr(s->freelist, cpu);
+		raw_spin_lock(&head->lock);
+		node = head->first;
+		if (node) {
+			head->first = node->next;
+			raw_spin_unlock(&head->lock);
+			return node;
+		}
+		raw_spin_unlock(&head->lock);
+		cpu = cpumask_next(cpu, cpu_possible_mask);
+		if (cpu >= nr_cpu_ids)
+			cpu = 0;
+		if (cpu == orig_cpu)
+			return NULL;
+	}
+}
diff --git a/kernel/bpf/percpu_freelist.h b/kernel/bpf/percpu_freelist.h
new file mode 100644
index 000000000000..4150da61dd13
--- /dev/null
+++ b/kernel/bpf/percpu_freelist.h
@@ -0,0 +1,31 @@
+/* Copyright (c) 2016 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#ifndef __PERCPU_FREELIST_H__
+#define __PERCPU_FREELIST_H__
+#include <linux/spinlock.h>
+#include <linux/percpu.h>
+
+struct pcpu_freelist_head {
+	struct pcpu_freelist_node *first;
+	raw_spinlock_t lock;
+};
+
+struct pcpu_freelist {
+	struct pcpu_freelist_head __percpu *freelist;
+};
+
+struct pcpu_freelist_node {
+	struct pcpu_freelist_node *next;
+};
+
+void pcpu_freelist_push(struct pcpu_freelist *, struct pcpu_freelist_node *);
+void pcpu_freelist_push_cpu(struct pcpu_freelist *, struct pcpu_freelist_node *,
+			    int);
+struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *);
+int pcpu_freelist_init(struct pcpu_freelist *);
+void pcpu_freelist_destroy(struct pcpu_freelist *s);
+#endif
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 3/9] bpf: pre-allocate hash map elements
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 1/9] bpf: prevent kprobe+bpf deadlocks Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 2/9] bpf: introduce percpu_freelist Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07 11:08   ` Daniel Borkmann
  2016-03-07  1:58 ` [PATCH net-next 4/9] samples/bpf: make map creation more verbose Alexei Starovoitov
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

If a kprobe is placed on spin_unlock then calling kmalloc/kfree from
bpf programs is not safe, since the following deadlock is possible:
kfree->spin_lock(kmem_cache_node->lock)...spin_unlock->kprobe->
bpf_prog->map_update->kmalloc->spin_lock(of the same kmem_cache_node->lock)

The following solutions were considered and some were implemented, but
eventually discarded:
- kmem_cache_create for every map
- add recursion check to slow-path of slub
- use reserved memory in bpf_map_update for in_irq or in preempt_disabled
- kmalloc via irq_work

In the end pre-allocation of all map elements turned out to be the simplest
solution, and since the user is charged upfront for all the memory, such
pre-allocation doesn't affect the user-space-visible behavior.

Since it's impossible to tell whether a kprobe is triggered in a location
that is safe from the kmalloc point of view, use pre-allocation by default
and introduce the new BPF_F_NO_PREALLOC flag.
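
From user space the flag is passed via the new map_flags attribute of
BPF_MAP_CREATE; a minimal sketch (mirroring the samples/bpf wrapper that
is updated later in this series):

	union bpf_attr attr = {
		.map_type    = BPF_MAP_TYPE_HASH,
		.key_size    = sizeof(__u32),
		.value_size  = sizeof(long),
		.max_entries = 1024,
		.map_flags   = BPF_F_NO_PREALLOC, /* opt out of pre-allocation */
	};
	int fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));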

While testing per-cpu hash maps it was discovered
that alloc_percpu(GFP_ATOMIC) has odd corner cases and often
fails to allocate memory even when 90% of it is free.
Pre-allocation of per-cpu hash elements solves this problem as well.

It turned out that bpf_map_update() quickly followed by
bpf_map_lookup()+bpf_map_delete() is a very common pattern used
in many of the iovisor/bcc tools, so there is an additional benefit of
pre-allocation: such use cases become much faster.

Since all hash map elements are now pre-allocated, we can remove the
atomic increment of htab->count and save a few more cycles.

Also add precharge_memlock() to check RLIMIT_MEMLOCK early and avoid
a large malloc/free for users who don't have sufficient limits.

Pre-allocation is done with vmalloc and alloc/free is done
via percpu_freelist. Here are performance numbers for different
pre-allocation algorithms that were implemented, but discarded
in favor of percpu_freelist:

1 cpu:
pcpu_ida	2.1M
pcpu_ida nolock	2.3M
bt		2.4M
kmalloc		1.8M
hlist+spinlock	2.3M
pcpu_freelist	2.6M

4 cpu:
pcpu_ida	1.5M
pcpu_ida nolock	1.8M
bt w/smp_align	1.7M
bt no/smp_align	1.1M
kmalloc		0.7M
hlist+spinlock	0.2M
pcpu_freelist	2.0M

8 cpu:
pcpu_ida	0.7M
bt w/smp_align	0.8M
kmalloc		0.4M
pcpu_freelist	1.5M

32 cpu:
kmalloc		0.13M
pcpu_freelist	0.49M

pcpu_ida nolock is a modified percpu_ida algorithm without
percpu_ida_cpu locks and without cross-cpu tag stealing.
It's faster than the existing percpu_ida, but not as fast as pcpu_freelist.

bt is a variant of block/blk-mq-tag.c simplified and customized
for the bpf use case. bt w/smp_align uses a cache line for every 'long'
(similar to blk-mq-tag). bt no/smp_align allocates the 'long'
bitmasks contiguously to save memory. It's comparable to percpu_ida
and in some cases faster, but slower than percpu_freelist.

hlist+spinlock is the simplest free list with a single spinlock.
As expected it scales very badly on SMP.

kmalloc is the existing implementation, which is still available via the
BPF_F_NO_PREALLOC flag. It's significantly slower on a single cpu, and
in the 8 cpu setup it's 3 times slower than pre-allocation with
pcpu_freelist, but it saves memory, so in cases where map->max_entries
can be large and the number of map updates/deletes per second is low,
it may make sense to use it.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h      |   1 +
 include/uapi/linux/bpf.h |   3 +
 kernel/bpf/hashtab.c     | 264 ++++++++++++++++++++++++++++++++++-------------
 kernel/bpf/syscall.c     |   2 +-
 4 files changed, 196 insertions(+), 74 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4b070827200d..c81efb10bbb5 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -37,6 +37,7 @@ struct bpf_map {
 	u32 key_size;
 	u32 value_size;
 	u32 max_entries;
+	u32 map_flags;
 	u32 pages;
 	struct user_struct *user;
 	const struct bpf_map_ops *ops;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 6496f98d3d68..5eeb2ca9441e 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -101,12 +101,15 @@ enum bpf_prog_type {
 #define BPF_NOEXIST	1 /* create new element if it didn't exist */
 #define BPF_EXIST	2 /* update existing element */
 
+#define BPF_F_NO_PREALLOC	(1ULL << 0)
+
 union bpf_attr {
 	struct { /* anonymous struct used by BPF_MAP_CREATE command */
 		__u32	map_type;	/* one of enum bpf_map_type */
 		__u32	key_size;	/* size of key in bytes */
 		__u32	value_size;	/* size of value in bytes */
 		__u32	max_entries;	/* max number of entries in a map */
+		__u32	map_flags;	/* prealloc or not */
 	};
 
 	struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index a68e95133fcd..7d5f539d196b 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1,4 +1,5 @@
 /* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ * Copyright (c) 2016 Facebook
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of version 2 of the GNU General Public
@@ -13,6 +14,7 @@
 #include <linux/jhash.h>
 #include <linux/filter.h>
 #include <linux/vmalloc.h>
+#include "percpu_freelist.h"
 
 struct bucket {
 	struct hlist_head head;
@@ -22,6 +24,8 @@ struct bucket {
 struct bpf_htab {
 	struct bpf_map map;
 	struct bucket *buckets;
+	void *elems;
+	struct pcpu_freelist freelist;
 	atomic_t count;	/* number of elements in this hashtable */
 	u32 n_buckets;	/* number of hash buckets */
 	u32 elem_size;	/* size of each element in bytes */
@@ -29,15 +33,111 @@ struct bpf_htab {
 
 /* each htab element is struct htab_elem + key + value */
 struct htab_elem {
-	struct hlist_node hash_node;
-	struct rcu_head rcu;
 	union {
-		u32 hash;
-		u32 key_size;
+		struct hlist_node hash_node;
+		struct bpf_htab *htab;
+		struct pcpu_freelist_node fnode;
 	};
+	struct rcu_head rcu;
+	u32 hash;
 	char key[0] __aligned(8);
 };
 
+static inline void htab_elem_set_ptr(struct htab_elem *l, u32 key_size,
+				     void __percpu *pptr)
+{
+	*(void __percpu **)(l->key + key_size) = pptr;
+}
+
+static inline void __percpu *htab_elem_get_ptr(struct htab_elem *l, u32 key_size)
+{
+	return *(void __percpu **)(l->key + key_size);
+}
+
+static struct htab_elem *get_htab_elem(struct bpf_htab *htab, int i)
+{
+	return (struct htab_elem *) (htab->elems + i * htab->elem_size);
+}
+
+static void htab_free_elems(struct bpf_htab *htab)
+{
+	int i;
+
+	if (htab->map.map_type != BPF_MAP_TYPE_PERCPU_HASH)
+		goto free_elems;
+
+	for (i = 0; i < htab->map.max_entries; i++) {
+		void __percpu *pptr;
+
+		pptr = htab_elem_get_ptr(get_htab_elem(htab, i),
+					 htab->map.key_size);
+		free_percpu(pptr);
+	}
+free_elems:
+	vfree(htab->elems);
+}
+
+/* if requested map size is larger than memlock limit, reject it early */
+static int precharge_memlock(u32 pages)
+{
+	struct user_struct *user = get_current_user();
+	unsigned long memlock_limit, cur;
+
+	memlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+	cur = atomic_long_read(&user->locked_vm);
+	free_uid(user);
+	if (cur + pages > memlock_limit)
+		return -EPERM;
+	return 0;
+}
+
+static int prealloc_elems_and_freelist(struct bpf_htab *htab)
+{
+	int err = -ENOMEM, i, cpu, pcpu_entries;
+
+	htab->elems = vzalloc(htab->elem_size * htab->map.max_entries);
+	if (!htab->elems)
+		return -ENOMEM;
+
+	if (htab->map.map_type != BPF_MAP_TYPE_PERCPU_HASH)
+		goto skip_percpu_elems;
+
+	for (i = 0; i < htab->map.max_entries; i++) {
+		u32 size = round_up(htab->map.value_size, 8);
+		void __percpu *pptr;
+
+		pptr = __alloc_percpu_gfp(size, 8, GFP_USER | __GFP_NOWARN);
+		if (!pptr)
+			goto free_elems;
+		htab_elem_set_ptr(get_htab_elem(htab, i), htab->map.key_size,
+				  pptr);
+	}
+
+skip_percpu_elems:
+	err = pcpu_freelist_init(&htab->freelist);
+	if (err)
+		goto free_elems;
+
+	pcpu_entries = htab->map.max_entries / num_possible_cpus() + 1;
+	i = 0;
+	for_each_possible_cpu(cpu) {
+		struct htab_elem *elem;
+again:
+		elem = get_htab_elem(htab, i);
+		pcpu_freelist_push_cpu(&htab->freelist, &elem->fnode, cpu);
+		i++;
+		if (i == htab->map.max_entries)
+			break;
+		if (i % pcpu_entries)
+			goto again;
+	}
+	return 0;
+
+free_elems:
+	htab_free_elems(htab);
+	return err;
+}
+
 /* Called from syscall */
 static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 {
@@ -46,6 +146,10 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	int err, i;
 	u64 cost;
 
+	if (attr->map_flags & ~BPF_F_NO_PREALLOC)
+		/* reserved bits should not be used */
+		return ERR_PTR(-EINVAL);
+
 	htab = kzalloc(sizeof(*htab), GFP_USER);
 	if (!htab)
 		return ERR_PTR(-ENOMEM);
@@ -55,6 +159,7 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	htab->map.key_size = attr->key_size;
 	htab->map.value_size = attr->value_size;
 	htab->map.max_entries = attr->max_entries;
+	htab->map.map_flags = attr->map_flags;
 
 	/* check sanity of attributes.
 	 * value_size == 0 may be allowed in the future to use map as a set
@@ -92,7 +197,7 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	if (percpu)
 		htab->elem_size += sizeof(void *);
 	else
-		htab->elem_size += htab->map.value_size;
+		htab->elem_size += round_up(htab->map.value_size, 8);
 
 	/* prevent zero size kmalloc and check for u32 overflow */
 	if (htab->n_buckets == 0 ||
@@ -112,6 +217,10 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 
 	htab->map.pages = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT;
 
+	err = precharge_memlock(htab->map.pages);
+	if (err)
+		goto free_htab;
+
 	err = -ENOMEM;
 	htab->buckets = kmalloc_array(htab->n_buckets, sizeof(struct bucket),
 				      GFP_USER | __GFP_NOWARN);
@@ -127,10 +236,16 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 		raw_spin_lock_init(&htab->buckets[i].lock);
 	}
 
-	atomic_set(&htab->count, 0);
+	if (!(attr->map_flags & BPF_F_NO_PREALLOC)) {
+		err = prealloc_elems_and_freelist(htab);
+		if (err)
+			goto free_buckets;
+	}
 
 	return &htab->map;
 
+free_buckets:
+	kvfree(htab->buckets);
 free_htab:
 	kfree(htab);
 	return ERR_PTR(err);
@@ -249,42 +364,42 @@ find_first_elem:
 		}
 	}
 
-	/* itereated over all buckets and all elements */
+	/* iterated over all buckets and all elements */
 	return -ENOENT;
 }
 
-
-static inline void htab_elem_set_ptr(struct htab_elem *l, u32 key_size,
-				     void __percpu *pptr)
-{
-	*(void __percpu **)(l->key + key_size) = pptr;
-}
-
-static inline void __percpu *htab_elem_get_ptr(struct htab_elem *l, u32 key_size)
+static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l)
 {
-	return *(void __percpu **)(l->key + key_size);
-}
-
-static void htab_percpu_elem_free(struct htab_elem *l)
-{
-	free_percpu(htab_elem_get_ptr(l, l->key_size));
+	if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH)
+		free_percpu(htab_elem_get_ptr(l, htab->map.key_size));
 	kfree(l);
+
 }
 
-static void htab_percpu_elem_free_rcu(struct rcu_head *head)
+static void htab_elem_free_rcu(struct rcu_head *head)
 {
 	struct htab_elem *l = container_of(head, struct htab_elem, rcu);
+	struct bpf_htab *htab = l->htab;
 
-	htab_percpu_elem_free(l);
+	/* must increment bpf_prog_active to avoid kprobe+bpf triggering while
+	 * we're calling kfree, otherwise deadlock is possible if kprobes
+	 * are placed somewhere inside of slub
+	 */
+	preempt_disable();
+	__this_cpu_inc(bpf_prog_active);
+	htab_elem_free(htab, l);
+	__this_cpu_dec(bpf_prog_active);
+	preempt_enable();
 }
 
-static void free_htab_elem(struct htab_elem *l, bool percpu, u32 key_size)
+static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
 {
-	if (percpu) {
-		l->key_size = key_size;
-		call_rcu(&l->rcu, htab_percpu_elem_free_rcu);
+	if (!(htab->map.map_flags & BPF_F_NO_PREALLOC)) {
+		pcpu_freelist_push(&htab->freelist, &l->fnode);
 	} else {
-		kfree_rcu(l, rcu);
+		atomic_dec(&htab->count);
+		l->htab = htab;
+		call_rcu(&l->rcu, htab_elem_free_rcu);
 	}
 }
 
@@ -293,23 +408,39 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 					 bool percpu, bool onallcpus)
 {
 	u32 size = htab->map.value_size;
+	bool prealloc = !(htab->map.map_flags & BPF_F_NO_PREALLOC);
 	struct htab_elem *l_new;
 	void __percpu *pptr;
 
-	l_new = kmalloc(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN);
-	if (!l_new)
-		return NULL;
+	if (prealloc) {
+		l_new = (struct htab_elem *)pcpu_freelist_pop(&htab->freelist);
+		if (!l_new)
+			return ERR_PTR(-E2BIG);
+	} else {
+		if (atomic_inc_return(&htab->count) > htab->map.max_entries) {
+			atomic_dec(&htab->count);
+			return ERR_PTR(-E2BIG);
+		}
+		l_new = kmalloc(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN);
+		if (!l_new)
+			return ERR_PTR(-ENOMEM);
+	}
 
 	memcpy(l_new->key, key, key_size);
 	if (percpu) {
 		/* round up value_size to 8 bytes */
 		size = round_up(size, 8);
 
-		/* alloc_percpu zero-fills */
-		pptr = __alloc_percpu_gfp(size, 8, GFP_ATOMIC | __GFP_NOWARN);
-		if (!pptr) {
-			kfree(l_new);
-			return NULL;
+		if (prealloc) {
+			pptr = htab_elem_get_ptr(l_new, key_size);
+		} else {
+			/* alloc_percpu zero-fills */
+			pptr = __alloc_percpu_gfp(size, 8,
+						  GFP_ATOMIC | __GFP_NOWARN);
+			if (!pptr) {
+				kfree(l_new);
+				return ERR_PTR(-ENOMEM);
+			}
 		}
 
 		if (!onallcpus) {
@@ -324,7 +455,8 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 				off += size;
 			}
 		}
-		htab_elem_set_ptr(l_new, key_size, pptr);
+		if (!prealloc)
+			htab_elem_set_ptr(l_new, key_size, pptr);
 	} else {
 		memcpy(l_new->key + round_up(key_size, 8), value, size);
 	}
@@ -336,12 +468,6 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 static int check_flags(struct bpf_htab *htab, struct htab_elem *l_old,
 		       u64 map_flags)
 {
-	if (!l_old && unlikely(atomic_read(&htab->count) >= htab->map.max_entries))
-		/* if elem with this 'key' doesn't exist and we've reached
-		 * max_entries limit, fail insertion of new elem
-		 */
-		return -E2BIG;
-
 	if (l_old && map_flags == BPF_NOEXIST)
 		/* elem already exists */
 		return -EEXIST;
@@ -375,13 +501,6 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 
 	hash = htab_map_hash(key, key_size);
 
-	/* allocate new element outside of the lock, since
-	 * we're most likley going to insert it
-	 */
-	l_new = alloc_htab_elem(htab, key, value, key_size, hash, false, false);
-	if (!l_new)
-		return -ENOMEM;
-
 	b = __select_bucket(htab, hash);
 	head = &b->head;
 
@@ -394,21 +513,24 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 	if (ret)
 		goto err;
 
+	l_new = alloc_htab_elem(htab, key, value, key_size, hash, false, false);
+	if (IS_ERR(l_new)) {
+		/* all pre-allocated elements are in use or memory exhausted */
+		ret = PTR_ERR(l_new);
+		goto err;
+	}
+
 	/* add new element to the head of the list, so that
 	 * concurrent search will find it before old elem
 	 */
 	hlist_add_head_rcu(&l_new->hash_node, head);
 	if (l_old) {
 		hlist_del_rcu(&l_old->hash_node);
-		kfree_rcu(l_old, rcu);
-	} else {
-		atomic_inc(&htab->count);
+		free_htab_elem(htab, l_old);
 	}
-	raw_spin_unlock_irqrestore(&b->lock, flags);
-	return 0;
+	ret = 0;
 err:
 	raw_spin_unlock_irqrestore(&b->lock, flags);
-	kfree(l_new);
 	return ret;
 }
 
@@ -466,12 +588,11 @@ static int __htab_percpu_map_update_elem(struct bpf_map *map, void *key,
 	} else {
 		l_new = alloc_htab_elem(htab, key, value, key_size,
 					hash, true, onallcpus);
-		if (!l_new) {
-			ret = -ENOMEM;
+		if (IS_ERR(l_new)) {
+			ret = PTR_ERR(l_new);
 			goto err;
 		}
 		hlist_add_head_rcu(&l_new->hash_node, head);
-		atomic_inc(&htab->count);
 	}
 	ret = 0;
 err:
@@ -489,7 +610,6 @@ static int htab_percpu_map_update_elem(struct bpf_map *map, void *key,
 static int htab_map_delete_elem(struct bpf_map *map, void *key)
 {
 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
-	bool percpu = map->map_type == BPF_MAP_TYPE_PERCPU_HASH;
 	struct hlist_head *head;
 	struct bucket *b;
 	struct htab_elem *l;
@@ -511,8 +631,7 @@ static int htab_map_delete_elem(struct bpf_map *map, void *key)
 
 	if (l) {
 		hlist_del_rcu(&l->hash_node);
-		atomic_dec(&htab->count);
-		free_htab_elem(l, percpu, key_size);
+		free_htab_elem(htab, l);
 		ret = 0;
 	}
 
@@ -531,17 +650,10 @@ static void delete_all_elements(struct bpf_htab *htab)
 
 		hlist_for_each_entry_safe(l, n, head, hash_node) {
 			hlist_del_rcu(&l->hash_node);
-			atomic_dec(&htab->count);
-			if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH) {
-				l->key_size = htab->map.key_size;
-				htab_percpu_elem_free(l);
-			} else {
-				kfree(l);
-			}
+			htab_elem_free(htab, l);
 		}
 	}
 }
-
 /* Called when map->refcnt goes to zero, either from workqueue or from syscall */
 static void htab_map_free(struct bpf_map *map)
 {
@@ -554,10 +666,16 @@ static void htab_map_free(struct bpf_map *map)
 	 */
 	synchronize_rcu();
 
-	/* some of kfree_rcu() callbacks for elements of this map may not have
-	 * executed. It's ok. Proceed to free residual elements and map itself
+	/* some of free_htab_elem() callbacks for elements of this map may
+	 * not have executed. Wait for them.
 	 */
-	delete_all_elements(htab);
+	rcu_barrier();
+	if (htab->map.map_flags & BPF_F_NO_PREALLOC) {
+		delete_all_elements(htab);
+	} else {
+		htab_free_elems(htab);
+		pcpu_freelist_destroy(&htab->freelist);
+	}
 	kvfree(htab->buckets);
 	kfree(htab);
 }
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index dc99f6a000f5..1e7dfae0de8c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -153,7 +153,7 @@ int bpf_map_new_fd(struct bpf_map *map)
 		   offsetof(union bpf_attr, CMD##_LAST_FIELD) - \
 		   sizeof(attr->CMD##_LAST_FIELD)) != NULL
 
-#define BPF_MAP_CREATE_LAST_FIELD max_entries
+#define BPF_MAP_CREATE_LAST_FIELD map_flags
 /* called via syscall */
 static int map_create(union bpf_attr *attr)
 {
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 4/9] samples/bpf: make map creation more verbose
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
                   ` (2 preceding siblings ...)
  2016-03-07  1:58 ` [PATCH net-next 3/9] bpf: pre-allocate hash map elements Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 5/9] samples/bpf: move ksym_search() into library Alexei Starovoitov
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

Map creation is typically the first thing to fail when rlimits are
too low, there is not enough memory, etc.
Make this failure scenario more verbose.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 samples/bpf/bpf_load.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index da86a8e0a95a..816bca5760a0 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -158,8 +158,11 @@ static int load_maps(struct bpf_map_def *maps, int len)
 					   maps[i].key_size,
 					   maps[i].value_size,
 					   maps[i].max_entries);
-		if (map_fd[i] < 0)
+		if (map_fd[i] < 0) {
+			printf("failed to create a map: %d %s\n",
+			       errno, strerror(errno));
 			return 1;
+		}
 
 		if (maps[i].type == BPF_MAP_TYPE_PROG_ARRAY)
 			prog_array_fd = map_fd[i];
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 5/9] samples/bpf: move ksym_search() into library
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
                   ` (3 preceding siblings ...)
  2016-03-07  1:58 ` [PATCH net-next 4/9] samples/bpf: make map creation more verbose Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 6/9] samples/bpf: add map_flags to bpf loader Alexei Starovoitov
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

Move the ksym search code from offwaketime into the library so it can
be reused in other tests.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 samples/bpf/bpf_load.c         | 62 ++++++++++++++++++++++++++++++++++++++
 samples/bpf/bpf_load.h         |  6 ++++
 samples/bpf/offwaketime_user.c | 67 +-----------------------------------------
 3 files changed, 69 insertions(+), 66 deletions(-)

diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index 816bca5760a0..d16864293c00 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -346,3 +346,65 @@ void read_trace_pipe(void)
 		}
 	}
 }
+
+#define MAX_SYMS 300000
+static struct ksym syms[MAX_SYMS];
+static int sym_cnt;
+
+static int ksym_cmp(const void *p1, const void *p2)
+{
+	return ((struct ksym *)p1)->addr - ((struct ksym *)p2)->addr;
+}
+
+int load_kallsyms(void)
+{
+	FILE *f = fopen("/proc/kallsyms", "r");
+	char func[256], buf[256];
+	char symbol;
+	void *addr;
+	int i = 0;
+
+	if (!f)
+		return -ENOENT;
+
+	while (!feof(f)) {
+		if (!fgets(buf, sizeof(buf), f))
+			break;
+		if (sscanf(buf, "%p %c %s", &addr, &symbol, func) != 3)
+			break;
+		if (!addr)
+			continue;
+		syms[i].addr = (long) addr;
+		syms[i].name = strdup(func);
+		i++;
+	}
+	sym_cnt = i;
+	qsort(syms, sym_cnt, sizeof(struct ksym), ksym_cmp);
+	return 0;
+}
+
+struct ksym *ksym_search(long key)
+{
+	int start = 0, end = sym_cnt;
+	int result;
+
+	while (start < end) {
+		size_t mid = start + (end - start) / 2;
+
+		result = key - syms[mid].addr;
+		if (result < 0)
+			end = mid;
+		else if (result > 0)
+			start = mid + 1;
+		else
+			return &syms[mid];
+	}
+
+	if (start >= 1 && syms[start - 1].addr < key &&
+	    key < syms[start].addr)
+		/* valid ksym */
+		return &syms[start - 1];
+
+	/* out of range. return _stext */
+	return &syms[0];
+}
diff --git a/samples/bpf/bpf_load.h b/samples/bpf/bpf_load.h
index cbd7c2b532b9..dfa57fe65c8e 100644
--- a/samples/bpf/bpf_load.h
+++ b/samples/bpf/bpf_load.h
@@ -23,5 +23,11 @@ extern int event_fd[MAX_PROGS];
 int load_bpf_file(char *path);
 
 void read_trace_pipe(void);
+struct ksym {
+	long addr;
+	char *name;
+};
 
+int load_kallsyms(void);
+struct ksym *ksym_search(long key);
 #endif
diff --git a/samples/bpf/offwaketime_user.c b/samples/bpf/offwaketime_user.c
index 17cf3024e22c..6f002a9c24fa 100644
--- a/samples/bpf/offwaketime_user.c
+++ b/samples/bpf/offwaketime_user.c
@@ -18,80 +18,15 @@
 #include "libbpf.h"
 #include "bpf_load.h"
 
-#define MAX_SYMS 300000
 #define PRINT_RAW_ADDR 0
 
-static struct ksym {
-	long addr;
-	char *name;
-} syms[MAX_SYMS];
-static int sym_cnt;
-
-static int ksym_cmp(const void *p1, const void *p2)
-{
-	return ((struct ksym *)p1)->addr - ((struct ksym *)p2)->addr;
-}
-
-static int load_kallsyms(void)
-{
-	FILE *f = fopen("/proc/kallsyms", "r");
-	char func[256], buf[256];
-	char symbol;
-	void *addr;
-	int i = 0;
-
-	if (!f)
-		return -ENOENT;
-
-	while (!feof(f)) {
-		if (!fgets(buf, sizeof(buf), f))
-			break;
-		if (sscanf(buf, "%p %c %s", &addr, &symbol, func) != 3)
-			break;
-		if (!addr)
-			continue;
-		syms[i].addr = (long) addr;
-		syms[i].name = strdup(func);
-		i++;
-	}
-	sym_cnt = i;
-	qsort(syms, sym_cnt, sizeof(struct ksym), ksym_cmp);
-	return 0;
-}
-
-static void *search(long key)
-{
-	int start = 0, end = sym_cnt;
-	int result;
-
-	while (start < end) {
-		size_t mid = start + (end - start) / 2;
-
-		result = key - syms[mid].addr;
-		if (result < 0)
-			end = mid;
-		else if (result > 0)
-			start = mid + 1;
-		else
-			return &syms[mid];
-	}
-
-	if (start >= 1 && syms[start - 1].addr < key &&
-	    key < syms[start].addr)
-		/* valid ksym */
-		return &syms[start - 1];
-
-	/* out of range. return _stext */
-	return &syms[0];
-}
-
 static void print_ksym(__u64 addr)
 {
 	struct ksym *sym;
 
 	if (!addr)
 		return;
-	sym = search(addr);
+	sym = ksym_search(addr);
 	if (PRINT_RAW_ADDR)
 		printf("%s/%llx;", sym->name, addr);
 	else
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 6/9] samples/bpf: add map_flags to bpf loader
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
                   ` (4 preceding siblings ...)
  2016-03-07  1:58 ` [PATCH net-next 5/9] samples/bpf: move ksym_search() into library Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 7/9] samples/bpf: test both pre-alloc and normal maps Alexei Starovoitov
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

Note that the old loader remains compatible with the new kernel,
since map_flags are optional. An example map definition follows.
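
A map defined in a sample _kern.c program can now optionally request
run-time allocation (a sketch; the perf test added later in this series
uses the same pattern):

	struct bpf_map_def SEC("maps") my_map = {
		.type = BPF_MAP_TYPE_HASH,
		.key_size = sizeof(u32),
		.value_size = sizeof(long),
		.max_entries = 1024,
		.map_flags = BPF_F_NO_PREALLOC,	/* omit or 0 to keep prealloc */
	};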

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 samples/bpf/bpf_helpers.h   |  1 +
 samples/bpf/bpf_load.c      |  3 ++-
 samples/bpf/fds_example.c   |  2 +-
 samples/bpf/libbpf.c        |  5 +++--
 samples/bpf/libbpf.h        |  2 +-
 samples/bpf/sock_example.c  |  2 +-
 samples/bpf/test_maps.c     | 19 ++++++++++++-------
 samples/bpf/test_verifier.c |  4 ++--
 8 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/samples/bpf/bpf_helpers.h b/samples/bpf/bpf_helpers.h
index 811bcca0f29d..9363500131a7 100644
--- a/samples/bpf/bpf_helpers.h
+++ b/samples/bpf/bpf_helpers.h
@@ -61,6 +61,7 @@ struct bpf_map_def {
 	unsigned int key_size;
 	unsigned int value_size;
 	unsigned int max_entries;
+	unsigned int map_flags;
 };
 
 static int (*bpf_skb_store_bytes)(void *ctx, int off, void *from, int len, int flags) =
diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index d16864293c00..58f86bd11b3d 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -157,7 +157,8 @@ static int load_maps(struct bpf_map_def *maps, int len)
 		map_fd[i] = bpf_create_map(maps[i].type,
 					   maps[i].key_size,
 					   maps[i].value_size,
-					   maps[i].max_entries);
+					   maps[i].max_entries,
+					   maps[i].map_flags);
 		if (map_fd[i] < 0) {
 			printf("failed to create a map: %d %s\n",
 			       errno, strerror(errno));
diff --git a/samples/bpf/fds_example.c b/samples/bpf/fds_example.c
index e2fd16c3d0f0..625e797be6ef 100644
--- a/samples/bpf/fds_example.c
+++ b/samples/bpf/fds_example.c
@@ -44,7 +44,7 @@ static void usage(void)
 static int bpf_map_create(void)
 {
 	return bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(uint32_t),
-			      sizeof(uint32_t), 1024);
+			      sizeof(uint32_t), 1024, 0);
 }
 
 static int bpf_prog_create(const char *object)
diff --git a/samples/bpf/libbpf.c b/samples/bpf/libbpf.c
index 65a8d48d2799..9969e35550c3 100644
--- a/samples/bpf/libbpf.c
+++ b/samples/bpf/libbpf.c
@@ -19,13 +19,14 @@ static __u64 ptr_to_u64(void *ptr)
 }
 
 int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size,
-		   int max_entries)
+		   int max_entries, int map_flags)
 {
 	union bpf_attr attr = {
 		.map_type = map_type,
 		.key_size = key_size,
 		.value_size = value_size,
-		.max_entries = max_entries
+		.max_entries = max_entries,
+		.map_flags = map_flags,
 	};
 
 	return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
diff --git a/samples/bpf/libbpf.h b/samples/bpf/libbpf.h
index 014aacf916e4..364582b77888 100644
--- a/samples/bpf/libbpf.h
+++ b/samples/bpf/libbpf.h
@@ -5,7 +5,7 @@
 struct bpf_insn;
 
 int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size,
-		   int max_entries);
+		   int max_entries, int map_flags);
 int bpf_update_elem(int fd, void *key, void *value, unsigned long long flags);
 int bpf_lookup_elem(int fd, void *key, void *value);
 int bpf_delete_elem(int fd, void *key);
diff --git a/samples/bpf/sock_example.c b/samples/bpf/sock_example.c
index a0ce251c5390..28b60baa9fa8 100644
--- a/samples/bpf/sock_example.c
+++ b/samples/bpf/sock_example.c
@@ -34,7 +34,7 @@ static int test_sock(void)
 	long long value = 0, tcp_cnt, udp_cnt, icmp_cnt;
 
 	map_fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(key), sizeof(value),
-				256);
+				256, 0);
 	if (map_fd < 0) {
 		printf("failed to create map '%s'\n", strerror(errno));
 		goto cleanup;
diff --git a/samples/bpf/test_maps.c b/samples/bpf/test_maps.c
index ad466ed33093..af02f7518c0a 100644
--- a/samples/bpf/test_maps.c
+++ b/samples/bpf/test_maps.c
@@ -2,6 +2,7 @@
  * Testsuite for eBPF maps
  *
  * Copyright (c) 2014 PLUMgrid, http://plumgrid.com
+ * Copyright (c) 2016 Facebook
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of version 2 of the GNU General Public
@@ -17,13 +18,16 @@
 #include <stdlib.h>
 #include "libbpf.h"
 
+static int map_flags;
+
 /* sanity tests for map API */
 static void test_hashmap_sanity(int i, void *data)
 {
 	long long key, next_key, value;
 	int map_fd;
 
-	map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(key), sizeof(value), 2);
+	map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(key), sizeof(value),
+				2, map_flags);
 	if (map_fd < 0) {
 		printf("failed to create hashmap '%s'\n", strerror(errno));
 		exit(1);
@@ -99,7 +103,7 @@ static void test_percpu_hashmap_sanity(int task, void *data)
 	int map_fd, i;
 
 	map_fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_HASH, sizeof(key),
-				sizeof(value[0]), 2);
+				sizeof(value[0]), 2, map_flags);
 	if (map_fd < 0) {
 		printf("failed to create hashmap '%s'\n", strerror(errno));
 		exit(1);
@@ -188,7 +192,8 @@ static void test_arraymap_sanity(int i, void *data)
 	int key, next_key, map_fd;
 	long long value;
 
-	map_fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(key), sizeof(value), 2);
+	map_fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(key), sizeof(value),
+				2, map_flags);
 	if (map_fd < 0) {
 		printf("failed to create arraymap '%s'\n", strerror(errno));
 		exit(1);
@@ -244,7 +249,7 @@ static void test_percpu_arraymap_many_keys(void)
 	int key, map_fd, i;
 
 	map_fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_ARRAY, sizeof(key),
-				sizeof(values[0]), nr_keys);
+				sizeof(values[0]), nr_keys, map_flags);
 	if (map_fd < 0) {
 		printf("failed to create per-cpu arraymap '%s'\n",
 		       strerror(errno));
@@ -275,7 +280,7 @@ static void test_percpu_arraymap_sanity(int i, void *data)
 	int key, next_key, map_fd;
 
 	map_fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_ARRAY, sizeof(key),
-				sizeof(values[0]), 2);
+				sizeof(values[0]), 2, map_flags);
 	if (map_fd < 0) {
 		printf("failed to create arraymap '%s'\n", strerror(errno));
 		exit(1);
@@ -336,7 +341,7 @@ static void test_map_large(void)
 
 	/* allocate 4Mbyte of memory */
 	map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(key), sizeof(value),
-				MAP_SIZE);
+				MAP_SIZE, map_flags);
 	if (map_fd < 0) {
 		printf("failed to create large map '%s'\n", strerror(errno));
 		exit(1);
@@ -421,7 +426,7 @@ static void test_map_parallel(void)
 	int data[2];
 
 	map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(key), sizeof(value),
-				MAP_SIZE);
+				MAP_SIZE, map_flags);
 	if (map_fd < 0) {
 		printf("failed to create map for parallel test '%s'\n",
 		       strerror(errno));
diff --git a/samples/bpf/test_verifier.c b/samples/bpf/test_verifier.c
index 563c507c0a09..4b51a9039c0d 100644
--- a/samples/bpf/test_verifier.c
+++ b/samples/bpf/test_verifier.c
@@ -1198,7 +1198,7 @@ static int create_map(void)
 	int map_fd;
 
 	map_fd = bpf_create_map(BPF_MAP_TYPE_HASH,
-				sizeof(long long), sizeof(long long), 1024);
+				sizeof(long long), sizeof(long long), 1024, 0);
 	if (map_fd < 0)
 		printf("failed to create map '%s'\n", strerror(errno));
 
@@ -1210,7 +1210,7 @@ static int create_prog_array(void)
 	int map_fd;
 
 	map_fd = bpf_create_map(BPF_MAP_TYPE_PROG_ARRAY,
-				sizeof(int), sizeof(int), 4);
+				sizeof(int), sizeof(int), 4, 0);
 	if (map_fd < 0)
 		printf("failed to create prog_array '%s'\n", strerror(errno));
 
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 7/9] samples/bpf: test both pre-alloc and normal maps
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
                   ` (5 preceding siblings ...)
  2016-03-07  1:58 ` [PATCH net-next 6/9] samples/bpf: add map_flags to bpf loader Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 8/9] samples/bpf: add bpf map stress test Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 9/9] samples/bpf: add map performance test Alexei Starovoitov
  8 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

Extend test coverage to include both pre-allocated and run-time allocated maps.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 samples/bpf/test_maps.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/samples/bpf/test_maps.c b/samples/bpf/test_maps.c
index af02f7518c0a..d1e63f48e39c 100644
--- a/samples/bpf/test_maps.c
+++ b/samples/bpf/test_maps.c
@@ -468,7 +468,7 @@ static void test_map_parallel(void)
 	assert(bpf_get_next_key(map_fd, &key, &key) == -1 && errno == ENOENT);
 }
 
-int main(void)
+static void run_all_tests(void)
 {
 	test_hashmap_sanity(0, NULL);
 	test_percpu_hashmap_sanity(0, NULL);
@@ -479,6 +479,14 @@ int main(void)
 	test_map_large();
 	test_map_parallel();
 	test_map_stress();
+}
+
+int main(void)
+{
+	map_flags = 0;
+	run_all_tests();
+	map_flags = BPF_F_NO_PREALLOC;
+	run_all_tests();
 	printf("test_maps: OK\n");
 	return 0;
 }
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 8/9] samples/bpf: add bpf map stress test
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
                   ` (6 preceding siblings ...)
  2016-03-07  1:58 ` [PATCH net-next 7/9] samples/bpf: test both pre-alloc and normal maps Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  2016-03-07  1:58 ` [PATCH net-next 9/9] samples/bpf: add map performance test Alexei Starovoitov
  8 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

This test calls bpf programs from different contexts:
from inside of slub, from rcu, from pretty much everywhere,
since it kprobes all spin_lock functions.
It stresses the bpf hash and percpu map pre-allocation and
deallocation logic and the call_rcu mechanisms.
The user space part adds more stress by walking and deleting map elements.

Note that due to the nature of bpf_load.c, the earlier kprobe+bpf programs
are already active while the loader loads new programs, creates new kprobes
and attaches them.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 samples/bpf/Makefile        |  4 +++
 samples/bpf/spintest_kern.c | 59 +++++++++++++++++++++++++++++++++++++++++++++
 samples/bpf/spintest_user.c | 50 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 113 insertions(+)
 create mode 100644 samples/bpf/spintest_kern.c
 create mode 100644 samples/bpf/spintest_user.c

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index c4f8ae0c8afe..75a13e742ab4 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -17,6 +17,7 @@ hostprogs-y += tracex6
 hostprogs-y += trace_output
 hostprogs-y += lathist
 hostprogs-y += offwaketime
+hostprogs-y += spintest
 
 test_verifier-objs := test_verifier.o libbpf.o
 test_maps-objs := test_maps.o libbpf.o
@@ -34,6 +35,7 @@ tracex6-objs := bpf_load.o libbpf.o tracex6_user.o
 trace_output-objs := bpf_load.o libbpf.o trace_output_user.o
 lathist-objs := bpf_load.o libbpf.o lathist_user.o
 offwaketime-objs := bpf_load.o libbpf.o offwaketime_user.o
+spintest-objs := bpf_load.o libbpf.o spintest_user.o
 
 # Tell kbuild to always build the programs
 always := $(hostprogs-y)
@@ -50,6 +52,7 @@ always += trace_output_kern.o
 always += tcbpf1_kern.o
 always += lathist_kern.o
 always += offwaketime_kern.o
+always += spintest_kern.o
 
 HOSTCFLAGS += -I$(objtree)/usr/include
 
@@ -67,6 +70,7 @@ HOSTLOADLIBES_tracex6 += -lelf
 HOSTLOADLIBES_trace_output += -lelf -lrt
 HOSTLOADLIBES_lathist += -lelf
 HOSTLOADLIBES_offwaketime += -lelf
+HOSTLOADLIBES_spintest += -lelf
 
 # point this to your LLVM backend with bpf support
 LLC=$(srctree)/tools/bpf/llvm/bld/Debug+Asserts/bin/llc
diff --git a/samples/bpf/spintest_kern.c b/samples/bpf/spintest_kern.c
new file mode 100644
index 000000000000..ef8ac33bb2e9
--- /dev/null
+++ b/samples/bpf/spintest_kern.c
@@ -0,0 +1,59 @@
+/* Copyright (c) 2016, Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/version.h>
+#include <uapi/linux/bpf.h>
+#include "bpf_helpers.h"
+
+struct bpf_map_def SEC("maps") my_map = {
+	.type = BPF_MAP_TYPE_HASH,
+	.key_size = sizeof(long),
+	.value_size = sizeof(long),
+	.max_entries = 1024,
+};
+struct bpf_map_def SEC("maps") my_map2 = {
+	.type = BPF_MAP_TYPE_PERCPU_HASH,
+	.key_size = sizeof(long),
+	.value_size = sizeof(long),
+	.max_entries = 1024,
+};
+
+#define PROG(foo) \
+int foo(struct pt_regs *ctx) \
+{ \
+	long v = ctx->ip, *val; \
+\
+	val = bpf_map_lookup_elem(&my_map, &v); \
+	bpf_map_update_elem(&my_map, &v, &v, BPF_ANY); \
+	bpf_map_update_elem(&my_map2, &v, &v, BPF_ANY); \
+	bpf_map_delete_elem(&my_map2, &v); \
+	return 0; \
+}
+
+/* add kprobes to all possible *spin* functions */
+SEC("kprobe/spin_unlock")PROG(p1)
+SEC("kprobe/spin_lock")PROG(p2)
+SEC("kprobe/mutex_spin_on_owner")PROG(p3)
+SEC("kprobe/rwsem_spin_on_owner")PROG(p4)
+SEC("kprobe/spin_unlock_irqrestore")PROG(p5)
+SEC("kprobe/_raw_spin_unlock_irqrestore")PROG(p6)
+SEC("kprobe/_raw_spin_unlock_bh")PROG(p7)
+SEC("kprobe/_raw_spin_unlock")PROG(p8)
+SEC("kprobe/_raw_spin_lock_irqsave")PROG(p9)
+SEC("kprobe/_raw_spin_trylock_bh")PROG(p10)
+SEC("kprobe/_raw_spin_lock_irq")PROG(p11)
+SEC("kprobe/_raw_spin_trylock")PROG(p12)
+SEC("kprobe/_raw_spin_lock")PROG(p13)
+SEC("kprobe/_raw_spin_lock_bh")PROG(p14)
+/* and to inner bpf helpers */
+SEC("kprobe/htab_map_update_elem")PROG(p15)
+SEC("kprobe/__htab_percpu_map_update_elem")PROG(p16)
+SEC("kprobe/htab_map_alloc")PROG(p17)
+
+char _license[] SEC("license") = "GPL";
+u32 _version SEC("version") = LINUX_VERSION_CODE;
diff --git a/samples/bpf/spintest_user.c b/samples/bpf/spintest_user.c
new file mode 100644
index 000000000000..311ede532230
--- /dev/null
+++ b/samples/bpf/spintest_user.c
@@ -0,0 +1,50 @@
+#include <stdio.h>
+#include <unistd.h>
+#include <linux/bpf.h>
+#include <string.h>
+#include <assert.h>
+#include <sys/resource.h>
+#include "libbpf.h"
+#include "bpf_load.h"
+
+int main(int ac, char **argv)
+{
+	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
+	long key, next_key, value;
+	char filename[256];
+	struct ksym *sym;
+	int i;
+
+	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+	setrlimit(RLIMIT_MEMLOCK, &r);
+
+	if (load_kallsyms()) {
+		printf("failed to process /proc/kallsyms\n");
+		return 2;
+	}
+
+	if (load_bpf_file(filename)) {
+		printf("%s", bpf_log_buf);
+		return 1;
+	}
+
+	for (i = 0; i < 5; i++) {
+		key = 0;
+		printf("kprobing funcs:");
+		while (bpf_get_next_key(map_fd[0], &key, &next_key) == 0) {
+			bpf_lookup_elem(map_fd[0], &next_key, &value);
+			assert(next_key == value);
+			sym = ksym_search(value);
+			printf(" %s", sym->name);
+			key = next_key;
+		}
+		if (key)
+			printf("\n");
+		key = 0;
+		while (bpf_get_next_key(map_fd[0], &key, &next_key) == 0)
+			bpf_delete_elem(map_fd[0], &next_key);
+		sleep(1);
+	}
+
+	return 0;
+}
-- 
2.6.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next 9/9] samples/bpf: add map performance test
  2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
                   ` (7 preceding siblings ...)
  2016-03-07  1:58 ` [PATCH net-next 8/9] samples/bpf: add bpf map stress test Alexei Starovoitov
@ 2016-03-07  1:58 ` Alexei Starovoitov
  8 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07  1:58 UTC (permalink / raw)
  To: David S . Miller
  Cc: Daniel Borkmann, Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang,
	Martin KaFai Lau, Brendan Gregg, netdev, linux-kernel,
	kernel-team

Performance tests for the hash map and the per-cpu hash map,
with and without pre-allocation.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 samples/bpf/Makefile             |   4 +
 samples/bpf/map_perf_test_kern.c | 100 +++++++++++++++++++++++++
 samples/bpf/map_perf_test_user.c | 155 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 259 insertions(+)
 create mode 100644 samples/bpf/map_perf_test_kern.c
 create mode 100644 samples/bpf/map_perf_test_user.c

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 75a13e742ab4..502c9fc8db85 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -18,6 +18,7 @@ hostprogs-y += trace_output
 hostprogs-y += lathist
 hostprogs-y += offwaketime
 hostprogs-y += spintest
+hostprogs-y += map_perf_test
 
 test_verifier-objs := test_verifier.o libbpf.o
 test_maps-objs := test_maps.o libbpf.o
@@ -36,6 +37,7 @@ trace_output-objs := bpf_load.o libbpf.o trace_output_user.o
 lathist-objs := bpf_load.o libbpf.o lathist_user.o
 offwaketime-objs := bpf_load.o libbpf.o offwaketime_user.o
 spintest-objs := bpf_load.o libbpf.o spintest_user.o
+map_perf_test-objs := bpf_load.o libbpf.o map_perf_test_user.o
 
 # Tell kbuild to always build the programs
 always := $(hostprogs-y)
@@ -53,6 +55,7 @@ always += tcbpf1_kern.o
 always += lathist_kern.o
 always += offwaketime_kern.o
 always += spintest_kern.o
+always += map_perf_test_kern.o
 
 HOSTCFLAGS += -I$(objtree)/usr/include
 
@@ -71,6 +74,7 @@ HOSTLOADLIBES_trace_output += -lelf -lrt
 HOSTLOADLIBES_lathist += -lelf
 HOSTLOADLIBES_offwaketime += -lelf
 HOSTLOADLIBES_spintest += -lelf
+HOSTLOADLIBES_map_perf_test += -lelf -lrt
 
 # point this to your LLVM backend with bpf support
 LLC=$(srctree)/tools/bpf/llvm/bld/Debug+Asserts/bin/llc
diff --git a/samples/bpf/map_perf_test_kern.c b/samples/bpf/map_perf_test_kern.c
new file mode 100644
index 000000000000..311538e5a701
--- /dev/null
+++ b/samples/bpf/map_perf_test_kern.c
@@ -0,0 +1,100 @@
+/* Copyright (c) 2016 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/version.h>
+#include <uapi/linux/bpf.h>
+#include "bpf_helpers.h"
+
+#define MAX_ENTRIES 1000
+
+struct bpf_map_def SEC("maps") hash_map = {
+	.type = BPF_MAP_TYPE_HASH,
+	.key_size = sizeof(u32),
+	.value_size = sizeof(long),
+	.max_entries = MAX_ENTRIES,
+};
+
+struct bpf_map_def SEC("maps") percpu_hash_map = {
+	.type = BPF_MAP_TYPE_PERCPU_HASH,
+	.key_size = sizeof(u32),
+	.value_size = sizeof(long),
+	.max_entries = MAX_ENTRIES,
+};
+
+struct bpf_map_def SEC("maps") hash_map_alloc = {
+	.type = BPF_MAP_TYPE_HASH,
+	.key_size = sizeof(u32),
+	.value_size = sizeof(long),
+	.max_entries = MAX_ENTRIES,
+	.map_flags = BPF_F_NO_PREALLOC,
+};
+
+struct bpf_map_def SEC("maps") percpu_hash_map_alloc = {
+	.type = BPF_MAP_TYPE_PERCPU_HASH,
+	.key_size = sizeof(u32),
+	.value_size = sizeof(long),
+	.max_entries = MAX_ENTRIES,
+	.map_flags = BPF_F_NO_PREALLOC,
+};
+
+SEC("kprobe/sys_getuid")
+int stress_hmap(struct pt_regs *ctx)
+{
+	u32 key = bpf_get_current_pid_tgid();
+	long init_val = 1;
+	long *value;
+
+	bpf_map_update_elem(&hash_map, &key, &init_val, BPF_ANY);
+	value = bpf_map_lookup_elem(&hash_map, &key);
+	if (value)
+		bpf_map_delete_elem(&hash_map, &key);
+	return 0;
+}
+
+SEC("kprobe/sys_geteuid")
+int stress_percpu_hmap(struct pt_regs *ctx)
+{
+	u32 key = bpf_get_current_pid_tgid();
+	long init_val = 1;
+	long *value;
+
+	bpf_map_update_elem(&percpu_hash_map, &key, &init_val, BPF_ANY);
+	value = bpf_map_lookup_elem(&percpu_hash_map, &key);
+	if (value)
+		bpf_map_delete_elem(&percpu_hash_map, &key);
+	return 0;
+}
+SEC("kprobe/sys_getgid")
+int stress_hmap_alloc(struct pt_regs *ctx)
+{
+	u32 key = bpf_get_current_pid_tgid();
+	long init_val = 1;
+	long *value;
+
+	bpf_map_update_elem(&hash_map_alloc, &key, &init_val, BPF_ANY);
+	value = bpf_map_lookup_elem(&hash_map_alloc, &key);
+	if (value)
+		bpf_map_delete_elem(&hash_map_alloc, &key);
+	return 0;
+}
+
+SEC("kprobe/sys_getegid")
+int stress_percpu_hmap_alloc(struct pt_regs *ctx)
+{
+	u32 key = bpf_get_current_pid_tgid();
+	long init_val = 1;
+	long *value;
+
+	bpf_map_update_elem(&percpu_hash_map_alloc, &key, &init_val, BPF_ANY);
+	value = bpf_map_lookup_elem(&percpu_hash_map_alloc, &key);
+	if (value)
+		bpf_map_delete_elem(&percpu_hash_map_alloc, &key);
+	return 0;
+}
+char _license[] SEC("license") = "GPL";
+u32 _version SEC("version") = LINUX_VERSION_CODE;
diff --git a/samples/bpf/map_perf_test_user.c b/samples/bpf/map_perf_test_user.c
new file mode 100644
index 000000000000..95af56ec5739
--- /dev/null
+++ b/samples/bpf/map_perf_test_user.c
@@ -0,0 +1,155 @@
+/* Copyright (c) 2016 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#define _GNU_SOURCE
+#include <sched.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <asm/unistd.h>
+#include <unistd.h>
+#include <assert.h>
+#include <sys/wait.h>
+#include <stdlib.h>
+#include <signal.h>
+#include <linux/bpf.h>
+#include <string.h>
+#include <time.h>
+#include "libbpf.h"
+#include "bpf_load.h"
+
+#define MAX_CNT 1000000
+
+static __u64 time_get_ns(void)
+{
+	struct timespec ts;
+
+	clock_gettime(CLOCK_MONOTONIC, &ts);
+	return ts.tv_sec * 1000000000ull + ts.tv_nsec;
+}
+
+#define HASH_PREALLOC		(1 << 0)
+#define PERCPU_HASH_PREALLOC	(1 << 1)
+#define HASH_KMALLOC		(1 << 2)
+#define PERCPU_HASH_KMALLOC	(1 << 3)
+
+static int test_flags = ~0;
+
+static void test_hash_prealloc(int cpu)
+{
+	__u64 start_time;
+	int i;
+
+	start_time = time_get_ns();
+	for (i = 0; i < MAX_CNT; i++)
+		syscall(__NR_getuid);
+	printf("%d:hash_map_perf pre-alloc %lld events per sec\n",
+	       cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
+}
+
+static void test_percpu_hash_prealloc(int cpu)
+{
+	__u64 start_time;
+	int i;
+
+	start_time = time_get_ns();
+	for (i = 0; i < MAX_CNT; i++)
+		syscall(__NR_geteuid);
+	printf("%d:percpu_hash_map_perf pre-alloc %lld events per sec\n",
+	       cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
+}
+
+static void test_hash_kmalloc(int cpu)
+{
+	__u64 start_time;
+	int i;
+
+	start_time = time_get_ns();
+	for (i = 0; i < MAX_CNT; i++)
+		syscall(__NR_getgid);
+	printf("%d:hash_map_perf kmalloc %lld events per sec\n",
+	       cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
+}
+
+static void test_percpu_hash_kmalloc(int cpu)
+{
+	__u64 start_time;
+	int i;
+
+	start_time = time_get_ns();
+	for (i = 0; i < MAX_CNT; i++)
+		syscall(__NR_getegid);
+	printf("%d:percpu_hash_map_perf kmalloc %lld events per sec\n",
+	       cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
+}
+
+static void loop(int cpu)
+{
+	cpu_set_t cpuset;
+
+	CPU_ZERO(&cpuset);
+	CPU_SET(cpu, &cpuset);
+	sched_setaffinity(0, sizeof(cpuset), &cpuset);
+
+	if (test_flags & HASH_PREALLOC)
+		test_hash_prealloc(cpu);
+
+	if (test_flags & PERCPU_HASH_PREALLOC)
+		test_percpu_hash_prealloc(cpu);
+
+	if (test_flags & HASH_KMALLOC)
+		test_hash_kmalloc(cpu);
+
+	if (test_flags & PERCPU_HASH_KMALLOC)
+		test_percpu_hash_kmalloc(cpu);
+}
+
+static void run_perf_test(int tasks)
+{
+	pid_t pid[tasks];
+	int i;
+
+	for (i = 0; i < tasks; i++) {
+		pid[i] = fork();
+		if (pid[i] == 0) {
+			loop(i);
+			exit(0);
+		} else if (pid[i] == -1) {
+			printf("couldn't spawn #%d process\n", i);
+			exit(1);
+		}
+	}
+	for (i = 0; i < tasks; i++) {
+		int status;
+
+		assert(waitpid(pid[i], &status, 0) == pid[i]);
+		assert(status == 0);
+	}
+}
+
+int main(int argc, char **argv)
+{
+	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
+	char filename[256];
+	int num_cpu = 8;
+
+	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+	setrlimit(RLIMIT_MEMLOCK, &r);
+
+	if (argc > 1)
+		test_flags = atoi(argv[1]) ? : test_flags;
+
+	if (argc > 2)
+		num_cpu = atoi(argv[2]) ? : num_cpu;
+
+	if (load_bpf_file(filename)) {
+		printf("%s", bpf_log_buf);
+		return 1;
+	}
+
+	run_perf_test(num_cpu);
+
+	return 0;
+}
-- 
2.6.5


* Re: [PATCH net-next 1/9] bpf: prevent kprobe+bpf deadlocks
  2016-03-07  1:58 ` [PATCH net-next 1/9] bpf: prevent kprobe+bpf deadlocks Alexei Starovoitov
@ 2016-03-07 10:07   ` Daniel Borkmann
  0 siblings, 0 replies; 15+ messages in thread
From: Daniel Borkmann @ 2016-03-07 10:07 UTC (permalink / raw)
  To: Alexei Starovoitov, David S . Miller
  Cc: Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang, Martin KaFai Lau,
	Brendan Gregg, netdev, linux-kernel, kernel-team

On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
> if kprobe is placed within update or delete hash map helpers
> that hold bucket spin lock and triggered bpf program is trying to
> grab the spinlock for the same bucket on the same cpu, it will
> deadlock.
> Fix it by extending existing recursion prevention mechanism.
>
> Note, map_lookup and other tracing helpers don't have this problem,
> since they don't hold any locks and don't modify global data.
> bpf_trace_printk has its own recursive check and ok as well.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

LGTM

Acked-by: Daniel Borkmann <daniel@iogearbox.net>
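
For readers following along: the "existing recursion prevention mechanism"
mentioned above is a per-cpu counter around bpf program invocation. A
minimal sketch of the idea (the counter name bpf_prog_active and the exact
placement are assumptions for illustration, not the literal diff):

  #include <linux/kernel.h>
  #include <linux/percpu.h>
  #include <linux/preempt.h>
  #include <linux/hardirq.h>
  #include <linux/rcupdate.h>
  #include <linux/filter.h>

  static DEFINE_PER_CPU(int, bpf_prog_active);

  static unsigned int run_prog_guarded(struct bpf_prog *prog, void *ctx)
  {
      unsigned int ret = 0;

      if (in_nmi())                     /* not supported yet */
          return 1;

      preempt_disable();
      if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
          /* a bpf program is already running on this cpu, e.g. a kprobe
           * fired inside map update/delete while the bucket lock is
           * held; bail out instead of deadlocking
           */
          goto out;
      }
      rcu_read_lock();
      ret = BPF_PROG_RUN(prog, ctx);
      rcu_read_unlock();
  out:
      __this_cpu_dec(bpf_prog_active);
      preempt_enable();
      return ret;
  }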


* Re: [PATCH net-next 2/9] bpf: introduce percpu_freelist
  2016-03-07  1:58 ` [PATCH net-next 2/9] bpf: introduce percpu_freelist Alexei Starovoitov
@ 2016-03-07 10:33   ` Daniel Borkmann
  2016-03-07 18:26     ` Alexei Starovoitov
  0 siblings, 1 reply; 15+ messages in thread
From: Daniel Borkmann @ 2016-03-07 10:33 UTC (permalink / raw)
  To: Alexei Starovoitov, David S . Miller
  Cc: Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang, Martin KaFai Lau,
	Brendan Gregg, netdev, linux-kernel, kernel-team

On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
> Introduce simple percpu_freelist to keep single list of elements
> spread across per-cpu singly linked lists.
>
> /* push element into the list */
> void pcpu_freelist_push(struct pcpu_freelist *, struct pcpu_freelist_node *);
>
> /* pop element from the list */
> struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *);
>
> The object is pushed to the current cpu list.
> Pop first trying to get the object from the current cpu list,
> if it's empty goes to the neigbour cpu list.
>
> For bpf program usage pattern the collision rate is very low,
> since programs push and pop the objects typically on the same cpu.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
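
To make those semantics concrete, here is a simplified sketch of the
push/pop behaviour described above (illustrative only: the irq disabling
the real code needs is omitted, and the _sketch names are not the patch
API):

  #include <linux/percpu.h>
  #include <linux/spinlock.h>
  #include <linux/cpumask.h>
  #include <linux/smp.h>

  struct pcpu_freelist_node {
      struct pcpu_freelist_node *next;
  };

  struct pcpu_freelist_head {
      struct pcpu_freelist_node *first;
      raw_spinlock_t lock;
  };

  struct pcpu_freelist {
      struct pcpu_freelist_head __percpu *freelist;
  };

  /* push the object onto the current cpu's list */
  static void freelist_push_sketch(struct pcpu_freelist *s,
                                   struct pcpu_freelist_node *node)
  {
      struct pcpu_freelist_head *head = raw_cpu_ptr(s->freelist);

      raw_spin_lock(&head->lock);
      node->next = head->first;
      head->first = node;
      raw_spin_unlock(&head->lock);
  }

  /* pop from the current cpu's list, falling back to neighbour cpus
   * when the local list is empty
   */
  static struct pcpu_freelist_node *freelist_pop_sketch(struct pcpu_freelist *s)
  {
      struct pcpu_freelist_node *node;
      struct pcpu_freelist_head *head;
      int orig_cpu, cpu;

      orig_cpu = cpu = raw_smp_processor_id();
      for (;;) {
          head = per_cpu_ptr(s->freelist, cpu);
          raw_spin_lock(&head->lock);
          node = head->first;
          if (node)
              head->first = node->next;
          raw_spin_unlock(&head->lock);
          if (node)
              return node;
          cpu = cpumask_next(cpu, cpu_possible_mask);
          if (cpu >= nr_cpu_ids)
              cpu = 0;
          if (cpu == orig_cpu)
              return NULL;    /* every per-cpu list is empty */
      }
  }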

These bits and their usage in combination with preallocation of objects
in patch 3/9 look very useful to me!

This code seems generic enough and doesn't contain any BPF specifics;
other subsystems could potentially utilize it as well. I'd suggest
placing it under lib/ so that it's exposed/visible to other developers
too.

You can still add 'F:' entries into the MAINTAINERS file to make sure
patches also hit netdev:

BPF (Safe dynamic programs and tools)
[...]
F:	kernel/bpf/
F:      lib/percpu_freelist.c
F:      include/linux/percpu_freelist.h

When BPF_SYSCALL is enabled, it would then just select these library bits
via Kconfig. The lib bits themselves can be a hidden Kconfig entry.

Thanks,
Daniel


* Re: [PATCH net-next 3/9] bpf: pre-allocate hash map elements
  2016-03-07  1:58 ` [PATCH net-next 3/9] bpf: pre-allocate hash map elements Alexei Starovoitov
@ 2016-03-07 11:08   ` Daniel Borkmann
  2016-03-07 18:29     ` Alexei Starovoitov
  0 siblings, 1 reply; 15+ messages in thread
From: Daniel Borkmann @ 2016-03-07 11:08 UTC (permalink / raw)
  To: Alexei Starovoitov, David S . Miller
  Cc: Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang, Martin KaFai Lau,
	Brendan Gregg, netdev, linux-kernel, kernel-team

On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
[...]
> ---
>   include/linux/bpf.h      |   1 +
>   include/uapi/linux/bpf.h |   3 +
>   kernel/bpf/hashtab.c     | 264 ++++++++++++++++++++++++++++++++++-------------
>   kernel/bpf/syscall.c     |   2 +-
>   4 files changed, 196 insertions(+), 74 deletions(-)

Shouldn't all other map types (like array) get a check like this as
well, so that map_flags stays reserved for their future flags?

   if (attr->map_flags)
     return ERR_PTR(-EINVAL);

> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 4b070827200d..c81efb10bbb5 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -37,6 +37,7 @@ struct bpf_map {
>   	u32 key_size;
>   	u32 value_size;
>   	u32 max_entries;
> +	u32 map_flags;

Just naming this 'flags' doesn't work due to the anonymous struct inside that
union, right? :/

>   	u32 pages;
>   	struct user_struct *user;
>   	const struct bpf_map_ops *ops;
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 6496f98d3d68..5eeb2ca9441e 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -101,12 +101,15 @@ enum bpf_prog_type {
>   #define BPF_NOEXIST	1 /* create new element if it didn't exist */
>   #define BPF_EXIST	2 /* update existing element */
>
> +#define BPF_F_NO_PREALLOC	(1ULL << 0)

Nit: this should rather be (1U << 0), since map_flags is a __u32.

>   union bpf_attr {
>   	struct { /* anonymous struct used by BPF_MAP_CREATE command */
>   		__u32	map_type;	/* one of enum bpf_map_type */
>   		__u32	key_size;	/* size of key in bytes */
>   		__u32	value_size;	/* size of value in bytes */
>   		__u32	max_entries;	/* max number of entries in a map */
> +		__u32	map_flags;	/* prealloc or not */
>   	};
>
>   	struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */

Thanks,
Daniel


* Re: [PATCH net-next 2/9] bpf: introduce percpu_freelist
  2016-03-07 10:33   ` Daniel Borkmann
@ 2016-03-07 18:26     ` Alexei Starovoitov
  0 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07 18:26 UTC (permalink / raw)
  To: Daniel Borkmann, David S . Miller
  Cc: Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang, Martin KaFai Lau,
	Brendan Gregg, netdev, linux-kernel, kernel-team

On 3/7/16 2:33 AM, Daniel Borkmann wrote:
> On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
>> Introduce simple percpu_freelist to keep single list of elements
>> spread across per-cpu singly linked lists.
>>
>> /* push element into the list */
>> void pcpu_freelist_push(struct pcpu_freelist *, struct
>> pcpu_freelist_node *);
>>
>> /* pop element from the list */
>> struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *);
>>
>> The object is pushed to the current cpu list.
>> Pop first trying to get the object from the current cpu list,
>> if it's empty goes to the neigbour cpu list.
>>
>> For bpf program usage pattern the collision rate is very low,
>> since programs push and pop the objects typically on the same cpu.
>>
>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>
> These bits and their usage in combination with preallocation of objects
> in patch 3/9 look very useful to me!
>
> This code seems generic enough and doesn't contain any BPF specifics;
> other subsystems could potentially utilize it as well. I'd suggest
> placing it under lib/ so that it's exposed/visible to other developers
> too.

I thought about that, but the code is specific to the bpf usage
pattern; it only looks generic. If it lives in lib/, patches will start
piling in to make it actually generic, adding features that not only
may slow it down but risk breaking bpf usage in subtle ways. Keeping it
in kernel/bpf/ is a clear sign that it's for bpf maps only, whereas
putting it in lib/ is an invitation to use it for something else, which
is the opposite of the trade-offs made here. It's tailored to the bpf
usage pattern; that's why it performs better than all the other
alternatives I've tried.


* Re: [PATCH net-next 3/9] bpf: pre-allocate hash map elements
  2016-03-07 11:08   ` Daniel Borkmann
@ 2016-03-07 18:29     ` Alexei Starovoitov
  0 siblings, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2016-03-07 18:29 UTC (permalink / raw)
  To: Daniel Borkmann, David S . Miller
  Cc: Daniel Wagner, Tom Zanussi, Wang Nan, He Kuang, Martin KaFai Lau,
	Brendan Gregg, netdev, linux-kernel, kernel-team

On 3/7/16 3:08 AM, Daniel Borkmann wrote:
> On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
> [...]
>> ---
>>   include/linux/bpf.h      |   1 +
>>   include/uapi/linux/bpf.h |   3 +
>>   kernel/bpf/hashtab.c     | 264
>> ++++++++++++++++++++++++++++++++++-------------
>>   kernel/bpf/syscall.c     |   2 +-
>>   4 files changed, 196 insertions(+), 74 deletions(-)
>
> Shouldn't all other map types (like array) get a check like this as
> well, so that map_flags stays reserved for their future flags?
>
>    if (attr->map_flags)
>      return ERR_PTR(-EINVAL);

Yeah, good point. Will add another patch for that.
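
Roughly along these lines for the map types that don't interpret
map_flags yet (a sketch; the helper name is made up here and it assumes
the new map_flags attribute from patch 3):

  #include <linux/bpf.h>
  #include <linux/errno.h>

  /* called early from each ->map_alloc() that does not support any
   * flags yet, so the whole field stays reserved for future use
   */
  static int bpf_check_reserved_map_flags(const union bpf_attr *attr)
  {
      return attr->map_flags ? -EINVAL : 0;
  }

  /* e.g. in array_map_alloc():
   *
   *     if (bpf_check_reserved_map_flags(attr))
   *         return ERR_PTR(-EINVAL);
   */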

>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 4b070827200d..c81efb10bbb5 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -37,6 +37,7 @@ struct bpf_map {
>>       u32 key_size;
>>       u32 value_size;
>>       u32 max_entries;
>> +    u32 map_flags;
>
> Just naming this 'flags' doesn't work due to the anonymous struct inside
> that
> union, right? :/

Yep, exactly. There is already a 'flags' member there.
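
To spell it out: the structs inside union bpf_attr are anonymous
members, so all their field names share one namespace. A tiny
illustration (names shortened, not the real header):

  union attr_demo {
      struct {                        /* BPF_MAP_CREATE */
          unsigned int map_type;
          unsigned int map_flags;     /* a plain 'flags' would clash below */
      };
      struct {                        /* BPF_MAP_*_ELEM */
          unsigned long long flags;   /* BPF_ANY, BPF_NOEXIST, BPF_EXIST */
      };
  };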

>>
>> +#define BPF_F_NO_PREALLOC    (1ULL << 0)
>
> Nit: this should rather be (1U << 0), since map_flags is a __u32.

Right, will do.


end of thread

Thread overview: 15+ messages
2016-03-07  1:58 [PATCH net-next 0/9] bpf: hash map pre-alloc Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 1/9] bpf: prevent kprobe+bpf deadlocks Alexei Starovoitov
2016-03-07 10:07   ` Daniel Borkmann
2016-03-07  1:58 ` [PATCH net-next 2/9] bpf: introduce percpu_freelist Alexei Starovoitov
2016-03-07 10:33   ` Daniel Borkmann
2016-03-07 18:26     ` Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 3/9] bpf: pre-allocate hash map elements Alexei Starovoitov
2016-03-07 11:08   ` Daniel Borkmann
2016-03-07 18:29     ` Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 4/9] samples/bpf: make map creation more verbose Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 5/9] samples/bpf: move ksym_search() into library Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 6/9] samples/bpf: add map_flags to bpf loader Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 7/9] samples/bpf: test both pre-alloc and normal maps Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 8/9] samples/bpf: add bpf map stress test Alexei Starovoitov
2016-03-07  1:58 ` [PATCH net-next 9/9] samples/bpf: add map performance test Alexei Starovoitov
