* [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting
@ 2020-07-30 21:22 Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 01/29] bpf: memcg-based memory accounting for bpf progs Roman Gushchin
                   ` (29 more replies)
  0 siblings, 30 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin

Currently bpf uses the memlock rlimit for memory accounting.
This approach has its downsides and over time has created a significant
number of problems:

1) The limit is per-user, but because most bpf operations are performed
   as root, the limit has little value.

2) It's hard to come up with a specific maximum value, especially because
   the counter is shared with non-bpf users (e.g. mlock() users).
   Any specific value ends up either too low, causing spurious failures,
   or too high and effectively useless.

3) Charging is not connected to the actual memory allocation. Bpf code
   has to manually calculate the estimated cost and precharge the counter,
   and then take care of uncharging, including on all failure paths.
   This adds code complexity and makes it easy to leak a charge
   (a condensed example of this pattern follows the list).

4) There is no simple way of getting the current value of the counter.
   We've used drgn for it, but it's far from convenient.

5) A cryptic -EPERM is returned when the limit is exceeded. Libbpf even
   had a function to "explain" this case to users.

In order to overcome these problems let's switch to memcg-based memory
accounting of bpf objects. With the recent addition of percpu memory
accounting, it's now possible to provide comprehensive accounting of the
memory used by bpf programs and maps.

This approach has the following advantages:
1) The limit is per-cgroup and hierarchical. It's way more flexible and
   allows better control over memory usage by different workloads.

2) The actual memory consumption is taken into account. Charging happens
   automatically at allocation time if the __GFP_ACCOUNT flag is passed,
   and uncharging is performed automatically when the memory is released.
   The code on the bpf side becomes simpler and safer.

3) There is a simple way to get the current value and statistics.
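
For example (a minimal, hypothetical sketch, not part of the series: the
helper name is made up), on the allocation side the whole mechanism boils
down to passing __GFP_ACCOUNT (or GFP_KERNEL_ACCOUNT, which includes it);
charging to the allocating task's memory cgroup and uncharging on free
happen automatically, and the usage shows up in the cgroup v2
memory.current and memory.stat interfaces:

	#include <linux/slab.h>
	#include <linux/percpu.h>

	/* Illustration only: both allocations below are charged to the
	 * memory cgroup of the current task because of __GFP_ACCOUNT;
	 * kfree() and free_percpu() uncharge them automatically.
	 */
	static int example_accounted_alloc(void **objp, void __percpu **pcpup,
					   size_t size)
	{
		*objp = kzalloc(size, GFP_KERNEL_ACCOUNT); /* GFP_KERNEL | __GFP_ACCOUNT */
		if (!*objp)
			return -ENOMEM;

		*pcpup = __alloc_percpu_gfp(size, 8,
					    GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
		if (!*pcpup) {
			kfree(*objp);
			return -ENOMEM;
		}
		return 0;
	}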

The patchset consists of the following parts:
1) memcg-based accounting for various bpf objects: progs and maps
2) removal of the rlimit-based accounting
3) removal of rlimit adjustments in userspace samples

v3:
  - dropped the userspace part for further discussions/refinements,
    as suggested by Andrii and Song
v2:
  - fixed a build issue caused by the remaining rlimit-based accounting
    for sockhash maps


Roman Gushchin (29):
  bpf: memcg-based memory accounting for bpf progs
  bpf: memcg-based memory accounting for bpf maps
  bpf: refine memcg-based memory accounting for arraymap maps
  bpf: refine memcg-based memory accounting for cpumap maps
  bpf: memcg-based memory accounting for cgroup storage maps
  bpf: refine memcg-based memory accounting for devmap maps
  bpf: refine memcg-based memory accounting for hashtab maps
  bpf: memcg-based memory accounting for lpm_trie maps
  bpf: memcg-based memory accounting for bpf ringbuffer
  bpf: memcg-based memory accounting for socket storage maps
  bpf: refine memcg-based memory accounting for sockmap and sockhash
    maps
  bpf: refine memcg-based memory accounting for xskmap maps
  bpf: eliminate rlimit-based memory accounting for arraymap maps
  bpf: eliminate rlimit-based memory accounting for bpf_struct_ops maps
  bpf: eliminate rlimit-based memory accounting for cpumap maps
  bpf: eliminate rlimit-based memory accounting for cgroup storage maps
  bpf: eliminate rlimit-based memory accounting for devmap maps
  bpf: eliminate rlimit-based memory accounting for hashtab maps
  bpf: eliminate rlimit-based memory accounting for lpm_trie maps
  bpf: eliminate rlimit-based memory accounting for queue_stack_maps
    maps
  bpf: eliminate rlimit-based memory accounting for reuseport_array maps
  bpf: eliminate rlimit-based memory accounting for bpf ringbuffer
  bpf: eliminate rlimit-based memory accounting for sockmap and sockhash
    maps
  bpf: eliminate rlimit-based memory accounting for stackmap maps
  bpf: eliminate rlimit-based memory accounting for socket storage maps
  bpf: eliminate rlimit-based memory accounting for xskmap maps
  bpf: eliminate rlimit-based memory accounting infra for bpf maps
  bpf: eliminate rlimit-based memory accounting for bpf progs
  bpf: samples: do not touch RLIMIT_MEMLOCK

 include/linux/bpf.h                           |  23 ---
 kernel/bpf/arraymap.c                         |  30 +---
 kernel/bpf/bpf_struct_ops.c                   |  19 +--
 kernel/bpf/core.c                             |  20 +--
 kernel/bpf/cpumap.c                           |  20 +--
 kernel/bpf/devmap.c                           |  23 +--
 kernel/bpf/hashtab.c                          |  33 +---
 kernel/bpf/local_storage.c                    |  38 ++---
 kernel/bpf/lpm_trie.c                         |  17 +-
 kernel/bpf/queue_stack_maps.c                 |  16 +-
 kernel/bpf/reuseport_array.c                  |  12 +-
 kernel/bpf/ringbuf.c                          |  33 ++--
 kernel/bpf/stackmap.c                         |  16 +-
 kernel/bpf/syscall.c                          | 152 ++----------------
 net/core/bpf_sk_storage.c                     |  23 +--
 net/core/sock_map.c                           |  40 ++---
 net/xdp/xskmap.c                              |  13 +-
 samples/bpf/map_perf_test_user.c              |  11 --
 samples/bpf/offwaketime_user.c                |   2 -
 samples/bpf/sockex2_user.c                    |   2 -
 samples/bpf/sockex3_user.c                    |   2 -
 samples/bpf/spintest_user.c                   |   2 -
 samples/bpf/syscall_tp_user.c                 |   2 -
 samples/bpf/task_fd_query_user.c              |   5 -
 samples/bpf/test_lru_dist.c                   |   3 -
 samples/bpf/test_map_in_map_user.c            |   9 --
 samples/bpf/test_overhead_user.c              |   2 -
 samples/bpf/trace_event_user.c                |   2 -
 samples/bpf/tracex2_user.c                    |   6 -
 samples/bpf/tracex3_user.c                    |   6 -
 samples/bpf/tracex4_user.c                    |   6 -
 samples/bpf/tracex5_user.c                    |   3 -
 samples/bpf/tracex6_user.c                    |   3 -
 samples/bpf/xdp1_user.c                       |   6 -
 samples/bpf/xdp_adjust_tail_user.c            |   6 -
 samples/bpf/xdp_monitor_user.c                |   6 -
 samples/bpf/xdp_redirect_cpu_user.c           |   6 -
 samples/bpf/xdp_redirect_map_user.c           |   6 -
 samples/bpf/xdp_redirect_user.c               |   6 -
 samples/bpf/xdp_router_ipv4_user.c            |   6 -
 samples/bpf/xdp_rxq_info_user.c               |   6 -
 samples/bpf/xdp_sample_pkts_user.c            |   6 -
 samples/bpf/xdp_tx_iptunnel_user.c            |   6 -
 samples/bpf/xdpsock_user.c                    |   7 -
 .../selftests/bpf/progs/map_ptr_kern.c        |   5 -
 45 files changed, 94 insertions(+), 572 deletions(-)

-- 
2.26.2



* [PATCH bpf-next v3 01/29] bpf: memcg-based memory accounting for bpf progs
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-31 22:48   ` Song Liu
  2020-07-30 21:22 ` [PATCH bpf-next v3 02/29] bpf: memcg-based memory accounting for bpf maps Roman Gushchin
                   ` (28 subsequent siblings)
  29 siblings, 1 reply; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin

Include memory used by bpf programs into the memcg-based accounting.
This includes the memory used by the programs themselves, auxiliary data
and statistics.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 kernel/bpf/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index bde93344164d..daab8dcafbd4 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -77,7 +77,7 @@ void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb, int k, uns
 
 struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flags)
 {
-	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
+	gfp_t gfp_flags = GFP_KERNEL_ACCOUNT | __GFP_ZERO | gfp_extra_flags;
 	struct bpf_prog_aux *aux;
 	struct bpf_prog *fp;
 
@@ -86,7 +86,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
 	if (fp == NULL)
 		return NULL;
 
-	aux = kzalloc(sizeof(*aux), GFP_KERNEL | gfp_extra_flags);
+	aux = kzalloc(sizeof(*aux), GFP_KERNEL_ACCOUNT | gfp_extra_flags);
 	if (aux == NULL) {
 		vfree(fp);
 		return NULL;
@@ -104,7 +104,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
 
 struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
 {
-	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
+	gfp_t gfp_flags = GFP_KERNEL_ACCOUNT | __GFP_ZERO | gfp_extra_flags;
 	struct bpf_prog *prog;
 	int cpu;
 
@@ -217,7 +217,7 @@ void bpf_prog_free_linfo(struct bpf_prog *prog)
 struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
 				  gfp_t gfp_extra_flags)
 {
-	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
+	gfp_t gfp_flags = GFP_KERNEL_ACCOUNT | __GFP_ZERO | gfp_extra_flags;
 	struct bpf_prog *fp;
 	u32 pages, delta;
 	int ret;
-- 
2.26.2



* [PATCH bpf-next v3 02/29] bpf: memcg-based memory accounting for bpf maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 01/29] bpf: memcg-based memory accounting for bpf progs Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 03/29] bpf: refine memcg-based memory accounting for arraymap maps Roman Gushchin
                   ` (27 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

This patch enables memcg-based memory accounting for memory allocated
by __bpf_map_area_alloc(), which is used by most map types for
large allocations.

Following patches in the series will refine the accounting for
some map types.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/syscall.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index cd3d599e9e90..a53e7aff3efc 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -275,7 +275,7 @@ static void *__bpf_map_area_alloc(u64 size, int numa_node, bool mmapable)
 	 * __GFP_RETRY_MAYFAIL to avoid such situations.
 	 */
 
-	const gfp_t gfp = __GFP_NOWARN | __GFP_ZERO;
+	const gfp_t gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_ACCOUNT;
 	unsigned int flags = 0;
 	unsigned long align = 1;
 	void *area;
-- 
2.26.2



* [PATCH bpf-next v3 03/29] bpf: refine memcg-based memory accounting for arraymap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 01/29] bpf: memcg-based memory accounting for bpf progs Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 02/29] bpf: memcg-based memory accounting for bpf maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 04/29] bpf: refine memcg-based memory accounting for cpumap maps Roman Gushchin
                   ` (26 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Include percpu arrays and auxiliary data into the memcg-based memory
accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/arraymap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 8ff419b632a6..9597fecff8da 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -28,12 +28,12 @@ static void bpf_array_free_percpu(struct bpf_array *array)
 
 static int bpf_array_alloc_percpu(struct bpf_array *array)
 {
+	const gfp_t gfp = GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT;
 	void __percpu *ptr;
 	int i;
 
 	for (i = 0; i < array->map.max_entries; i++) {
-		ptr = __alloc_percpu_gfp(array->elem_size, 8,
-					 GFP_USER | __GFP_NOWARN);
+		ptr = __alloc_percpu_gfp(array->elem_size, 8, gfp);
 		if (!ptr) {
 			bpf_array_free_percpu(array);
 			return -ENOMEM;
@@ -969,7 +969,7 @@ static struct bpf_map *prog_array_map_alloc(union bpf_attr *attr)
 	struct bpf_array_aux *aux;
 	struct bpf_map *map;
 
-	aux = kzalloc(sizeof(*aux), GFP_KERNEL);
+	aux = kzalloc(sizeof(*aux), GFP_KERNEL_ACCOUNT);
 	if (!aux)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.26.2



* [PATCH bpf-next v3 04/29] bpf: refine memcg-based memory accounting for cpumap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (2 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 03/29] bpf: refine memcg-based memory accounting for arraymap maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 05/29] bpf: memcg-based memory accounting for cgroup storage maps Roman Gushchin
                   ` (25 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Include metadata and percpu data into the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/cpumap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index f1c46529929b..74ae9fcbe82e 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -99,7 +99,7 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
 	    attr->map_flags & ~BPF_F_NUMA_NODE)
 		return ERR_PTR(-EINVAL);
 
-	cmap = kzalloc(sizeof(*cmap), GFP_USER);
+	cmap = kzalloc(sizeof(*cmap), GFP_USER | __GFP_ACCOUNT);
 	if (!cmap)
 		return ERR_PTR(-ENOMEM);
 
@@ -418,7 +418,7 @@ static struct bpf_cpu_map_entry *
 __cpu_map_entry_alloc(struct bpf_cpumap_val *value, u32 cpu, int map_id)
 {
 	int numa, err, i, fd = value->bpf_prog.fd;
-	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
+	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_NOWARN;
 	struct bpf_cpu_map_entry *rcpu;
 	struct xdp_bulk_queue *bq;
 
-- 
2.26.2



* [PATCH bpf-next v3 05/29] bpf: memcg-based memory accounting for cgroup storage maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (3 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 04/29] bpf: refine memcg-based memory accounting for cpumap maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 06/29] bpf: refine memcg-based memory accounting for devmap maps Roman Gushchin
                   ` (24 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Account the memory used by cgroup storage maps, including the map
metadata and the percpu memory for the percpu flavor of cgroup storage.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/local_storage.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
index 571bb351ed3b..212d6dbbc39a 100644
--- a/kernel/bpf/local_storage.c
+++ b/kernel/bpf/local_storage.c
@@ -166,7 +166,8 @@ static int cgroup_storage_update_elem(struct bpf_map *map, void *key,
 
 	new = kmalloc_node(sizeof(struct bpf_storage_buffer) +
 			   map->value_size,
-			   __GFP_ZERO | GFP_ATOMIC | __GFP_NOWARN,
+			   __GFP_ZERO | GFP_ATOMIC | __GFP_NOWARN |
+			   __GFP_ACCOUNT,
 			   map->numa_node);
 	if (!new)
 		return -ENOMEM;
@@ -313,7 +314,7 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
 		return ERR_PTR(ret);
 
 	map = kmalloc_node(sizeof(struct bpf_cgroup_storage_map),
-			   __GFP_ZERO | GFP_USER, numa_node);
+			   __GFP_ZERO | GFP_USER | __GFP_ACCOUNT, numa_node);
 	if (!map) {
 		bpf_map_charge_finish(&mem);
 		return ERR_PTR(-ENOMEM);
@@ -496,9 +497,9 @@ static size_t bpf_cgroup_storage_calculate_size(struct bpf_map *map, u32 *pages)
 struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
 					enum bpf_cgroup_storage_type stype)
 {
+	const gfp_t gfp = __GFP_ZERO | GFP_USER | __GFP_ACCOUNT;
 	struct bpf_cgroup_storage *storage;
 	struct bpf_map *map;
-	gfp_t flags;
 	size_t size;
 	u32 pages;
 
@@ -511,20 +512,18 @@ struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
 	if (bpf_map_charge_memlock(map, pages))
 		return ERR_PTR(-EPERM);
 
-	storage = kmalloc_node(sizeof(struct bpf_cgroup_storage),
-			       __GFP_ZERO | GFP_USER, map->numa_node);
+	storage = kmalloc_node(sizeof(struct bpf_cgroup_storage), gfp,
+			       map->numa_node);
 	if (!storage)
 		goto enomem;
 
-	flags = __GFP_ZERO | GFP_USER;
-
 	if (stype == BPF_CGROUP_STORAGE_SHARED) {
-		storage->buf = kmalloc_node(size, flags, map->numa_node);
+		storage->buf = kmalloc_node(size, gfp, map->numa_node);
 		if (!storage->buf)
 			goto enomem;
 		check_and_init_map_lock(map, storage->buf->data);
 	} else {
-		storage->percpu_buf = __alloc_percpu_gfp(size, 8, flags);
+		storage->percpu_buf = __alloc_percpu_gfp(size, 8, gfp);
 		if (!storage->percpu_buf)
 			goto enomem;
 	}
-- 
2.26.2



* [PATCH bpf-next v3 06/29] bpf: refine memcg-based memory accounting for devmap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (4 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 05/29] bpf: memcg-based memory accounting for cgroup storage maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 07/29] bpf: refine memcg-based memory accounting for hashtab maps Roman Gushchin
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Include map metadata and the per-element memory (struct bpf_dtab_netdev,
allocated on element update) into the accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/devmap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 10abb06065bb..05bf93088063 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -175,7 +175,7 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
 	if (!capable(CAP_NET_ADMIN))
 		return ERR_PTR(-EPERM);
 
-	dtab = kzalloc(sizeof(*dtab), GFP_USER);
+	dtab = kzalloc(sizeof(*dtab), GFP_USER | __GFP_ACCOUNT);
 	if (!dtab)
 		return ERR_PTR(-ENOMEM);
 
@@ -603,7 +603,8 @@ static struct bpf_dtab_netdev *__dev_map_alloc_node(struct net *net,
 	struct bpf_prog *prog = NULL;
 	struct bpf_dtab_netdev *dev;
 
-	dev = kmalloc_node(sizeof(*dev), GFP_ATOMIC | __GFP_NOWARN,
+	dev = kmalloc_node(sizeof(*dev),
+			   GFP_ATOMIC | __GFP_NOWARN | __GFP_ACCOUNT,
 			   dtab->map.numa_node);
 	if (!dev)
 		return ERR_PTR(-ENOMEM);
-- 
2.26.2



* [PATCH bpf-next v3 07/29] bpf: refine memcg-based memory accounting for hashtab maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (5 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 06/29] bpf: refine memcg-based memory accounting for devmap maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 08/29] bpf: memcg-based memory accounting for lpm_trie maps Roman Gushchin
                   ` (22 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Include percpu objects and the map metadata into the accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/hashtab.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 024276787055..9d0432170812 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -263,10 +263,11 @@ static int prealloc_init(struct bpf_htab *htab)
 		goto skip_percpu_elems;
 
 	for (i = 0; i < num_entries; i++) {
+		const gfp_t gfp = GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT;
 		u32 size = round_up(htab->map.value_size, 8);
 		void __percpu *pptr;
 
-		pptr = __alloc_percpu_gfp(size, 8, GFP_USER | __GFP_NOWARN);
+		pptr = __alloc_percpu_gfp(size, 8, gfp);
 		if (!pptr)
 			goto free_elems;
 		htab_elem_set_ptr(get_htab_elem(htab, i), htab->map.key_size,
@@ -321,7 +322,7 @@ static int alloc_extra_elems(struct bpf_htab *htab)
 	int cpu;
 
 	pptr = __alloc_percpu_gfp(sizeof(struct htab_elem *), 8,
-				  GFP_USER | __GFP_NOWARN);
+				  GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!pptr)
 		return -ENOMEM;
 
@@ -424,7 +425,7 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	u64 cost;
 	int err;
 
-	htab = kzalloc(sizeof(*htab), GFP_USER);
+	htab = kzalloc(sizeof(*htab), GFP_USER | __GFP_ACCOUNT);
 	if (!htab)
 		return ERR_PTR(-ENOMEM);
 
@@ -827,6 +828,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 					 bool percpu, bool onallcpus,
 					 struct htab_elem *old_elem)
 {
+	const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN | __GFP_ACCOUNT;
 	u32 size = htab->map.value_size;
 	bool prealloc = htab_is_prealloc(htab);
 	struct htab_elem *l_new, **pl_new;
@@ -859,8 +861,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 				l_new = ERR_PTR(-E2BIG);
 				goto dec_count;
 			}
-		l_new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
-				     htab->map.numa_node);
+		l_new = kmalloc_node(htab->elem_size, gfp, htab->map.numa_node);
 		if (!l_new) {
 			l_new = ERR_PTR(-ENOMEM);
 			goto dec_count;
@@ -876,8 +877,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 			pptr = htab_elem_get_ptr(l_new, key_size);
 		} else {
 			/* alloc_percpu zero-fills */
-			pptr = __alloc_percpu_gfp(size, 8,
-						  GFP_ATOMIC | __GFP_NOWARN);
+			pptr = __alloc_percpu_gfp(size, 8, gfp);
 			if (!pptr) {
 				kfree(l_new);
 				l_new = ERR_PTR(-ENOMEM);
-- 
2.26.2



* [PATCH bpf-next v3 08/29] bpf: memcg-based memory accounting for lpm_trie maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (6 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 07/29] bpf: refine memcg-based memory accounting for hashtab maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 09/29] bpf: memcg-based memory accounting for bpf ringbuffer Roman Gushchin
                   ` (21 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Include lpm trie and lpm trie node objects into the memcg-based memory
accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/lpm_trie.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 44474bf3ab7a..d85e0fc2cafc 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -282,7 +282,7 @@ static struct lpm_trie_node *lpm_trie_node_alloc(const struct lpm_trie *trie,
 	if (value)
 		size += trie->map.value_size;
 
-	node = kmalloc_node(size, GFP_ATOMIC | __GFP_NOWARN,
+	node = kmalloc_node(size, GFP_ATOMIC | __GFP_NOWARN | __GFP_ACCOUNT,
 			    trie->map.numa_node);
 	if (!node)
 		return NULL;
@@ -557,7 +557,7 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
 	    attr->value_size > LPM_VAL_SIZE_MAX)
 		return ERR_PTR(-EINVAL);
 
-	trie = kzalloc(sizeof(*trie), GFP_USER | __GFP_NOWARN);
+	trie = kzalloc(sizeof(*trie), GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!trie)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.26.2



* [PATCH bpf-next v3 09/29] bpf: memcg-based memory accounting for bpf ringbuffer
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (7 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 08/29] bpf: memcg-based memory accounting for lpm_trie maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 10/29] bpf: memcg-based memory accounting for socket storage maps Roman Gushchin
                   ` (20 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Enable the memcg-based memory accounting for the memory used by
the bpf ringbuffer.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/ringbuf.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 002f8a5c9e51..e8e2c39cbdc9 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -60,8 +60,8 @@ struct bpf_ringbuf_hdr {
 
 static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
 {
-	const gfp_t flags = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
-			    __GFP_ZERO;
+	const gfp_t flags = GFP_KERNEL_ACCOUNT | __GFP_RETRY_MAYFAIL |
+			    __GFP_NOWARN | __GFP_ZERO;
 	int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
 	int nr_data_pages = data_sz >> PAGE_SHIFT;
 	int nr_pages = nr_meta_pages + nr_data_pages;
@@ -89,7 +89,8 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
 	 */
 	array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
 	if (array_size > PAGE_SIZE)
-		pages = vmalloc_node(array_size, numa_node);
+		pages = __vmalloc_node(array_size, 1, GFP_KERNEL_ACCOUNT,
+				       numa_node, __builtin_return_address(0));
 	else
 		pages = kmalloc_node(array_size, flags, numa_node);
 	if (!pages)
@@ -167,7 +168,7 @@ static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
 		return ERR_PTR(-E2BIG);
 #endif
 
-	rb_map = kzalloc(sizeof(*rb_map), GFP_USER);
+	rb_map = kzalloc(sizeof(*rb_map), GFP_USER | __GFP_ACCOUNT);
 	if (!rb_map)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.26.2



* [PATCH bpf-next v3 10/29] bpf: memcg-based memory accounting for socket storage maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (8 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 09/29] bpf: memcg-based memory accounting for bpf ringbuffer Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 11/29] bpf: refine memcg-based memory accounting for sockmap and sockhash maps Roman Gushchin
                   ` (19 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Account memory used by the socket storage.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 net/core/bpf_sk_storage.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index d3377c90a291..c9b9cd2d11c5 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -130,7 +130,8 @@ static struct bpf_sk_storage_elem *selem_alloc(struct bpf_sk_storage_map *smap,
 	if (charge_omem && omem_charge(sk, smap->elem_size))
 		return NULL;
 
-	selem = kzalloc(smap->elem_size, GFP_ATOMIC | __GFP_NOWARN);
+	selem = kzalloc(smap->elem_size,
+			GFP_ATOMIC | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (selem) {
 		if (value)
 			memcpy(SDATA(selem)->data, value, smap->map.value_size);
@@ -337,7 +338,8 @@ static int sk_storage_alloc(struct sock *sk,
 	if (err)
 		return err;
 
-	sk_storage = kzalloc(sizeof(*sk_storage), GFP_ATOMIC | __GFP_NOWARN);
+	sk_storage = kzalloc(sizeof(*sk_storage),
+			     GFP_ATOMIC | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!sk_storage) {
 		err = -ENOMEM;
 		goto uncharge;
@@ -677,7 +679,7 @@ static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
 	u64 cost;
 	int ret;
 
-	smap = kzalloc(sizeof(*smap), GFP_USER | __GFP_NOWARN);
+	smap = kzalloc(sizeof(*smap), GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!smap)
 		return ERR_PTR(-ENOMEM);
 	bpf_map_init_from_attr(&smap->map, attr);
@@ -695,7 +697,7 @@ static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
 	}
 
 	smap->buckets = kvcalloc(sizeof(*smap->buckets), nbuckets,
-				 GFP_USER | __GFP_NOWARN);
+				 GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!smap->buckets) {
 		bpf_map_charge_finish(&smap->map.memory);
 		kfree(smap);
@@ -1034,7 +1036,7 @@ bpf_sk_storage_diag_alloc(const struct nlattr *nla_stgs)
 	}
 
 	diag = kzalloc(sizeof(*diag) + sizeof(diag->maps[0]) * nr_maps,
-		       GFP_KERNEL);
+		       GFP_KERNEL | __GFP_ACCOUNT);
 	if (!diag)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.26.2



* [PATCH bpf-next v3 11/29] bpf: refine memcg-based memory accounting for sockmap and sockhash maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (9 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 10/29] bpf: memcg-based memory accounting for socket storage maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 12/29] bpf: refine memcg-based memory accounting for xskmap maps Roman Gushchin
                   ` (18 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Include internal metadata into the memcg-based memory accounting.
Also include the memory allocated on updating an element.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 net/core/sock_map.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 119f52a99dc1..bc797adca44c 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -38,7 +38,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
 	    attr->map_flags & ~SOCK_CREATE_FLAG_MASK)
 		return ERR_PTR(-EINVAL);
 
-	stab = kzalloc(sizeof(*stab), GFP_USER);
+	stab = kzalloc(sizeof(*stab), GFP_USER | __GFP_ACCOUNT);
 	if (!stab)
 		return ERR_PTR(-ENOMEM);
 
@@ -829,7 +829,8 @@ static struct bpf_shtab_elem *sock_hash_alloc_elem(struct bpf_shtab *htab,
 		}
 	}
 
-	new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
+	new = kmalloc_node(htab->elem_size,
+			   GFP_ATOMIC | __GFP_NOWARN | __GFP_ACCOUNT,
 			   htab->map.numa_node);
 	if (!new) {
 		atomic_dec(&htab->count);
@@ -1011,7 +1012,7 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 	if (attr->key_size > MAX_BPF_STACK)
 		return ERR_PTR(-E2BIG);
 
-	htab = kzalloc(sizeof(*htab), GFP_USER);
+	htab = kzalloc(sizeof(*htab), GFP_USER | __GFP_ACCOUNT);
 	if (!htab)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.26.2



* [PATCH bpf-next v3 12/29] bpf: refine memcg-based memory accounting for xskmap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (10 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 11/29] bpf: refine memcg-based memory accounting for sockmap and sockhash maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 13/29] bpf: eliminate rlimit-based memory accounting for arraymap maps Roman Gushchin
                   ` (17 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Extend xskmap memory accounting to include the memory taken by
the xsk_map_node structure.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 net/xdp/xskmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 8367adbbe9df..e574b22defe5 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -28,7 +28,8 @@ static struct xsk_map_node *xsk_map_node_alloc(struct xsk_map *map,
 	struct xsk_map_node *node;
 	int err;
 
-	node = kzalloc(sizeof(*node), GFP_ATOMIC | __GFP_NOWARN);
+	node = kzalloc(sizeof(*node),
+		       GFP_ATOMIC | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!node)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.26.2



* [PATCH bpf-next v3 13/29] bpf: eliminate rlimit-based memory accounting for arraymap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (11 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 12/29] bpf: refine memcg-based memory accounting for xskmap maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 14/29] bpf: eliminate rlimit-based memory accounting for bpf_struct_ops maps Roman Gushchin
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for arraymap maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/arraymap.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 9597fecff8da..41581c38b31d 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -75,11 +75,10 @@ int array_map_alloc_check(union bpf_attr *attr)
 static struct bpf_map *array_map_alloc(union bpf_attr *attr)
 {
 	bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
-	int ret, numa_node = bpf_map_attr_numa_node(attr);
+	int numa_node = bpf_map_attr_numa_node(attr);
 	u32 elem_size, index_mask, max_entries;
 	bool bypass_spec_v1 = bpf_bypass_spec_v1();
-	u64 cost, array_size, mask64;
-	struct bpf_map_memory mem;
+	u64 array_size, mask64;
 	struct bpf_array *array;
 
 	elem_size = round_up(attr->value_size, 8);
@@ -120,44 +119,29 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
 		}
 	}
 
-	/* make sure there is no u32 overflow later in round_up() */
-	cost = array_size;
-	if (percpu)
-		cost += (u64)attr->max_entries * elem_size * num_possible_cpus();
-
-	ret = bpf_map_charge_init(&mem, cost);
-	if (ret < 0)
-		return ERR_PTR(ret);
-
 	/* allocate all map elements and zero-initialize them */
 	if (attr->map_flags & BPF_F_MMAPABLE) {
 		void *data;
 
 		/* kmalloc'ed memory can't be mmap'ed, use explicit vmalloc */
 		data = bpf_map_area_mmapable_alloc(array_size, numa_node);
-		if (!data) {
-			bpf_map_charge_finish(&mem);
+		if (!data)
 			return ERR_PTR(-ENOMEM);
-		}
 		array = data + PAGE_ALIGN(sizeof(struct bpf_array))
 			- offsetof(struct bpf_array, value);
 	} else {
 		array = bpf_map_area_alloc(array_size, numa_node);
 	}
-	if (!array) {
-		bpf_map_charge_finish(&mem);
+	if (!array)
 		return ERR_PTR(-ENOMEM);
-	}
 	array->index_mask = index_mask;
 	array->map.bypass_spec_v1 = bypass_spec_v1;
 
 	/* copy mandatory map attributes */
 	bpf_map_init_from_attr(&array->map, attr);
-	bpf_map_charge_move(&array->map.memory, &mem);
 	array->elem_size = elem_size;
 
 	if (percpu && bpf_array_alloc_percpu(array)) {
-		bpf_map_charge_finish(&array->map.memory);
 		bpf_map_area_free(array);
 		return ERR_PTR(-ENOMEM);
 	}
-- 
2.26.2



* [PATCH bpf-next v3 14/29] bpf: eliminate rlimit-based memory accounting for bpf_struct_ops maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (12 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 13/29] bpf: eliminate rlimit-based memory accounting for arraymap maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 15/29] bpf: eliminate rlimit-based memory accounting for cpumap maps Roman Gushchin
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for bpf_struct_ops maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/bpf_struct_ops.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 969c5d47f81f..22bfa236683b 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -550,12 +550,10 @@ static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr)
 static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
 {
 	const struct bpf_struct_ops *st_ops;
-	size_t map_total_size, st_map_size;
+	size_t st_map_size;
 	struct bpf_struct_ops_map *st_map;
 	const struct btf_type *t, *vt;
-	struct bpf_map_memory mem;
 	struct bpf_map *map;
-	int err;
 
 	if (!bpf_capable())
 		return ERR_PTR(-EPERM);
@@ -575,20 +573,11 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
 		 * struct bpf_struct_ops_tcp_congestions_ops
 		 */
 		(vt->size - sizeof(struct bpf_struct_ops_value));
-	map_total_size = st_map_size +
-		/* uvalue */
-		sizeof(vt->size) +
-		/* struct bpf_progs **progs */
-		 btf_type_vlen(t) * sizeof(struct bpf_prog *);
-	err = bpf_map_charge_init(&mem, map_total_size);
-	if (err < 0)
-		return ERR_PTR(err);
 
 	st_map = bpf_map_area_alloc(st_map_size, NUMA_NO_NODE);
-	if (!st_map) {
-		bpf_map_charge_finish(&mem);
+	if (!st_map)
 		return ERR_PTR(-ENOMEM);
-	}
+
 	st_map->st_ops = st_ops;
 	map = &st_map->map;
 
@@ -599,14 +588,12 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
 	st_map->image = bpf_jit_alloc_exec(PAGE_SIZE);
 	if (!st_map->uvalue || !st_map->progs || !st_map->image) {
 		bpf_struct_ops_map_free(map);
-		bpf_map_charge_finish(&mem);
 		return ERR_PTR(-ENOMEM);
 	}
 
 	mutex_init(&st_map->lock);
 	set_vm_flush_reset_perms(st_map->image);
 	bpf_map_init_from_attr(map, attr);
-	bpf_map_charge_move(&map->memory, &mem);
 
 	return map;
 }
-- 
2.26.2



* [PATCH bpf-next v3 15/29] bpf: eliminate rlimit-based memory accounting for cpumap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (13 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 14/29] bpf: eliminate rlimit-based memory accounting for bpf_struct_ops maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 16/29] bpf: eliminate rlimit-based memory accounting for cgroup storage maps Roman Gushchin
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for cpumap maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/cpumap.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 74ae9fcbe82e..50f3444a3301 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -86,8 +86,6 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
 	u32 value_size = attr->value_size;
 	struct bpf_cpu_map *cmap;
 	int err = -ENOMEM;
-	u64 cost;
-	int ret;
 
 	if (!bpf_capable())
 		return ERR_PTR(-EPERM);
@@ -111,26 +109,14 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
 		goto free_cmap;
 	}
 
-	/* make sure page count doesn't overflow */
-	cost = (u64) cmap->map.max_entries * sizeof(struct bpf_cpu_map_entry *);
-
-	/* Notice returns -EPERM on if map size is larger than memlock limit */
-	ret = bpf_map_charge_init(&cmap->map.memory, cost);
-	if (ret) {
-		err = ret;
-		goto free_cmap;
-	}
-
 	/* Alloc array for possible remote "destination" CPUs */
 	cmap->cpu_map = bpf_map_area_alloc(cmap->map.max_entries *
 					   sizeof(struct bpf_cpu_map_entry *),
 					   cmap->map.numa_node);
 	if (!cmap->cpu_map)
-		goto free_charge;
+		goto free_cmap;
 
 	return &cmap->map;
-free_charge:
-	bpf_map_charge_finish(&cmap->map.memory);
 free_cmap:
 	kfree(cmap);
 	return ERR_PTR(err);
-- 
2.26.2



* [PATCH bpf-next v3 16/29] bpf: eliminate rlimit-based memory accounting for cgroup storage maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (14 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 15/29] bpf: eliminate rlimit-based memory accounting for cpumap maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 17/29] bpf: eliminate rlimit-based memory accounting for devmap maps Roman Gushchin
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for cgroup storage maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/local_storage.c | 21 +--------------------
 1 file changed, 1 insertion(+), 20 deletions(-)

diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
index 212d6dbbc39a..c28a47d5177a 100644
--- a/kernel/bpf/local_storage.c
+++ b/kernel/bpf/local_storage.c
@@ -288,8 +288,6 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
 {
 	int numa_node = bpf_map_attr_numa_node(attr);
 	struct bpf_cgroup_storage_map *map;
-	struct bpf_map_memory mem;
-	int ret;
 
 	if (attr->key_size != sizeof(struct bpf_cgroup_storage_key) &&
 	    attr->key_size != sizeof(__u64))
@@ -309,18 +307,10 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
 		/* max_entries is not used and enforced to be 0 */
 		return ERR_PTR(-EINVAL);
 
-	ret = bpf_map_charge_init(&mem, sizeof(struct bpf_cgroup_storage_map));
-	if (ret < 0)
-		return ERR_PTR(ret);
-
 	map = kmalloc_node(sizeof(struct bpf_cgroup_storage_map),
 			   __GFP_ZERO | GFP_USER | __GFP_ACCOUNT, numa_node);
-	if (!map) {
-		bpf_map_charge_finish(&mem);
+	if (!map)
 		return ERR_PTR(-ENOMEM);
-	}
-
-	bpf_map_charge_move(&map->map.memory, &mem);
 
 	/* copy mandatory map attributes */
 	bpf_map_init_from_attr(&map->map, attr);
@@ -509,9 +499,6 @@ struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
 
 	size = bpf_cgroup_storage_calculate_size(map, &pages);
 
-	if (bpf_map_charge_memlock(map, pages))
-		return ERR_PTR(-EPERM);
-
 	storage = kmalloc_node(sizeof(struct bpf_cgroup_storage), gfp,
 			       map->numa_node);
 	if (!storage)
@@ -533,7 +520,6 @@ struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
 	return storage;
 
 enomem:
-	bpf_map_uncharge_memlock(map, pages);
 	kfree(storage);
 	return ERR_PTR(-ENOMEM);
 }
@@ -560,16 +546,11 @@ void bpf_cgroup_storage_free(struct bpf_cgroup_storage *storage)
 {
 	enum bpf_cgroup_storage_type stype;
 	struct bpf_map *map;
-	u32 pages;
 
 	if (!storage)
 		return;
 
 	map = &storage->map->map;
-
-	bpf_cgroup_storage_calculate_size(map, &pages);
-	bpf_map_uncharge_memlock(map, pages);
-
 	stype = cgroup_storage_type(map);
 	if (stype == BPF_CGROUP_STORAGE_SHARED)
 		call_rcu(&storage->rcu, free_shared_cgroup_storage_rcu);
-- 
2.26.2



* [PATCH bpf-next v3 17/29] bpf: eliminate rlimit-based memory accounting for devmap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (15 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 16/29] bpf: eliminate rlimit-based memory accounting for cgroup storage maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:22 ` [PATCH bpf-next v3 18/29] bpf: eliminate rlimit-based memory accounting for hashtab maps Roman Gushchin
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for devmap maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/devmap.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 05bf93088063..8148c7260a54 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -109,8 +109,6 @@ static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
 static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 {
 	u32 valsize = attr->value_size;
-	u64 cost = 0;
-	int err;
 
 	/* check sanity of attributes. 2 value sizes supported:
 	 * 4 bytes: ifindex
@@ -135,21 +133,13 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 
 		if (!dtab->n_buckets) /* Overflow check */
 			return -EINVAL;
-		cost += (u64) sizeof(struct hlist_head) * dtab->n_buckets;
-	} else {
-		cost += (u64) dtab->map.max_entries * sizeof(struct bpf_dtab_netdev *);
 	}
 
-	/* if map size is larger than memlock limit, reject it */
-	err = bpf_map_charge_init(&dtab->map.memory, cost);
-	if (err)
-		return -EINVAL;
-
 	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
 		dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
 							   dtab->map.numa_node);
 		if (!dtab->dev_index_head)
-			goto free_charge;
+			return -ENOMEM;
 
 		spin_lock_init(&dtab->index_lock);
 	} else {
@@ -157,14 +147,10 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 						      sizeof(struct bpf_dtab_netdev *),
 						      dtab->map.numa_node);
 		if (!dtab->netdev_map)
-			goto free_charge;
+			return -ENOMEM;
 	}
 
 	return 0;
-
-free_charge:
-	bpf_map_charge_finish(&dtab->map.memory);
-	return -ENOMEM;
 }
 
 static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
-- 
2.26.2



* [PATCH bpf-next v3 18/29] bpf: eliminate rlimit-based memory accounting for hashtab maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (16 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 17/29] bpf: eliminate rlimit-based memory accounting for devmap maps Roman Gushchin
@ 2020-07-30 21:22 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 19/29] bpf: eliminate rlimit-based memory accounting for lpm_trie maps Roman Gushchin
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:22 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for hashtab maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/hashtab.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 9d0432170812..9372b559b4e7 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -422,7 +422,6 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	bool percpu_lru = (attr->map_flags & BPF_F_NO_COMMON_LRU);
 	bool prealloc = !(attr->map_flags & BPF_F_NO_PREALLOC);
 	struct bpf_htab *htab;
-	u64 cost;
 	int err;
 
 	htab = kzalloc(sizeof(*htab), GFP_USER | __GFP_ACCOUNT);
@@ -459,26 +458,12 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	    htab->n_buckets > U32_MAX / sizeof(struct bucket))
 		goto free_htab;
 
-	cost = (u64) htab->n_buckets * sizeof(struct bucket) +
-	       (u64) htab->elem_size * htab->map.max_entries;
-
-	if (percpu)
-		cost += (u64) round_up(htab->map.value_size, 8) *
-			num_possible_cpus() * htab->map.max_entries;
-	else
-	       cost += (u64) htab->elem_size * num_possible_cpus();
-
-	/* if map size is larger than memlock limit, reject it */
-	err = bpf_map_charge_init(&htab->map.memory, cost);
-	if (err)
-		goto free_htab;
-
 	err = -ENOMEM;
 	htab->buckets = bpf_map_area_alloc(htab->n_buckets *
 					   sizeof(struct bucket),
 					   htab->map.numa_node);
 	if (!htab->buckets)
-		goto free_charge;
+		goto free_htab;
 
 	if (htab->map.map_flags & BPF_F_ZERO_SEED)
 		htab->hashrnd = 0;
@@ -508,8 +493,6 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	prealloc_destroy(htab);
 free_buckets:
 	bpf_map_area_free(htab->buckets);
-free_charge:
-	bpf_map_charge_finish(&htab->map.memory);
 free_htab:
 	kfree(htab);
 	return ERR_PTR(err);
-- 
2.26.2



* [PATCH bpf-next v3 19/29] bpf: eliminate rlimit-based memory accounting for lpm_trie maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (17 preceding siblings ...)
  2020-07-30 21:22 ` [PATCH bpf-next v3 18/29] bpf: eliminate rlimit-based memory accounting for hashtab maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 20/29] bpf: eliminate rlimit-based memory accounting for queue_stack_maps maps Roman Gushchin
                   ` (10 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for lpm_trie maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/lpm_trie.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index d85e0fc2cafc..c747f0835eb1 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -540,8 +540,6 @@ static int trie_delete_elem(struct bpf_map *map, void *_key)
 static struct bpf_map *trie_alloc(union bpf_attr *attr)
 {
 	struct lpm_trie *trie;
-	u64 cost = sizeof(*trie), cost_per_node;
-	int ret;
 
 	if (!bpf_capable())
 		return ERR_PTR(-EPERM);
@@ -567,20 +565,9 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
 			  offsetof(struct bpf_lpm_trie_key, data);
 	trie->max_prefixlen = trie->data_size * 8;
 
-	cost_per_node = sizeof(struct lpm_trie_node) +
-			attr->value_size + trie->data_size;
-	cost += (u64) attr->max_entries * cost_per_node;
-
-	ret = bpf_map_charge_init(&trie->map.memory, cost);
-	if (ret)
-		goto out_err;
-
 	spin_lock_init(&trie->lock);
 
 	return &trie->map;
-out_err:
-	kfree(trie);
-	return ERR_PTR(ret);
 }
 
 static void trie_free(struct bpf_map *map)
-- 
2.26.2



* [PATCH bpf-next v3 20/29] bpf: eliminate rlimit-based memory accounting for queue_stack_maps maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (18 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 19/29] bpf: eliminate rlimit-based memory accounting for lpm_trie maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 21/29] bpf: eliminate rlimit-based memory accounting for reuseport_array maps Roman Gushchin
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for queue_stack maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/queue_stack_maps.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
index 44184f82916a..92e73c35a34a 100644
--- a/kernel/bpf/queue_stack_maps.c
+++ b/kernel/bpf/queue_stack_maps.c
@@ -66,29 +66,21 @@ static int queue_stack_map_alloc_check(union bpf_attr *attr)
 
 static struct bpf_map *queue_stack_map_alloc(union bpf_attr *attr)
 {
-	int ret, numa_node = bpf_map_attr_numa_node(attr);
-	struct bpf_map_memory mem = {0};
+	int numa_node = bpf_map_attr_numa_node(attr);
 	struct bpf_queue_stack *qs;
-	u64 size, queue_size, cost;
+	u64 size, queue_size;
 
 	size = (u64) attr->max_entries + 1;
-	cost = queue_size = sizeof(*qs) + size * attr->value_size;
-
-	ret = bpf_map_charge_init(&mem, cost);
-	if (ret < 0)
-		return ERR_PTR(ret);
+	queue_size = sizeof(*qs) + size * attr->value_size;
 
 	qs = bpf_map_area_alloc(queue_size, numa_node);
-	if (!qs) {
-		bpf_map_charge_finish(&mem);
+	if (!qs)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	memset(qs, 0, sizeof(*qs));
 
 	bpf_map_init_from_attr(&qs->map, attr);
 
-	bpf_map_charge_move(&qs->map.memory, &mem);
 	qs->size = size;
 
 	raw_spin_lock_init(&qs->lock);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH bpf-next v3 21/29] bpf: eliminate rlimit-based memory accounting for reuseport_array maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (19 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 20/29] bpf: eliminate rlimit-based memory accounting for queue_stack_maps maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 22/29] bpf: eliminate rlimit-based memory accounting for bpf ringbuffer Roman Gushchin
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for reuseport_array maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/reuseport_array.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c
index 90b29c5b1da7..9d0161fdfec7 100644
--- a/kernel/bpf/reuseport_array.c
+++ b/kernel/bpf/reuseport_array.c
@@ -150,9 +150,8 @@ static void reuseport_array_free(struct bpf_map *map)
 
 static struct bpf_map *reuseport_array_alloc(union bpf_attr *attr)
 {
-	int err, numa_node = bpf_map_attr_numa_node(attr);
+	int numa_node = bpf_map_attr_numa_node(attr);
 	struct reuseport_array *array;
-	struct bpf_map_memory mem;
 	u64 array_size;
 
 	if (!bpf_capable())
@@ -161,20 +160,13 @@ static struct bpf_map *reuseport_array_alloc(union bpf_attr *attr)
 	array_size = sizeof(*array);
 	array_size += (u64)attr->max_entries * sizeof(struct sock *);
 
-	err = bpf_map_charge_init(&mem, array_size);
-	if (err)
-		return ERR_PTR(err);
-
 	/* allocate all map elements and zero-initialize them */
 	array = bpf_map_area_alloc(array_size, numa_node);
-	if (!array) {
-		bpf_map_charge_finish(&mem);
+	if (!array)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	/* copy mandatory map attributes */
 	bpf_map_init_from_attr(&array->map, attr);
-	bpf_map_charge_move(&array->map.memory, &mem);
 
 	return &array->map;
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH bpf-next v3 22/29] bpf: eliminate rlimit-based memory accounting for bpf ringbuffer
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (20 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 21/29] bpf: eliminate rlimit-based memory accounting for reuseport_array maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 23/29] bpf: eliminate rlimit-based memory accounting for sockmap and sockhash maps Roman Gushchin
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu, Andrii Nakryiko

Do not use rlimit-based memory accounting for bpf ringbuffer.
It has been replaced with the memcg-based memory accounting.

bpf_ringbuf_alloc() can't return anything except ERR_PTR(-ENOMEM)
or a valid pointer, so to simplify the code make it return NULL
in the failure case. This allows dropping a couple of lines in
ringbuf_map_alloc() and also makes it look similar to other
memory-allocating functions like kmalloc().

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
---
 kernel/bpf/ringbuf.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index e8e2c39cbdc9..e687b798d097 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -48,7 +48,6 @@ struct bpf_ringbuf {
 
 struct bpf_ringbuf_map {
 	struct bpf_map map;
-	struct bpf_map_memory memory;
 	struct bpf_ringbuf *rb;
 };
 
@@ -135,7 +134,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
 
 	rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
 	if (!rb)
-		return ERR_PTR(-ENOMEM);
+		return NULL;
 
 	spin_lock_init(&rb->spinlock);
 	init_waitqueue_head(&rb->waitq);
@@ -151,8 +150,6 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
 static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_ringbuf_map *rb_map;
-	u64 cost;
-	int err;
 
 	if (attr->map_flags & ~RINGBUF_CREATE_FLAG_MASK)
 		return ERR_PTR(-EINVAL);
@@ -174,26 +171,13 @@ static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
 
 	bpf_map_init_from_attr(&rb_map->map, attr);
 
-	cost = sizeof(struct bpf_ringbuf_map) +
-	       sizeof(struct bpf_ringbuf) +
-	       attr->max_entries;
-	err = bpf_map_charge_init(&rb_map->map.memory, cost);
-	if (err)
-		goto err_free_map;
-
 	rb_map->rb = bpf_ringbuf_alloc(attr->max_entries, rb_map->map.numa_node);
-	if (IS_ERR(rb_map->rb)) {
-		err = PTR_ERR(rb_map->rb);
-		goto err_uncharge;
+	if (!rb_map->rb) {
+		kfree(rb_map);
+		return ERR_PTR(-ENOMEM);
 	}
 
 	return &rb_map->map;
-
-err_uncharge:
-	bpf_map_charge_finish(&rb_map->map.memory);
-err_free_map:
-	kfree(rb_map);
-	return ERR_PTR(err);
 }
 
 static void bpf_ringbuf_free(struct bpf_ringbuf *rb)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
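
The calling-convention point in the commit message above can be illustrated with a
short sketch. When an allocator can fail in only one way, returning NULL lets callers
use a plain pointer check and pick the errno themselves, instead of going through
IS_ERR()/PTR_ERR(). The sketch below is illustrative only and is not taken from the
patch; the allocator and caller names are made up.

#include <linux/err.h>
#include <linux/slab.h>

/* Hypothetical allocator using the ERR_PTR() convention: callers must
 * distinguish error pointers from valid ones with IS_ERR()/PTR_ERR().
 */
static void *alloc_err_ptr_style(size_t size)
{
	void *p = kmalloc(size, GFP_KERNEL);

	return p ? p : ERR_PTR(-ENOMEM);
}

/* Hypothetical allocator using the NULL convention, like kmalloc() and,
 * after this patch, bpf_ringbuf_alloc(): the only possible error is -ENOMEM.
 */
static void *alloc_null_style(size_t size)
{
	return kmalloc(size, GFP_KERNEL);
}

static int caller_example(void)
{
	void *a, *b;

	a = alloc_err_ptr_style(64);
	if (IS_ERR(a))			/* multi-error convention */
		return PTR_ERR(a);

	b = alloc_null_style(64);
	if (!b) {			/* single-error convention */
		kfree(a);
		return -ENOMEM;
	}

	kfree(b);
	kfree(a);
	return 0;
}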

* [PATCH bpf-next v3 23/29] bpf: eliminate rlimit-based memory accounting for sockmap and sockhash maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (21 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 22/29] bpf: eliminate rlimit-based memory accounting for bpf ringbuffer Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 24/29] bpf: eliminate rlimit-based memory accounting for stackmap maps Roman Gushchin
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for sockmap and sockhash maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 net/core/sock_map.c | 33 ++++++---------------------------
 1 file changed, 6 insertions(+), 27 deletions(-)

diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index bc797adca44c..07c90baf8db1 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -26,8 +26,6 @@ struct bpf_stab {
 static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_stab *stab;
-	u64 cost;
-	int err;
 
 	if (!capable(CAP_NET_ADMIN))
 		return ERR_PTR(-EPERM);
@@ -45,22 +43,15 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
 	bpf_map_init_from_attr(&stab->map, attr);
 	raw_spin_lock_init(&stab->lock);
 
-	/* Make sure page count doesn't overflow. */
-	cost = (u64) stab->map.max_entries * sizeof(struct sock *);
-	err = bpf_map_charge_init(&stab->map.memory, cost);
-	if (err)
-		goto free_stab;
-
 	stab->sks = bpf_map_area_alloc(stab->map.max_entries *
 				       sizeof(struct sock *),
 				       stab->map.numa_node);
-	if (stab->sks)
-		return &stab->map;
-	err = -ENOMEM;
-	bpf_map_charge_finish(&stab->map.memory);
-free_stab:
-	kfree(stab);
-	return ERR_PTR(err);
+	if (!stab->sks) {
+		kfree(stab);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	return &stab->map;
 }
 
 int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog)
@@ -999,7 +990,6 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 {
 	struct bpf_shtab *htab;
 	int i, err;
-	u64 cost;
 
 	if (!capable(CAP_NET_ADMIN))
 		return ERR_PTR(-EPERM);
@@ -1027,21 +1017,10 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 		goto free_htab;
 	}
 
-	cost = (u64) htab->buckets_num * sizeof(struct bpf_shtab_bucket) +
-	       (u64) htab->elem_size * htab->map.max_entries;
-	if (cost >= U32_MAX - PAGE_SIZE) {
-		err = -EINVAL;
-		goto free_htab;
-	}
-	err = bpf_map_charge_init(&htab->map.memory, cost);
-	if (err)
-		goto free_htab;
-
 	htab->buckets = bpf_map_area_alloc(htab->buckets_num *
 					   sizeof(struct bpf_shtab_bucket),
 					   htab->map.numa_node);
 	if (!htab->buckets) {
-		bpf_map_charge_finish(&htab->map.memory);
 		err = -ENOMEM;
 		goto free_htab;
 	}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH bpf-next v3 24/29] bpf: eliminate rlimit-based memory accounting for stackmap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (22 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 23/29] bpf: eliminate rlimit-based memory accounting for sockmap and sockhash maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 25/29] bpf: eliminate rlimit-based memory accounting for socket storage maps Roman Gushchin
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for stackmap maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/stackmap.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 4fd830a62be2..c1f4972afbed 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -90,7 +90,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
 {
 	u32 value_size = attr->value_size;
 	struct bpf_stack_map *smap;
-	struct bpf_map_memory mem;
 	u64 cost, n_buckets;
 	int err;
 
@@ -119,15 +118,9 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
 
 	cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
 	cost += n_buckets * (value_size + sizeof(struct stack_map_bucket));
-	err = bpf_map_charge_init(&mem, cost);
-	if (err)
-		return ERR_PTR(err);
-
 	smap = bpf_map_area_alloc(cost, bpf_map_attr_numa_node(attr));
-	if (!smap) {
-		bpf_map_charge_finish(&mem);
+	if (!smap)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	bpf_map_init_from_attr(&smap->map, attr);
 	smap->map.value_size = value_size;
@@ -135,20 +128,17 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
 
 	err = get_callchain_buffers(sysctl_perf_event_max_stack);
 	if (err)
-		goto free_charge;
+		goto free_smap;
 
 	err = prealloc_elems_and_freelist(smap);
 	if (err)
 		goto put_buffers;
 
-	bpf_map_charge_move(&smap->map.memory, &mem);
-
 	return &smap->map;
 
 put_buffers:
 	put_callchain_buffers();
-free_charge:
-	bpf_map_charge_finish(&mem);
+free_smap:
 	bpf_map_area_free(smap);
 	return ERR_PTR(err);
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH bpf-next v3 25/29] bpf: eliminate rlimit-based memory accounting for socket storage maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (23 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 24/29] bpf: eliminate rlimit-based memory accounting for stackmap maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 26/29] bpf: eliminate rlimit-based memory accounting for xskmap maps Roman Gushchin
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for socket storage maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 net/core/bpf_sk_storage.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index c9b9cd2d11c5..11fbb71114ca 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -676,8 +676,6 @@ static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
 	struct bpf_sk_storage_map *smap;
 	unsigned int i;
 	u32 nbuckets;
-	u64 cost;
-	int ret;
 
 	smap = kzalloc(sizeof(*smap), GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!smap)
@@ -688,18 +686,9 @@ static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
 	/* Use at least 2 buckets, select_bucket() is undefined behavior with 1 bucket */
 	nbuckets = max_t(u32, 2, nbuckets);
 	smap->bucket_log = ilog2(nbuckets);
-	cost = sizeof(*smap->buckets) * nbuckets + sizeof(*smap);
-
-	ret = bpf_map_charge_init(&smap->map.memory, cost);
-	if (ret < 0) {
-		kfree(smap);
-		return ERR_PTR(ret);
-	}
-
 	smap->buckets = kvcalloc(sizeof(*smap->buckets), nbuckets,
 				 GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
 	if (!smap->buckets) {
-		bpf_map_charge_finish(&smap->map.memory);
 		kfree(smap);
 		return ERR_PTR(-ENOMEM);
 	}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH bpf-next v3 26/29] bpf: eliminate rlimit-based memory accounting for xskmap maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (24 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 25/29] bpf: eliminate rlimit-based memory accounting for socket storage maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 27/29] bpf: eliminate rlimit-based memory accounting infra for bpf maps Roman Gushchin
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for xskmap maps.
It has been replaced with the memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 net/xdp/xskmap.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index e574b22defe5..0366013f13c6 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -74,7 +74,6 @@ static void xsk_map_sock_delete(struct xdp_sock *xs,
 
 static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 {
-	struct bpf_map_memory mem;
 	int err, numa_node;
 	struct xsk_map *m;
 	u64 size;
@@ -90,18 +89,11 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 	numa_node = bpf_map_attr_numa_node(attr);
 	size = struct_size(m, xsk_map, attr->max_entries);
 
-	err = bpf_map_charge_init(&mem, size);
-	if (err < 0)
-		return ERR_PTR(err);
-
 	m = bpf_map_area_alloc(size, numa_node);
-	if (!m) {
-		bpf_map_charge_finish(&mem);
+	if (!m)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	bpf_map_init_from_attr(&m->map, attr);
-	bpf_map_charge_move(&m->map.memory, &mem);
 	spin_lock_init(&m->lock);
 
 	return &m->map;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH bpf-next v3 27/29] bpf: eliminate rlimit-based memory accounting infra for bpf maps
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (25 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 26/29] bpf: eliminate rlimit-based memory accounting for xskmap maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-31 22:47   ` Song Liu
  2020-07-30 21:23 ` [PATCH bpf-next v3 28/29] bpf: eliminate rlimit-based memory accounting for bpf progs Roman Gushchin
                   ` (2 subsequent siblings)
  29 siblings, 1 reply; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin

Remove rlimit-based accounting infrastructure code, which is not used
anymore.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/bpf.h                           | 12 ----
 kernel/bpf/syscall.c                          | 64 +------------------
 .../selftests/bpf/progs/map_ptr_kern.c        |  5 --
 3 files changed, 2 insertions(+), 79 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 40c5e206ecf2..25821dcf822a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -112,11 +112,6 @@ struct bpf_map_ops {
 	const struct bpf_iter_seq_info *iter_seq_info;
 };
 
-struct bpf_map_memory {
-	u32 pages;
-	struct user_struct *user;
-};
-
 struct bpf_map {
 	/* The first two cachelines with read-mostly members of which some
 	 * are also accessed in fast-path (e.g. ops, max_entries).
@@ -137,7 +132,6 @@ struct bpf_map {
 	u32 btf_key_type_id;
 	u32 btf_value_type_id;
 	struct btf *btf;
-	struct bpf_map_memory memory;
 	char name[BPF_OBJ_NAME_LEN];
 	u32 btf_vmlinux_value_type_id;
 	bool bypass_spec_v1;
@@ -1143,12 +1137,6 @@ void bpf_map_inc_with_uref(struct bpf_map *map);
 struct bpf_map * __must_check bpf_map_inc_not_zero(struct bpf_map *map);
 void bpf_map_put_with_uref(struct bpf_map *map);
 void bpf_map_put(struct bpf_map *map);
-int bpf_map_charge_memlock(struct bpf_map *map, u32 pages);
-void bpf_map_uncharge_memlock(struct bpf_map *map, u32 pages);
-int bpf_map_charge_init(struct bpf_map_memory *mem, u64 size);
-void bpf_map_charge_finish(struct bpf_map_memory *mem);
-void bpf_map_charge_move(struct bpf_map_memory *dst,
-			 struct bpf_map_memory *src);
 void *bpf_map_area_alloc(u64 size, int numa_node);
 void *bpf_map_area_mmapable_alloc(u64 size, int numa_node);
 void bpf_map_area_free(void *base);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index a53e7aff3efc..a204a9e4a2cb 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -354,60 +354,6 @@ static void bpf_uncharge_memlock(struct user_struct *user, u32 pages)
 		atomic_long_sub(pages, &user->locked_vm);
 }
 
-int bpf_map_charge_init(struct bpf_map_memory *mem, u64 size)
-{
-	u32 pages = round_up(size, PAGE_SIZE) >> PAGE_SHIFT;
-	struct user_struct *user;
-	int ret;
-
-	if (size >= U32_MAX - PAGE_SIZE)
-		return -E2BIG;
-
-	user = get_current_user();
-	ret = bpf_charge_memlock(user, pages);
-	if (ret) {
-		free_uid(user);
-		return ret;
-	}
-
-	mem->pages = pages;
-	mem->user = user;
-
-	return 0;
-}
-
-void bpf_map_charge_finish(struct bpf_map_memory *mem)
-{
-	bpf_uncharge_memlock(mem->user, mem->pages);
-	free_uid(mem->user);
-}
-
-void bpf_map_charge_move(struct bpf_map_memory *dst,
-			 struct bpf_map_memory *src)
-{
-	*dst = *src;
-
-	/* Make sure src will not be used for the redundant uncharging. */
-	memset(src, 0, sizeof(struct bpf_map_memory));
-}
-
-int bpf_map_charge_memlock(struct bpf_map *map, u32 pages)
-{
-	int ret;
-
-	ret = bpf_charge_memlock(map->memory.user, pages);
-	if (ret)
-		return ret;
-	map->memory.pages += pages;
-	return ret;
-}
-
-void bpf_map_uncharge_memlock(struct bpf_map *map, u32 pages)
-{
-	bpf_uncharge_memlock(map->memory.user, pages);
-	map->memory.pages -= pages;
-}
-
 static int bpf_map_alloc_id(struct bpf_map *map)
 {
 	int id;
@@ -456,13 +402,10 @@ void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock)
 static void bpf_map_free_deferred(struct work_struct *work)
 {
 	struct bpf_map *map = container_of(work, struct bpf_map, work);
-	struct bpf_map_memory mem;
 
-	bpf_map_charge_move(&mem, &map->memory);
 	security_bpf_map_free(map);
 	/* implementation dependent freeing */
 	map->ops->map_free(map);
-	bpf_map_charge_finish(&mem);
 }
 
 static void bpf_map_put_uref(struct bpf_map *map)
@@ -541,7 +484,7 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
 		   "value_size:\t%u\n"
 		   "max_entries:\t%u\n"
 		   "map_flags:\t%#x\n"
-		   "memlock:\t%llu\n"
+		   "memlock:\t%llu\n" /* deprecated */
 		   "map_id:\t%u\n"
 		   "frozen:\t%u\n",
 		   map->map_type,
@@ -549,7 +492,7 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
 		   map->value_size,
 		   map->max_entries,
 		   map->map_flags,
-		   map->memory.pages * 1ULL << PAGE_SHIFT,
+		   0LLU,
 		   map->id,
 		   READ_ONCE(map->frozen));
 	if (type) {
@@ -790,7 +733,6 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
 static int map_create(union bpf_attr *attr)
 {
 	int numa_node = bpf_map_attr_numa_node(attr);
-	struct bpf_map_memory mem;
 	struct bpf_map *map;
 	int f_flags;
 	int err;
@@ -887,9 +829,7 @@ static int map_create(union bpf_attr *attr)
 	security_bpf_map_free(map);
 free_map:
 	btf_put(map->btf);
-	bpf_map_charge_move(&mem, &map->memory);
 	map->ops->map_free(map);
-	bpf_map_charge_finish(&mem);
 	return err;
 }
 
diff --git a/tools/testing/selftests/bpf/progs/map_ptr_kern.c b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
index 473665cac67e..49d1dcaf7999 100644
--- a/tools/testing/selftests/bpf/progs/map_ptr_kern.c
+++ b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
@@ -26,17 +26,12 @@ __u32 g_line = 0;
 		return 0;	\
 })
 
-struct bpf_map_memory {
-	__u32 pages;
-} __attribute__((preserve_access_index));
-
 struct bpf_map {
 	enum bpf_map_type map_type;
 	__u32 key_size;
 	__u32 value_size;
 	__u32 max_entries;
 	__u32 id;
-	struct bpf_map_memory memory;
 } __attribute__((preserve_access_index));
 
 static inline int check_bpf_map_fields(struct bpf_map *map, __u32 key_size,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
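
The fdinfo change in the hunk above means the "memlock" field keeps its place in the
file but always reports 0 once the rlimit-based counter is gone. Below is a minimal
userspace sketch of what a tool reading that field would see. It is not part of the
patchset; the map parameters are arbitrary, and creating the map may require
CAP_BPF/CAP_SYS_ADMIN depending on sysctl settings.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

int main(void)
{
	union bpf_attr attr;
	char path[64], line[256];
	FILE *f;
	int fd;

	/* create a small array map via the raw bpf() syscall */
	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_ARRAY;
	attr.key_size = 4;
	attr.value_size = 8;
	attr.max_entries = 1024;

	fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
	if (fd < 0) {
		perror("BPF_MAP_CREATE");
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
	f = fopen(path, "r");
	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "memlock:", 8))
			fputs(line, stdout);	/* prints "memlock:  0" after this patch */
	fclose(f);
	return 0;
}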

* [PATCH bpf-next v3 28/29] bpf: eliminate rlimit-based memory accounting for bpf progs
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (26 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 27/29] bpf: eliminate rlimit-based memory accounting infra for bpf maps Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-07-30 21:23 ` [PATCH bpf-next v3 29/29] bpf: samples: do not touch RLIMIT_MEMLOCK Roman Gushchin
  2020-08-03 12:05 ` [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Daniel Borkmann
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Do not use rlimit-based memory accounting for bpf progs. It has been
replaced with memcg-based memory accounting.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 include/linux/bpf.h  | 11 ------
 kernel/bpf/core.c    | 12 ++-----
 kernel/bpf/syscall.c | 86 ++++++--------------------------------------
 3 files changed, 12 insertions(+), 97 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 25821dcf822a..55d87febea76 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1121,8 +1121,6 @@ void bpf_prog_sub(struct bpf_prog *prog, int i);
 void bpf_prog_inc(struct bpf_prog *prog);
 struct bpf_prog * __must_check bpf_prog_inc_not_zero(struct bpf_prog *prog);
 void bpf_prog_put(struct bpf_prog *prog);
-int __bpf_prog_charge(struct user_struct *user, u32 pages);
-void __bpf_prog_uncharge(struct user_struct *user, u32 pages);
 void __bpf_free_used_maps(struct bpf_prog_aux *aux,
 			  struct bpf_map **used_maps, u32 len);
 
@@ -1380,15 +1378,6 @@ bpf_prog_inc_not_zero(struct bpf_prog *prog)
 	return ERR_PTR(-EOPNOTSUPP);
 }
 
-static inline int __bpf_prog_charge(struct user_struct *user, u32 pages)
-{
-	return 0;
-}
-
-static inline void __bpf_prog_uncharge(struct user_struct *user, u32 pages)
-{
-}
-
 static inline void bpf_link_init(struct bpf_link *link, enum bpf_link_type type,
 				 const struct bpf_link_ops *ops,
 				 struct bpf_prog *prog)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index daab8dcafbd4..23b8ff109ac8 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -219,23 +219,15 @@ struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
 {
 	gfp_t gfp_flags = GFP_KERNEL_ACCOUNT | __GFP_ZERO | gfp_extra_flags;
 	struct bpf_prog *fp;
-	u32 pages, delta;
-	int ret;
+	u32 pages;
 
 	size = round_up(size, PAGE_SIZE);
 	pages = size / PAGE_SIZE;
 	if (pages <= fp_old->pages)
 		return fp_old;
 
-	delta = pages - fp_old->pages;
-	ret = __bpf_prog_charge(fp_old->aux->user, delta);
-	if (ret)
-		return NULL;
-
 	fp = __vmalloc(size, gfp_flags);
-	if (fp == NULL) {
-		__bpf_prog_uncharge(fp_old->aux->user, delta);
-	} else {
+	if (fp) {
 		memcpy(fp, fp_old, fp_old->pages * PAGE_SIZE);
 		fp->pages = pages;
 		fp->aux->prog = fp;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index a204a9e4a2cb..7c2b7ed5540e 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -337,23 +337,6 @@ void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr)
 	map->numa_node = bpf_map_attr_numa_node(attr);
 }
 
-static int bpf_charge_memlock(struct user_struct *user, u32 pages)
-{
-	unsigned long memlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
-	if (atomic_long_add_return(pages, &user->locked_vm) > memlock_limit) {
-		atomic_long_sub(pages, &user->locked_vm);
-		return -EPERM;
-	}
-	return 0;
-}
-
-static void bpf_uncharge_memlock(struct user_struct *user, u32 pages)
-{
-	if (user)
-		atomic_long_sub(pages, &user->locked_vm);
-}
-
 static int bpf_map_alloc_id(struct bpf_map *map)
 {
 	int id;
@@ -1563,51 +1546,6 @@ static void bpf_audit_prog(const struct bpf_prog *prog, unsigned int op)
 	audit_log_end(ab);
 }
 
-int __bpf_prog_charge(struct user_struct *user, u32 pages)
-{
-	unsigned long memlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-	unsigned long user_bufs;
-
-	if (user) {
-		user_bufs = atomic_long_add_return(pages, &user->locked_vm);
-		if (user_bufs > memlock_limit) {
-			atomic_long_sub(pages, &user->locked_vm);
-			return -EPERM;
-		}
-	}
-
-	return 0;
-}
-
-void __bpf_prog_uncharge(struct user_struct *user, u32 pages)
-{
-	if (user)
-		atomic_long_sub(pages, &user->locked_vm);
-}
-
-static int bpf_prog_charge_memlock(struct bpf_prog *prog)
-{
-	struct user_struct *user = get_current_user();
-	int ret;
-
-	ret = __bpf_prog_charge(user, prog->pages);
-	if (ret) {
-		free_uid(user);
-		return ret;
-	}
-
-	prog->aux->user = user;
-	return 0;
-}
-
-static void bpf_prog_uncharge_memlock(struct bpf_prog *prog)
-{
-	struct user_struct *user = prog->aux->user;
-
-	__bpf_prog_uncharge(user, prog->pages);
-	free_uid(user);
-}
-
 static int bpf_prog_alloc_id(struct bpf_prog *prog)
 {
 	int id;
@@ -1657,7 +1595,7 @@ static void __bpf_prog_put_rcu(struct rcu_head *rcu)
 
 	kvfree(aux->func_info);
 	kfree(aux->func_info_aux);
-	bpf_prog_uncharge_memlock(aux->prog);
+	free_uid(aux->user);
 	security_bpf_prog_free(aux);
 	bpf_prog_free(aux->prog);
 }
@@ -2090,7 +2028,7 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 		tgt_prog = bpf_prog_get(attr->attach_prog_fd);
 		if (IS_ERR(tgt_prog)) {
 			err = PTR_ERR(tgt_prog);
-			goto free_prog_nouncharge;
+			goto free_prog;
 		}
 		prog->aux->linked_prog = tgt_prog;
 	}
@@ -2099,18 +2037,15 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 
 	err = security_bpf_prog_alloc(prog->aux);
 	if (err)
-		goto free_prog_nouncharge;
-
-	err = bpf_prog_charge_memlock(prog);
-	if (err)
-		goto free_prog_sec;
+		goto free_prog;
 
+	prog->aux->user = get_current_user();
 	prog->len = attr->insn_cnt;
 
 	err = -EFAULT;
 	if (copy_from_user(prog->insns, u64_to_user_ptr(attr->insns),
 			   bpf_prog_insn_size(prog)) != 0)
-		goto free_prog;
+		goto free_prog_sec;
 
 	prog->orig_prog = NULL;
 	prog->jited = 0;
@@ -2121,19 +2056,19 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 	if (bpf_prog_is_dev_bound(prog->aux)) {
 		err = bpf_prog_offload_init(prog, attr);
 		if (err)
-			goto free_prog;
+			goto free_prog_sec;
 	}
 
 	/* find program type: socket_filter vs tracing_filter */
 	err = find_prog_type(type, prog);
 	if (err < 0)
-		goto free_prog;
+		goto free_prog_sec;
 
 	prog->aux->load_time = ktime_get_boottime_ns();
 	err = bpf_obj_name_cpy(prog->aux->name, attr->prog_name,
 			       sizeof(attr->prog_name));
 	if (err < 0)
-		goto free_prog;
+		goto free_prog_sec;
 
 	/* run eBPF verifier */
 	err = bpf_check(&prog, attr, uattr);
@@ -2178,11 +2113,10 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 	 */
 	__bpf_prog_put_noref(prog, prog->aux->func_cnt);
 	return err;
-free_prog:
-	bpf_prog_uncharge_memlock(prog);
 free_prog_sec:
+	free_uid(prog->aux->user);
 	security_bpf_prog_free(prog->aux);
-free_prog_nouncharge:
+free_prog:
 	bpf_prog_free(prog);
 	return err;
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH bpf-next v3 29/29] bpf: samples: do not touch RLIMIT_MEMLOCK
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (27 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 28/29] bpf: eliminate rlimit-based memory accounting for bpf progs Roman Gushchin
@ 2020-07-30 21:23 ` Roman Gushchin
  2020-08-03 12:05 ` [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Daniel Borkmann
  29 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-07-30 21:23 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	linux-kernel, Roman Gushchin, Song Liu

Since bpf no longer uses the memlock rlimit for memory accounting
and control, do not change the limit in the sample applications.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 samples/bpf/map_perf_test_user.c    | 11 -----------
 samples/bpf/offwaketime_user.c      |  2 --
 samples/bpf/sockex2_user.c          |  2 --
 samples/bpf/sockex3_user.c          |  2 --
 samples/bpf/spintest_user.c         |  2 --
 samples/bpf/syscall_tp_user.c       |  2 --
 samples/bpf/task_fd_query_user.c    |  5 -----
 samples/bpf/test_lru_dist.c         |  3 ---
 samples/bpf/test_map_in_map_user.c  |  9 ---------
 samples/bpf/test_overhead_user.c    |  2 --
 samples/bpf/trace_event_user.c      |  2 --
 samples/bpf/tracex2_user.c          |  6 ------
 samples/bpf/tracex3_user.c          |  6 ------
 samples/bpf/tracex4_user.c          |  6 ------
 samples/bpf/tracex5_user.c          |  3 ---
 samples/bpf/tracex6_user.c          |  3 ---
 samples/bpf/xdp1_user.c             |  6 ------
 samples/bpf/xdp_adjust_tail_user.c  |  6 ------
 samples/bpf/xdp_monitor_user.c      |  6 ------
 samples/bpf/xdp_redirect_cpu_user.c |  6 ------
 samples/bpf/xdp_redirect_map_user.c |  6 ------
 samples/bpf/xdp_redirect_user.c     |  6 ------
 samples/bpf/xdp_router_ipv4_user.c  |  6 ------
 samples/bpf/xdp_rxq_info_user.c     |  6 ------
 samples/bpf/xdp_sample_pkts_user.c  |  6 ------
 samples/bpf/xdp_tx_iptunnel_user.c  |  6 ------
 samples/bpf/xdpsock_user.c          |  7 -------
 27 files changed, 133 deletions(-)

diff --git a/samples/bpf/map_perf_test_user.c b/samples/bpf/map_perf_test_user.c
index 8b13230b4c46..4c198bc55beb 100644
--- a/samples/bpf/map_perf_test_user.c
+++ b/samples/bpf/map_perf_test_user.c
@@ -421,20 +421,9 @@ static void fixup_map(struct bpf_object *obj)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
-	int nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
-	struct bpf_link *links[8];
-	struct bpf_program *prog;
-	struct bpf_object *obj;
-	struct bpf_map *map;
 	char filename[256];
 	int i = 0;
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	if (argc > 1)
 		test_flags = atoi(argv[1]) ? : test_flags;
 
diff --git a/samples/bpf/offwaketime_user.c b/samples/bpf/offwaketime_user.c
index 51c7da5341cc..9e51dd011a2a 100644
--- a/samples/bpf/offwaketime_user.c
+++ b/samples/bpf/offwaketime_user.c
@@ -95,12 +95,10 @@ static void int_exit(int sig)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	char filename[256];
 	int delay = 1;
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	setrlimit(RLIMIT_MEMLOCK, &r);
 
 	signal(SIGINT, int_exit);
 	signal(SIGTERM, int_exit);
diff --git a/samples/bpf/sockex2_user.c b/samples/bpf/sockex2_user.c
index af925a5afd1d..bafa567b840c 100644
--- a/samples/bpf/sockex2_user.c
+++ b/samples/bpf/sockex2_user.c
@@ -16,7 +16,6 @@ struct pair {
 
 int main(int ac, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_object *obj;
 	int map_fd, prog_fd;
 	char filename[256];
@@ -24,7 +23,6 @@ int main(int ac, char **argv)
 	FILE *f;
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	setrlimit(RLIMIT_MEMLOCK, &r);
 
 	if (bpf_prog_load(filename, BPF_PROG_TYPE_SOCKET_FILTER,
 			  &obj, &prog_fd))
diff --git a/samples/bpf/sockex3_user.c b/samples/bpf/sockex3_user.c
index 4dbee7427d47..6ee7b7a4b9b7 100644
--- a/samples/bpf/sockex3_user.c
+++ b/samples/bpf/sockex3_user.c
@@ -26,7 +26,6 @@ struct pair {
 int main(int argc, char **argv)
 {
 	int i, sock, key, fd, main_prog_fd, jmp_table_fd, hash_map_fd;
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_program *prog;
 	struct bpf_object *obj;
 	char filename[256];
@@ -34,7 +33,6 @@ int main(int argc, char **argv)
 	FILE *f;
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	setrlimit(RLIMIT_MEMLOCK, &r);
 
 	obj = bpf_object__open_file(filename, NULL);
 	if (libbpf_get_error(obj)) {
diff --git a/samples/bpf/spintest_user.c b/samples/bpf/spintest_user.c
index fb430ea2ef51..458f1439e670 100644
--- a/samples/bpf/spintest_user.c
+++ b/samples/bpf/spintest_user.c
@@ -11,14 +11,12 @@
 
 int main(int ac, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	long key, next_key, value;
 	char filename[256];
 	struct ksym *sym;
 	int i;
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	setrlimit(RLIMIT_MEMLOCK, &r);
 
 	if (load_kallsyms()) {
 		printf("failed to process /proc/kallsyms\n");
diff --git a/samples/bpf/syscall_tp_user.c b/samples/bpf/syscall_tp_user.c
index 57014bab7cbe..caa3891ee774 100644
--- a/samples/bpf/syscall_tp_user.c
+++ b/samples/bpf/syscall_tp_user.c
@@ -85,7 +85,6 @@ static int test(char *filename, int num_progs)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	int opt, num_progs = 1;
 	char filename[256];
 
@@ -101,7 +100,6 @@ int main(int argc, char **argv)
 		}
 	}
 
-	setrlimit(RLIMIT_MEMLOCK, &r);
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 
 	return test(filename, num_progs);
diff --git a/samples/bpf/task_fd_query_user.c b/samples/bpf/task_fd_query_user.c
index ff2e9c1c7266..e2c1cacb781c 100644
--- a/samples/bpf/task_fd_query_user.c
+++ b/samples/bpf/task_fd_query_user.c
@@ -290,16 +290,11 @@ static int test_debug_fs_uprobe(char *binary_path, long offset, bool is_return)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {1024*1024, RLIM_INFINITY};
 	extern char __executable_start;
 	char filename[256], buf[256];
 	__u64 uprobe_file_offset;
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
 
 	if (load_kallsyms()) {
 		printf("failed to process /proc/kallsyms\n");
diff --git a/samples/bpf/test_lru_dist.c b/samples/bpf/test_lru_dist.c
index b313dba4111b..c92c5c06b965 100644
--- a/samples/bpf/test_lru_dist.c
+++ b/samples/bpf/test_lru_dist.c
@@ -489,7 +489,6 @@ static void test_parallel_lru_loss(int map_type, int map_flags, int nr_tasks)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	int map_flags[] = {0, BPF_F_NO_COMMON_LRU};
 	const char *dist_file;
 	int nr_tasks = 1;
@@ -508,8 +507,6 @@ int main(int argc, char **argv)
 
 	setbuf(stdout, NULL);
 
-	assert(!setrlimit(RLIMIT_MEMLOCK, &r));
-
 	srand(time(NULL));
 
 	nr_cpus = bpf_num_possible_cpus();
diff --git a/samples/bpf/test_map_in_map_user.c b/samples/bpf/test_map_in_map_user.c
index 98656de56b83..0e65753a157a 100644
--- a/samples/bpf/test_map_in_map_user.c
+++ b/samples/bpf/test_map_in_map_user.c
@@ -114,17 +114,8 @@ static void test_map_in_map(void)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
-	struct bpf_link *link = NULL;
-	struct bpf_program *prog;
-	struct bpf_object *obj;
 	char filename[256];
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	obj = bpf_object__open_file(filename, NULL);
 	if (libbpf_get_error(obj)) {
diff --git a/samples/bpf/test_overhead_user.c b/samples/bpf/test_overhead_user.c
index 94f74112a20e..c100fd46cd8a 100644
--- a/samples/bpf/test_overhead_user.c
+++ b/samples/bpf/test_overhead_user.c
@@ -125,12 +125,10 @@ static void unload_progs(void)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	char filename[256];
 	int num_cpu = 8;
 	int test_flags = ~0;
 
-	setrlimit(RLIMIT_MEMLOCK, &r);
 
 	if (argc > 1)
 		test_flags = atoi(argv[1]) ? : test_flags;
diff --git a/samples/bpf/trace_event_user.c b/samples/bpf/trace_event_user.c
index ac1ba368195c..9664749bf618 100644
--- a/samples/bpf/trace_event_user.c
+++ b/samples/bpf/trace_event_user.c
@@ -294,13 +294,11 @@ static void test_bpf_perf_event(void)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_object *obj = NULL;
 	char filename[256];
 	int error = 1;
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	setrlimit(RLIMIT_MEMLOCK, &r);
 
 	signal(SIGINT, err_exit);
 	signal(SIGTERM, err_exit);
diff --git a/samples/bpf/tracex2_user.c b/samples/bpf/tracex2_user.c
index 3e36b3e4e3ef..1626d51dfffd 100644
--- a/samples/bpf/tracex2_user.c
+++ b/samples/bpf/tracex2_user.c
@@ -116,7 +116,6 @@ static void int_exit(int sig)
 
 int main(int ac, char **argv)
 {
-	struct rlimit r = {1024*1024, RLIM_INFINITY};
 	long key, next_key, value;
 	struct bpf_link *links[2];
 	struct bpf_program *prog;
@@ -125,11 +124,6 @@ int main(int ac, char **argv)
 	int i, j = 0;
 	FILE *f;
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	obj = bpf_object__open_file(filename, NULL);
 	if (libbpf_get_error(obj)) {
diff --git a/samples/bpf/tracex3_user.c b/samples/bpf/tracex3_user.c
index 70e987775c15..33e16ba39f25 100644
--- a/samples/bpf/tracex3_user.c
+++ b/samples/bpf/tracex3_user.c
@@ -107,7 +107,6 @@ static void print_hist(int fd)
 
 int main(int ac, char **argv)
 {
-	struct rlimit r = {1024*1024, RLIM_INFINITY};
 	struct bpf_link *links[2];
 	struct bpf_program *prog;
 	struct bpf_object *obj;
@@ -127,11 +126,6 @@ int main(int ac, char **argv)
 		}
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	obj = bpf_object__open_file(filename, NULL);
 	if (libbpf_get_error(obj)) {
diff --git a/samples/bpf/tracex4_user.c b/samples/bpf/tracex4_user.c
index e8faf8f184ae..cea399424bca 100644
--- a/samples/bpf/tracex4_user.c
+++ b/samples/bpf/tracex4_user.c
@@ -48,18 +48,12 @@ static void print_old_objects(int fd)
 
 int main(int ac, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_link *links[2];
 	struct bpf_program *prog;
 	struct bpf_object *obj;
 	char filename[256];
 	int map_fd, i, j = 0;
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK, RLIM_INFINITY)");
-		return 1;
-	}
-
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	obj = bpf_object__open_file(filename, NULL);
 	if (libbpf_get_error(obj)) {
diff --git a/samples/bpf/tracex5_user.c b/samples/bpf/tracex5_user.c
index 98dad57a96c4..1549fa3ec65c 100644
--- a/samples/bpf/tracex5_user.c
+++ b/samples/bpf/tracex5_user.c
@@ -34,7 +34,6 @@ static void install_accept_all_seccomp(void)
 
 int main(int ac, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_link *link = NULL;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
@@ -43,8 +42,6 @@ int main(int ac, char **argv)
 	const char *title;
 	FILE *f;
 
-	setrlimit(RLIMIT_MEMLOCK, &r);
-
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	obj = bpf_object__open_file(filename, NULL);
 	if (libbpf_get_error(obj)) {
diff --git a/samples/bpf/tracex6_user.c b/samples/bpf/tracex6_user.c
index 33df9784775d..28296f40c133 100644
--- a/samples/bpf/tracex6_user.c
+++ b/samples/bpf/tracex6_user.c
@@ -175,15 +175,12 @@ static void test_bpf_perf_event(void)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_link *links[2];
 	struct bpf_program *prog;
 	struct bpf_object *obj;
 	char filename[256];
 	int i = 0;
 
-	setrlimit(RLIMIT_MEMLOCK, &r);
-
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	obj = bpf_object__open_file(filename, NULL);
 	if (libbpf_get_error(obj)) {
diff --git a/samples/bpf/xdp1_user.c b/samples/bpf/xdp1_user.c
index c447ad9e3a1d..116e39f6b666 100644
--- a/samples/bpf/xdp1_user.c
+++ b/samples/bpf/xdp1_user.c
@@ -79,7 +79,6 @@ static void usage(const char *prog)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
@@ -117,11 +116,6 @@ int main(int argc, char **argv)
 		return 1;
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	ifindex = if_nametoindex(argv[optind]);
 	if (!ifindex) {
 		perror("if_nametoindex");
diff --git a/samples/bpf/xdp_adjust_tail_user.c b/samples/bpf/xdp_adjust_tail_user.c
index ba482dc3da33..a70b094c8ec5 100644
--- a/samples/bpf/xdp_adjust_tail_user.c
+++ b/samples/bpf/xdp_adjust_tail_user.c
@@ -82,7 +82,6 @@ static void usage(const char *cmd)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
@@ -143,11 +142,6 @@ int main(int argc, char **argv)
 		}
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK, RLIM_INFINITY)");
-		return 1;
-	}
-
 	if (!ifindex) {
 		fprintf(stderr, "Invalid ifname\n");
 		return 1;
diff --git a/samples/bpf/xdp_monitor_user.c b/samples/bpf/xdp_monitor_user.c
index ef53b93db573..25e6a24f8d7b 100644
--- a/samples/bpf/xdp_monitor_user.c
+++ b/samples/bpf/xdp_monitor_user.c
@@ -645,7 +645,6 @@ static void print_bpf_prog_info(void)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	int longindex = 0, opt;
 	int ret = EXIT_SUCCESS;
 	char bpf_obj_file[256];
@@ -676,11 +675,6 @@ int main(int argc, char **argv)
 		}
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return EXIT_FAILURE;
-	}
-
 	if (load_bpf_file(bpf_obj_file)) {
 		printf("ERROR - bpf_log_buf: %s", bpf_log_buf);
 		return EXIT_FAILURE;
diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
index 004c0622c913..6773027b2a89 100644
--- a/samples/bpf/xdp_redirect_cpu_user.c
+++ b/samples/bpf/xdp_redirect_cpu_user.c
@@ -779,7 +779,6 @@ static int load_cpumap_prog(char *file_name, char *prog_name,
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {10 * 1024 * 1024, RLIM_INFINITY};
 	char *prog_name = "xdp_cpu_map5_lb_hash_ip_pairs";
 	char *mprog_filename = "xdp_redirect_kern.o";
 	char *redir_interface = NULL, *redir_map = NULL;
@@ -818,11 +817,6 @@ int main(int argc, char **argv)
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	prog_load_attr.file = filename;
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd))
 		return EXIT_FAIL;
 
diff --git a/samples/bpf/xdp_redirect_map_user.c b/samples/bpf/xdp_redirect_map_user.c
index 35e16dee613e..31131b6e7782 100644
--- a/samples/bpf/xdp_redirect_map_user.c
+++ b/samples/bpf/xdp_redirect_map_user.c
@@ -96,7 +96,6 @@ static void usage(const char *prog)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
@@ -135,11 +134,6 @@ int main(int argc, char **argv)
 		return 1;
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	ifindex_in = if_nametoindex(argv[optind]);
 	if (!ifindex_in)
 		ifindex_in = strtoul(argv[optind], NULL, 0);
diff --git a/samples/bpf/xdp_redirect_user.c b/samples/bpf/xdp_redirect_user.c
index 9ca2bf457cda..41d705c3a1f7 100644
--- a/samples/bpf/xdp_redirect_user.c
+++ b/samples/bpf/xdp_redirect_user.c
@@ -97,7 +97,6 @@ static void usage(const char *prog)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
@@ -136,11 +135,6 @@ int main(int argc, char **argv)
 		return 1;
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	ifindex_in = if_nametoindex(argv[optind]);
 	if (!ifindex_in)
 		ifindex_in = strtoul(argv[optind], NULL, 0);
diff --git a/samples/bpf/xdp_router_ipv4_user.c b/samples/bpf/xdp_router_ipv4_user.c
index c2da1b51ff95..b5f03cb17a3c 100644
--- a/samples/bpf/xdp_router_ipv4_user.c
+++ b/samples/bpf/xdp_router_ipv4_user.c
@@ -625,7 +625,6 @@ static void usage(const char *prog)
 
 int main(int ac, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
@@ -670,11 +669,6 @@ int main(int ac, char **argv)
 		return 1;
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd))
 		return 1;
 
diff --git a/samples/bpf/xdp_rxq_info_user.c b/samples/bpf/xdp_rxq_info_user.c
index caa4e7ffcfc7..74a2926eba08 100644
--- a/samples/bpf/xdp_rxq_info_user.c
+++ b/samples/bpf/xdp_rxq_info_user.c
@@ -450,7 +450,6 @@ static void stats_poll(int interval, int action, __u32 cfg_opt)
 int main(int argc, char **argv)
 {
 	__u32 cfg_options= NO_TOUCH ; /* Default: Don't touch packet memory */
-	struct rlimit r = {10 * 1024 * 1024, RLIM_INFINITY};
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
@@ -474,11 +473,6 @@ int main(int argc, char **argv)
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	prog_load_attr.file = filename;
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd))
 		return EXIT_FAIL;
 
diff --git a/samples/bpf/xdp_sample_pkts_user.c b/samples/bpf/xdp_sample_pkts_user.c
index 991ef6f0880b..551c6839f593 100644
--- a/samples/bpf/xdp_sample_pkts_user.c
+++ b/samples/bpf/xdp_sample_pkts_user.c
@@ -110,7 +110,6 @@ static void usage(const char *prog)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
@@ -144,11 +143,6 @@ int main(int argc, char **argv)
 		return 1;
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}
-
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	prog_load_attr.file = filename;
 
diff --git a/samples/bpf/xdp_tx_iptunnel_user.c b/samples/bpf/xdp_tx_iptunnel_user.c
index a419bee151a8..1d4f305d02aa 100644
--- a/samples/bpf/xdp_tx_iptunnel_user.c
+++ b/samples/bpf/xdp_tx_iptunnel_user.c
@@ -155,7 +155,6 @@ int main(int argc, char **argv)
 	struct bpf_prog_load_attr prog_load_attr = {
 		.prog_type	= BPF_PROG_TYPE_XDP,
 	};
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	int min_port = 0, max_port = 0, vip2tnl_map_fd;
 	const char *optstr = "i:a:p:s:d:m:T:P:FSNh";
 	unsigned char opt_flags[256] = {};
@@ -254,11 +253,6 @@ int main(int argc, char **argv)
 		}
 	}
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK, RLIM_INFINITY)");
-		return 1;
-	}
-
 	if (!ifindex) {
 		fprintf(stderr, "Invalid ifname\n");
 		return 1;
diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index 19c679456a0e..b3bd60433546 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -1216,7 +1216,6 @@ static void enter_xsks_into_map(struct bpf_object *obj)
 
 int main(int argc, char **argv)
 {
-	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
 	bool rx = false, tx = false;
 	struct xsk_umem_info *umem;
 	struct bpf_object *obj;
@@ -1226,12 +1225,6 @@ int main(int argc, char **argv)
 
 	parse_command_line(argc, argv);
 
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		fprintf(stderr, "ERROR: setrlimit(RLIMIT_MEMLOCK) \"%s\"\n",
-			strerror(errno));
-		exit(EXIT_FAILURE);
-	}
-
 	if (opt_num_xsks > 1)
 		load_xdp_program(argv, &obj);
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v3 27/29] bpf: eliminate rlimit-based memory accounting infra for bpf maps
  2020-07-30 21:23 ` [PATCH bpf-next v3 27/29] bpf: eliminate rlimit-based memory accounting infra for bpf maps Roman Gushchin
@ 2020-07-31 22:47   ` Song Liu
  0 siblings, 0 replies; 38+ messages in thread
From: Song Liu @ 2020-07-31 22:47 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Kernel Team, open list

On Thu, Jul 30, 2020 at 2:27 PM Roman Gushchin <guro@fb.com> wrote:
>
> Remove rlimit-based accounting infrastructure code, which is not used
> anymore.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

The code is good, so

Acked-by: Song Liu <songliubraving@fb.com>

However, I am still nervous as we deprecate memlock.

> ---
>  include/linux/bpf.h                           | 12 ----
>  kernel/bpf/syscall.c                          | 64 +------------------
>  .../selftests/bpf/progs/map_ptr_kern.c        |  5 --
>  3 files changed, 2 insertions(+), 79 deletions(-)

[...]

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v3 01/29] bpf: memcg-based memory accounting for bpf progs
  2020-07-30 21:22 ` [PATCH bpf-next v3 01/29] bpf: memcg-based memory accounting for bpf progs Roman Gushchin
@ 2020-07-31 22:48   ` Song Liu
  0 siblings, 0 replies; 38+ messages in thread
From: Song Liu @ 2020-07-31 22:48 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Kernel Team, open list

On Thu, Jul 30, 2020 at 2:28 PM Roman Gushchin <guro@fb.com> wrote:
>
> Include memory used by bpf programs into the memcg-based accounting.
> This includes the memory used by the programs themselves, auxiliary data
> and statistics.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting
  2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
                   ` (28 preceding siblings ...)
  2020-07-30 21:23 ` [PATCH bpf-next v3 29/29] bpf: samples: do not touch RLIMIT_MEMLOCK Roman Gushchin
@ 2020-08-03 12:05 ` Daniel Borkmann
  2020-08-03 15:34   ` Roman Gushchin
  29 siblings, 1 reply; 38+ messages in thread
From: Daniel Borkmann @ 2020-08-03 12:05 UTC (permalink / raw)
  To: Roman Gushchin, bpf; +Cc: netdev, Alexei Starovoitov, kernel-team, linux-kernel

On 7/30/20 11:22 PM, Roman Gushchin wrote:
> Currently bpf is using the memlock rlimit for the memory accounting.
> This approach has its downsides and over time has created a significant
> amount of problems:
> 
> 1) The limit is per-user, but because most bpf operations are performed
>     as root, the limit has a little value.
> 
> 2) It's hard to come up with a specific maximum value. Especially because
>     the counter is shared with non-bpf users (e.g. memlock() users).
>     Any specific value is either too low and creates false failures
>     or too high and useless.
> 
> 3) Charging is not connected to the actual memory allocation. Bpf code
>     should manually calculate the estimated cost and precharge the counter,
>     and then take care of uncharging, including all fail paths.
>     It adds to the code complexity and makes it easy to leak a charge.
> 
> 4) There is no simple way of getting the current value of the counter.
>     We've used drgn for it, but it's far from being convenient.
> 
> 5) Cryptic -EPERM is returned on exceeding the limit. Libbpf even had
>     a function to "explain" this case for users.
> 
> In order to overcome these problems let's switch to the memcg-based
> memory accounting of bpf objects. With the recent addition of the percpu
> memory accounting, now it's possible to provide a comprehensive accounting
> of memory used by bpf programs and maps.
> 
> This approach has the following advantages:
> 1) The limit is per-cgroup and hierarchical. It's way more flexible and allows
>     a better control over memory usage by different workloads.
> 
> 2) The actual memory consumption is taken into account. It happens automatically
>     on the allocation time if __GFP_ACCOUNT flags is passed. Uncharging is also
>     performed automatically on releasing the memory. So the code on the bpf side
>     becomes simpler and safer.
> 
> 3) There is a simple way to get the current value and statistics.
> 
> The patchset consists of the following parts:
> 1) memcg-based accounting for various bpf objects: progs and maps
> 2) removal of the rlimit-based accounting
> 3) removal of rlimit adjustments in userspace samples

The diff stat looks nice & I agree that rlimit sucks, but I'm missing how this set
is supposed to work reliably; at least I currently fail to see it. Elaborating on this
in more depth, especially for the case of unprivileged users, should be a /fundamental/
part of the commit message.

Let's take an example: an unprivileged user adds a max-sized hashtable to one of its
programs and configures the map so that it will perform runtime allocation. The load
succeeds as it doesn't surpass the limits set for the current memcg. The kernel then
processes packets from softirq. Given the runtime allocations, we end up mischarging
to whoever ended up triggering __do_softirq(). If that is, for example, the ksoftirqd
thread, then it's probably reasonable to assume that this might not be accounted, e.g.
limits are not imposed on the root cgroup. If so, we would probably need to drag the
context of /where/ this must be charged down to __memcg_kmem_charge_page() to do it
reliably. Otherwise how do you prevent unprivileged users from OOMing the machine?

Similarly, what happens to unprivileged users if kmemcg was not configured into the
kernel or has been disabled?

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 38+ messages in thread
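
One way the mischarging concern could be addressed is to remember the memcg of the
process that created the map and charge runtime allocations against it, regardless of
the context the allocation happens in. The sketch below is illustrative only and is
not part of this v3 series: the context struct and helper names are made up,
memalloc_use_memcg()/memalloc_unuse_memcg() are assumed to behave as in contemporary
kernels, and a real implementation would also need an interrupt-safe way to set the
active memcg.

#include <linux/memcontrol.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

/* hypothetical per-map context, saved at map creation time (sketch only) */
struct bpf_map_memcg_ctx {
	struct mem_cgroup *memcg;
};

static void bpf_map_memcg_ctx_init(struct bpf_map_memcg_ctx *ctx)
{
	/* take a reference on the creator's memcg; released with the map */
	ctx->memcg = get_mem_cgroup_from_mm(current->mm);
}

static void *bpf_map_kmalloc_node(const struct bpf_map_memcg_ctx *ctx,
				  size_t size, gfp_t flags, int node)
{
	void *ptr;

	/* make the map's memcg the charge target for this allocation,
	 * instead of whoever happens to be running __do_softirq()
	 */
	memalloc_use_memcg(ctx->memcg);
	ptr = kmalloc_node(size, flags | __GFP_ACCOUNT, node);
	memalloc_unuse_memcg();

	return ptr;
}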

* Re: [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting
  2020-08-03 12:05 ` [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Daniel Borkmann
@ 2020-08-03 15:34   ` Roman Gushchin
  2020-08-03 16:39     ` Daniel Borkmann
  0 siblings, 1 reply; 38+ messages in thread
From: Roman Gushchin @ 2020-08-03 15:34 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: bpf, netdev, Alexei Starovoitov, kernel-team, linux-kernel

On Mon, Aug 03, 2020 at 02:05:29PM +0200, Daniel Borkmann wrote:
> On 7/30/20 11:22 PM, Roman Gushchin wrote:
> > Currently bpf is using the memlock rlimit for the memory accounting.
> > This approach has its downsides and over time has created a significant
> > amount of problems:
> > 
> > 1) The limit is per-user, but because most bpf operations are performed
> >     as root, the limit has a little value.
> > 
> > 2) It's hard to come up with a specific maximum value. Especially because
> >     the counter is shared with non-bpf users (e.g. memlock() users).
> >     Any specific value is either too low and creates false failures
> >     or too high and useless.
> > 
> > 3) Charging is not connected to the actual memory allocation. Bpf code
> >     should manually calculate the estimated cost and precharge the counter,
> >     and then take care of uncharging, including all fail paths.
> >     It adds to the code complexity and makes it easy to leak a charge.
> > 
> > 4) There is no simple way of getting the current value of the counter.
> >     We've used drgn for it, but it's far from being convenient.
> > 
> > 5) Cryptic -EPERM is returned on exceeding the limit. Libbpf even had
> >     a function to "explain" this case for users.
> > 
> > In order to overcome these problems let's switch to the memcg-based
> > memory accounting of bpf objects. With the recent addition of the percpu
> > memory accounting, now it's possible to provide a comprehensive accounting
> > of memory used by bpf programs and maps.
> > 
> > This approach has the following advantages:
> > 1) The limit is per-cgroup and hierarchical. It's way more flexible and allows
> >     a better control over memory usage by different workloads.
> > 
> > 2) The actual memory consumption is taken into account. It happens automatically
> >     on the allocation time if __GFP_ACCOUNT flags is passed. Uncharging is also
> >     performed automatically on releasing the memory. So the code on the bpf side
> >     becomes simpler and safer.
> > 
> > 3) There is a simple way to get the current value and statistics.
> > 
> > The patchset consists of the following parts:
> > 1) memcg-based accounting for various bpf objects: progs and maps
> > 2) removal of the rlimit-based accounting
> > 3) removal of rlimit adjustments in userspace samples

Hi Daniel,

> 
> The diff stat looks nice & agree that rlimit sucks, but I'm missing how this is set
> is supposed to work reliably, at least I currently fail to see it. Elaborating on this
> in more depth especially for the case of unprivileged users should be a /fundamental/
> part of the commit message.
> 
> Lets take an example: unprivileged user adds a max sized hashtable to one of its
> programs, and configures the map that it will perform runtime allocation. The load
> succeeds as it doesn't surpass the limits set for the current memcg. Kernel then
> processes packets from softirq. Given the runtime allocations, we end up mischarging
> to whoever ended up triggering __do_softirq(). If, for example, ksoftirq thread, then
> it's probably reasonable to assume that this might not be accounted e.g. limits are
> not imposed on the root cgroup. If so we would probably need to drag the context of
> /where/ this must be charged to __memcg_kmem_charge_page() to do it reliably. Otherwise
> how do you protect unprivileged users to OOM the machine?

This is a valid concern, thank you for bringing it up. It can be resolved by
associating a map with a memory cgroup on creation, so that we can charge
this memory cgroup later, even from a soft-irq context. The question here is
whether we want to do it for all maps or just for dynamic hashtables
(or any similar cases, if there are any). I think the second option
is better. With the first option we would have to annotate all memory allocations
in the bpf map code with memalloc_use_memcg()/memalloc_unuse_memcg(),
so it would be easy to mess it up in the future.
What do you think?
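
A rough sketch of the per-map direction, for illustration (the function names and the
map->memcg field below are made up, not taken from this series, and it assumes remote
charging via memalloc_use_memcg() can be made safe from softirq context):

  #include <linux/bpf.h>
  #include <linux/memcontrol.h>
  #include <linux/sched/mm.h>
  #include <linux/slab.h>

  /* Sketch only: remember the creator's memcg at map creation time.
   * 'memcg' would be a new field in struct bpf_map; the reference has to be
   * dropped with mem_cgroup_put() when the map is freed.
   */
  static void bpf_map_save_memcg(struct bpf_map *map)
  {
          map->memcg = get_mem_cgroup_from_mm(current->mm);
  }

  /* ... and charge that memcg explicitly for later (e.g. softirq-time)
   * allocations instead of whoever happens to be current.
   */
  static void *bpf_map_kmalloc(struct bpf_map *map, size_t size, gfp_t flags)
  {
          void *ptr;

          memalloc_use_memcg(map->memcg);
          ptr = kmalloc(size, flags | __GFP_ACCOUNT);
          memalloc_unuse_memcg();

          return ptr;
  }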

> 
> Similarly, what happens to unprivileged users if kmemcg was not configured into the
> kernel or has been disabled?

Well, I don't think we can address it. Memcg-based memory accounting requires
memory cgroups to be enabled, a properly configured cgroup tree, and kernel
memory accounting turned on in order to function.
Because we at Facebook are using cgroups for memory accounting and control
everywhere, I might be biased. If there are real !memcg systems which are
actively using unprivileged bpf, we should keep the old system in place
and make it optional, so everyone can choose between having both accounting
systems or just the new one. Or we can disable the rlimit-based accounting
for root. But eliminating it completely looks so much nicer to me.

Thanks!

* Re: [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting
  2020-08-03 15:34   ` Roman Gushchin
@ 2020-08-03 16:39     ` Daniel Borkmann
  2020-08-03 17:05       ` Roman Gushchin
  0 siblings, 1 reply; 38+ messages in thread
From: Daniel Borkmann @ 2020-08-03 16:39 UTC (permalink / raw)
  To: Roman Gushchin; +Cc: bpf, netdev, Alexei Starovoitov, kernel-team, linux-kernel

On 8/3/20 5:34 PM, Roman Gushchin wrote:
> On Mon, Aug 03, 2020 at 02:05:29PM +0200, Daniel Borkmann wrote:
>> On 7/30/20 11:22 PM, Roman Gushchin wrote:
>>> Currently bpf is using the memlock rlimit for the memory accounting.
>>> This approach has its downsides and over time has created a significant
>>> amount of problems:
>>>
>>> 1) The limit is per-user, but because most bpf operations are performed
>>>      as root, the limit has a little value.
>>>
>>> 2) It's hard to come up with a specific maximum value. Especially because
>>>      the counter is shared with non-bpf users (e.g. memlock() users).
>>>      Any specific value is either too low and creates false failures
>>>      or too high and useless.
>>>
>>> 3) Charging is not connected to the actual memory allocation. Bpf code
>>>      should manually calculate the estimated cost and precharge the counter,
>>>      and then take care of uncharging, including all fail paths.
>>>      It adds to the code complexity and makes it easy to leak a charge.
>>>
>>> 4) There is no simple way of getting the current value of the counter.
>>>      We've used drgn for it, but it's far from being convenient.
>>>
>>> 5) Cryptic -EPERM is returned on exceeding the limit. Libbpf even had
>>>      a function to "explain" this case for users.
>>>
>>> In order to overcome these problems let's switch to the memcg-based
>>> memory accounting of bpf objects. With the recent addition of the percpu
>>> memory accounting, now it's possible to provide a comprehensive accounting
>>> of memory used by bpf programs and maps.
>>>
>>> This approach has the following advantages:
>>> 1) The limit is per-cgroup and hierarchical. It's way more flexible and allows
>>>      a better control over memory usage by different workloads.
>>>
>>> 2) The actual memory consumption is taken into account. It happens automatically
>>>      on the allocation time if __GFP_ACCOUNT flags is passed. Uncharging is also
>>>      performed automatically on releasing the memory. So the code on the bpf side
>>>      becomes simpler and safer.
>>>
>>> 3) There is a simple way to get the current value and statistics.
>>>
>>> The patchset consists of the following parts:
>>> 1) memcg-based accounting for various bpf objects: progs and maps
>>> 2) removal of the rlimit-based accounting
>>> 3) removal of rlimit adjustments in userspace samples
> 
>> The diff stat looks nice & agree that rlimit sucks, but I'm missing how this is set
>> is supposed to work reliably, at least I currently fail to see it. Elaborating on this
>> in more depth especially for the case of unprivileged users should be a /fundamental/
>> part of the commit message.
>>
>> Lets take an example: unprivileged user adds a max sized hashtable to one of its
>> programs, and configures the map that it will perform runtime allocation. The load
>> succeeds as it doesn't surpass the limits set for the current memcg. Kernel then
>> processes packets from softirq. Given the runtime allocations, we end up mischarging
>> to whoever ended up triggering __do_softirq(). If, for example, ksoftirq thread, then
>> it's probably reasonable to assume that this might not be accounted e.g. limits are
>> not imposed on the root cgroup. If so we would probably need to drag the context of
>> /where/ this must be charged to __memcg_kmem_charge_page() to do it reliably. Otherwise
>> how do you protect unprivileged users to OOM the machine?
> 
> this is a valid concern, thank you for bringing it in. It can be resolved by
> associating a map with a memory cgroup on creation, so that we can charge
> this memory cgroup later, even from a soft-irq context. The question here is
> whether we want to do it for all maps, or just for dynamic hashtables
> (or any similar cases, if there are any)? I think the second option
> is better. With the first option we have to annotate all memory allocations
> in bpf maps code with memalloc_use_memcg()/memalloc_unuse_memcg(),
> so it's easy to mess it up in the future.
> What do you think?

We would need to do it for all maps that are configured as non-prealloc, e.g. not
only hash/LRU tables but also others like LPM maps. I wonder whether program entry/
exit could do the memalloc_use_memcg() / memalloc_unuse_memcg() so that everything
would be accounted against the prog's memcg from the runtime side, but then there's the
usual issue with 'unuse'-restore on tail calls, and it doesn't solve the syscall side.
But it seems like memalloc_{use,unuse}_memcg()'s remote charging is lightweight
anyway compared to some of the other map update work such as taking the bucket lock.
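
For the runtime side, roughly something like this is what I have in mind (illustrative
only; prog->aux->memcg would be a new field, and as said, tail calls and the syscall
path need extra care):

  #include <linux/bpf.h>
  #include <linux/filter.h>
  #include <linux/sched/mm.h>

  /* Sketch only: charge runtime allocations (e.g. non-prealloc map updates)
   * made during the program run to the prog's memcg instead of whoever is
   * current, assuming memalloc_use_memcg() is safe in this context.
   */
  static u32 bpf_prog_run_charged(const struct bpf_prog *prog, const void *ctx)
  {
          u32 ret;

          memalloc_use_memcg(prog->aux->memcg);   /* hypothetical new field */
          ret = BPF_PROG_RUN(prog, ctx);
          memalloc_unuse_memcg();

          return ret;
  }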

>> Similarly, what happens to unprivileged users if kmemcg was not configured into the
>> kernel or has been disabled?
> 
> Well, I don't think we can address it. Memcg-based memory accounting requires
> enabled memory cgroups, a properly configured cgroup tree and also the kernel
> memory accounting turned on to function properly.
> Because we at Facebook are using cgroup for the memory accounting and control
> everywhere, I might be biased. If there are real !memcg systems which are
> actively using non-privileged bpf, we should keep the old system in place
> and make it optional, so everyone can choose between having both accounting
> systems or just the new one. Or we can disable the rlimit-based accounting
> for root. But eliminating it completely looks so much nicer to me.

Eliminating it entirely feels better indeed. Another option could be that the BPF
Kconfig selects memcg, so it's always built with it. Perhaps that is an acceptable tradeoff.

Thanks,
Daniel

* Re: [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting
  2020-08-03 16:39     ` Daniel Borkmann
@ 2020-08-03 17:05       ` Roman Gushchin
  2020-08-03 18:37         ` Daniel Borkmann
  0 siblings, 1 reply; 38+ messages in thread
From: Roman Gushchin @ 2020-08-03 17:05 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: bpf, netdev, Alexei Starovoitov, kernel-team, linux-kernel

On Mon, Aug 03, 2020 at 06:39:01PM +0200, Daniel Borkmann wrote:
> On 8/3/20 5:34 PM, Roman Gushchin wrote:
> > On Mon, Aug 03, 2020 at 02:05:29PM +0200, Daniel Borkmann wrote:
> > > On 7/30/20 11:22 PM, Roman Gushchin wrote:
> > > > Currently bpf is using the memlock rlimit for the memory accounting.
> > > > This approach has its downsides and over time has created a significant
> > > > amount of problems:
> > > > 
> > > > 1) The limit is per-user, but because most bpf operations are performed
> > > >      as root, the limit has a little value.
> > > > 
> > > > 2) It's hard to come up with a specific maximum value. Especially because
> > > >      the counter is shared with non-bpf users (e.g. memlock() users).
> > > >      Any specific value is either too low and creates false failures
> > > >      or too high and useless.
> > > > 
> > > > 3) Charging is not connected to the actual memory allocation. Bpf code
> > > >      should manually calculate the estimated cost and precharge the counter,
> > > >      and then take care of uncharging, including all fail paths.
> > > >      It adds to the code complexity and makes it easy to leak a charge.
> > > > 
> > > > 4) There is no simple way of getting the current value of the counter.
> > > >      We've used drgn for it, but it's far from being convenient.
> > > > 
> > > > 5) Cryptic -EPERM is returned on exceeding the limit. Libbpf even had
> > > >      a function to "explain" this case for users.
> > > > 
> > > > In order to overcome these problems let's switch to the memcg-based
> > > > memory accounting of bpf objects. With the recent addition of the percpu
> > > > memory accounting, now it's possible to provide a comprehensive accounting
> > > > of memory used by bpf programs and maps.
> > > > 
> > > > This approach has the following advantages:
> > > > 1) The limit is per-cgroup and hierarchical. It's way more flexible and allows
> > > >      a better control over memory usage by different workloads.
> > > > 
> > > > 2) The actual memory consumption is taken into account. It happens automatically
> > > >      on the allocation time if __GFP_ACCOUNT flags is passed. Uncharging is also
> > > >      performed automatically on releasing the memory. So the code on the bpf side
> > > >      becomes simpler and safer.
> > > > 
> > > > 3) There is a simple way to get the current value and statistics.
> > > > 
> > > > The patchset consists of the following parts:
> > > > 1) memcg-based accounting for various bpf objects: progs and maps
> > > > 2) removal of the rlimit-based accounting
> > > > 3) removal of rlimit adjustments in userspace samples
> > 
> > > The diff stat looks nice & agree that rlimit sucks, but I'm missing how this is set
> > > is supposed to work reliably, at least I currently fail to see it. Elaborating on this
> > > in more depth especially for the case of unprivileged users should be a /fundamental/
> > > part of the commit message.
> > > 
> > > Lets take an example: unprivileged user adds a max sized hashtable to one of its
> > > programs, and configures the map that it will perform runtime allocation. The load
> > > succeeds as it doesn't surpass the limits set for the current memcg. Kernel then
> > > processes packets from softirq. Given the runtime allocations, we end up mischarging
> > > to whoever ended up triggering __do_softirq(). If, for example, ksoftirq thread, then
> > > it's probably reasonable to assume that this might not be accounted e.g. limits are
> > > not imposed on the root cgroup. If so we would probably need to drag the context of
> > > /where/ this must be charged to __memcg_kmem_charge_page() to do it reliably. Otherwise
> > > how do you protect unprivileged users to OOM the machine?
> > 
> > this is a valid concern, thank you for bringing it in. It can be resolved by
> > associating a map with a memory cgroup on creation, so that we can charge
> > this memory cgroup later, even from a soft-irq context. The question here is
> > whether we want to do it for all maps, or just for dynamic hashtables
> > (or any similar cases, if there are any)? I think the second option
> > is better. With the first option we have to annotate all memory allocations
> > in bpf maps code with memalloc_use_memcg()/memalloc_unuse_memcg(),
> > so it's easy to mess it up in the future.
> > What do you think?
> 
> We would need to do it for all maps that are configured with non-prealloc, e.g. not
> only hash/LRU table but also others like LPM maps etc. I wonder whether program entry/
> exit could do the memalloc_use_memcg() / memalloc_unuse_memcg() and then everything
> would be accounted against the prog's memcg from runtime side, but then there's the
> usual issue with 'unuse'-restore on tail calls, and it doesn't solve the syscall side.
> But seems like the memalloc_{use,unuse}_memcg()'s remote charging is lightweight
> anyway compared to some of the other map update work such as taking bucket lock etc.

I'll explore it and address it in the next version. Thank you for the suggestions!

> 
> > > Similarly, what happens to unprivileged users if kmemcg was not configured into the
> > > kernel or has been disabled?
> > 
> > Well, I don't think we can address it. Memcg-based memory accounting requires
> > enabled memory cgroups, a properly configured cgroup tree and also the kernel
> > memory accounting turned on to function properly.
> > Because we at Facebook are using cgroup for the memory accounting and control
> > everywhere, I might be biased. If there are real !memcg systems which are
> > actively using non-privileged bpf, we should keep the old system in place
> > and make it optional, so everyone can choose between having both accounting
> > systems or just the new one. Or we can disable the rlimit-based accounting
> > for root. But eliminating it completely looks so much nicer to me.
> 
> Eliminating it entirely feels better indeed. Another option could be that BPF kconfig
> would select memcg, so it's always built with it. Perhaps that is an acceptable tradeoff.

But wouldn't it limit the usage of bpf on embedded devices, where memory cgroups are
probably not used but bpf can still be useful, e.g. for tracing?

Adding this build dependency doesn't really guarantee anything (e.g. cgroupfs
can simply not be mounted on the system), so I'm not sure we really need it.

Maybe we can print a warning if memcg is not properly configured and somebody
is creating a map? I don't know.
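
Something along these lines at map creation time, perhaps (sketch only):

  #include <linux/memcontrol.h>
  #include <linux/printk.h>

  /* Sketch only: warn once if memcg-based accounting can't do anything
   * useful on this system.
   */
  static void bpf_map_memcg_warn(void)
  {
          if (mem_cgroup_disabled() || !memcg_kmem_enabled())
                  pr_warn_once("bpf: map memory is not accounted, memory cgroup or kernel memory accounting is disabled\n");
  }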

Thanks!

* Re: [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting
  2020-08-03 17:05       ` Roman Gushchin
@ 2020-08-03 18:37         ` Daniel Borkmann
  2020-08-03 19:06           ` Roman Gushchin
  0 siblings, 1 reply; 38+ messages in thread
From: Daniel Borkmann @ 2020-08-03 18:37 UTC (permalink / raw)
  To: Roman Gushchin; +Cc: bpf, netdev, Alexei Starovoitov, kernel-team, linux-kernel

On 8/3/20 7:05 PM, Roman Gushchin wrote:
> On Mon, Aug 03, 2020 at 06:39:01PM +0200, Daniel Borkmann wrote:
>> On 8/3/20 5:34 PM, Roman Gushchin wrote:
>>> On Mon, Aug 03, 2020 at 02:05:29PM +0200, Daniel Borkmann wrote:
>>>> On 7/30/20 11:22 PM, Roman Gushchin wrote:
>>>>> Currently bpf is using the memlock rlimit for the memory accounting.
>>>>> This approach has its downsides and over time has created a significant
>>>>> amount of problems:
>>>>>
>>>>> 1) The limit is per-user, but because most bpf operations are performed
>>>>>       as root, the limit has a little value.
>>>>>
>>>>> 2) It's hard to come up with a specific maximum value. Especially because
>>>>>       the counter is shared with non-bpf users (e.g. memlock() users).
>>>>>       Any specific value is either too low and creates false failures
>>>>>       or too high and useless.
>>>>>
>>>>> 3) Charging is not connected to the actual memory allocation. Bpf code
>>>>>       should manually calculate the estimated cost and precharge the counter,
>>>>>       and then take care of uncharging, including all fail paths.
>>>>>       It adds to the code complexity and makes it easy to leak a charge.
>>>>>
>>>>> 4) There is no simple way of getting the current value of the counter.
>>>>>       We've used drgn for it, but it's far from being convenient.
>>>>>
>>>>> 5) Cryptic -EPERM is returned on exceeding the limit. Libbpf even had
>>>>>       a function to "explain" this case for users.
>>>>>
>>>>> In order to overcome these problems let's switch to the memcg-based
>>>>> memory accounting of bpf objects. With the recent addition of the percpu
>>>>> memory accounting, now it's possible to provide a comprehensive accounting
>>>>> of memory used by bpf programs and maps.
>>>>>
>>>>> This approach has the following advantages:
>>>>> 1) The limit is per-cgroup and hierarchical. It's way more flexible and allows
>>>>>       a better control over memory usage by different workloads.
>>>>>
>>>>> 2) The actual memory consumption is taken into account. It happens automatically
>>>>>       on the allocation time if __GFP_ACCOUNT flags is passed. Uncharging is also
>>>>>       performed automatically on releasing the memory. So the code on the bpf side
>>>>>       becomes simpler and safer.
>>>>>
>>>>> 3) There is a simple way to get the current value and statistics.
>>>>>
>>>>> The patchset consists of the following parts:
>>>>> 1) memcg-based accounting for various bpf objects: progs and maps
>>>>> 2) removal of the rlimit-based accounting
>>>>> 3) removal of rlimit adjustments in userspace samples
>>>
>>>> The diff stat looks nice & agree that rlimit sucks, but I'm missing how this is set
>>>> is supposed to work reliably, at least I currently fail to see it. Elaborating on this
>>>> in more depth especially for the case of unprivileged users should be a /fundamental/
>>>> part of the commit message.
>>>>
>>>> Lets take an example: unprivileged user adds a max sized hashtable to one of its
>>>> programs, and configures the map that it will perform runtime allocation. The load
>>>> succeeds as it doesn't surpass the limits set for the current memcg. Kernel then
>>>> processes packets from softirq. Given the runtime allocations, we end up mischarging
>>>> to whoever ended up triggering __do_softirq(). If, for example, ksoftirq thread, then
>>>> it's probably reasonable to assume that this might not be accounted e.g. limits are
>>>> not imposed on the root cgroup. If so we would probably need to drag the context of
>>>> /where/ this must be charged to __memcg_kmem_charge_page() to do it reliably. Otherwise
>>>> how do you protect unprivileged users to OOM the machine?
>>>
>>> this is a valid concern, thank you for bringing it in. It can be resolved by
>>> associating a map with a memory cgroup on creation, so that we can charge
>>> this memory cgroup later, even from a soft-irq context. The question here is
>>> whether we want to do it for all maps, or just for dynamic hashtables
>>> (or any similar cases, if there are any)? I think the second option
>>> is better. With the first option we have to annotate all memory allocations
>>> in bpf maps code with memalloc_use_memcg()/memalloc_unuse_memcg(),
>>> so it's easy to mess it up in the future.
>>> What do you think?
>>
>> We would need to do it for all maps that are configured with non-prealloc, e.g. not
>> only hash/LRU table but also others like LPM maps etc. I wonder whether program entry/
>> exit could do the memalloc_use_memcg() / memalloc_unuse_memcg() and then everything
>> would be accounted against the prog's memcg from runtime side, but then there's the
>> usual issue with 'unuse'-restore on tail calls, and it doesn't solve the syscall side.
>> But seems like the memalloc_{use,unuse}_memcg()'s remote charging is lightweight
>> anyway compared to some of the other map update work such as taking bucket lock etc.
> 
> I'll explore it and address in the next version. Thank you for suggestions!

Ok.

I'm probably still missing one more thing, but could you elaborate on what limits would
be enforced if an unprivileged user creates a prog/map on the host (w/o further action
such as moving it to a specific cgroup)?

From what I can tell by looking at systemd:

   $ cat /proc/self/cgroup
   11:cpuset:/
   10:hugetlb:/
   9:devices:/user.slice
   8:cpu,cpuacct:/
   7:freezer:/
   6:pids:/user.slice/user-1000.slice/user@1000.service
   5:memory:/user.slice/user-1000.slice/user@1000.service
   4:net_cls,net_prio:/
   3:perf_event:/
   2:blkio:/
   1:name=systemd:/user.slice/user-1000.slice/user@1000.service/gnome-terminal-server.service
   0::/user.slice/user-1000.slice/user@1000.service/gnome-terminal-server.service

And then:

   $ systemctl cat user-1000.slice
   # /usr/lib/systemd/system/user-.slice.d/10-defaults.conf
   #  SPDX-License-Identifier: LGPL-2.1+
   #
   #  This file is part of systemd.
   #
   #  systemd is free software; you can redistribute it and/or modify it
   #  under the terms of the GNU Lesser General Public License as published by
   #  the Free Software Foundation; either version 2.1 of the License, or
   #  (at your option) any later version.

   [Unit]
   Description=User Slice of UID %j
   Documentation=man:user@.service(5)
   After=systemd-user-sessions.service
   StopWhenUnneeded=yes

   [Slice]
   TasksMax=33%

So that has a PID limit in place by default, but it does not say anything about memory. I
presume the accounting relevant to us is tracked in memory.kmem.limit_in_bytes and
memory.kmem.usage_in_bytes, is that correct? If true, it looks like the default would
not prevent an OOM, no?

   $ cat /sys/fs/cgroup/memory/user.slice/user-1000.slice/user@1000.service/memory.kmem.usage_in_bytes
   257966080
   $ cat /sys/fs/cgroup/memory/user.slice/user-1000.slice/user@1000.service/memory.kmem.limit_in_bytes
   9223372036854771712

>>>> Similarly, what happens to unprivileged users if kmemcg was not configured into the
>>>> kernel or has been disabled?
>>>
>>> Well, I don't think we can address it. Memcg-based memory accounting requires
>>> enabled memory cgroups, a properly configured cgroup tree and also the kernel
>>> memory accounting turned on to function properly.
>>> Because we at Facebook are using cgroup for the memory accounting and control
>>> everywhere, I might be biased. If there are real !memcg systems which are
>>> actively using non-privileged bpf, we should keep the old system in place
>>> and make it optional, so everyone can choose between having both accounting
>>> systems or just the new one. Or we can disable the rlimit-based accounting
>>> for root. But eliminating it completely looks so much nicer to me.
>>
>> Eliminating it entirely feels better indeed. Another option could be that BPF kconfig
>> would select memcg, so it's always built with it. Perhaps that is an acceptable tradeoff.
> 
> But wouldn't it limit the usage of bpf on embedded devices?
> Where memory cgroups are probably not used, but bpf still can be useful for tracing,
> for example.
> 
> Adding this build dependency doesn't really guarantee anything (e.g. cgroupfs
> can be simple not mounted on the system), so I'm not sure if we really need it.

Argh, true as well. :/ Is there some fallback accounting/limitation that could be done,
either explicitly or ideally hidden via __GFP_ACCOUNT, for unprivileged users? We still
need to prevent unprivileged users from easily causing OOM damage in those situations, too.

Thanks,
Daniel

* Re: [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting
  2020-08-03 18:37         ` Daniel Borkmann
@ 2020-08-03 19:06           ` Roman Gushchin
  0 siblings, 0 replies; 38+ messages in thread
From: Roman Gushchin @ 2020-08-03 19:06 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: bpf, netdev, Alexei Starovoitov, kernel-team, linux-kernel

On Mon, Aug 03, 2020 at 08:37:15PM +0200, Daniel Borkmann wrote:
> On 8/3/20 7:05 PM, Roman Gushchin wrote:
> > On Mon, Aug 03, 2020 at 06:39:01PM +0200, Daniel Borkmann wrote:
> > > On 8/3/20 5:34 PM, Roman Gushchin wrote:
> > > > On Mon, Aug 03, 2020 at 02:05:29PM +0200, Daniel Borkmann wrote:
> > > > > On 7/30/20 11:22 PM, Roman Gushchin wrote:
> > > > > > Currently bpf is using the memlock rlimit for the memory accounting.
> > > > > > This approach has its downsides and over time has created a significant
> > > > > > amount of problems:
> > > > > > 
> > > > > > 1) The limit is per-user, but because most bpf operations are performed
> > > > > >       as root, the limit has a little value.
> > > > > > 
> > > > > > 2) It's hard to come up with a specific maximum value. Especially because
> > > > > >       the counter is shared with non-bpf users (e.g. memlock() users).
> > > > > >       Any specific value is either too low and creates false failures
> > > > > >       or too high and useless.
> > > > > > 
> > > > > > 3) Charging is not connected to the actual memory allocation. Bpf code
> > > > > >       should manually calculate the estimated cost and precharge the counter,
> > > > > >       and then take care of uncharging, including all fail paths.
> > > > > >       It adds to the code complexity and makes it easy to leak a charge.
> > > > > > 
> > > > > > 4) There is no simple way of getting the current value of the counter.
> > > > > >       We've used drgn for it, but it's far from being convenient.
> > > > > > 
> > > > > > 5) Cryptic -EPERM is returned on exceeding the limit. Libbpf even had
> > > > > >       a function to "explain" this case for users.
> > > > > > 
> > > > > > In order to overcome these problems let's switch to the memcg-based
> > > > > > memory accounting of bpf objects. With the recent addition of the percpu
> > > > > > memory accounting, now it's possible to provide a comprehensive accounting
> > > > > > of memory used by bpf programs and maps.
> > > > > > 
> > > > > > This approach has the following advantages:
> > > > > > 1) The limit is per-cgroup and hierarchical. It's way more flexible and allows
> > > > > >       a better control over memory usage by different workloads.
> > > > > > 
> > > > > > 2) The actual memory consumption is taken into account. It happens automatically
> > > > > >       on the allocation time if __GFP_ACCOUNT flags is passed. Uncharging is also
> > > > > >       performed automatically on releasing the memory. So the code on the bpf side
> > > > > >       becomes simpler and safer.
> > > > > > 
> > > > > > 3) There is a simple way to get the current value and statistics.
> > > > > > 
> > > > > > The patchset consists of the following parts:
> > > > > > 1) memcg-based accounting for various bpf objects: progs and maps
> > > > > > 2) removal of the rlimit-based accounting
> > > > > > 3) removal of rlimit adjustments in userspace samples
> > > > 
> > > > > The diff stat looks nice & agree that rlimit sucks, but I'm missing how this is set
> > > > > is supposed to work reliably, at least I currently fail to see it. Elaborating on this
> > > > > in more depth especially for the case of unprivileged users should be a /fundamental/
> > > > > part of the commit message.
> > > > > 
> > > > > Lets take an example: unprivileged user adds a max sized hashtable to one of its
> > > > > programs, and configures the map that it will perform runtime allocation. The load
> > > > > succeeds as it doesn't surpass the limits set for the current memcg. Kernel then
> > > > > processes packets from softirq. Given the runtime allocations, we end up mischarging
> > > > > to whoever ended up triggering __do_softirq(). If, for example, ksoftirq thread, then
> > > > > it's probably reasonable to assume that this might not be accounted e.g. limits are
> > > > > not imposed on the root cgroup. If so we would probably need to drag the context of
> > > > > /where/ this must be charged to __memcg_kmem_charge_page() to do it reliably. Otherwise
> > > > > how do you protect unprivileged users to OOM the machine?
> > > > 
> > > > this is a valid concern, thank you for bringing it in. It can be resolved by
> > > > associating a map with a memory cgroup on creation, so that we can charge
> > > > this memory cgroup later, even from a soft-irq context. The question here is
> > > > whether we want to do it for all maps, or just for dynamic hashtables
> > > > (or any similar cases, if there are any)? I think the second option
> > > > is better. With the first option we have to annotate all memory allocations
> > > > in bpf maps code with memalloc_use_memcg()/memalloc_unuse_memcg(),
> > > > so it's easy to mess it up in the future.
> > > > What do you think?
> > > 
> > > We would need to do it for all maps that are configured with non-prealloc, e.g. not
> > > only hash/LRU table but also others like LPM maps etc. I wonder whether program entry/
> > > exit could do the memalloc_use_memcg() / memalloc_unuse_memcg() and then everything
> > > would be accounted against the prog's memcg from runtime side, but then there's the
> > > usual issue with 'unuse'-restore on tail calls, and it doesn't solve the syscall side.
> > > But seems like the memalloc_{use,unuse}_memcg()'s remote charging is lightweight
> > > anyway compared to some of the other map update work such as taking bucket lock etc.
> > 
> > I'll explore it and address in the next version. Thank you for suggestions!
> 
> Ok.
> 
> I'm probably still missing one more thing, but could you elaborate what limits would
> be enforced if an unprivileged user creates a prog/map on the host (w/o further action
> such as moving to a specific cgroup)?

If cgroups are not configured properly, no limits can be enforced. Memory cgroups
are completely orthogonal to users.

However, in the most common case (at least in our setup), where all bpf operations
are performed by root, per-user accounting is useless.

> 
> From what I can tell via looking at systemd:
> 
>   $ cat /proc/self/cgroup
>   11:cpuset:/
>   10:hugetlb:/
>   9:devices:/user.slice
>   8:cpu,cpuacct:/
>   7:freezer:/
>   6:pids:/user.slice/user-1000.slice/user@1000.service
>   5:memory:/user.slice/user-1000.slice/user@1000.service
>   4:net_cls,net_prio:/
>   3:perf_event:/
>   2:blkio:/
>   1:name=systemd:/user.slice/user-1000.slice/user@1000.service/gnome-terminal-server.service
>   0::/user.slice/user-1000.slice/user@1000.service/gnome-terminal-server.service
> 
> And then:
> 
>   $ systemctl cat user-1000.slice
>   # /usr/lib/systemd/system/user-.slice.d/10-defaults.conf
>   #  SPDX-License-Identifier: LGPL-2.1+
>   #
>   #  This file is part of systemd.
>   #
>   #  systemd is free software; you can redistribute it and/or modify it
>   #  under the terms of the GNU Lesser General Public License as published by
>   #  the Free Software Foundation; either version 2.1 of the License, or
>   #  (at your option) any later version.
> 
>   [Unit]
>   Description=User Slice of UID %j
>   Documentation=man:user@.service(5)
>   After=systemd-user-sessions.service
>   StopWhenUnneeded=yes
> 
>   [Slice]
>   TasksMax=33%
> 
> So that has a Pid limit in place by default, but it does not say anything on memory. I
> presume the accounting relevant to us is tracked in memory.kmem.limit_in_bytes and
> memory.kmem.usage_in_bytes, is that correct? If true, it looks like the default would
> not prevent from OOM, no?

Yeah, it's true.

Also, in general we're moving from setting hard limits to pressure-based OOM handling,
where we detect continuous high memory pressure in a cgroup using PSI metrics
and handle it in userspace. memory.high can be used to slow down the workload
to avoid extensively overshooting the limit.

So the most important "feature" is that bpf memory is accounted in
memory.current (cgroup v2) and memory.usage_in_bytes (cgroup v1).

> 
>   $ cat /sys/fs/cgroup/memory/user.slice/user-1000.slice/user@1000.service/memory.kmem.usage_in_bytes
>   257966080
>   $ cat /sys/fs/cgroup/memory/user.slice/user-1000.slice/user@1000.service/memory.kmem.limit_in_bytes
>   9223372036854771712
> 
> > > > > Similarly, what happens to unprivileged users if kmemcg was not configured into the
> > > > > kernel or has been disabled?
> > > > 
> > > > Well, I don't think we can address it. Memcg-based memory accounting requires
> > > > enabled memory cgroups, a properly configured cgroup tree and also the kernel
> > > > memory accounting turned on to function properly.
> > > > Because we at Facebook are using cgroup for the memory accounting and control
> > > > everywhere, I might be biased. If there are real !memcg systems which are
> > > > actively using non-privileged bpf, we should keep the old system in place
> > > > and make it optional, so everyone can choose between having both accounting
> > > > systems or just the new one. Or we can disable the rlimit-based accounting
> > > > for root. But eliminating it completely looks so much nicer to me.
> > > 
> > > Eliminating it entirely feels better indeed. Another option could be that BPF kconfig
> > > would select memcg, so it's always built with it. Perhaps that is an acceptable tradeoff.
> > 
> > But wouldn't it limit the usage of bpf on embedded devices?
> > Where memory cgroups are probably not used, but bpf still can be useful for tracing,
> > for example.
> > 
> > Adding this build dependency doesn't really guarantee anything (e.g. cgroupfs
> > can be simple not mounted on the system), so I'm not sure if we really need it.
> 
> Argh, true as well. :/ Is there some fallback accounting/limitation that could be done
> either explicit or ideally hidden via __GFP_ACCOUNT for unprivileged? We still need to
> prevent unprivileged users to easily cause OOM damage in those situations, too.

Users and memory cgroups are orthogonal, so an unprivileged user can have a process
in the root memory cgroup, where it shouldn't be limited. And the opposite: a root process
in a non-root memory cgroup might be limited. We can't really emulate the old semantics
using cgroups.

But I'm not sure it's a problem: there are other ways to exhaust (kernel) memory
besides bpf. So if a user is not limited by a memory cgroup with kernel
memory accounting enabled, it's not completely safe anyway.

If we want to preserve the old behavior, I think the best thing is to keep it as it is
and only add an option (a sysctl?) to disable it, which everybody who relies on cgroups
can use to avoid all this hassle with rlimits.
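
For example (sketch only; the sysctl name and its wiring are made up), the memlock
charging helper could keep counting but skip the limit enforcement when the knob is
off, so the uncharge path stays symmetric:

  #include <linux/sched/signal.h>
  #include <linux/sched/user.h>

  int sysctl_bpf_rlimit_accounting __read_mostly = 1;

  static int bpf_charge_memlock(struct user_struct *user, u32 pages)
  {
          unsigned long memlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

          /* Always maintain the counter; only enforce the limit when the
           * rlimit-based accounting is enabled.
           */
          if (atomic_long_add_return(pages, &user->locked_vm) > memlock_limit &&
              sysctl_bpf_rlimit_accounting) {
                  atomic_long_sub(pages, &user->locked_vm);
                  return -EPERM;
          }
          return 0;
  }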

I actually wonder: does anybody rely on this memlock limit? Or is everybody just
bumping it to be "big enough" to avoid getting errors?

Thanks!

end of thread

Thread overview: 38+ messages
2020-07-30 21:22 [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 01/29] bpf: memcg-based memory accounting for bpf progs Roman Gushchin
2020-07-31 22:48   ` Song Liu
2020-07-30 21:22 ` [PATCH bpf-next v3 02/29] bpf: memcg-based memory accounting for bpf maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 03/29] bpf: refine memcg-based memory accounting for arraymap maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 04/29] bpf: refine memcg-based memory accounting for cpumap maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 05/29] bpf: memcg-based memory accounting for cgroup storage maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 06/29] bpf: refine memcg-based memory accounting for devmap maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 07/29] bpf: refine memcg-based memory accounting for hashtab maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 08/29] bpf: memcg-based memory accounting for lpm_trie maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 09/29] bpf: memcg-based memory accounting for bpf ringbuffer Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 10/29] bpf: memcg-based memory accounting for socket storage maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 11/29] bpf: refine memcg-based memory accounting for sockmap and sockhash maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 12/29] bpf: refine memcg-based memory accounting for xskmap maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 13/29] bpf: eliminate rlimit-based memory accounting for arraymap maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 14/29] bpf: eliminate rlimit-based memory accounting for bpf_struct_ops maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 15/29] bpf: eliminate rlimit-based memory accounting for cpumap maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 16/29] bpf: eliminate rlimit-based memory accounting for cgroup storage maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 17/29] bpf: eliminate rlimit-based memory accounting for devmap maps Roman Gushchin
2020-07-30 21:22 ` [PATCH bpf-next v3 18/29] bpf: eliminate rlimit-based memory accounting for hashtab maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 19/29] bpf: eliminate rlimit-based memory accounting for lpm_trie maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 20/29] bpf: eliminate rlimit-based memory accounting for queue_stack_maps maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 21/29] bpf: eliminate rlimit-based memory accounting for reuseport_array maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 22/29] bpf: eliminate rlimit-based memory accounting for bpf ringbuffer Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 23/29] bpf: eliminate rlimit-based memory accounting for sockmap and sockhash maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 24/29] bpf: eliminate rlimit-based memory accounting for stackmap maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 25/29] bpf: eliminate rlimit-based memory accounting for socket storage maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 26/29] bpf: eliminate rlimit-based memory accounting for xskmap maps Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 27/29] bpf: eliminate rlimit-based memory accounting infra for bpf maps Roman Gushchin
2020-07-31 22:47   ` Song Liu
2020-07-30 21:23 ` [PATCH bpf-next v3 28/29] bpf: eliminate rlimit-based memory accounting for bpf progs Roman Gushchin
2020-07-30 21:23 ` [PATCH bpf-next v3 29/29] bpf: samples: do not touch RLIMIT_MEMLOCK Roman Gushchin
2020-08-03 12:05 ` [PATCH bpf-next v3 00/29] bpf: switch to memcg-based memory accounting Daniel Borkmann
2020-08-03 15:34   ` Roman Gushchin
2020-08-03 16:39     ` Daniel Borkmann
2020-08-03 17:05       ` Roman Gushchin
2020-08-03 18:37         ` Daniel Borkmann
2020-08-03 19:06           ` Roman Gushchin
