* [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
@ 2022-06-04 22:20 Dave Marchevsky
  2022-06-08 23:02 ` Martin KaFai Lau
  2022-06-22 23:00 ` Yosry Ahmed
From: Dave Marchevsky @ 2022-06-04 22:20 UTC
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Kernel Team, Dave Marchevsky

Add benchmarks to demonstrate the performance cliff for local_storage
get as the number of local_storage maps increases beyond the current
local_storage implementation's cache size.

"sequential get" and "interleaved get" benchmarks are added, both of
which do many bpf_task_storage_get calls on sets of task local_storage
maps of various counts, while considering a single specific map to be
'important' and counting task_storage_gets to the important map
separately in addition to normal 'hits' count of all gets. Goal here is
to mimic scenario where a particular program using one map - the
important one - is running on a system where many other local_storage
maps exist and are accessed often.

While "sequential get" benchmark does bpf_task_storage_get for map 0, 1,
..., {9, 99, 999} in order, "interleaved" benchmark interleaves 4
bpf_task_storage_gets for the important map for every 10 map gets. This
is meant to highlight performance differences when important map is
accessed far more frequently than non-important maps.
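
To make the access patterns concrete, here is a minimal userspace-style
sketch. access_pattern() and do_get() are hypothetical names used only
for illustration; do_get() stands in for the benchmark's actual
do_lookup() helper in the BPF prog below, and the modulo logic mirrors
the real loop() callback:

/* sketch only - do_get() is a hypothetical stand-in for do_lookup() */
static void do_get(unsigned int map_idx);

static void access_pattern(unsigned int iters, unsigned int num_maps,
			   int interleave)
{
	unsigned int i, map_idx;

	for (i = 0; i < iters; i++) {
		map_idx = i % num_maps;
		do_get(map_idx);
		/* with num_maps >= 10, map_idx 0, 3, 6, 9 trigger this,
		 * i.e. 4 important-map gets per 10 sequential gets
		 */
		if (interleave && map_idx % 3 == 0)
			do_get(0);
	}
}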

A "hashmap control" benchmark is also included for easy comparison of
standard bpf hashmap lookup vs local_storage get. The benchmark is
similar to "sequential get", but creates and uses BPF_MAP_TYPE_HASH
instead of local storage. Only one inner map is created - a hashmap
meant to hold tid -> data mapping for all tasks. Size of the hashmap is
hardcoded to my system's PID_MAX_LIMIT (4,194,304). The number of these
keys which are actually fetched as part of the benchmark is
configurable.

The addition of this benchmark was inspired by a conversation with
Alexei in a previous patchset's thread [0], which highlighted the need
for such a benchmark to motivate and validate improvements to the
local_storage implementation. My approach in that series focused on
improving performance for explicitly-marked 'important' maps and was
rejected with feedback to make more generally-applicable improvements
while avoiding explicitly marking maps as important. Thus the benchmark
reports both general and important-map-focused metrics, so the effect
of future work on both is clear.

Benchmark results, from a powerful system (Skylake, 20 cores, 256GB
RAM):

Hashmap Control
===============
        num keys: 10
hashmap (control) sequential    get:  hits throughput: 33.748 ± 0.700 M ops/s, hits latency: 29.631 ns/op, important_hits throughput: 33.748 ± 0.700 M ops/s

        num keys: 1000
hashmap (control) sequential    get:  hits throughput: 29.997 ± 0.953 M ops/s, hits latency: 33.337 ns/op, important_hits throughput: 29.997 ± 0.953 M ops/s

        num keys: 10000
hashmap (control) sequential    get:  hits throughput: 22.828 ± 1.114 M ops/s, hits latency: 43.805 ns/op, important_hits throughput: 22.828 ± 1.114 M ops/s

        num keys: 100000
hashmap (control) sequential    get:  hits throughput: 17.595 ± 0.225 M ops/s, hits latency: 56.834 ns/op, important_hits throughput: 17.595 ± 0.225 M ops/s

        num keys: 4194304
hashmap (control) sequential    get:  hits throughput: 7.098 ± 0.757 M ops/s, hits latency: 140.878 ns/op, important_hits throughput: 7.098 ± 0.757 M ops/s

Local Storage
=============
        num_maps: 1
local_storage cache sequential  get:  hits throughput: 47.298 ± 0.180 M ops/s, hits latency: 21.142 ns/op, important_hits throughput: 47.298 ± 0.180 M ops/s
local_storage cache interleaved get:  hits throughput: 55.277 ± 0.888 M ops/s, hits latency: 18.091 ns/op, important_hits throughput: 55.277 ± 0.888 M ops/s

        num_maps: 10
local_storage cache sequential  get:  hits throughput: 40.240 ± 0.802 M ops/s, hits latency: 24.851 ns/op, important_hits throughput: 4.024 ± 0.080 M ops/s
local_storage cache interleaved get:  hits throughput: 48.701 ± 0.722 M ops/s, hits latency: 20.533 ns/op, important_hits throughput: 17.393 ± 0.258 M ops/s

        num_maps: 16
local_storage cache sequential  get:  hits throughput: 44.515 ± 0.708 M ops/s, hits latency: 22.464 ns/op, important_hits throughput: 2.782 ± 0.044 M ops/s
local_storage cache interleaved get:  hits throughput: 49.553 ± 2.260 M ops/s, hits latency: 20.181 ns/op, important_hits throughput: 15.767 ± 0.719 M ops/s

        num_maps: 17
local_storage cache sequential  get:  hits throughput: 38.778 ± 0.302 M ops/s, hits latency: 25.788 ns/op, important_hits throughput: 2.284 ± 0.018 M ops/s
local_storage cache interleaved get:  hits throughput: 43.848 ± 1.023 M ops/s, hits latency: 22.806 ns/op, important_hits throughput: 13.349 ± 0.311 M ops/s

        num_maps: 24
local_storage cache sequential  get:  hits throughput: 19.317 ± 0.568 M ops/s, hits latency: 51.769 ns/op, important_hits throughput: 0.806 ± 0.024 M ops/s
local_storage cache interleaved get:  hits throughput: 24.397 ± 0.272 M ops/s, hits latency: 40.989 ns/op, important_hits throughput: 6.863 ± 0.077 M ops/s

        num_maps: 32
local_storage cache sequential  get:  hits throughput: 13.333 ± 0.135 M ops/s, hits latency: 75.000 ns/op, important_hits throughput: 0.417 ± 0.004 M ops/s
local_storage cache interleaved get:  hits throughput: 16.898 ± 0.383 M ops/s, hits latency: 59.178 ns/op, important_hits throughput: 4.717 ± 0.107 M ops/s

        num_maps: 100
local_storage cache sequential  get:  hits throughput: 6.360 ± 0.107 M ops/s, hits latency: 157.233 ns/op, important_hits throughput: 0.064 ± 0.001 M ops/s
local_storage cache interleaved get:  hits throughput: 7.303 ± 0.362 M ops/s, hits latency: 136.930 ns/op, important_hits throughput: 1.907 ± 0.094 M ops/s

        num_maps: 1000
local_storage cache sequential  get:  hits throughput: 0.452 ± 0.010 M ops/s, hits latency: 2214.022 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
local_storage cache interleaved get:  hits throughput: 0.542 ± 0.007 M ops/s, hits latency: 1843.341 ns/op, important_hits throughput: 0.136 ± 0.002 M ops/s

Looking at the "sequential get" results, it's clear that as the number
of task local_storage maps grows beyond the current cache size (16),
there's a significant reduction in hits throughput. Note that the
current local_storage implementation assigns a cache_idx to maps as
they are created. Since "sequential get" creates maps 0..n in order and
then does bpf_task_storage_get calls in the same order, the benchmark
effectively ensures that a map will not be in cache when the program
tries to access it.
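
To make the eviction behavior concrete: my reading of the
implementation (not something this patch changes) is that the kernel
picks the least-used cache slot at map creation time, which degenerates
to round-robin assignment when maps are created in order and never
freed. A rough model under that assumption:

#define CACHE_SIZE 16	/* current local_storage cache size */

/* effective cache slot for map i, assuming round-robin assignment */
static inline unsigned int cache_idx(unsigned int map_i)
{
	return map_i % CACHE_SIZE;
}

Under this model maps i and i + 16 share a slot, so with more than 16
maps the sequential pattern overwrites map i's cached entry before
revisiting it, and every bpf_task_storage_get falls back to walking the
task's local_storage list.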

For "interleaved get" results, important-map hits throughput is greatly
increased as the important map is more likely to be in cache by virtue
of being accessed far more frequently. Throughput still reduces as #
maps increases, though.

To get a sense of the overhead of the benchmark program, I
commented out bpf_task_storage_get/bpf_map_lookup_elem in
local_storage_bench.c and ran the benchmark on the same host as the
'real' run. Results:

Hashmap Control
===============
        num keys: 10
hashmap (control) sequential    get:  hits throughput: 54.288 ± 0.655 M ops/s, hits latency: 18.420 ns/op, important_hits throughput: 54.288 ± 0.655 M ops/s

        num keys: 1000
hashmap (control) sequential    get:  hits throughput: 52.913 ± 0.519 M ops/s, hits latency: 18.899 ns/op, important_hits throughput: 52.913 ± 0.519 M ops/s

        num keys: 10000
hashmap (control) sequential    get:  hits throughput: 53.480 ± 1.235 M ops/s, hits latency: 18.699 ns/op, important_hits throughput: 53.480 ± 1.235 M ops/s

        num keys: 100000
hashmap (control) sequential    get:  hits throughput: 54.982 ± 1.902 M ops/s, hits latency: 18.188 ns/op, important_hits throughput: 54.982 ± 1.902 M ops/s

        num keys: 4194304
hashmap (control) sequential    get:  hits throughput: 50.858 ± 0.707 M ops/s, hits latency: 19.662 ns/op, important_hits throughput: 50.858 ± 0.707 M ops/s

Local Storage
=============
        num_maps: 1
local_storage cache sequential  get:  hits throughput: 110.990 ± 4.828 M ops/s, hits latency: 9.010 ns/op, important_hits throughput: 110.990 ± 4.828 M ops/s
local_storage cache interleaved get:  hits throughput: 161.057 ± 4.090 M ops/s, hits latency: 6.209 ns/op, important_hits throughput: 161.057 ± 4.090 M ops/s

        num_maps: 10
local_storage cache sequential  get:  hits throughput: 112.930 ± 1.079 M ops/s, hits latency: 8.855 ns/op, important_hits throughput: 11.293 ± 0.108 M ops/s
local_storage cache interleaved get:  hits throughput: 115.841 ± 2.088 M ops/s, hits latency: 8.633 ns/op, important_hits throughput: 41.372 ± 0.746 M ops/s

        num_maps: 16
local_storage cache sequential  get:  hits throughput: 115.653 ± 0.416 M ops/s, hits latency: 8.647 ns/op, important_hits throughput: 7.228 ± 0.026 M ops/s
local_storage cache interleaved get:  hits throughput: 138.717 ± 1.649 M ops/s, hits latency: 7.209 ns/op, important_hits throughput: 44.137 ± 0.525 M ops/s

        num_maps: 17
local_storage cache sequential  get:  hits throughput: 112.020 ± 1.649 M ops/s, hits latency: 8.927 ns/op, important_hits throughput: 6.598 ± 0.097 M ops/s
local_storage cache interleaved get:  hits throughput: 128.089 ± 1.960 M ops/s, hits latency: 7.807 ns/op, important_hits throughput: 38.995 ± 0.597 M ops/s

        num_maps: 24
local_storage cache sequential  get:  hits throughput: 92.447 ± 5.170 M ops/s, hits latency: 10.817 ns/op, important_hits throughput: 3.855 ± 0.216 M ops/s
local_storage cache interleaved get:  hits throughput: 128.844 ± 2.808 M ops/s, hits latency: 7.761 ns/op, important_hits throughput: 36.245 ± 0.790 M ops/s

        num_maps: 32
local_storage cache sequential  get:  hits throughput: 102.042 ± 1.462 M ops/s, hits latency: 9.800 ns/op, important_hits throughput: 3.194 ± 0.046 M ops/s
local_storage cache interleaved get:  hits throughput: 126.577 ± 1.818 M ops/s, hits latency: 7.900 ns/op, important_hits throughput: 35.332 ± 0.507 M ops/s

        num_maps: 100
local_storage cache sequential  get:  hits throughput: 111.327 ± 1.401 M ops/s, hits latency: 8.983 ns/op, important_hits throughput: 1.113 ± 0.014 M ops/s
local_storage cache interleaved get:  hits throughput: 131.327 ± 1.339 M ops/s, hits latency: 7.615 ns/op, important_hits throughput: 34.302 ± 0.350 M ops/s

        num_maps: 1000
local_storage cache sequential  get:  hits throughput: 101.978 ± 0.563 M ops/s, hits latency: 9.806 ns/op, important_hits throughput: 0.102 ± 0.001 M ops/s
local_storage cache interleaved get:  hits throughput: 141.084 ± 1.098 M ops/s, hits latency: 7.088 ns/op, important_hits throughput: 35.430 ± 0.276 M ops/s

Adjusting for overhead (subtracting each overhead-only latency from the
corresponding 'real' latency, e.g. sequential_get_32: 75.000 - 9.800 =
~65.2ns), latency numbers for "hashmap control" and "sequential get"
are:

hashmap_control_1k:   ~14.4ns
hashmap_control_10k:  ~25.1ns
hashmap_control_100k: ~38.6ns
sequential_get_1:     ~12.1ns
sequential_get_10:    ~16.0ns
sequential_get_16:    ~13.8ns
sequential_get_17:    ~16.8ns
sequential_get_24:    ~40.9ns
sequential_get_32:    ~65.2ns
sequential_get_100:   ~148.2ns
sequential_get_1000:  ~2204ns

These adjusted numbers clearly demonstrate a cliff once the number of
maps exceeds the cache size.

In the discussion for v1 of this patchset, Alexei noted that
local_storage was 2.5x faster than a large hashmap [1]. The benchmark
results confirm that this is still the case: a long-running BPF
application putting some pid-specific info into a hashmap for each pid
it sees will probably see on the order of 10-100k pids. Bench numbers
for hashmaps of this size (~25.1-38.6ns adjusted) are ~2.5x slower than
sequential_get_16 (~13.8ns), but as the number of local_storage maps
grows past the local_storage cache size, the performance advantage
reverses.

When running the benchmarks it may be necessary to bump the 'open
files' ulimit for a successful run, since the setup phase keeps an open
fd for every created map; see the sketch below.
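
A minimal sketch of doing the same bump programmatically via
setrlimit(2); the patch itself does not do this, and 8192 is an
arbitrary value comfortably above the 1000-map maximum plus the
process's other fds (raising the hard limit needs privileges the bench
already requires for BPF):

#include <stdio.h>
#include <sys/resource.h>

static void bump_nofile_rlimit(void)
{
	/* each bpf_map_create() during setup leaves an fd open */
	struct rlimit r = { .rlim_cur = 8192, .rlim_max = 8192 };

	if (setrlimit(RLIMIT_NOFILE, &r))
		perror("setrlimit(RLIMIT_NOFILE)");
}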

  [0]: https://lore.kernel.org/all/20220420002143.1096548-1-davemarchevsky@fb.com
  [1]: https://lore.kernel.org/bpf/20220511173305.ftldpn23m4ski3d3@MBP-98dd607d3435.dhcp.thefacebook.com/

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
Changelog:

v4 -> v5:
  * Remove 2nd patch refactoring mean/stddev calculations (Andrii)
  * Change "hashmap control" benchmark to use one big hashmap w/
    configurable number of keys accessed (Andrii)
    * Add discussion of "hashmap control" vs "sequential get" numbers

v3 -> v4:
  * Remove ifs guarding increments in measure fn (Andrii)
  * Refactor to use 1 bpf prog for all 3 benchmarks w/ global vars set
    from userspace before load to control behavior (Andrii)
  * Greatly reduce variance in overhead by having benchmark bpf prog
    loop 10k times regardless of map count (Andrii)
    * Also, move sync_fetch_and_incr out of do_lookup as the guaranteed
      second sync_fetch_and_incr call for num_maps = 1 was adding
      overhead
  * Add second patch refactoring bench.c's mean/stddev calculations
    in reporting helper fns

v2 -> v3:
  * Accessing 1k maps in ARRAY_OF_MAPS doesn't hit MAX_USED_MAPS limit,
    so just use 1 program (Alexei)

v1 -> v2:
  * Adopt ARRAY_OF_MAPS approach for bpf program, allowing truly
    configurable # of maps (Andrii)
  * Add hashmap benchmark (Alexei)
  * Add discussion of overhead

 tools/testing/selftests/bpf/Makefile          |   4 +-
 tools/testing/selftests/bpf/bench.c           |  55 ++++
 tools/testing/selftests/bpf/bench.h           |   4 +
 .../bpf/benchs/bench_local_storage.c          | 268 ++++++++++++++++++
 .../bpf/benchs/run_bench_local_storage.sh     |  24 ++
 .../selftests/bpf/benchs/run_common.sh        |  17 ++
 .../selftests/bpf/progs/local_storage_bench.c | 104 +++++++
 7 files changed, 475 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/benchs/bench_local_storage.c
 create mode 100755 tools/testing/selftests/bpf/benchs/run_bench_local_storage.sh
 create mode 100644 tools/testing/selftests/bpf/progs/local_storage_bench.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 2d3c8c8f558a..f82f77075f67 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -560,6 +560,7 @@ $(OUTPUT)/bench_ringbufs.o: $(OUTPUT)/ringbuf_bench.skel.h \
 $(OUTPUT)/bench_bloom_filter_map.o: $(OUTPUT)/bloom_filter_bench.skel.h
 $(OUTPUT)/bench_bpf_loop.o: $(OUTPUT)/bpf_loop_bench.skel.h
 $(OUTPUT)/bench_strncmp.o: $(OUTPUT)/strncmp_bench.skel.h
+$(OUTPUT)/bench_local_storage.o: $(OUTPUT)/local_storage_bench.skel.h
 $(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ)
 $(OUTPUT)/bench: LDLIBS += -lm
 $(OUTPUT)/bench: $(OUTPUT)/bench.o \
@@ -571,7 +572,8 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \
 		 $(OUTPUT)/bench_ringbufs.o \
 		 $(OUTPUT)/bench_bloom_filter_map.o \
 		 $(OUTPUT)/bench_bpf_loop.o \
-		 $(OUTPUT)/bench_strncmp.o
+		 $(OUTPUT)/bench_strncmp.o \
+		 $(OUTPUT)/bench_local_storage.o
 	$(call msg,BINARY,,$@)
 	$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
 
diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
index f061cc20e776..32399554f89b 100644
--- a/tools/testing/selftests/bpf/bench.c
+++ b/tools/testing/selftests/bpf/bench.c
@@ -150,6 +150,53 @@ void ops_report_final(struct bench_res res[], int res_cnt)
 	printf("latency %8.3lf ns/op\n", 1000.0 / hits_mean * env.producer_cnt);
 }
 
+void local_storage_report_progress(int iter, struct bench_res *res,
+				   long delta_ns)
+{
+	double important_hits_per_sec, hits_per_sec;
+	double delta_sec = delta_ns / 1000000000.0;
+
+	hits_per_sec = res->hits / 1000000.0 / delta_sec;
+	important_hits_per_sec = res->important_hits / 1000000.0 / delta_sec;
+
+	printf("Iter %3d (%7.3lfus): ", iter, (delta_ns - 1000000000) / 1000.0);
+
+	printf("hits %8.3lfM/s ", hits_per_sec);
+	printf("important_hits %8.3lfM/s\n", important_hits_per_sec);
+}
+
+void local_storage_report_final(struct bench_res res[], int res_cnt)
+{
+	double important_hits_mean = 0.0, important_hits_stddev = 0.0;
+	double hits_mean = 0.0, hits_stddev = 0.0;
+	int i;
+
+	for (i = 0; i < res_cnt; i++) {
+		hits_mean += res[i].hits / 1000000.0 / (0.0 + res_cnt);
+		important_hits_mean += res[i].important_hits / 1000000.0 / (0.0 + res_cnt);
+	}
+
+	if (res_cnt > 1)  {
+		for (i = 0; i < res_cnt; i++) {
+			hits_stddev += (hits_mean - res[i].hits / 1000000.0) *
+				       (hits_mean - res[i].hits / 1000000.0) /
+				       (res_cnt - 1.0);
+			important_hits_stddev +=
+				       (important_hits_mean - res[i].important_hits / 1000000.0) *
+				       (important_hits_mean - res[i].important_hits / 1000000.0) /
+				       (res_cnt - 1.0);
+		}
+
+		hits_stddev = sqrt(hits_stddev);
+		important_hits_stddev = sqrt(important_hits_stddev);
+	}
+	printf("Summary: hits throughput %8.3lf \u00B1 %5.3lf M ops/s, ",
+	       hits_mean, hits_stddev);
+	printf("hits latency %8.3lf ns/op, ", 1000.0 / hits_mean);
+	printf("important_hits throughput %8.3lf \u00B1 %5.3lf M ops/s\n",
+	       important_hits_mean, important_hits_stddev);
+}
+
 const char *argp_program_version = "benchmark";
 const char *argp_program_bug_address = "<bpf@vger.kernel.org>";
 const char argp_program_doc[] =
@@ -188,12 +235,14 @@ static const struct argp_option opts[] = {
 extern struct argp bench_ringbufs_argp;
 extern struct argp bench_bloom_map_argp;
 extern struct argp bench_bpf_loop_argp;
+extern struct argp bench_local_storage_argp;
 extern struct argp bench_strncmp_argp;
 
 static const struct argp_child bench_parsers[] = {
 	{ &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 },
 	{ &bench_bloom_map_argp, 0, "Bloom filter map benchmark", 0 },
 	{ &bench_bpf_loop_argp, 0, "bpf_loop helper benchmark", 0 },
+	{ &bench_local_storage_argp, 0, "local_storage benchmark", 0 },
 	{ &bench_strncmp_argp, 0, "bpf_strncmp helper benchmark", 0 },
 	{},
 };
@@ -396,6 +445,9 @@ extern const struct bench bench_hashmap_with_bloom;
 extern const struct bench bench_bpf_loop;
 extern const struct bench bench_strncmp_no_helper;
 extern const struct bench bench_strncmp_helper;
+extern const struct bench bench_local_storage_cache_seq_get;
+extern const struct bench bench_local_storage_cache_interleaved_get;
+extern const struct bench bench_local_storage_cache_hashmap_control;
 
 static const struct bench *benchs[] = {
 	&bench_count_global,
@@ -430,6 +482,9 @@ static const struct bench *benchs[] = {
 	&bench_bpf_loop,
 	&bench_strncmp_no_helper,
 	&bench_strncmp_helper,
+	&bench_local_storage_cache_seq_get,
+	&bench_local_storage_cache_interleaved_get,
+	&bench_local_storage_cache_hashmap_control,
 };
 
 static void setup_benchmark()
diff --git a/tools/testing/selftests/bpf/bench.h b/tools/testing/selftests/bpf/bench.h
index fb3e213df3dc..4b15286753ba 100644
--- a/tools/testing/selftests/bpf/bench.h
+++ b/tools/testing/selftests/bpf/bench.h
@@ -34,6 +34,7 @@ struct bench_res {
 	long hits;
 	long drops;
 	long false_hits;
+	long important_hits;
 };
 
 struct bench {
@@ -61,6 +62,9 @@ void false_hits_report_progress(int iter, struct bench_res *res, long delta_ns);
 void false_hits_report_final(struct bench_res res[], int res_cnt);
 void ops_report_progress(int iter, struct bench_res *res, long delta_ns);
 void ops_report_final(struct bench_res res[], int res_cnt);
+void local_storage_report_progress(int iter, struct bench_res *res,
+				   long delta_ns);
+void local_storage_report_final(struct bench_res res[], int res_cnt);
 
 static inline __u64 get_time_ns(void)
 {
diff --git a/tools/testing/selftests/bpf/benchs/bench_local_storage.c b/tools/testing/selftests/bpf/benchs/bench_local_storage.c
new file mode 100644
index 000000000000..9a3fd5295db1
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/bench_local_storage.c
@@ -0,0 +1,268 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <argp.h>
+#include <linux/btf.h>
+
+#include "local_storage_bench.skel.h"
+#include "bench.h"
+
+#include <test_btf.h>
+
+static struct {
+	__u32 nr_maps;
+	__u32 hashmap_nr_keys_used;
+} args = {
+	.nr_maps = 1000,
+	.hashmap_nr_keys_used = 1000,
+};
+
+enum {
+	ARG_NR_MAPS = 6000,
+	ARG_HASHMAP_NR_KEYS_USED = 6001,
+};
+
+static const struct argp_option opts[] = {
+	{ "nr_maps", ARG_NR_MAPS, "NR_MAPS", 0,
+		"Set number of local_storage maps"},
+	{ "hashmap_nr_keys_used", ARG_HASHMAP_NR_KEYS_USED, "NR_KEYS",
+		0, "When doing hashmap test, set number of hashmap keys test uses"},
+	{},
+};
+
+static error_t parse_arg(int key, char *arg, struct argp_state *state)
+{
+	long ret;
+
+	switch (key) {
+	case ARG_NR_MAPS:
+		ret = strtol(arg, NULL, 10);
+		if (ret < 1 || ret > UINT_MAX) {
+			fprintf(stderr, "invalid nr_maps");
+			argp_usage(state);
+		}
+		args.nr_maps = ret;
+		break;
+	case ARG_HASHMAP_NR_KEYS_USED:
+		ret = strtol(arg, NULL, 10);
+		if (ret < 1 || ret > UINT_MAX) {
+			fprintf(stderr, "invalid hashmap_nr_keys_used");
+			argp_usage(state);
+		}
+		args.hashmap_nr_keys_used = ret;
+		break;
+	default:
+		return ARGP_ERR_UNKNOWN;
+	}
+
+	return 0;
+}
+
+const struct argp bench_local_storage_argp = {
+	.options = opts,
+	.parser = parse_arg,
+};
+
+/* Keep in sync w/ array of maps in bpf */
+#define MAX_NR_MAPS 1000
+/* keep in sync w/ same define in bpf */
+#define HASHMAP_SZ 4194304
+
+static void validate(void)
+{
+	if (env.producer_cnt != 1) {
+		fprintf(stderr, "benchmark doesn't support multi-producer!\n");
+		exit(1);
+	}
+	if (env.consumer_cnt != 1) {
+		fprintf(stderr, "benchmark doesn't support multi-consumer!\n");
+		exit(1);
+	}
+
+	if (args.nr_maps > MAX_NR_MAPS) {
+		fprintf(stderr, "nr_maps must be <= 1000\n");
+		exit(1);
+	}
+
+	if (args.hashmap_nr_keys_used > HASHMAP_SZ) {
+		fprintf(stderr, "hashmap_nr_keys_used must be <= %u\n", HASHMAP_SZ);
+		exit(1);
+	}
+}
+
+static struct {
+	struct local_storage_bench *skel;
+	void *bpf_obj;
+	struct bpf_map *array_of_maps;
+} ctx;
+
+static void __setup(struct bpf_program *prog, bool hashmap)
+{
+	struct bpf_map *inner_map;
+	int i, fd, mim_fd, err;
+
+	LIBBPF_OPTS(bpf_map_create_opts, create_opts);
+
+	if (!hashmap)
+		create_opts.map_flags = BPF_F_NO_PREALLOC;
+
+	ctx.skel->rodata->num_maps = args.nr_maps;
+	ctx.skel->rodata->hashmap_num_keys = args.hashmap_nr_keys_used;
+	inner_map = bpf_map__inner_map(ctx.array_of_maps);
+	create_opts.btf_key_type_id = bpf_map__btf_key_type_id(inner_map);
+	create_opts.btf_value_type_id = bpf_map__btf_value_type_id(inner_map);
+
+	err = local_storage_bench__load(ctx.skel);
+	if (err) {
+		fprintf(stderr, "Error loading skeleton\n");
+		goto err_out;
+	}
+
+	create_opts.btf_fd = bpf_object__btf_fd(ctx.skel->obj);
+
+	mim_fd = bpf_map__fd(ctx.array_of_maps);
+	if (mim_fd < 0) {
+		fprintf(stderr, "Error getting map_in_map fd\n");
+		goto err_out;
+	}
+
+	for (i = 0; i < args.nr_maps; i++) {
+		if (hashmap)
+			fd = bpf_map_create(BPF_MAP_TYPE_HASH, NULL, sizeof(int),
+					    sizeof(int), HASHMAP_SZ, &create_opts);
+		else
+			fd = bpf_map_create(BPF_MAP_TYPE_TASK_STORAGE, NULL, sizeof(int),
+					    sizeof(int), 0, &create_opts);
+		if (fd < 0) {
+			fprintf(stderr, "Error creating map %d: %d\n", i, fd);
+			goto err_out;
+		}
+
+		err = bpf_map_update_elem(mim_fd, &i, &fd, 0);
+		if (err) {
+			fprintf(stderr, "Error updating array-of-maps w/ map %d\n", i);
+			goto err_out;
+		}
+	}
+
+	if (!bpf_program__attach(prog)) {
+		fprintf(stderr, "Error attaching bpf program\n");
+		goto err_out;
+	}
+
+	return;
+err_out:
+	exit(1);
+}
+
+static void hashmap_setup(void)
+{
+	struct local_storage_bench *skel;
+
+	setup_libbpf();
+
+	skel = local_storage_bench__open();
+	ctx.skel = skel;
+	ctx.array_of_maps = skel->maps.array_of_hash_maps;
+	skel->rodata->use_hashmap = 1;
+	skel->rodata->interleave = 0;
+
+	__setup(skel->progs.get_local, true);
+}
+
+static void local_storage_cache_get_setup(void)
+{
+	struct local_storage_bench *skel;
+
+	setup_libbpf();
+
+	skel = local_storage_bench__open();
+	ctx.skel = skel;
+	ctx.array_of_maps = skel->maps.array_of_local_storage_maps;
+	skel->rodata->use_hashmap = 0;
+	skel->rodata->interleave = 0;
+
+	__setup(skel->progs.get_local, false);
+}
+
+static void local_storage_cache_get_interleaved_setup(void)
+{
+	struct local_storage_bench *skel;
+
+	setup_libbpf();
+
+	skel = local_storage_bench__open();
+	ctx.skel = skel;
+	ctx.array_of_maps = skel->maps.array_of_local_storage_maps;
+	skel->rodata->use_hashmap = 0;
+	skel->rodata->interleave = 1;
+
+	__setup(skel->progs.get_local, false);
+}
+
+static void measure(struct bench_res *res)
+{
+	res->hits = atomic_swap(&ctx.skel->bss->hits, 0);
+	res->important_hits = atomic_swap(&ctx.skel->bss->important_hits, 0);
+}
+
+static inline void trigger_bpf_program(void)
+{
+	syscall(__NR_getpgid);
+}
+
+static void *consumer(void *input)
+{
+	return NULL;
+}
+
+static void *producer(void *input)
+{
+	while (true)
+		trigger_bpf_program();
+
+	return NULL;
+}
+
+/* cache sequential and interleaved get benchs test local_storage get
+ * performance, specifically they demonstrate performance cliff of
+ * current list-plus-cache local_storage model.
+ *
+ * cache sequential get: call bpf_task_storage_get on n maps in order
+ * cache interleaved get: like "sequential get", but interleave 4 calls to the
+ *	'important' map (idx 0 in array_of_maps) for every 10 calls. Goal
+ *	is to mimic environment where many progs are accessing their local_storage
+ *	maps, with 'our' prog needing to access its map more often than others
+ */
+const struct bench bench_local_storage_cache_seq_get = {
+	.name = "local-storage-cache-seq-get",
+	.validate = validate,
+	.setup = local_storage_cache_get_setup,
+	.producer_thread = producer,
+	.consumer_thread = consumer,
+	.measure = measure,
+	.report_progress = local_storage_report_progress,
+	.report_final = local_storage_report_final,
+};
+
+const struct bench bench_local_storage_cache_interleaved_get = {
+	.name = "local-storage-cache-int-get",
+	.validate = validate,
+	.setup = local_storage_cache_get_interleaved_setup,
+	.producer_thread = producer,
+	.consumer_thread = consumer,
+	.measure = measure,
+	.report_progress = local_storage_report_progress,
+	.report_final = local_storage_report_final,
+};
+
+const struct bench bench_local_storage_cache_hashmap_control = {
+	.name = "local-storage-cache-hashmap-control",
+	.validate = validate,
+	.setup = hashmap_setup,
+	.producer_thread = producer,
+	.consumer_thread = consumer,
+	.measure = measure,
+	.report_progress = local_storage_report_progress,
+	.report_final = local_storage_report_final,
+};
diff --git a/tools/testing/selftests/bpf/benchs/run_bench_local_storage.sh b/tools/testing/selftests/bpf/benchs/run_bench_local_storage.sh
new file mode 100755
index 000000000000..2eb2b513a173
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/run_bench_local_storage.sh
@@ -0,0 +1,24 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+source ./benchs/run_common.sh
+
+set -eufo pipefail
+
+header "Hashmap Control"
+for i in 10 1000 10000 100000 4194304; do
+subtitle "num keys: $i"
+	summarize_local_storage "hashmap (control) sequential    get: "\
+		"$(./bench --nr_maps 1 --hashmap_nr_keys_used=$i local-storage-cache-hashmap-control)"
+	printf "\n"
+done
+
+header "Local Storage"
+for i in 1 10 16 17 24 32 100 1000; do
+subtitle "num_maps: $i"
+	summarize_local_storage "local_storage cache sequential  get: "\
+		"$(./bench --nr_maps $i local-storage-cache-seq-get)"
+	summarize_local_storage "local_storage cache interleaved get: "\
+		"$(./bench --nr_maps $i local-storage-cache-int-get)"
+	printf "\n"
+done
diff --git a/tools/testing/selftests/bpf/benchs/run_common.sh b/tools/testing/selftests/bpf/benchs/run_common.sh
index 6c5e6023a69f..d9f40af82006 100644
--- a/tools/testing/selftests/bpf/benchs/run_common.sh
+++ b/tools/testing/selftests/bpf/benchs/run_common.sh
@@ -41,6 +41,16 @@ function ops()
 	echo "$*" | sed -E "s/.*latency\s+([0-9]+\.[0-9]+\sns\/op).*/\1/"
 }
 
+function local_storage()
+{
+	echo -n "hits throughput: "
+	echo -n "$*" | sed -E "s/.* hits throughput\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+\sM\sops\/s).*/\1/"
+	echo -n -e ", hits latency: "
+	echo -n "$*" | sed -E "s/.* hits latency\s+([0-9]+\.[0-9]+\sns\/op).*/\1/"
+	echo -n ", important_hits throughput: "
+	echo "$*" | sed -E "s/.*important_hits throughput\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+\sM\sops\/s).*/\1/"
+}
+
 function total()
 {
 	echo "$*" | sed -E "s/.*total operations\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+M\/s).*/\1/"
@@ -67,6 +77,13 @@ function summarize_ops()
 	printf "%-20s %s\n" "$bench" "$(ops $summary)"
 }
 
+function summarize_local_storage()
+{
+	bench="$1"
+	summary=$(echo $2 | tail -n1)
+	printf "%-20s %s\n" "$bench" "$(local_storage $summary)"
+}
+
 function summarize_total()
 {
 	bench="$1"
diff --git a/tools/testing/selftests/bpf/progs/local_storage_bench.c b/tools/testing/selftests/bpf/progs/local_storage_bench.c
new file mode 100644
index 000000000000..2c3234c5b73a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/local_storage_bench.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#define HASHMAP_SZ 4194304
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, 1000);
+	__type(key, int);
+	__type(value, int);
+	__array(values, struct {
+		__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+		__uint(map_flags, BPF_F_NO_PREALLOC);
+		__type(key, int);
+		__type(value, int);
+	});
+} array_of_local_storage_maps SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, 1000);
+	__type(key, int);
+	__type(value, int);
+	__array(values, struct {
+		__uint(type, BPF_MAP_TYPE_HASH);
+		__uint(max_entries, HASHMAP_SZ);
+		__type(key, int);
+		__type(value, int);
+	});
+} array_of_hash_maps SEC(".maps");
+
+long important_hits;
+long hits;
+
+/* set from user-space */
+const volatile unsigned int use_hashmap;
+const volatile unsigned int hashmap_num_keys;
+const volatile unsigned int num_maps;
+const volatile unsigned int interleave;
+
+struct loop_ctx {
+	struct task_struct *task;
+	long loop_hits;
+	long loop_important_hits;
+};
+
+static int do_lookup(unsigned int elem, struct loop_ctx *lctx)
+{
+	void *map, *inner_map;
+	int idx = 0;
+
+	if (use_hashmap)
+		map = &array_of_hash_maps;
+	else
+		map = &array_of_local_storage_maps;
+
+	inner_map = bpf_map_lookup_elem(map, &elem);
+	if (!inner_map)
+		return -1;
+
+	if (use_hashmap) {
+		idx = bpf_get_prandom_u32() % hashmap_num_keys;
+		bpf_map_lookup_elem(inner_map, &idx);
+	} else {
+		bpf_task_storage_get(inner_map, lctx->task, &idx,
+				     BPF_LOCAL_STORAGE_GET_F_CREATE);
+	}
+
+	lctx->loop_hits++;
+	if (!elem)
+		lctx->loop_important_hits++;
+	return 0;
+}
+
+static long loop(u32 index, void *ctx)
+{
+	struct loop_ctx *lctx = (struct loop_ctx *)ctx;
+	unsigned int map_idx = index % num_maps;
+
+	do_lookup(map_idx, lctx);
+	if (interleave && map_idx % 3 == 0)
+		do_lookup(0, lctx);
+	return 0;
+}
+
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
+int get_local(void *ctx)
+{
+	struct loop_ctx lctx;
+
+	lctx.task = bpf_get_current_task_btf();
+	lctx.loop_hits = 0;
+	lctx.loop_important_hits = 0;
+	bpf_loop(10000, &loop, &lctx, 0);
+	__sync_add_and_fetch(&hits, lctx.loop_hits);
+	__sync_add_and_fetch(&important_hits, lctx.loop_important_hits);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.30.2



* Re: [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
  2022-06-04 22:20 [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get Dave Marchevsky
@ 2022-06-08 23:02 ` Martin KaFai Lau
  2022-06-09 14:27   ` Dave Marchevsky
  2022-06-22 23:00 ` Yosry Ahmed
From: Martin KaFai Lau @ 2022-06-08 23:02 UTC
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team

On Sat, Jun 04, 2022 at 03:20:06PM -0700, Dave Marchevsky wrote:
> Add benchmarks to demonstrate the performance cliff for local_storage
> get as the number of local_storage maps increases beyond the current
> local_storage implementation's cache size.
Thanks for working on this.  Have some high level comments and questions.

> "sequential get" and "interleaved get" benchmarks are added, both of
> which do many bpf_task_storage_get calls on sets of task local_storage
> maps of various counts, while considering a single specific map to be
> 'important' and counting task_storage_gets to the important map
> separately in addition to normal 'hits' count of all gets. Goal here is
> to mimic scenario where a particular program using one map - the
> important one - is running on a system where many other local_storage
> maps exist and are accessed often.
> 
> While "sequential get" benchmark does bpf_task_storage_get for map 0, 1,
> ..., {9, 99, 999} in order, "interleaved" benchmark interleaves 4
> bpf_task_storage_gets for the important map for every 10 map gets. This
> is meant to highlight performance differences when important map is
> accessed far more frequently than non-important maps.
> 
> A "hashmap control" benchmark is also included for easy comparison of
> standard bpf hashmap lookup vs local_storage get. The benchmark is
> similar to "sequential get", but creates and uses BPF_MAP_TYPE_HASH
> instead of local storage. Only one inner map is created - a hashmap
> meant to hold tid -> data mapping for all tasks. Size of the hashmap is
> hardcoded to my system's PID_MAX_LIMIT (4,194,304). The number of these
> keys which are actually fetched as part of the benchmark is
> configurable.
Note that the key size of the hashmap in the socket use case could make
a difference also.  It usually uses the four tuples (src/dst ip6/port).
Not necessarily something that needs to be configurable now, but it
would be a nice thing to do later.

> 
> Addition of this benchmark is inspired by conversation with Alexei in a
> previous patchset's thread [0], which highlighted the need for such a
> benchmark to motivate and validate improvements to local_storage
> implementation. My approach in that series focused on improving
> performance for explicitly-marked 'important' maps and was rejected
> with feedback to make more generally-applicable improvements while
> avoiding explicitly marking maps as important. Thus the benchmark
> reports both general and important-map-focused metrics, so effect of
> future work on both is clear.
> 
> Regarding the benchmark results. On a powerful system (Skylake, 20
> cores, 256gb ram):
> 
> Hashmap Control
> ===============
>         num keys: 10
> hashmap (control) sequential    get:  hits throughput: 33.748 ± 0.700 M ops/s, hits latency: 29.631 ns/op, important_hits throughput: 33.748 ± 0.700 M ops/s
> 
>         num keys: 1000
> hashmap (control) sequential    get:  hits throughput: 29.997 ± 0.953 M ops/s, hits latency: 33.337 ns/op, important_hits throughput: 29.997 ± 0.953 M ops/s
> 
>         num keys: 10000
> hashmap (control) sequential    get:  hits throughput: 22.828 ± 1.114 M ops/s, hits latency: 43.805 ns/op, important_hits throughput: 22.828 ± 1.114 M ops/s
> 
>         num keys: 100000
> hashmap (control) sequential    get:  hits throughput: 17.595 ± 0.225 M ops/s, hits latency: 56.834 ns/op, important_hits throughput: 17.595 ± 0.225 M ops/s
> 
>         num keys: 4194304
> hashmap (control) sequential    get:  hits throughput: 7.098 ± 0.757 M ops/s, hits latency: 140.878 ns/op, important_hits throughput: 7.098 ± 0.757 M ops/s
> 
> Local Storage
> =============
>         num_maps: 1
> local_storage cache sequential  get:  hits throughput: 47.298 ± 0.180 M ops/s, hits latency: 21.142 ns/op, important_hits throughput: 47.298 ± 0.180 M ops/s
> local_storage cache interleaved get:  hits throughput: 55.277 ± 0.888 M ops/s, hits latency: 18.091 ns/op, important_hits throughput: 55.277 ± 0.888 M ops/s
> 
>         num_maps: 10
> local_storage cache sequential  get:  hits throughput: 40.240 ± 0.802 M ops/s, hits latency: 24.851 ns/op, important_hits throughput: 4.024 ± 0.080 M ops/s
> local_storage cache interleaved get:  hits throughput: 48.701 ± 0.722 M ops/s, hits latency: 20.533 ns/op, important_hits throughput: 17.393 ± 0.258 M ops/s
iiuc, important_hits is only useful for the 'interleaved get' test?

and the important_hits is always a certain fraction of the total gets.
For num_maps: 10 here, 4 extra for every 10 gets,
so 4/14 ~ 28% of the total gets?

> 
>         num_maps: 16
> local_storage cache sequential  get:  hits throughput: 44.515 ± 0.708 M ops/s, hits latency: 22.464 ns/op, important_hits throughput: 2.782 ± 0.044 M ops/s
> local_storage cache interleaved get:  hits throughput: 49.553 ± 2.260 M ops/s, hits latency: 20.181 ns/op, important_hits throughput: 15.767 ± 0.719 M ops/s
> 
>         num_maps: 17
> local_storage cache sequential  get:  hits throughput: 38.778 ± 0.302 M ops/s, hits latency: 25.788 ns/op, important_hits throughput: 2.284 ± 0.018 M ops/s
> local_storage cache interleaved get:  hits throughput: 43.848 ± 1.023 M ops/s, hits latency: 22.806 ns/op, important_hits throughput: 13.349 ± 0.311 M ops/s
> 
>         num_maps: 24
> local_storage cache sequential  get:  hits throughput: 19.317 ± 0.568 M ops/s, hits latency: 51.769 ns/op, important_hits throughput: 0.806 ± 0.024 M ops/s
> local_storage cache interleaved get:  hits throughput: 24.397 ± 0.272 M ops/s, hits latency: 40.989 ns/op, important_hits throughput: 6.863 ± 0.077 M ops/s
> 
>         num_maps: 32
> local_storage cache sequential  get:  hits throughput: 13.333 ± 0.135 M ops/s, hits latency: 75.000 ns/op, important_hits throughput: 0.417 ± 0.004 M ops/s
> local_storage cache interleaved get:  hits throughput: 16.898 ± 0.383 M ops/s, hits latency: 59.178 ns/op, important_hits throughput: 4.717 ± 0.107 M ops/s
> 
>         num_maps: 100
> local_storage cache sequential  get:  hits throughput: 6.360 ± 0.107 M ops/s, hits latency: 157.233 ns/op, important_hits throughput: 0.064 ± 0.001 M ops/s
> local_storage cache interleaved get:  hits throughput: 7.303 ± 0.362 M ops/s, hits latency: 136.930 ns/op, important_hits throughput: 1.907 ± 0.094 M ops/s
> 
>         num_maps: 1000
> local_storage cache sequential  get:  hits throughput: 0.452 ± 0.010 M ops/s, hits latency: 2214.022 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
> local_storage cache interleaved get:  hits throughput: 0.542 ± 0.007 M ops/s, hits latency: 1843.341 ns/op, important_hits throughput: 0.136 ± 0.002 M ops/s
> 
> Looking at the "sequential get" results, it's clear that as the
> number of task local_storage maps grows beyond the current cache size
> (16), there's a significant reduction in hits throughput. Note that
> current local_storage implementation assigns a cache_idx to maps as they
> are created. Since "sequential get" is creating maps 0..n in order and
> then doing bpf_task_storage_get calls in the same order, the benchmark
> is effectively ensuring that a map will not be in cache when the program
> tries to access it.
> 
> For "interleaved get" results, important-map hits throughput is greatly
> increased as the important map is more likely to be in cache by virtue
> of being accessed far more frequently. Throughput still reduces as #
> maps increases, though.
> 
> To get a sense of the overhead of the benchmark program, I
> commented out bpf_task_storage_get/bpf_map_lookup_elem in
> local_storage_bench.c and ran the benchmark on the same host as the
> 'real' run. Results:
> Hashmap Control
> ===============
>         num keys: 10
> hashmap (control) sequential    get:  hits throughput: 54.288 ± 0.655 M ops/s, hits latency: 18.420 ns/op, important_hits throughput: 54.288 ± 0.655 M ops/s
> 
>         num keys: 1000
> hashmap (control) sequential    get:  hits throughput: 52.913 ± 0.519 M ops/s, hits latency: 18.899 ns/op, important_hits throughput: 52.913 ± 0.519 M ops/s
> 
>         num keys: 10000
> hashmap (control) sequential    get:  hits throughput: 53.480 ± 1.235 M ops/s, hits latency: 18.699 ns/op, important_hits throughput: 53.480 ± 1.235 M ops/s
> 
>         num keys: 100000
> hashmap (control) sequential    get:  hits throughput: 54.982 ± 1.902 M ops/s, hits latency: 18.188 ns/op, important_hits throughput: 54.982 ± 1.902 M ops/s
> 
>         num keys: 4194304
> hashmap (control) sequential    get:  hits throughput: 50.858 ± 0.707 M ops/s, hits latency: 19.662 ns/op, important_hits throughput: 50.858 ± 0.707 M ops/s
> 
> Local Storage
> =============
>         num_maps: 1
> local_storage cache sequential  get:  hits throughput: 110.990 ± 4.828 M ops/s, hits latency: 9.010 ns/op, important_hits throughput: 110.990 ± 4.828 M ops/s
> local_storage cache interleaved get:  hits throughput: 161.057 ± 4.090 M ops/s, hits latency: 6.209 ns/op, important_hits throughput: 161.057 ± 4.090 M ops/s
> 
>         num_maps: 10
> local_storage cache sequential  get:  hits throughput: 112.930 ± 1.079 M ops/s, hits latency: 8.855 ns/op, important_hits throughput: 11.293 ± 0.108 M ops/s
> local_storage cache interleaved get:  hits throughput: 115.841 ± 2.088 M ops/s, hits latency: 8.633 ns/op, important_hits throughput: 41.372 ± 0.746 M ops/s
> 
>         num_maps: 16
> local_storage cache sequential  get:  hits throughput: 115.653 ± 0.416 M ops/s, hits latency: 8.647 ns/op, important_hits throughput: 7.228 ± 0.026 M ops/s
> local_storage cache interleaved get:  hits throughput: 138.717 ± 1.649 M ops/s, hits latency: 7.209 ns/op, important_hits throughput: 44.137 ± 0.525 M ops/s
> 
>         num_maps: 17
> local_storage cache sequential  get:  hits throughput: 112.020 ± 1.649 M ops/s, hits latency: 8.927 ns/op, important_hits throughput: 6.598 ± 0.097 M ops/s
> local_storage cache interleaved get:  hits throughput: 128.089 ± 1.960 M ops/s, hits latency: 7.807 ns/op, important_hits throughput: 38.995 ± 0.597 M ops/s
> 
>         num_maps: 24
> local_storage cache sequential  get:  hits throughput: 92.447 ± 5.170 M ops/s, hits latency: 10.817 ns/op, important_hits throughput: 3.855 ± 0.216 M ops/s
> local_storage cache interleaved get:  hits throughput: 128.844 ± 2.808 M ops/s, hits latency: 7.761 ns/op, important_hits throughput: 36.245 ± 0.790 M ops/s
> 
>         num_maps: 32
> local_storage cache sequential  get:  hits throughput: 102.042 ± 1.462 M ops/s, hits latency: 9.800 ns/op, important_hits throughput: 3.194 ± 0.046 M ops/s
> local_storage cache interleaved get:  hits throughput: 126.577 ± 1.818 M ops/s, hits latency: 7.900 ns/op, important_hits throughput: 35.332 ± 0.507 M ops/s
> 
>         num_maps: 100
> local_storage cache sequential  get:  hits throughput: 111.327 ± 1.401 M ops/s, hits latency: 8.983 ns/op, important_hits throughput: 1.113 ± 0.014 M ops/s
> local_storage cache interleaved get:  hits throughput: 131.327 ± 1.339 M ops/s, hits latency: 7.615 ns/op, important_hits throughput: 34.302 ± 0.350 M ops/s
> 
>         num_maps: 1000
> local_storage cache sequential  get:  hits throughput: 101.978 ± 0.563 M ops/s, hits latency: 9.806 ns/op, important_hits throughput: 0.102 ± 0.001 M ops/s
> local_storage cache interleaved get:  hits throughput: 141.084 ± 1.098 M ops/s, hits latency: 7.088 ns/op, important_hits throughput: 35.430 ± 0.276 M ops/s
> 
> Adjusting for overhead, latency numbers for "hashmap control" and
> "sequential get" are:
> 
> hashmap_control_1k:   ~14.4ns
> hashmap_control_10k:  ~25.1ns
> hashmap_control_100k: ~38.6ns
> sequential_get_1:     ~12.1ns
> sequential_get_10:    ~16.0ns
> sequential_get_16:    ~13.8ns
> sequential_get_17:    ~16.8ns
> sequential_get_24:    ~40.9ns
> sequential_get_32:    ~65.2ns
> sequential_get_100:   ~148.2ns
> sequential_get_1000:  ~2204ns
> 
> Clearly demonstrating a cliff.
> 
> In the discussion for v1 of this patchset, Alexei noted that
> local_storage was 2.5x faster than a large hashmap [1]. The benchmark
> results confirm that this is still the case: a long-running BPF
> application putting some pid-specific info into a hashmap for each pid
> it sees will probably see on the order of 10-100k pids. Bench numbers
> for hashmaps of this size are ~2.5x slower than sequential_get_16, but
> as the number of local_storage maps grows past local_storage cache size
> performance advantage reverses.
iiuc, the test on the local_storage get is done on the same task?

> 
> When running the benchmarks it may be necessary to bump 'open files'
> ulimit for a successful run.
> 
>   [0]: https://lore.kernel.org/all/20220420002143.1096548-1-davemarchevsky@fb.com
>   [1]: https://lore.kernel.org/bpf/20220511173305.ftldpn23m4ski3d3@MBP-98dd607d3435.dhcp.thefacebook.com/
> 

[ ... ]

> +static int do_lookup(unsigned int elem, struct loop_ctx *lctx)
> +{
> +	void *map, *inner_map;
> +	int idx = 0;
> +
> +	if (use_hashmap)
> +		map = &array_of_hash_maps;
> +	else
> +		map = &array_of_local_storage_maps;
> +
> +	inner_map = bpf_map_lookup_elem(map, &elem);
> +	if (!inner_map)
> +		return -1;
> +
> +	if (use_hashmap) {
> +		idx = bpf_get_prandom_u32() % hashmap_num_keys;
> +		bpf_map_lookup_elem(inner_map, &idx);
Is the hashmap populated?

> +	} else {
> +		bpf_task_storage_get(inner_map, lctx->task, &idx,
> +				     BPF_LOCAL_STORAGE_GET_F_CREATE);
> +	}
> +
> +	lctx->loop_hits++;
> +	if (!elem)
> +		lctx->loop_important_hits++;
> +	return 0;
> +}
> +
> +static long loop(u32 index, void *ctx)
> +{
> +	struct loop_ctx *lctx = (struct loop_ctx *)ctx;
> +	unsigned int map_idx = index % num_maps;
> +
> +	do_lookup(map_idx, lctx);
> +	if (interleave && map_idx % 3 == 0)
> +		do_lookup(0, lctx);
> +	return 0;
> +}
> +
> +SEC("fentry/" SYS_PREFIX "sys_getpgid")
> +int get_local(void *ctx)
> +{
> +	struct loop_ctx lctx;
> +
> +	lctx.task = bpf_get_current_task_btf();
> +	lctx.loop_hits = 0;
> +	lctx.loop_important_hits = 0;
> +	bpf_loop(10000, &loop, &lctx, 0);
> +	__sync_add_and_fetch(&hits, lctx.loop_hits);
> +	__sync_add_and_fetch(&important_hits, lctx.loop_important_hits);
> +	return 0;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> -- 
> 2.30.2
> 


* Re: [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
  2022-06-08 23:02 ` Martin KaFai Lau
@ 2022-06-09 14:27   ` Dave Marchevsky
  2022-06-09 15:49     ` Alexei Starovoitov
From: Dave Marchevsky @ 2022-06-09 14:27 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team

On 6/8/22 7:02 PM, Martin KaFai Lau wrote:   
> On Sat, Jun 04, 2022 at 03:20:06PM -0700, Dave Marchevsky wrote:
>> Add benchmarks to demonstrate the performance cliff for local_storage
>> get as the number of local_storage maps increases beyond the current
>> local_storage implementation's cache size.
> Thanks for working on this.  Have some high level comments and questions.
> 
>> "sequential get" and "interleaved get" benchmarks are added, both of
>> which do many bpf_task_storage_get calls on sets of task local_storage
>> maps of various counts, while considering a single specific map to be
>> 'important' and counting task_storage_gets to the important map
>> separately in addition to normal 'hits' count of all gets. Goal here is
>> to mimic scenario where a particular program using one map - the
>> important one - is running on a system where many other local_storage
>> maps exist and are accessed often.
>>
>> While "sequential get" benchmark does bpf_task_storage_get for map 0, 1,
>> ..., {9, 99, 999} in order, "interleaved" benchmark interleaves 4
>> bpf_task_storage_gets for the important map for every 10 map gets. This
>> is meant to highlight performance differences when important map is
>> accessed far more frequently than non-important maps.
>>
>> A "hashmap control" benchmark is also included for easy comparison of
>> standard bpf hashmap lookup vs local_storage get. The benchmark is
>> similar to "sequential get", but creates and uses BPF_MAP_TYPE_HASH
>> instead of local storage. Only one inner map is created - a hashmap
>> meant to hold tid -> data mapping for all tasks. Size of the hashmap is
>> hardcoded to my system's PID_MAX_LIMIT (4,194,304). The number of these
>> keys which are actually fetched as part of the benchmark is
>> configurable.
> Note that the key size of the hashmap in the socket use case could make
> a difference also.  It usually uses the four tuples (src/dst ip6/port).
> Not necessarily something that needs to be configurable now, but it
> would be a nice thing to do later.
> 

IIRC I tried varying the key/val sizes and, while there was a
difference, it was not nearly as big as varying the 'number of keys
actually fetched'. Will confirm.

>>
>> Addition of this benchmark is inspired by conversation with Alexei in a
>> previous patchset's thread [0], which highlighted the need for such a
>> benchmark to motivate and validate improvements to local_storage
>> implementation. My approach in that series focused on improving
>> performance for explicitly-marked 'important' maps and was rejected
>> with feedback to make more generally-applicable improvements while
>> avoiding explicitly marking maps as important. Thus the benchmark
>> reports both general and important-map-focused metrics, so effect of
>> future work on both is clear.
>>
>> Regarding the benchmark results. On a powerful system (Skylake, 20
>> cores, 256gb ram):
>>
>> Hashmap Control
>> ===============
>>         num keys: 10
>> hashmap (control) sequential    get:  hits throughput: 33.748 ± 0.700 M ops/s, hits latency: 29.631 ns/op, important_hits throughput: 33.748 ± 0.700 M ops/s
>>
>>         num keys: 1000
>> hashmap (control) sequential    get:  hits throughput: 29.997 ± 0.953 M ops/s, hits latency: 33.337 ns/op, important_hits throughput: 29.997 ± 0.953 M ops/s
>>
>>         num keys: 10000
>> hashmap (control) sequential    get:  hits throughput: 22.828 ± 1.114 M ops/s, hits latency: 43.805 ns/op, important_hits throughput: 22.828 ± 1.114 M ops/s
>>
>>         num keys: 100000
>> hashmap (control) sequential    get:  hits throughput: 17.595 ± 0.225 M ops/s, hits latency: 56.834 ns/op, important_hits throughput: 17.595 ± 0.225 M ops/s
>>
>>         num keys: 4194304
>> hashmap (control) sequential    get:  hits throughput: 7.098 ± 0.757 M ops/s, hits latency: 140.878 ns/op, important_hits throughput: 7.098 ± 0.757 M ops/s
>>
>> Local Storage
>> =============
>>         num_maps: 1
>> local_storage cache sequential  get:  hits throughput: 47.298 ± 0.180 M ops/s, hits latency: 21.142 ns/op, important_hits throughput: 47.298 ± 0.180 M ops/s
>> local_storage cache interleaved get:  hits throughput: 55.277 ± 0.888 M ops/s, hits latency: 18.091 ns/op, important_hits throughput: 55.277 ± 0.888 M ops/s
>>
>>         num_maps: 10
>> local_storage cache sequential  get:  hits throughput: 40.240 ± 0.802 M ops/s, hits latency: 24.851 ns/op, important_hits throughput: 4.024 ± 0.080 M ops/s
>> local_storage cache interleaved get:  hits throughput: 48.701 ± 0.722 M ops/s, hits latency: 20.533 ns/op, important_hits throughput: 17.393 ± 0.258 M ops/s
> iiuc, important_hits is only useful for the 'interleaved get' test?
> 
> and the important_hits is always a certain fraction of the total gets.
> For num_maps: 10 here, 4 extra for every 10 gets,
> so 4/14 ~ 28% of the total gets?
>

I think important_hits is meaningful for both sequential and interleaved
benchmarks - but only when seeking to optimize the performance of a particular
map, perhaps at the expense of others, vs all maps in general.

Presumably if the performance of a particular map is more important than
others, it's also being used significantly more often, hence the
interleaved benchmark.

I expect that future work here will improve cliff performance in general,
which will be visible from the sequential benchmark, while hopefully also
further improving important_hits in the interleaved case, where it's 4/14
of all gets.

>>
>>         num_maps: 16
>> local_storage cache sequential  get:  hits throughput: 44.515 ± 0.708 M ops/s, hits latency: 22.464 ns/op, important_hits throughput: 2.782 ± 0.044 M ops/s
>> local_storage cache interleaved get:  hits throughput: 49.553 ± 2.260 M ops/s, hits latency: 20.181 ns/op, important_hits throughput: 15.767 ± 0.719 M ops/s
>>
>>         num_maps: 17
>> local_storage cache sequential  get:  hits throughput: 38.778 ± 0.302 M ops/s, hits latency: 25.788 ns/op, important_hits throughput: 2.284 ± 0.018 M ops/s
>> local_storage cache interleaved get:  hits throughput: 43.848 ± 1.023 M ops/s, hits latency: 22.806 ns/op, important_hits throughput: 13.349 ± 0.311 M ops/s
>>
>>         num_maps: 24
>> local_storage cache sequential  get:  hits throughput: 19.317 ± 0.568 M ops/s, hits latency: 51.769 ns/op, important_hits throughput: 0.806 ± 0.024 M ops/s
>> local_storage cache interleaved get:  hits throughput: 24.397 ± 0.272 M ops/s, hits latency: 40.989 ns/op, important_hits throughput: 6.863 ± 0.077 M ops/s
>>
>>         num_maps: 32
>> local_storage cache sequential  get:  hits throughput: 13.333 ± 0.135 M ops/s, hits latency: 75.000 ns/op, important_hits throughput: 0.417 ± 0.004 M ops/s
>> local_storage cache interleaved get:  hits throughput: 16.898 ± 0.383 M ops/s, hits latency: 59.178 ns/op, important_hits throughput: 4.717 ± 0.107 M ops/s
>>
>>         num_maps: 100
>> local_storage cache sequential  get:  hits throughput: 6.360 ± 0.107 M ops/s, hits latency: 157.233 ns/op, important_hits throughput: 0.064 ± 0.001 M ops/s
>> local_storage cache interleaved get:  hits throughput: 7.303 ± 0.362 M ops/s, hits latency: 136.930 ns/op, important_hits throughput: 1.907 ± 0.094 M ops/s
>>
>>         num_maps: 1000
>> local_storage cache sequential  get:  hits throughput: 0.452 ± 0.010 M ops/s, hits latency: 2214.022 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
>> local_storage cache interleaved get:  hits throughput: 0.542 ± 0.007 M ops/s, hits latency: 1843.341 ns/op, important_hits throughput: 0.136 ± 0.002 M ops/s
>>
>> Looking at the "sequential get" results, it's clear that as the
>> number of task local_storage maps grows beyond the current cache size
>> (16), there's a significant reduction in hits throughput. Note that
>> current local_storage implementation assigns a cache_idx to maps as they
>> are created. Since "sequential get" is creating maps 0..n in order and
>> then doing bpf_task_storage_get calls in the same order, the benchmark
>> is effectively ensuring that a map will not be in cache when the program
>> tries to access it.
>>
>> For "interleaved get" results, important-map hits throughput is greatly
>> increased as the important map is more likely to be in cache by virtue
>> of being accessed far more frequently. Throughput still reduces as #
>> maps increases, though.
>>
>> To get a sense of the overhead of the benchmark program, I
>> commented out bpf_task_storage_get/bpf_map_lookup_elem in
>> local_storage_bench.c and ran the benchmark on the same host as the
>> 'real' run. Results:
>> Hashmap Control
>> ===============
>>         num keys: 10
>> hashmap (control) sequential    get:  hits throughput: 54.288 ± 0.655 M ops/s, hits latency: 18.420 ns/op, important_hits throughput: 54.288 ± 0.655 M ops/s
>>
>>         num keys: 1000
>> hashmap (control) sequential    get:  hits throughput: 52.913 ± 0.519 M ops/s, hits latency: 18.899 ns/op, important_hits throughput: 52.913 ± 0.519 M ops/s
>>
>>         num keys: 10000
>> hashmap (control) sequential    get:  hits throughput: 53.480 ± 1.235 M ops/s, hits latency: 18.699 ns/op, important_hits throughput: 53.480 ± 1.235 M ops/s
>>
>>         num keys: 100000
>> hashmap (control) sequential    get:  hits throughput: 54.982 ± 1.902 M ops/s, hits latency: 18.188 ns/op, important_hits throughput: 54.982 ± 1.902 M ops/s
>>
>>         num keys: 4194304
>> hashmap (control) sequential    get:  hits throughput: 50.858 ± 0.707 M ops/s, hits latency: 19.662 ns/op, important_hits throughput: 50.858 ± 0.707 M ops/s
>>
>> Local Storage
>> =============
>>         num_maps: 1
>> local_storage cache sequential  get:  hits throughput: 110.990 ± 4.828 M ops/s, hits latency: 9.010 ns/op, important_hits throughput: 110.990 ± 4.828 M ops/s
>> local_storage cache interleaved get:  hits throughput: 161.057 ± 4.090 M ops/s, hits latency: 6.209 ns/op, important_hits throughput: 161.057 ± 4.090 M ops/s
>>
>>         num_maps: 10
>> local_storage cache sequential  get:  hits throughput: 112.930 ± 1.079 M ops/s, hits latency: 8.855 ns/op, important_hits throughput: 11.293 ± 0.108 M ops/s
>> local_storage cache interleaved get:  hits throughput: 115.841 ± 2.088 M ops/s, hits latency: 8.633 ns/op, important_hits throughput: 41.372 ± 0.746 M ops/s
>>
>>         num_maps: 16
>> local_storage cache sequential  get:  hits throughput: 115.653 ± 0.416 M ops/s, hits latency: 8.647 ns/op, important_hits throughput: 7.228 ± 0.026 M ops/s
>> local_storage cache interleaved get:  hits throughput: 138.717 ± 1.649 M ops/s, hits latency: 7.209 ns/op, important_hits throughput: 44.137 ± 0.525 M ops/s
>>
>>         num_maps: 17
>> local_storage cache sequential  get:  hits throughput: 112.020 ± 1.649 M ops/s, hits latency: 8.927 ns/op, important_hits throughput: 6.598 ± 0.097 M ops/s
>> local_storage cache interleaved get:  hits throughput: 128.089 ± 1.960 M ops/s, hits latency: 7.807 ns/op, important_hits throughput: 38.995 ± 0.597 M ops/s
>>
>>         num_maps: 24
>> local_storage cache sequential  get:  hits throughput: 92.447 ± 5.170 M ops/s, hits latency: 10.817 ns/op, important_hits throughput: 3.855 ± 0.216 M ops/s
>> local_storage cache interleaved get:  hits throughput: 128.844 ± 2.808 M ops/s, hits latency: 7.761 ns/op, important_hits throughput: 36.245 ± 0.790 M ops/s
>>
>>         num_maps: 32
>> local_storage cache sequential  get:  hits throughput: 102.042 ± 1.462 M ops/s, hits latency: 9.800 ns/op, important_hits throughput: 3.194 ± 0.046 M ops/s
>> local_storage cache interleaved get:  hits throughput: 126.577 ± 1.818 M ops/s, hits latency: 7.900 ns/op, important_hits throughput: 35.332 ± 0.507 M ops/s
>>
>>         num_maps: 100
>> local_storage cache sequential  get:  hits throughput: 111.327 ± 1.401 M ops/s, hits latency: 8.983 ns/op, important_hits throughput: 1.113 ± 0.014 M ops/s
>> local_storage cache interleaved get:  hits throughput: 131.327 ± 1.339 M ops/s, hits latency: 7.615 ns/op, important_hits throughput: 34.302 ± 0.350 M ops/s
>>
>>         num_maps: 1000
>> local_storage cache sequential  get:  hits throughput: 101.978 ± 0.563 M ops/s, hits latency: 9.806 ns/op, important_hits throughput: 0.102 ± 0.001 M ops/s
>> local_storage cache interleaved get:  hits throughput: 141.084 ± 1.098 M ops/s, hits latency: 7.088 ns/op, important_hits throughput: 35.430 ± 0.276 M ops/s
>>
>> Adjusting for overhead (subtracting the overhead run's latency from the
>> 'real' run's latency, e.g. 33.337 - 18.899 ≈ 14.4ns for
>> hashmap_control_1k), latency numbers for "hashmap control" and
>> "sequential get" are:
>>
>> hashmap_control_1k:   ~14.4ns
>> hashmap_control_10k:  ~25.1ns
>> hashmap_control_100k: ~38.6ns
>> sequential_get_1:     ~12.1ns
>> sequential_get_10:    ~16.0ns
>> sequential_get_16:    ~13.8ns
>> sequential_get_17:    ~16.8ns
>> sequential_get_24:    ~40.9ns
>> sequential_get_32:    ~65.2ns
>> sequential_get_100:   ~148.2ns
>> sequential_get_1000:  ~2204ns
>>
>> Clearly demonstrating a cliff.
>>
>> In the discussion for v1 of this patchset, Alexei noted that
>> local_storage was 2.5x faster than a large hashmap [1]. The benchmark
>> results confirm that this is still the case: a long-running BPF
>> application putting some pid-specific info into a hashmap for each pid
>> it sees will probably see on the order of 10-100k pids. Bench numbers
>> for hashmaps of this size are ~2.5x slower than sequential_get_16, but
>> as the number of local_storage maps grows past the local_storage cache
>> size, the performance advantage reverses.
> iiuc, the test on the local_storage get is done on the same task?
> 

Yes, the task of the userspace runner process.

>>
>> When running the benchmarks it may be necessary to bump 'open files'
>> ulimit for a successful run.
>>
>>   [0]: https://lore.kernel.org/all/20220420002143.1096548-1-davemarchevsky@fb.com
>>   [1]: https://lore.kernel.org/bpf/20220511173305.ftldpn23m4ski3d3@MBP-98dd607d3435.dhcp.thefacebook.com/
>>
> 
> [ ... ]
> 
>> +static int do_lookup(unsigned int elem, struct loop_ctx *lctx)
>> +{
>> +	void *map, *inner_map;
>> +	int idx = 0;
>> +
>> +	if (use_hashmap)
>> +		map = &array_of_hash_maps;
>> +	else
>> +		map = &array_of_local_storage_maps;
>> +
>> +	inner_map = bpf_map_lookup_elem(map, &elem);
>> +	if (!inner_map)
>> +		return -1;
>> +
>> +	if (use_hashmap) {
>> +		idx = bpf_get_prandom_u32() % hashmap_num_keys;
>> +		bpf_map_lookup_elem(inner_map, &idx);
> Is the hashmap populated ?
> 

Nope. Do you expect this to make a difference? Will try when confirming key / 
val size above. 

>> +	} else {
>> +		bpf_task_storage_get(inner_map, lctx->task, &idx,
>> +				     BPF_LOCAL_STORAGE_GET_F_CREATE);
>> +	}
>> +
>> +	lctx->loop_hits++;
>> +	if (!elem)
>> +		lctx->loop_important_hits++;
>> +	return 0;
>> +}
>> +
>> +static long loop(u32 index, void *ctx)
>> +{
>> +	struct loop_ctx *lctx = (struct loop_ctx *)ctx;
>> +	unsigned int map_idx = index % num_maps;
>> +
>> +	do_lookup(map_idx, lctx);
>> +	if (interleave && map_idx % 3 == 0)
>> +		do_lookup(0, lctx);
>> +	return 0;
>> +}
>> +
>> +SEC("fentry/" SYS_PREFIX "sys_getpgid")
>> +int get_local(void *ctx)
>> +{
>> +	struct loop_ctx lctx;
>> +
>> +	lctx.task = bpf_get_current_task_btf();
>> +	lctx.loop_hits = 0;
>> +	lctx.loop_important_hits = 0;
>> +	bpf_loop(10000, &loop, &lctx, 0);
>> +	__sync_add_and_fetch(&hits, lctx.loop_hits);
>> +	__sync_add_and_fetch(&important_hits, lctx.loop_important_hits);
>> +	return 0;
>> +}
>> +
>> +char _license[] SEC("license") = "GPL";
>> -- 
>> 2.30.2
>>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
  2022-06-09 14:27   ` Dave Marchevsky
@ 2022-06-09 15:49     ` Alexei Starovoitov
  2022-06-20 19:49       ` Dave Marchevsky
  0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2022-06-09 15:49 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: Martin KaFai Lau, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Kernel Team

On Thu, Jun 9, 2022 at 7:27 AM Dave Marchevsky <davemarchevsky@fb.com> wrote:
> >> +
> >> +    if (use_hashmap) {
> >> +            idx = bpf_get_prandom_u32() % hashmap_num_keys;
> >> +            bpf_map_lookup_elem(inner_map, &idx);
> > Is the hashmap populated ?
> >
>
> Nope. Do you expect this to make a difference? Will try when confirming key /
> val size above.

Martin brought up an important point.
The map should be populated.
If the map is empty, lookup_nulls_elem_raw() will select a bucket,
find it empty, and return NULL.
Whereas the more accurate apples-to-apples comparison
would be to find a task in a map, since bpf_task_storage_get(,F_CREATE)
will certainly find it.
Then if (l->hash == hash && !memcmp ... will be triggered.
When we're counting nsecs that should be noticeable.
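
For reference, the slow path here is the bucket walk in
kernel/bpf/hashtab.c, roughly (simplified from memory, so treat the
exact shape as approximate):

	/* lookup_nulls_elem_raw(), simplified: walk the bucket's list.
	 * With an empty map the bucket list is empty, the loop body
	 * never runs, and neither the hash compare nor the memcmp cost
	 * is paid.
	 */
	hlist_nulls_for_each_entry_rcu(l, n, head, hash_node)
		if (l->hash == hash && !memcmp(&l->key, key, key_size))
			return l;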

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
  2022-06-09 15:49     ` Alexei Starovoitov
@ 2022-06-20 19:49       ` Dave Marchevsky
  0 siblings, 0 replies; 8+ messages in thread
From: Dave Marchevsky @ 2022-06-20 19:49 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Kernel Team

On 6/9/22 11:49 AM, Alexei Starovoitov wrote:   
> On Thu, Jun 9, 2022 at 7:27 AM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>>>> +
>>>> +    if (use_hashmap) {
>>>> +            idx = bpf_get_prandom_u32() % hashmap_num_keys;
>>>> +            bpf_map_lookup_elem(inner_map, &idx);
>>> Is the hashmap populated ?
>>>
>>
>> Nope. Do you expect this to make a difference? Will try when confirming key /
>> val size above.
> 
> Martin brought up an important point.
> The map should be populated.
> If the map is empty, lookup_nulls_elem_raw() will select a bucket,
> find it empty, and return NULL.
> Whereas the more accurate apples-to-apples comparison
> would be to find a task in a map, since bpf_task_storage_get(,F_CREATE)
> will certainly find it.
> Then if (l->hash == hash && !memcmp ... will be triggered.
> When we're counting nsecs that should be noticeable.

Prepopulating the hashmap before running the benchmark does indeed have a
significant effect (2-3x slower):

Hashmap Control
===============
        num keys: 10
hashmap (control) sequential    get:  hits throughput: 21.193 ± 0.479 M ops/s, hits latency: 47.185 ns/op, important_hits throughput: 21.193 ± 0.479 M ops/s

        num keys: 1000
hashmap (control) sequential    get:  hits throughput: 13.515 ± 0.321 M ops/s, hits latency: 73.992 ns/op, important_hits throughput: 13.515 ± 0.321 M ops/s

        num keys: 10000
hashmap (control) sequential    get:  hits throughput: 6.087 ± 0.085 M ops/s, hits latency: 164.294 ns/op, important_hits throughput: 6.087 ± 0.085 M ops/s

        num keys: 100000
hashmap (control) sequential    get:  hits throughput: 3.860 ± 0.617 M ops/s, hits latency: 259.067 ns/op, important_hits throughput: 3.860 ± 0.617 M ops/s

        num keys: 4194304
hashmap (control) sequential    get:  hits throughput: 1.918 ± 0.017 M ops/s, hits latency: 521.286 ns/op, important_hits throughput: 1.918 ± 0.017 M ops/s


vs the empty hashmap's numbers:


Hashmap Control
===============
        num keys: 10
hashmap (control) sequential    get:  hits throughput: 33.748 ± 0.700 M ops/s, hits latency: 29.631 ns/op, important_hits throughput: 33.748 ± 0.700 M ops/s

        num keys: 1000
hashmap (control) sequential    get:  hits throughput: 29.997 ± 0.953 M ops/s, hits latency: 33.337 ns/op, important_hits throughput: 29.997 ± 0.953 M ops/s

        num keys: 10000
hashmap (control) sequential    get:  hits throughput: 22.828 ± 1.114 M ops/s, hits latency: 43.805 ns/op, important_hits throughput: 22.828 ± 1.114 M ops/s

        num keys: 100000
hashmap (control) sequential    get:  hits throughput: 17.595 ± 0.225 M ops/s, hits latency: 56.834 ns/op, important_hits throughput: 17.595 ± 0.225 M ops/s

        num keys: 4194304
hashmap (control) sequential    get:  hits throughput: 7.098 ± 0.757 M ops/s, hits latency: 140.878 ns/op, important_hits throughput: 7.098 ± 0.757 M ops/s


Bumping key size to u64 + 64 chars (72 bytes total), without prepopulating
the hashmap, results in a significant increase as well:


Hashmap Control
===============
        num keys: 10
hashmap (control) sequential    get:  hits throughput: 16.613 ± 0.693 M ops/s, hits latency: 60.193 ns/op, important_hits throughput: 16.613 ± 0.693 M ops/s

        num keys: 1000
hashmap (control) sequential    get:  hits throughput: 17.053 ± 0.137 M ops/s, hits latency: 58.640 ns/op, important_hits throughput: 17.053 ± 0.137 M ops/s

        num keys: 10000
hashmap (control) sequential    get:  hits throughput: 15.088 ± 0.131 M ops/s, hits latency: 66.276 ns/op, important_hits throughput: 15.088 ± 0.131 M ops/s

        num keys: 100000
hashmap (control) sequential    get:  hits throughput: 12.357 ± 0.050 M ops/s, hits latency: 80.928 ns/op, important_hits throughput: 12.357 ± 0.050 M ops/s

        num keys: 4194304
hashmap (control) sequential    get:  hits throughput: 5.627 ± 0.266 M ops/s, hits latency: 177.725 ns/op, important_hits throughput: 5.627 ± 0.266 M ops/s


Bumping value size w/o prepopulating, on the other hand, results in no
significant change from baseline.


I will send a v6 with prepopulated hashmap.
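
A rough sketch of the prepopulation step I have in mind (exact v6 code
may differ; assume a helper called from __setup() right after each
hashmap is created, reusing the benchmark's int key/value types and its
existing HASHMAP_SZ define):

static int prepopulate_hashmap(int fd)
{
	int key, val = 1, err;

	/* Fill every slot so benchmark lookups always find an element
	 * and pay the hash-compare + memcmp cost
	 */
	for (key = 0; key < HASHMAP_SZ; key++) {
		err = bpf_map_update_elem(fd, &key, &val, BPF_ANY);
		if (err)
			return err;
	}
	return 0;
}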


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
  2022-06-04 22:20 [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get Dave Marchevsky
  2022-06-08 23:02 ` Martin KaFai Lau
@ 2022-06-22 23:00 ` Yosry Ahmed
  2022-06-24 20:22   ` Martin KaFai Lau
  1 sibling, 1 reply; 8+ messages in thread
From: Yosry Ahmed @ 2022-06-22 23:00 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Kernel Team

Thanks for adding these benchmarks!


On Sat, Jun 4, 2022 at 3:20 PM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>
> [ ... ]
>
> Addition of this benchmark is inspired by conversation with Alexei in a
> previous patchset's thread [0], which highlighted the need for such a
> benchmark to motivate and validate improvements to local_storage
> implementation. My approach in that series focused on improving
> performance for explicitly-marked 'important' maps and was rejected
> with feedback to make more generally-applicable improvements while
> avoiding explicitly marking maps as important. Thus the benchmark
> reports both general and important-map-focused metrics, so effect of
> future work on both is clear.

The current implementation falls back to a list traversal of
bpf_local_storage->list when there is a cache miss. I wonder if this
is a place with room for optimization? Maybe a hash table or a tree
would be a more performant alternative?
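
The fallback I'm referring to is bpf_local_storage_lookup(), which is
roughly the following shape today (simplified from code inspection, so
details may be off):

	/* Fast path: per-owner cache slot, direct-mapped by the map's
	 * cache_idx
	 */
	sdata = rcu_dereference(local_storage->cache[smap->cache_idx]);
	if (sdata && rcu_access_pointer(sdata->smap) == smap)
		return sdata;

	/* Slow path on cache miss: linear walk over every storage
	 * element attached to this owner, so cost grows with the number
	 * of local_storage maps the owner has an entry in
	 */
	hlist_for_each_entry_rcu(selem, &local_storage->list, snode)
		if (rcu_access_pointer(SDATA(selem)->smap) == smap)
			break;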

> [ ... ]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
  2022-06-22 23:00 ` Yosry Ahmed
@ 2022-06-24 20:22   ` Martin KaFai Lau
  2022-06-24 21:25     ` Yosry Ahmed
  0 siblings, 1 reply; 8+ messages in thread
From: Martin KaFai Lau @ 2022-06-24 20:22 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Dave Marchevsky, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Kernel Team

On Wed, Jun 22, 2022 at 04:00:04PM -0700, Yosry Ahmed wrote:
> Thanks for adding these benchmarks!
> 
> 
> On Sat, Jun 4, 2022 at 3:20 PM Dave Marchevsky <davemarchevsky@fb.com> wrote:
> > [ ... ]
> 
> The current implementation falls back to a list traversal of
> bpf_local_storage->list when there is a cache miss. I wonder if this
> is a place with room for optimization? Maybe a hash table or a tree
> would be a more performant alternative?
With a benchmark to ensure it won't regress the common use cases
and gauge the potential improvement, it is probably something Dave
is planning to explore next if I read the last thread correctly.

How many task storages are in your production use cases?

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get
  2022-06-24 20:22   ` Martin KaFai Lau
@ 2022-06-24 21:25     ` Yosry Ahmed
  0 siblings, 0 replies; 8+ messages in thread
From: Yosry Ahmed @ 2022-06-24 21:25 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Dave Marchevsky, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Kernel Team

On Fri, Jun 24, 2022 at 1:22 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Wed, Jun 22, 2022 at 04:00:04PM -0700, Yosry Ahmed wrote:
> > Thanks for adding these benchmarks!
> >
> >
> > On Sat, Jun 4, 2022 at 3:20 PM Dave Marchevsky <davemarchevsky@fb.com> wrote:
> > > [ ... ]
> >
> > The current implementation falls back to a list traversal of
> > bpf_local_storage->list when there is a cache miss. I wonder if this
> > is a place with room for optimization? Maybe a hash table or a tree
> > would be a more performant alternative?
> With a benchmark to ensure it won't regress the common use cases
> and gauge the potential improvement, it is probably something Dave
> is planning to explore next if I read the last thread correctly.
>
> How many task storages are in your production use cases?

Unfortunately, I don't have this information. I was just making a
suggestion based on code inspection, not based on data from
production.

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2022-06-24 21:25 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-04 22:20 [PATCH v5 bpf-next] selftests/bpf: Add benchmark for local_storage get Dave Marchevsky
2022-06-08 23:02 ` Martin KaFai Lau
2022-06-09 14:27   ` Dave Marchevsky
2022-06-09 15:49     ` Alexei Starovoitov
2022-06-20 19:49       ` Dave Marchevsky
2022-06-22 23:00 ` Yosry Ahmed
2022-06-24 20:22   ` Martin KaFai Lau
2022-06-24 21:25     ` Yosry Ahmed
