* [PATCH bpf-next 0/3] Introduce local_storage exclusive caching
@ 2022-04-20  0:21 Dave Marchevsky
  2022-04-20  0:21 ` [PATCH bpf-next 1/3] bpf: Introduce local_storage exclusive caching option Dave Marchevsky
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Dave Marchevsky @ 2022-04-20  0:21 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, KP Singh, Tejun Heo, kernel-team,
	Dave Marchevsky

Currently, each local_storage type (sk, inode, task) has a 16-entry
cache for local_storage data associated with a particular map. A
local_storage map is assigned a fixed cache_idx when it is allocated.
When looking up data associated with a map in a local_storage, the cache
entry at the map's cache_idx is the only place the map's data can appear
in the cache. If the data is not in the cache, it is placed there after a
search through the local_storage's hlist. When there are >16 local_storage
maps allocated for a local_storage type, multiple maps share the same
cache_idx and may knock each other out of the cache.
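
For reference, the lookup flow is roughly as follows (a condensed,
non-authoritative sketch of bpf_local_storage_lookup(), with RCU and
locking details elided):

    /* Simplified sketch of the current lookup path; RCU/locking elided */
    static struct bpf_local_storage_data *
    lookup_sketch(struct bpf_local_storage *local_storage,
                  struct bpf_local_storage_map *smap)
    {
            struct bpf_local_storage_elem *selem;
            struct bpf_local_storage_data *sdata;

            /* Fast path: cache[smap->cache_idx] is the only slot this
             * map's data can occupy, so one compare decides hit vs miss.
             */
            sdata = local_storage->cache[smap->cache_idx];
            if (sdata && sdata->smap == smap)
                    return sdata;

            /* Slow path: walk the hlist of all elems in this local_storage */
            hlist_for_each_entry(selem, &local_storage->list, snode) {
                    if (SDATA(selem)->smap == smap) {
                            /* cache it for next time, possibly evicting
                             * whichever map currently holds cache_idx
                             */
                            local_storage->cache[smap->cache_idx] = SDATA(selem);
                            return SDATA(selem);
                    }
            }
            return NULL;
    }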

BPF programs that use local_storage may require fast and consistent
local storage access. For example, a BPF program using task local
storage to make scheduling decisions would not be able to tolerate a
long hlist search for its local_storage as this would negatively affect
cycles available to applications. Providing a mechanism for such a
program to keep its local_storage data in the cache at all times would
guarantee fast access.

This series introduces a BPF_LOCAL_STORAGE_FORCE_CACHE flag that can be
set on sk, inode, and task local_storage maps via map_extra. When a map
with the FORCE_CACHE flag set is allocated, it is assigned an 'exclusive'
cache slot that it cannot be evicted from until the map is freed.

If there are no slots available to exclusively claim, the allocation
fails. BPF programs are expected to use BPF_LOCAL_STORAGE_FORCE_CACHE
only if their data _must_ be in cache.
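
For example, a task local_storage map opting into an exclusive slot only
needs to set the flag in map_extra. This mirrors the selftest added in
patch 2; the map name here is arbitrary:

    /* BPF-side map definition requesting an exclusive cache slot; if no
     * slot can be claimed, map creation - and thus prog load - fails.
     */
    struct {
            __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
            __uint(map_flags, BPF_F_NO_PREALLOC);
            __type(key, int);
            __type(value, __u32);
            __uint(map_extra, BPF_LOCAL_STORAGE_FORCE_CACHE);
    } important_task_data SEC(".maps");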

The existing cache slots are used - as opposed to a separate cache -
because exclusive caching is not expected to be used by the majority of
local_storage BPF programs, so it is better to avoid adding a separate
cache that would bloat memory and go mostly unused.

Patches:
* Patch 1 implements kernel-side changes to support the feature
* Patch 2 adds selftests validating functionality
* Patch 3 is a one-line #define dedupe

Dave Marchevsky (3):
  bpf: Introduce local_storage exclusive caching option
  selftests/bpf: Add local_storage exclusive cache test
  bpf: Remove duplicate define in bpf_local_storage.h

 include/linux/bpf_local_storage.h             |   8 +-
 include/uapi/linux/bpf.h                      |  14 +++
 kernel/bpf/bpf_inode_storage.c                |  16 ++-
 kernel/bpf/bpf_local_storage.c                |  42 ++++++--
 kernel/bpf/bpf_task_storage.c                 |  16 ++-
 kernel/bpf/syscall.c                          |   7 +-
 net/core/bpf_sk_storage.c                     |  15 ++-
 .../test_local_storage_excl_cache.c           |  52 +++++++++
 .../bpf/progs/local_storage_excl_cache.c      | 100 ++++++++++++++++++
 .../bpf/progs/local_storage_excl_cache_fail.c |  36 +++++++
 10 files changed, 283 insertions(+), 23 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_local_storage_excl_cache.c
 create mode 100644 tools/testing/selftests/bpf/progs/local_storage_excl_cache.c
 create mode 100644 tools/testing/selftests/bpf/progs/local_storage_excl_cache_fail.c

-- 
2.30.2



* [PATCH bpf-next 1/3] bpf: Introduce local_storage exclusive caching option
  2022-04-20  0:21 [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Dave Marchevsky
@ 2022-04-20  0:21 ` Dave Marchevsky
  2022-04-20  0:21 ` [PATCH bpf-next 2/3] selftests/bpf: Add local_storage exclusive cache test Dave Marchevsky
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Dave Marchevsky @ 2022-04-20  0:21 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, KP Singh, Tejun Heo, kernel-team,
	Dave Marchevsky

Allow local_storage maps to claim exclusive use of a cache slot in
struct bpf_local_storage's cache. When a local_storage map claims a
slot and its data is cached via bpf_local_storage_lookup, it will not be
replaced until the map is freed. As a result, once data for a map has
been created in a specific bpf_local_storage, every lookup after the
first will quickly find it via the cache.

When requesting an exclusive cache slot, bpf_local_storage_cache_idx_get
can now fail if all slots are already claimed. Because a map's cache_idx
is assigned when the bpf_map is allocated - which occurs before the
program runs - the map load and subsequent prog load will fail.

A bit in struct bpf_map's map_extra is used to designate whether a map
wants to claim an exclusive slot. Similarly, a bitmap, idx_exclusive, is
added to bpf_local_storage_cache to track which slots are exclusively
claimed. Functions that manipulate the cache are modified to test for the
BPF_LOCAL_STORAGE_FORCE_CACHE bit and test/set idx_exclusive where
necessary.

When a map exclusively claims a cache slot, non-exclusive local_storage
maps that were previously assigned the same cache_idx are not
migrated to an unclaimed cache_idx. Such a migration would require full
iteration of the cache list and necessitate a reverse migration on map
free to even things out. Since a used cache slot will only be
exclusively claimed if no empty slot exists, the additional complexity
was deemed unnecessary.
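
As a rough sketch of the userspace-visible failure mode - based on the
selftest skeleton added in patch 2 of this series - loading an object
that over-commits the exclusive slots is expected to fail at
open_and_load() time:

    #include "local_storage_excl_cache_fail.skel.h"

    /* The skeleton declares BPF_LOCAL_STORAGE_CACHE_SIZE + 1 task_storage
     * maps with BPF_LOCAL_STORAGE_FORCE_CACHE in map_extra, so the last
     * map's creation fails and open_and_load() should fail as well.
     */
    static int expect_excl_overcommit_to_fail(void)
    {
            struct local_storage_excl_cache_fail *skel;

            skel = local_storage_excl_cache_fail__open_and_load();
            if (skel) {
                    /* unexpected: all exclusive claims succeeded */
                    local_storage_excl_cache_fail__destroy(skel);
                    return -1;
            }
            return 0;
    }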

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
 include/linux/bpf_local_storage.h |  6 +++--
 include/uapi/linux/bpf.h          | 14 +++++++++++
 kernel/bpf/bpf_inode_storage.c    | 16 +++++++++---
 kernel/bpf/bpf_local_storage.c    | 42 +++++++++++++++++++++++++------
 kernel/bpf/bpf_task_storage.c     | 16 +++++++++---
 kernel/bpf/syscall.c              |  7 ++++--
 net/core/bpf_sk_storage.c         | 15 ++++++++---
 7 files changed, 95 insertions(+), 21 deletions(-)

diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
index 493e63258497..d87405a1b65d 100644
--- a/include/linux/bpf_local_storage.h
+++ b/include/linux/bpf_local_storage.h
@@ -109,6 +109,7 @@ struct bpf_local_storage {
 struct bpf_local_storage_cache {
 	spinlock_t idx_lock;
 	u64 idx_usage_counts[BPF_LOCAL_STORAGE_CACHE_SIZE];
+	DECLARE_BITMAP(idx_exclusive, BPF_LOCAL_STORAGE_CACHE_SIZE);
 };
 
 #define DEFINE_BPF_STORAGE_CACHE(name)				\
@@ -116,9 +117,10 @@ static struct bpf_local_storage_cache name = {			\
 	.idx_lock = __SPIN_LOCK_UNLOCKED(name.idx_lock),	\
 }
 
-u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache);
+int bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache,
+				    u64 flags);
 void bpf_local_storage_cache_idx_free(struct bpf_local_storage_cache *cache,
-				      u16 idx);
+				       u16 idx, u64 flags);
 
 /* Helper functions for bpf_local_storage */
 int bpf_local_storage_map_alloc_check(union bpf_attr *attr);
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index d14b10b85e51..566035bc2f08 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1257,6 +1257,18 @@ enum bpf_stack_build_id_status {
 	BPF_STACK_BUILD_ID_IP = 2,
 };
 
+/* Flags passed in map_extra when creating local_storage maps
+ * of types: BPF_MAP_TYPE_INODE_STORAGE
+ *           BPF_MAP_TYPE_TASK_STORAGE
+ *           BPF_MAP_TYPE_SK_STORAGE
+ */
+enum bpf_local_storage_extra_flags {
+	/* Give the map exclusive use of a local_storage cache slot
+	 * or fail map alloc
+	 */
+	BPF_LOCAL_STORAGE_FORCE_CACHE = (1U << 0),
+};
+
 #define BPF_BUILD_ID_SIZE 20
 struct bpf_stack_build_id {
 	__s32		status;
@@ -1296,6 +1308,8 @@ union bpf_attr {
 		 * BPF_MAP_TYPE_BLOOM_FILTER - the lowest 4 bits indicate the
 		 * number of hash functions (if 0, the bloom filter will default
 		 * to using 5 hash functions).
+		 * BPF_MAP_TYPE_{INODE,TASK,SK}_STORAGE - local_storage specific
+		 * flags (see bpf_local_storage_extra_flags)
 		 */
 		__u64	map_extra;
 	};
diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
index 96be8d518885..8b32adc23fc3 100644
--- a/kernel/bpf/bpf_inode_storage.c
+++ b/kernel/bpf/bpf_inode_storage.c
@@ -227,12 +227,21 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key,
 static struct bpf_map *inode_storage_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_local_storage_map *smap;
+	int cache_idx_or_err;
+
+	cache_idx_or_err = bpf_local_storage_cache_idx_get(&inode_cache,
+							   attr->map_extra);
+	if (cache_idx_or_err < 0)
+		return ERR_PTR(cache_idx_or_err);
 
 	smap = bpf_local_storage_map_alloc(attr);
-	if (IS_ERR(smap))
+	if (IS_ERR(smap)) {
+		bpf_local_storage_cache_idx_free(&inode_cache, (u16)cache_idx_or_err,
+						 attr->map_extra);
 		return ERR_CAST(smap);
+	}
 
-	smap->cache_idx = bpf_local_storage_cache_idx_get(&inode_cache);
+	smap->cache_idx = (u16)cache_idx_or_err;
 	return &smap->map;
 }
 
@@ -241,7 +250,8 @@ static void inode_storage_map_free(struct bpf_map *map)
 	struct bpf_local_storage_map *smap;
 
 	smap = (struct bpf_local_storage_map *)map;
-	bpf_local_storage_cache_idx_free(&inode_cache, smap->cache_idx);
+	bpf_local_storage_cache_idx_free(&inode_cache, smap->cache_idx,
+					 map->map_extra);
 	bpf_local_storage_map_free(smap, NULL);
 }
 
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index 01aa2b51ec4d..b23080247bef 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -231,12 +231,19 @@ bpf_local_storage_lookup(struct bpf_local_storage *local_storage,
 {
 	struct bpf_local_storage_data *sdata;
 	struct bpf_local_storage_elem *selem;
+	struct bpf_local_storage_map *cached;
+	bool cached_exclusive = false;
 
 	/* Fast path (cache hit) */
 	sdata = rcu_dereference_check(local_storage->cache[smap->cache_idx],
 				      bpf_rcu_lock_held());
-	if (sdata && rcu_access_pointer(sdata->smap) == smap)
-		return sdata;
+	if (sdata) {
+		if (rcu_access_pointer(sdata->smap) == smap)
+			return sdata;
+
+		cached = rcu_dereference_check(sdata->smap, bpf_rcu_lock_held());
+		cached_exclusive = cached->map.map_extra & BPF_LOCAL_STORAGE_FORCE_CACHE;
+	}
 
 	/* Slow path (cache miss) */
 	hlist_for_each_entry_rcu(selem, &local_storage->list, snode,
@@ -248,7 +255,7 @@ bpf_local_storage_lookup(struct bpf_local_storage *local_storage,
 		return NULL;
 
 	sdata = SDATA(selem);
-	if (cacheit_lockit) {
+	if (cacheit_lockit && !cached_exclusive) {
 		unsigned long flags;
 
 		/* spinlock is needed to avoid racing with the
@@ -482,15 +489,27 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 	return ERR_PTR(err);
 }
 
-u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache)
+int bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache,
+				    u64 flags)
 {
+	bool exclusive = flags & BPF_LOCAL_STORAGE_FORCE_CACHE;
+	bool adding_to_full = false;
 	u64 min_usage = U64_MAX;
-	u16 i, res = 0;
+	int res = 0;
+	u16 i;
 
 	spin_lock(&cache->idx_lock);
 
+	if (bitmap_full(cache->idx_exclusive, BPF_LOCAL_STORAGE_CACHE_SIZE)) {
+		res = -ENOMEM;
+		adding_to_full = true;
+		if (exclusive)
+			goto out;
+	}
+
 	for (i = 0; i < BPF_LOCAL_STORAGE_CACHE_SIZE; i++) {
-		if (cache->idx_usage_counts[i] < min_usage) {
+		if ((adding_to_full || !test_bit(i, cache->idx_exclusive)) &&
+		    cache->idx_usage_counts[i] < min_usage) {
 			min_usage = cache->idx_usage_counts[i];
 			res = i;
 
@@ -499,17 +518,23 @@ u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache)
 				break;
 		}
 	}
+
+	if (exclusive)
+		set_bit(res, cache->idx_exclusive);
 	cache->idx_usage_counts[res]++;
 
+out:
 	spin_unlock(&cache->idx_lock);
 
 	return res;
 }
 
 void bpf_local_storage_cache_idx_free(struct bpf_local_storage_cache *cache,
-				      u16 idx)
+				      u16 idx, u64 flags)
 {
 	spin_lock(&cache->idx_lock);
+	if (flags & BPF_LOCAL_STORAGE_FORCE_CACHE)
+		clear_bit(idx, cache->idx_exclusive);
 	cache->idx_usage_counts[idx]--;
 	spin_unlock(&cache->idx_lock);
 }
@@ -583,7 +608,8 @@ int bpf_local_storage_map_alloc_check(union bpf_attr *attr)
 	    attr->max_entries ||
 	    attr->key_size != sizeof(int) || !attr->value_size ||
 	    /* Enforce BTF for userspace sk dumping */
-	    !attr->btf_key_type_id || !attr->btf_value_type_id)
+	    !attr->btf_key_type_id || !attr->btf_value_type_id ||
+	    attr->map_extra & ~BPF_LOCAL_STORAGE_FORCE_CACHE)
 		return -EINVAL;
 
 	if (!bpf_capable())
diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
index 6638a0ecc3d2..bf7b098d15c9 100644
--- a/kernel/bpf/bpf_task_storage.c
+++ b/kernel/bpf/bpf_task_storage.c
@@ -289,12 +289,21 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key)
 static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_local_storage_map *smap;
+	int cache_idx_or_err;
+
+	cache_idx_or_err = bpf_local_storage_cache_idx_get(&task_cache,
+							   attr->map_extra);
+	if (cache_idx_or_err < 0)
+		return ERR_PTR(cache_idx_or_err);
 
 	smap = bpf_local_storage_map_alloc(attr);
-	if (IS_ERR(smap))
+	if (IS_ERR(smap)) {
+		bpf_local_storage_cache_idx_free(&task_cache, (u16)cache_idx_or_err,
+						 attr->map_extra);
 		return ERR_CAST(smap);
+	}
 
-	smap->cache_idx = bpf_local_storage_cache_idx_get(&task_cache);
+	smap->cache_idx = (u16)cache_idx_or_err;
 	return &smap->map;
 }
 
@@ -303,7 +312,8 @@ static void task_storage_map_free(struct bpf_map *map)
 	struct bpf_local_storage_map *smap;
 
 	smap = (struct bpf_local_storage_map *)map;
-	bpf_local_storage_cache_idx_free(&task_cache, smap->cache_idx);
+	bpf_local_storage_cache_idx_free(&task_cache, smap->cache_idx,
+					 map->map_extra);
 	bpf_local_storage_map_free(smap, &bpf_task_storage_busy);
 }
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e9621cfa09f2..9fd610e53840 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -847,8 +847,11 @@ static int map_create(union bpf_attr *attr)
 		return -EINVAL;
 	}
 
-	if (attr->map_type != BPF_MAP_TYPE_BLOOM_FILTER &&
-	    attr->map_extra != 0)
+	if (!(attr->map_type == BPF_MAP_TYPE_BLOOM_FILTER ||
+	      attr->map_type == BPF_MAP_TYPE_INODE_STORAGE ||
+	      attr->map_type == BPF_MAP_TYPE_TASK_STORAGE ||
+	      attr->map_type == BPF_MAP_TYPE_SK_STORAGE) &&
+	     attr->map_extra != 0)
 		return -EINVAL;
 
 	f_flags = bpf_get_file_flag(attr->map_flags);
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index e3ac36380520..f6a95f525f50 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -90,19 +90,28 @@ static void bpf_sk_storage_map_free(struct bpf_map *map)
 	struct bpf_local_storage_map *smap;
 
 	smap = (struct bpf_local_storage_map *)map;
-	bpf_local_storage_cache_idx_free(&sk_cache, smap->cache_idx);
+	bpf_local_storage_cache_idx_free(&sk_cache, smap->cache_idx, map->map_extra);
 	bpf_local_storage_map_free(smap, NULL);
 }
 
 static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_local_storage_map *smap;
+	int cache_idx_or_err;
+
+	cache_idx_or_err = bpf_local_storage_cache_idx_get(&sk_cache,
+							   attr->map_extra);
+	if (cache_idx_or_err < 0)
+		return ERR_PTR(cache_idx_or_err);
 
 	smap = bpf_local_storage_map_alloc(attr);
-	if (IS_ERR(smap))
+	if (IS_ERR(smap)) {
+		bpf_local_storage_cache_idx_free(&sk_cache, (u16)cache_idx_or_err,
+						 attr->map_extra);
 		return ERR_CAST(smap);
+	}
 
-	smap->cache_idx = bpf_local_storage_cache_idx_get(&sk_cache);
+	smap->cache_idx = (u16)cache_idx_or_err;
 	return &smap->map;
 }
 
-- 
2.30.2



* [PATCH bpf-next 2/3] selftests/bpf: Add local_storage exclusive cache test
  2022-04-20  0:21 [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Dave Marchevsky
  2022-04-20  0:21 ` [PATCH bpf-next 1/3] bpf: Introduce local_storage exclusive caching option Dave Marchevsky
@ 2022-04-20  0:21 ` Dave Marchevsky
  2022-04-20  0:21 ` [PATCH bpf-next 3/3] bpf: Remove duplicate define in bpf_local_storage.h Dave Marchevsky
  2022-04-22  1:40 ` [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Alexei Starovoitov
  3 siblings, 0 replies; 8+ messages in thread
From: Dave Marchevsky @ 2022-04-20  0:21 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, KP Singh, Tejun Heo, kernel-team,
	Dave Marchevsky

Validate local_storage exclusive caching functionality:

* Adding >BPF_LOCAL_STORAGE_CACHE_SIZE task_storage maps w/
  BPF_LOCAL_STORAGE_FORCE_CACHE results in a failure to load the program,
  as there are no free slots left to claim.

* Adding BPF_LOCAL_STORAGE_CACHE_SIZE task_storage maps w/ FORCE_CACHE
  succeeds and results in a fully-set idx_exclusive bitmap for the cache.
  After the first bpf_task_storage_get call for each map, the map's local
  storage data is in its cache slot. Subsequent bpf_task_storage_get calls
  to non-exclusive-cached maps don't evict the exclusively cached maps.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
 .../test_local_storage_excl_cache.c           |  52 +++++++++
 .../bpf/progs/local_storage_excl_cache.c      | 100 ++++++++++++++++++
 .../bpf/progs/local_storage_excl_cache_fail.c |  36 +++++++
 3 files changed, 188 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_local_storage_excl_cache.c
 create mode 100644 tools/testing/selftests/bpf/progs/local_storage_excl_cache.c
 create mode 100644 tools/testing/selftests/bpf/progs/local_storage_excl_cache_fail.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_local_storage_excl_cache.c b/tools/testing/selftests/bpf/prog_tests/test_local_storage_excl_cache.c
new file mode 100644
index 000000000000..a3742e69accb
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_local_storage_excl_cache.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bitmap.h>
+#include <test_progs.h>
+
+#include "local_storage_excl_cache.skel.h"
+#include "local_storage_excl_cache_fail.skel.h"
+
+void test_test_local_storage_excl_cache(void)
+{
+	u64 cache_idx_exclusive, cache_idx_exclusive_expected;
+	struct local_storage_excl_cache_fail *skel_fail = NULL;
+	struct local_storage_excl_cache *skel = NULL;
+	u16 cache_size, i;
+	int err;
+
+	skel_fail = local_storage_excl_cache_fail__open_and_load();
+	ASSERT_ERR_PTR(skel_fail, "excl_cache_fail load should fail");
+	local_storage_excl_cache_fail__destroy(skel_fail);
+
+	skel = local_storage_excl_cache__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "excl_cache load should succeed"))
+		goto cleanup;
+
+	cache_size = skel->data->__BPF_LOCAL_STORAGE_CACHE_SIZE;
+
+	err = local_storage_excl_cache__attach(skel);
+	if (!ASSERT_OK(err, "excl_cache__attach"))
+		goto cleanup;
+
+	/* trigger tracepoint */
+	usleep(1);
+	cache_idx_exclusive = skel->data->out__cache_bitmap;
+	cache_idx_exclusive_expected = 0;
+	for (i = 0; i < cache_size; i++)
+		cache_idx_exclusive_expected |= (1U << i);
+
+	if (!ASSERT_EQ(cache_idx_exclusive & cache_idx_exclusive_expected,
+		       cache_idx_exclusive_expected, "excl cache bitmap should be full"))
+		goto cleanup;
+
+	usleep(1);
+	for (i = 0; i < cache_size; i++)
+		if (!ASSERT_EQ(skel->data->out__cache_smaps[i],
+			       skel->data->out__declared_smaps[i],
+			       "cached map not equal"))
+			goto cleanup;
+
+cleanup:
+	local_storage_excl_cache__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/local_storage_excl_cache.c b/tools/testing/selftests/bpf/progs/local_storage_excl_cache.c
new file mode 100644
index 000000000000..003c866c9d0e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/local_storage_excl_cache.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define make_task_local_excl_map(name, num) \
+struct { \
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE); \
+	__uint(map_flags, BPF_F_NO_PREALLOC); \
+	__type(key, int); \
+	__type(value, __u32); \
+	__uint(map_extra, BPF_LOCAL_STORAGE_FORCE_CACHE); \
+} name ## num SEC(".maps");
+
+#define make_task_local_map(name, num) \
+struct { \
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE); \
+	__uint(map_flags, BPF_F_NO_PREALLOC); \
+	__type(key, int); \
+	__type(value, __u32); \
+} name ## num SEC(".maps");
+
+#define task_storage_get_excl(map, num) \
+({ \
+	bpf_task_storage_get(&map ## num, task, 0, BPF_LOCAL_STORAGE_GET_F_CREATE); \
+	bpf_probe_read_kernel(&out__cache_smaps[num], \
+			sizeof(void *), \
+			&task->bpf_storage->cache[num]->smap); \
+	out__declared_smaps[num] = &map ## num; \
+})
+
+/* must match define in bpf_local_storage.h */
+#define BPF_LOCAL_STORAGE_CACHE_SIZE 16
+
+/* Try adding BPF_LOCAL_STORAGE_CACHE_SIZE task_storage maps w/ exclusive
+ * cache slot
+ */
+make_task_local_excl_map(task_storage_map, 0);
+make_task_local_excl_map(task_storage_map, 1);
+make_task_local_excl_map(task_storage_map, 2);
+make_task_local_excl_map(task_storage_map, 3);
+make_task_local_excl_map(task_storage_map, 4);
+make_task_local_excl_map(task_storage_map, 5);
+make_task_local_excl_map(task_storage_map, 6);
+make_task_local_excl_map(task_storage_map, 7);
+make_task_local_excl_map(task_storage_map, 8);
+make_task_local_excl_map(task_storage_map, 9);
+make_task_local_excl_map(task_storage_map, 10);
+make_task_local_excl_map(task_storage_map, 11);
+make_task_local_excl_map(task_storage_map, 12);
+make_task_local_excl_map(task_storage_map, 13);
+make_task_local_excl_map(task_storage_map, 14);
+make_task_local_excl_map(task_storage_map, 15);
+
+make_task_local_map(task_storage_map, 16);
+
+extern const void task_cache __ksym;
+__u64 __BPF_LOCAL_STORAGE_CACHE_SIZE = BPF_LOCAL_STORAGE_CACHE_SIZE;
+__u64 out__cache_bitmap = -1;
+void *out__cache_smaps[BPF_LOCAL_STORAGE_CACHE_SIZE] = { (void *)-1 };
+void *out__declared_smaps[BPF_LOCAL_STORAGE_CACHE_SIZE] = { (void *)-1 };
+
+SEC("raw_tp/sys_enter")
+int handler(const void *ctx)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+	__u32 *ptr;
+
+	bpf_probe_read_kernel(&out__cache_bitmap, sizeof(out__cache_bitmap),
+			      &task_cache +
+			      offsetof(struct bpf_local_storage_cache, idx_exclusive));
+
+	/* Get all BPF_LOCAL_STORAGE_CACHE_SIZE exclusive-cache maps into cache,
+	 * and one that shouldn't be cached
+	 */
+	task_storage_get_excl(task_storage_map, 0);
+	task_storage_get_excl(task_storage_map, 1);
+	task_storage_get_excl(task_storage_map, 2);
+	task_storage_get_excl(task_storage_map, 3);
+	task_storage_get_excl(task_storage_map, 4);
+	task_storage_get_excl(task_storage_map, 5);
+	task_storage_get_excl(task_storage_map, 6);
+	task_storage_get_excl(task_storage_map, 7);
+	task_storage_get_excl(task_storage_map, 8);
+	task_storage_get_excl(task_storage_map, 9);
+	task_storage_get_excl(task_storage_map, 10);
+	task_storage_get_excl(task_storage_map, 11);
+	task_storage_get_excl(task_storage_map, 12);
+	task_storage_get_excl(task_storage_map, 13);
+	task_storage_get_excl(task_storage_map, 14);
+	task_storage_get_excl(task_storage_map, 15);
+
+	bpf_task_storage_get(&task_storage_map16, task, 0,
+			     BPF_LOCAL_STORAGE_GET_F_CREATE);
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/local_storage_excl_cache_fail.c b/tools/testing/selftests/bpf/progs/local_storage_excl_cache_fail.c
new file mode 100644
index 000000000000..918b8c49da37
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/local_storage_excl_cache_fail.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define make_task_local_excl_map(name, num) \
+struct { \
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE); \
+	__uint(map_flags, BPF_F_NO_PREALLOC); \
+	__type(key, int); \
+	__type(value, __u32); \
+	__uint(map_extra, BPF_LOCAL_STORAGE_FORCE_CACHE); \
+} name ## num SEC(".maps");
+
+/* Try adding BPF_LOCAL_STORAGE_CACHE_SIZE+1 task_storage maps w/ exclusive
+ * cache slot */
+make_task_local_excl_map(task_storage_map, 0);
+make_task_local_excl_map(task_storage_map, 1);
+make_task_local_excl_map(task_storage_map, 2);
+make_task_local_excl_map(task_storage_map, 3);
+make_task_local_excl_map(task_storage_map, 4);
+make_task_local_excl_map(task_storage_map, 5);
+make_task_local_excl_map(task_storage_map, 6);
+make_task_local_excl_map(task_storage_map, 7);
+make_task_local_excl_map(task_storage_map, 8);
+make_task_local_excl_map(task_storage_map, 9);
+make_task_local_excl_map(task_storage_map, 10);
+make_task_local_excl_map(task_storage_map, 11);
+make_task_local_excl_map(task_storage_map, 12);
+make_task_local_excl_map(task_storage_map, 13);
+make_task_local_excl_map(task_storage_map, 14);
+make_task_local_excl_map(task_storage_map, 15);
+make_task_local_excl_map(task_storage_map, 16);
-- 
2.30.2



* [PATCH bpf-next 3/3] bpf: Remove duplicate define in bpf_local_storage.h
  2022-04-20  0:21 [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Dave Marchevsky
  2022-04-20  0:21 ` [PATCH bpf-next 1/3] bpf: Introduce local_storage exclusive caching option Dave Marchevsky
  2022-04-20  0:21 ` [PATCH bpf-next 2/3] selftests/bpf: Add local_storage exclusive cache test Dave Marchevsky
@ 2022-04-20  0:21 ` Dave Marchevsky
  2022-04-22  1:40 ` [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Alexei Starovoitov
  3 siblings, 0 replies; 8+ messages in thread
From: Dave Marchevsky @ 2022-04-20  0:21 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, KP Singh, Tejun Heo, kernel-team,
	Dave Marchevsky

BPF_LOCAL_STORAGE_CACHE_SIZE is defined elsewhere in the same header.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
 include/linux/bpf_local_storage.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
index d87405a1b65d..a5e4c220fc0d 100644
--- a/include/linux/bpf_local_storage.h
+++ b/include/linux/bpf_local_storage.h
@@ -104,8 +104,6 @@ struct bpf_local_storage {
 	container_of((_SDATA), struct bpf_local_storage_elem, sdata)
 #define SDATA(_SELEM) (&(_SELEM)->sdata)
 
-#define BPF_LOCAL_STORAGE_CACHE_SIZE	16
-
 struct bpf_local_storage_cache {
 	spinlock_t idx_lock;
 	u64 idx_usage_counts[BPF_LOCAL_STORAGE_CACHE_SIZE];
-- 
2.30.2



* Re: [PATCH bpf-next 0/3] Introduce local_storage exclusive caching
  2022-04-20  0:21 [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Dave Marchevsky
                   ` (2 preceding siblings ...)
  2022-04-20  0:21 ` [PATCH bpf-next 3/3] bpf: Remove duplicate define in bpf_local_storage.h Dave Marchevsky
@ 2022-04-22  1:40 ` Alexei Starovoitov
  2022-04-22  4:05   ` Dave Marchevsky
  2022-04-23  9:43   ` Yosry Ahmed
  3 siblings, 2 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2022-04-22  1:40 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, KP Singh, Tejun Heo, kernel-team

On Tue, Apr 19, 2022 at 05:21:40PM -0700, Dave Marchevsky wrote:
> Currently, each local_storage type (sk, inode, task) has a 16-entry
> cache for local_storage data associated with a particular map. A
> local_storage map is assigned a fixed cache_idx when it is allocated.
> When looking in a local_storage for data associated with a map the cache
> entry at cache_idx is the only place the map can appear in cache. If the
> map's data is not in cache it is placed there after a search through the
> cache hlist. When there are >16 local_storage maps allocated for a
> local_storage type multiple maps have same cache_idx and thus may knock
> each other out of cache.
> 
> BPF programs that use local_storage may require fast and consistent
> local storage access. For example, a BPF program using task local
> storage to make scheduling decisions would not be able to tolerate a
> long hlist search for its local_storage as this would negatively affect
> cycles available to applications. Providing a mechanism for such a
> program to ensure that its local_storage_data will always be in cache
> would ensure fast access.
> 
> This series introduces a BPF_LOCAL_STORAGE_FORCE_CACHE flag that can be
> set on sk, inode, and task local_storage maps via map_extras. When a map
> with the FORCE_CACHE flag set is allocated it is assigned an 'exclusive'
> cache slot that it can't be evicted from until the map is free'd. 
> 
> If there are no slots available to exclusively claim, the allocation
> fails. BPF programs are expected to use BPF_LOCAL_STORAGE_FORCE_CACHE
> only if their data _must_ be in cache.

I'm afraid the new uapi flag doesn't solve this problem.
Sooner or later every bpf program would deem itself "important" and
performance critical. All of them will be using the FORCE_CACHE flag
and we will be back to the same situation.

Also please share the performance data that shows more than 16 programs
that use local storage at the same time and existing simple cache
replacing logic is not enough.
For any kind of link list walking to become an issue there gotta be at
least 17 progs. Two progs should pick up the same cache_idx and
run interleaved to evict each other.
It feels like an unlikely scenario, so real data would be good to see.
If it really is an issue we might need a different caching logic.
Like instead of single link list per local storage we might
have 16 link lists. cache_idx can point to a slot.
If it's not 1st it will be a 2nd in much shorter link list.
With 16 slots the link lists will have 2 elements until 32 bpf progs
are using local storage.
We can get rid of cache too and replace with mini hash table of N
elements where map_id would be an index into a hash table.
All sorts of other algorithms are possible.
In any case the bpf user shouldn't be telling the kernel about
"importance" of its program. If program is indeed executing a lot
the kernel should be caching/accelerating it where it can.


* Re: [PATCH bpf-next 0/3] Introduce local_storage exclusive caching
  2022-04-22  1:40 ` [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Alexei Starovoitov
@ 2022-04-22  4:05   ` Dave Marchevsky
  2022-04-22 22:07     ` Alexei Starovoitov
  2022-04-23  9:43   ` Yosry Ahmed
  1 sibling, 1 reply; 8+ messages in thread
From: Dave Marchevsky @ 2022-04-22  4:05 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, KP Singh, Tejun Heo, kernel-team

On 4/21/22 9:40 PM, Alexei Starovoitov wrote:   
> On Tue, Apr 19, 2022 at 05:21:40PM -0700, Dave Marchevsky wrote:
>> Currently, each local_storage type (sk, inode, task) has a 16-entry
>> cache for local_storage data associated with a particular map. A
>> local_storage map is assigned a fixed cache_idx when it is allocated.
>> When looking in a local_storage for data associated with a map the cache
>> entry at cache_idx is the only place the map can appear in cache. If the
>> map's data is not in cache it is placed there after a search through the
>> cache hlist. When there are >16 local_storage maps allocated for a
>> local_storage type multiple maps have same cache_idx and thus may knock
>> each other out of cache.
>>
>> BPF programs that use local_storage may require fast and consistent
>> local storage access. For example, a BPF program using task local
>> storage to make scheduling decisions would not be able to tolerate a
>> long hlist search for its local_storage as this would negatively affect
>> cycles available to applications. Providing a mechanism for such a
>> program to ensure that its local_storage_data will always be in cache
>> would ensure fast access.
>>
>> This series introduces a BPF_LOCAL_STORAGE_FORCE_CACHE flag that can be
>> set on sk, inode, and task local_storage maps via map_extras. When a map
>> with the FORCE_CACHE flag set is allocated it is assigned an 'exclusive'
>> cache slot that it can't be evicted from until the map is free'd. 
>>
>> If there are no slots available to exclusively claim, the allocation
>> fails. BPF programs are expected to use BPF_LOCAL_STORAGE_FORCE_CACHE
>> only if their data _must_ be in cache.
> 
> I'm afraid new uapi flag doesn't solve this problem.
> Sooner or later every bpf program would deem itself "important" and
> performance critical. All of them will be using FORCE_CACHE flag
> and we will back to the same situation.

In this scenario, if 16 maps had been loaded w/ the FORCE_CACHE flag and a 17th
tried to load, it would fail, so programs depending on that map would fail to load.
Patch 2 adds a selftest 'local_storage_excl_cache_fail' demonstrating this.

> Also please share the performance data that shows more than 16 programs
> that use local storage at the same time and existing simple cache
> replacing logic is not enough.
> For any kind link list walking to become an issue there gotta be at
> least 17 progs. Two progs should pick up the same cache_idx and
> run interleaved to evict each other. 
> It feels like unlikely scenario, so real data would be good to see.
> If it really an issue we might need a different caching logic.
> Like instead of single link list per local storage we might
> have 16 link lists. cache_idx can point to a slot.
> If it's not 1st it will be a 2nd in much shorter link list.
> With 16 slots the link lists will have 2 elements until 32 bpf progs
> are using local storage.
> We can get rid of cache too and replace with mini hash table of N
> elements where map_id would be an index into a hash table.
> All sorts of other algorithms are possible.
> In any case the bpf user shouldn't be telling the kernel about
> "importance" of its program. If program is indeed executing a lot
> the kernel should be caching/accelerating it where it can.

It's worth noting that this is a map-level setting, not prog-level. Telling the
kernel about importance of data feels more palatable to me. Sort of like mmap's
MAP_LOCKED, but for local_storage cache.

Going back to the motivating example - using data in task local_storage to make
scheduling decisions - the desire is to have task local_storage access be
like "accessing a task_struct member" vs "doing a search for the right data to
access (w/ some caching to try to avoid the search)".

Re: performance data, would adding a benchmark in selftests/bpf/benchs work?


* Re: [PATCH bpf-next 0/3] Introduce local_storage exclusive caching
  2022-04-22  4:05   ` Dave Marchevsky
@ 2022-04-22 22:07     ` Alexei Starovoitov
  0 siblings, 0 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2022-04-22 22:07 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, KP Singh, Tejun Heo, kernel-team

On Fri, Apr 22, 2022 at 12:05:23AM -0400, Dave Marchevsky wrote:
> On 4/21/22 9:40 PM, Alexei Starovoitov wrote:   
> > On Tue, Apr 19, 2022 at 05:21:40PM -0700, Dave Marchevsky wrote:
> >> Currently, each local_storage type (sk, inode, task) has a 16-entry
> >> cache for local_storage data associated with a particular map. A
> >> local_storage map is assigned a fixed cache_idx when it is allocated.
> >> When looking in a local_storage for data associated with a map the cache
> >> entry at cache_idx is the only place the map can appear in cache. If the
> >> map's data is not in cache it is placed there after a search through the
> >> cache hlist. When there are >16 local_storage maps allocated for a
> >> local_storage type multiple maps have same cache_idx and thus may knock
> >> each other out of cache.
> >>
> >> BPF programs that use local_storage may require fast and consistent
> >> local storage access. For example, a BPF program using task local
> >> storage to make scheduling decisions would not be able to tolerate a
> >> long hlist search for its local_storage as this would negatively affect
> >> cycles available to applications. Providing a mechanism for such a
> >> program to ensure that its local_storage_data will always be in cache
> >> would ensure fast access.
> >>
> >> This series introduces a BPF_LOCAL_STORAGE_FORCE_CACHE flag that can be
> >> set on sk, inode, and task local_storage maps via map_extras. When a map
> >> with the FORCE_CACHE flag set is allocated it is assigned an 'exclusive'
> >> cache slot that it can't be evicted from until the map is free'd. 
> >>
> >> If there are no slots available to exclusively claim, the allocation
> >> fails. BPF programs are expected to use BPF_LOCAL_STORAGE_FORCE_CACHE
> >> only if their data _must_ be in cache.
> > 
> > I'm afraid new uapi flag doesn't solve this problem.
> > Sooner or later every bpf program would deem itself "important" and
> > performance critical. All of them will be using FORCE_CACHE flag
> > and we will back to the same situation.
> 
> In this scenario, if 16 maps had been loaded w/ FORCE_CACHE flag and 17th tried
> to load, it would fail, so programs depending on the map would fail to load.

Ahh. I missed that the cache_idx is assigned at map creation time.

> Patch 2 adds a selftest 'local_storage_excl_cache_fail' demonstrating this.
> 
> > Also please share the performance data that shows more than 16 programs
> > that use local storage at the same time and existing simple cache
> > replacing logic is not enough.
> > For any kind link list walking to become an issue there gotta be at
> > least 17 progs. Two progs should pick up the same cache_idx and
> > run interleaved to evict each other. 
> > It feels like unlikely scenario, so real data would be good to see.
> > If it really an issue we might need a different caching logic.
> > Like instead of single link list per local storage we might
> > have 16 link lists. cache_idx can point to a slot.
> > If it's not 1st it will be a 2nd in much shorter link list.
> > With 16 slots the link lists will have 2 elements until 32 bpf progs
> > are using local storage.
> > We can get rid of cache too and replace with mini hash table of N
> > elements where map_id would be an index into a hash table.
> > All sorts of other algorithms are possible.
> > In any case the bpf user shouldn't be telling the kernel about
> > "importance" of its program. If program is indeed executing a lot
> > the kernel should be caching/accelerating it where it can.
> 
> It's worth noting that this is a map-level setting, not prog-level. Telling the
> kernel about importance of data feels more palatable to me. Sort of like mmap's
> MAP_LOCKED, but for local_storage cache.

For mmap it's an operational difference. Not just performance.

> Going back to the motivating example - using data in task local_storage to make
> scheduling decisions - the desire is to have the task local_storage access be
> like "accessing a task_struct member" vs "doing a search for right data to 
> access (w/ some caching to try to avoid search)".

Exactly. The motivation is performance. Let's try to get good performance
without uapi flags.
Think from the user's pov. They have to pick between FORCE_CACHE == good performance
and no flag == bad? performance.
Why would anyone pick something that has worse performance?


* Re: [PATCH bpf-next 0/3] Introduce local_storage exclusive caching
  2022-04-22  1:40 ` [PATCH bpf-next 0/3] Introduce local_storage exclusive caching Alexei Starovoitov
  2022-04-22  4:05   ` Dave Marchevsky
@ 2022-04-23  9:43   ` Yosry Ahmed
  1 sibling, 0 replies; 8+ messages in thread
From: Yosry Ahmed @ 2022-04-23  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Dave Marchevsky, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, KP Singh, Tejun Heo,
	kernel-team

On Thu, Apr 21, 2022 at 6:47 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Apr 19, 2022 at 05:21:40PM -0700, Dave Marchevsky wrote:
> > Currently, each local_storage type (sk, inode, task) has a 16-entry
> > cache for local_storage data associated with a particular map. A
> > local_storage map is assigned a fixed cache_idx when it is allocated.
> > When looking in a local_storage for data associated with a map the cache
> > entry at cache_idx is the only place the map can appear in cache. If the
> > map's data is not in cache it is placed there after a search through the
> > cache hlist. When there are >16 local_storage maps allocated for a
> > local_storage type multiple maps have same cache_idx and thus may knock
> > each other out of cache.
> >
> > BPF programs that use local_storage may require fast and consistent
> > local storage access. For example, a BPF program using task local
> > storage to make scheduling decisions would not be able to tolerate a
> > long hlist search for its local_storage as this would negatively affect
> > cycles available to applications. Providing a mechanism for such a
> > program to ensure that its local_storage_data will always be in cache
> > would ensure fast access.
> >
> > This series introduces a BPF_LOCAL_STORAGE_FORCE_CACHE flag that can be
> > set on sk, inode, and task local_storage maps via map_extras. When a map
> > with the FORCE_CACHE flag set is allocated it is assigned an 'exclusive'
> > cache slot that it can't be evicted from until the map is free'd.
> >
> > If there are no slots available to exclusively claim, the allocation
> > fails. BPF programs are expected to use BPF_LOCAL_STORAGE_FORCE_CACHE
> > only if their data _must_ be in cache.
>
> I'm afraid new uapi flag doesn't solve this problem.
> Sooner or later every bpf program would deem itself "important" and
> performance critical. All of them will be using FORCE_CACHE flag
> and we will back to the same situation.
>
> Also please share the performance data that shows more than 16 programs
> that use local storage at the same time and existing simple cache
> replacing logic is not enough.
> For any kind link list walking to become an issue there gotta be at
> least 17 progs. Two progs should pick up the same cache_idx and
> run interleaved to evict each other.
> It feels like unlikely scenario, so real data would be good to see.
> If it really an issue we might need a different caching logic.
> Like instead of single link list per local storage we might
> have 16 link lists. cache_idx can point to a slot.
> If it's not 1st it will be a 2nd in much shorter link list.
> With 16 slots the link lists will have 2 elements until 32 bpf progs
> are using local storage.
> We can get rid of cache too and replace with mini hash table of N
> elements where map_id would be an index into a hash table.

This is a tangent to this discussion, but I was actually wondering why
the elements in bpf_local_storage are stored in a list rather than a
hashtable. Is there a specific reason for this?

> All sorts of other algorithms are possible.
> In any case the bpf user shouldn't be telling the kernel about
> "importance" of its program. If program is indeed executing a lot
> the kernel should be caching/accelerating it where it can.

