* [PATCH bpf-next v5 0/2] Optimize bpf_redirect_map()/xdp_do_redirect()
From: Björn Töpel @ 2021-02-27 12:21 UTC
To: ast, daniel, netdev, bpf
Cc: Björn Töpel, bjorn.topel, maciej.fijalkowski, hawk,
toke, magnus.karlsson, john.fastabend, kuba, davem
Hi XDP-folks,
This two-patch series contains two optimizations for the
bpf_redirect_map() helper and the xdp_do_redirect() function.

The bpf_redirect_map() optimization is about avoiding the run-time map
lookup dispatch. Instead of having a switch-statement that selects the
correct lookup function, bpf_redirect_map() becomes a map operation,
where each map type provides its own implementation. This way the
run-time dispatch is avoided.
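As a condensed sketch (not the complete patch), the dispatch changes
from:

  /* Before: every bpf_redirect_map() call selects the lookup at run-time. */
  switch (map->map_type) {
  case BPF_MAP_TYPE_DEVMAP:
          return __dev_map_lookup_elem(map, index);
  case BPF_MAP_TYPE_CPUMAP:
          return __cpu_map_lookup_elem(map, index);
  /* ... */
  }

to:

  /* After: one indirect call, patched to a direct call by the verifier. */
  return map->ops->map_redirect(map, ifindex, flags);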
The xdp_do_redirect() patch restructures the code so that the map
pointer indirection can be avoided.

Performance-wise I got a 4% improvement for XSKMAP
(sample: xdpsock/rx-drop) and an 8% improvement
(sample: xdp_redirect_map) on my machine.
More details in each commit.
Changelog:
v4->v5: Renamed map operation to map_redirect. (Daniel)
v3->v4: Made bpf_redirect_map() a map operation. (Daniel)
v2->v3: Fix build when CONFIG_NET is not set. (lkp)
v1->v2: Removed warning when CONFIG_BPF_SYSCALL was not set. (lkp)
Cleaned up case-clause in xdp_do_generic_redirect_map(). (Toke)
Re-added comment. (Toke)
rfc->v1: Use map_id, and remove bpf_clear_redirect_map(). (Toke)
Get rid of the macro and use __always_inline. (Jesper)
rfc: https://lore.kernel.org/bpf/87im7fy9nc.fsf@toke.dk/ (Cover not on lore!)
v1: https://lore.kernel.org/bpf/20210219145922.63655-1-bjorn.topel@gmail.com/
v2: https://lore.kernel.org/bpf/20210220153056.111968-1-bjorn.topel@gmail.com/
v3: https://lore.kernel.org/bpf/20210221200954.164125-3-bjorn.topel@gmail.com/
v4: https://lore.kernel.org/bpf/20210226112322.144927-1-bjorn.topel@gmail.com/
Cheers,
Björn
Björn Töpel (2):
bpf, xdp: make bpf_redirect_map() a map operation
bpf, xdp: restructure redirect actions
include/linux/bpf.h | 26 ++----
include/linux/filter.h | 39 +++++++-
include/net/xdp_sock.h | 19 ----
include/trace/events/xdp.h | 66 ++++++++-----
kernel/bpf/cpumap.c | 10 +-
kernel/bpf/devmap.c | 19 +++-
kernel/bpf/verifier.c | 11 ++-
net/core/filter.c | 183 ++++++++++++-------------------------
net/xdp/xskmap.c | 20 +++-
9 files changed, 195 insertions(+), 198 deletions(-)
base-commit: 85e142cb42a1e7b33971bf035dae432d8670c46b
--
2.27.0
* [PATCH bpf-next v5 1/2] bpf, xdp: make bpf_redirect_map() a map operation
From: Björn Töpel @ 2021-02-27 12:21 UTC
To: ast, daniel, netdev, bpf
Cc: Björn Töpel, maciej.fijalkowski, hawk, toke,
magnus.karlsson, john.fastabend, kuba, davem,
Jesper Dangaard Brouer
From: Björn Töpel <bjorn.topel@intel.com>
Currently the bpf_redirect_map() implementation dispatches to the
correct map-lookup function via a switch-statement. To avoid this
dispatching, this change adds bpf_redirect_map() as a map
operation. Each map provides its own bpf_redirect_map() version, and
the correct function is automatically selected by the BPF verifier at
program load time.

A nice side-effect of the code movement is that the map lookup
functions are now local to the map implementation files, which removes
one additional function call.
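For reference, nothing changes on the BPF program side. A typical
caller (a sketch, with an assumed XSKMAP named xsks_map) still looks
like:

  struct {
          __uint(type, BPF_MAP_TYPE_XSKMAP);
          __uint(max_entries, 64);
          __type(key, __u32);
          __type(value, __u32);
  } xsks_map SEC(".maps");

  SEC("xdp")
  int xdp_sock_prog(struct xdp_md *ctx)
  {
          /* The lower bits of the flags argument (here XDP_PASS) are
           * returned on lookup failure. With this patch, the verifier
           * rewrites the call below into the map's own map_redirect
           * operation.
           */
          return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, XDP_PASS);
  }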
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
include/linux/bpf.h | 26 ++++++--------------------
include/linux/filter.h | 27 +++++++++++++++++++++++++++
include/net/xdp_sock.h | 19 -------------------
kernel/bpf/cpumap.c | 8 +++++++-
kernel/bpf/devmap.c | 16 ++++++++++++++--
kernel/bpf/verifier.c | 11 +++++++++--
net/core/filter.c | 39 +--------------------------------------
net/xdp/xskmap.c | 18 ++++++++++++++++++
8 files changed, 82 insertions(+), 82 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4c730863fa77..3d3e89a37e62 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -118,6 +118,9 @@ struct bpf_map_ops {
void *owner, u32 size);
struct bpf_local_storage __rcu ** (*map_owner_storage_ptr)(void *owner);
+ /* XDP helpers.*/
+ int (*map_redirect)(struct bpf_map *map, u32 ifindex, u64 flags);
+
/* map_meta_equal must be implemented for maps that can be
* used as an inner map. It is a runtime check to ensure
* an inner map can be inserted to an outer map.
@@ -1450,9 +1453,9 @@ struct btf *bpf_get_btf_vmlinux(void);
/* Map specifics */
struct xdp_buff;
struct sk_buff;
+struct bpf_dtab_netdev;
+struct bpf_cpu_map_entry;
-struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key);
-struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key);
void __dev_flush(void);
int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
struct net_device *dev_rx);
@@ -1462,7 +1465,6 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
struct bpf_prog *xdp_prog);
bool dev_map_can_have_prog(struct bpf_map *map);
-struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key);
void __cpu_map_flush(void);
int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
struct net_device *dev_rx);
@@ -1590,17 +1592,6 @@ static inline int bpf_obj_get_user(const char __user *pathname, int flags)
return -EOPNOTSUPP;
}
-static inline struct net_device *__dev_map_lookup_elem(struct bpf_map *map,
- u32 key)
-{
- return NULL;
-}
-
-static inline struct net_device *__dev_map_hash_lookup_elem(struct bpf_map *map,
- u32 key)
-{
- return NULL;
-}
static inline bool dev_map_can_have_prog(struct bpf_map *map)
{
return false;
@@ -1612,6 +1603,7 @@ static inline void __dev_flush(void)
struct xdp_buff;
struct bpf_dtab_netdev;
+struct bpf_cpu_map_entry;
static inline
int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
@@ -1636,12 +1628,6 @@ static inline int dev_map_generic_redirect(struct bpf_dtab_netdev *dst,
return 0;
}
-static inline
-struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key)
-{
- return NULL;
-}
-
static inline void __cpu_map_flush(void)
{
}
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 3b00fc906ccd..008691fd3b58 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1472,4 +1472,31 @@ static inline bool bpf_sk_lookup_run_v6(struct net *net, int protocol,
}
#endif /* IS_ENABLED(CONFIG_IPV6) */
+static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifindex, u64 flags,
+ void *lookup_elem(struct bpf_map *map, u32 key))
+{
+ struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+
+ /* Lower bits of the flags are used as return code on lookup failure */
+ if (unlikely(flags > XDP_TX))
+ return XDP_ABORTED;
+
+ ri->tgt_value = lookup_elem(map, ifindex);
+ if (unlikely(!ri->tgt_value)) {
+ /* If the lookup fails we want to clear out the state in the
+ * redirect_info struct completely, so that if an eBPF program
+ * performs multiple lookups, the last one always takes
+ * precedence.
+ */
+ WRITE_ONCE(ri->map, NULL);
+ return flags;
+ }
+
+ ri->flags = flags;
+ ri->tgt_index = ifindex;
+ WRITE_ONCE(ri->map, map);
+
+ return XDP_REDIRECT;
+}
+
#endif /* __LINUX_FILTER_H__ */
diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index cc17bc957548..9c0722c6d7ac 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -80,19 +80,6 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp);
int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp);
void __xsk_map_flush(void);
-static inline struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map,
- u32 key)
-{
- struct xsk_map *m = container_of(map, struct xsk_map, map);
- struct xdp_sock *xs;
-
- if (key >= map->max_entries)
- return NULL;
-
- xs = READ_ONCE(m->xsk_map[key]);
- return xs;
-}
-
#else
static inline int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
@@ -109,12 +96,6 @@ static inline void __xsk_map_flush(void)
{
}
-static inline struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map,
- u32 key)
-{
- return NULL;
-}
-
#endif /* CONFIG_XDP_SOCKETS */
#endif /* _LINUX_XDP_SOCK_H */
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 5d1469de6921..7352d4160b7f 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -563,7 +563,7 @@ static void cpu_map_free(struct bpf_map *map)
kfree(cmap);
}
-struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key)
+static void *__cpu_map_lookup_elem(struct bpf_map *map, u32 key)
{
struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map);
struct bpf_cpu_map_entry *rcpu;
@@ -600,6 +600,11 @@ static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
return 0;
}
+static int cpu_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+{
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __cpu_map_lookup_elem);
+}
+
static int cpu_map_btf_id;
const struct bpf_map_ops cpu_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
@@ -612,6 +617,7 @@ const struct bpf_map_ops cpu_map_ops = {
.map_check_btf = map_check_no_btf,
.map_btf_name = "bpf_cpu_map",
.map_btf_id = &cpu_map_btf_id,
+ .map_redirect = cpu_map_redirect,
};
static void bq_flush_to_queue(struct xdp_bulk_queue *bq)
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 85d9d1b72a33..f7f42448259f 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -258,7 +258,7 @@ static int dev_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
return 0;
}
-struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key)
+static void *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key)
{
struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
struct hlist_head *head = dev_map_index_hash(dtab, key);
@@ -392,7 +392,7 @@ void __dev_flush(void)
* update happens in parallel here a dev_put wont happen until after reading the
* ifindex.
*/
-struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key)
+static void *__dev_map_lookup_elem(struct bpf_map *map, u32 key)
{
struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
struct bpf_dtab_netdev *obj;
@@ -735,6 +735,16 @@ static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
map, key, value, map_flags);
}
+static int dev_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+{
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_lookup_elem);
+}
+
+static int dev_hash_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+{
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_hash_lookup_elem);
+}
+
static int dev_map_btf_id;
const struct bpf_map_ops dev_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
@@ -747,6 +757,7 @@ const struct bpf_map_ops dev_map_ops = {
.map_check_btf = map_check_no_btf,
.map_btf_name = "bpf_dtab",
.map_btf_id = &dev_map_btf_id,
+ .map_redirect = dev_map_redirect,
};
static int dev_map_hash_map_btf_id;
@@ -761,6 +772,7 @@ const struct bpf_map_ops dev_map_hash_ops = {
.map_check_btf = map_check_no_btf,
.map_btf_name = "bpf_dtab",
.map_btf_id = &dev_map_hash_map_btf_id,
+ .map_redirect = dev_hash_map_redirect,
};
static void dev_map_hash_remove_netdev(struct bpf_dtab *dtab,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9fe90ce52a65..b6c44b85e960 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5582,7 +5582,8 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
func_id != BPF_FUNC_map_push_elem &&
func_id != BPF_FUNC_map_pop_elem &&
func_id != BPF_FUNC_map_peek_elem &&
- func_id != BPF_FUNC_for_each_map_elem)
+ func_id != BPF_FUNC_for_each_map_elem &&
+ func_id != BPF_FUNC_redirect_map)
return 0;
if (map == NULL) {
@@ -12017,7 +12018,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
insn->imm == BPF_FUNC_map_delete_elem ||
insn->imm == BPF_FUNC_map_push_elem ||
insn->imm == BPF_FUNC_map_pop_elem ||
- insn->imm == BPF_FUNC_map_peek_elem)) {
+ insn->imm == BPF_FUNC_map_peek_elem ||
+ insn->imm == BPF_FUNC_redirect_map)) {
aux = &env->insn_aux_data[i + delta];
if (bpf_map_ptr_poisoned(aux))
goto patch_call_imm;
@@ -12059,6 +12061,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
(int (*)(struct bpf_map *map, void *value))NULL));
BUILD_BUG_ON(!__same_type(ops->map_peek_elem,
(int (*)(struct bpf_map *map, void *value))NULL));
+ BUILD_BUG_ON(!__same_type(ops->map_redirect,
+ (int (*)(struct bpf_map *map, u32 ifindex, u64 flags))NULL));
patch_map_ops_generic:
switch (insn->imm) {
case BPF_FUNC_map_lookup_elem:
@@ -12085,6 +12089,9 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
insn->imm = BPF_CAST_CALL(ops->map_peek_elem) -
__bpf_call_base;
continue;
+ case BPF_FUNC_redirect_map:
+ insn->imm = BPF_CAST_CALL(ops->map_redirect) - __bpf_call_base;
+ continue;
}
goto patch_call_imm;
diff --git a/net/core/filter.c b/net/core/filter.c
index 13bcf248ee7b..960299a3744f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3934,22 +3934,6 @@ void xdp_do_flush(void)
}
EXPORT_SYMBOL_GPL(xdp_do_flush);
-static inline void *__xdp_map_lookup_elem(struct bpf_map *map, u32 index)
-{
- switch (map->map_type) {
- case BPF_MAP_TYPE_DEVMAP:
- return __dev_map_lookup_elem(map, index);
- case BPF_MAP_TYPE_DEVMAP_HASH:
- return __dev_map_hash_lookup_elem(map, index);
- case BPF_MAP_TYPE_CPUMAP:
- return __cpu_map_lookup_elem(map, index);
- case BPF_MAP_TYPE_XSKMAP:
- return __xsk_map_lookup_elem(map, index);
- default:
- return NULL;
- }
-}
-
void bpf_clear_redirect_map(struct bpf_map *map)
{
struct bpf_redirect_info *ri;
@@ -4103,28 +4087,7 @@ static const struct bpf_func_proto bpf_xdp_redirect_proto = {
BPF_CALL_3(bpf_xdp_redirect_map, struct bpf_map *, map, u32, ifindex,
u64, flags)
{
- struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
-
- /* Lower bits of the flags are used as return code on lookup failure */
- if (unlikely(flags > XDP_TX))
- return XDP_ABORTED;
-
- ri->tgt_value = __xdp_map_lookup_elem(map, ifindex);
- if (unlikely(!ri->tgt_value)) {
- /* If the lookup fails we want to clear out the state in the
- * redirect_info struct completely, so that if an eBPF program
- * performs multiple lookups, the last one always takes
- * precedence.
- */
- WRITE_ONCE(ri->map, NULL);
- return flags;
- }
-
- ri->flags = flags;
- ri->tgt_index = ifindex;
- WRITE_ONCE(ri->map, map);
-
- return XDP_REDIRECT;
+ return map->ops->map_redirect(map, ifindex, flags);
}
static const struct bpf_func_proto bpf_xdp_redirect_map_proto = {
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 113fd9017203..711acb3636b3 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -125,6 +125,18 @@ static int xsk_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
return insn - insn_buf;
}
+static void *__xsk_map_lookup_elem(struct bpf_map *map, u32 key)
+{
+ struct xsk_map *m = container_of(map, struct xsk_map, map);
+ struct xdp_sock *xs;
+
+ if (key >= map->max_entries)
+ return NULL;
+
+ xs = READ_ONCE(m->xsk_map[key]);
+ return xs;
+}
+
static void *xsk_map_lookup_elem(struct bpf_map *map, void *key)
{
WARN_ON_ONCE(!rcu_read_lock_held());
@@ -215,6 +227,11 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
return 0;
}
+static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+{
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem);
+}
+
void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
struct xdp_sock **map_entry)
{
@@ -247,4 +264,5 @@ const struct bpf_map_ops xsk_map_ops = {
.map_check_btf = map_check_no_btf,
.map_btf_name = "xsk_map",
.map_btf_id = &xsk_map_btf_id,
+ .map_redirect = xsk_map_redirect,
};
--
2.27.0
* [PATCH bpf-next v5 2/2] bpf, xdp: restructure redirect actions
From: Björn Töpel @ 2021-02-27 12:21 UTC
To: ast, daniel, netdev, bpf
Cc: Björn Töpel, maciej.fijalkowski, hawk, toke,
magnus.karlsson, john.fastabend, kuba, davem,
Jesper Dangaard Brouer
From: Björn Töpel <bjorn.topel@intel.com>
The XDP_REDIRECT implementations for maps and non-maps are fairly
similar, but obviously need to take different code paths depending on
whether the target is a map or not. Today, the redirect targets for
XDP either use a map or are based on an ifindex.

Here, an explicit redirect type is added to bpf_redirect_info, instead
of the actual map. The redirect type, the map item/ifindex, and the
map_id (if any) are passed to xdp_do_redirect().

In addition to making the code easier to follow, using an explicit
type in bpf_redirect_info has a slight positive performance impact by
avoiding a pointer indirection for the map type lookup; the type is
instead read from the bpf_redirect_info cacheline that is already
touched.

Since the actual map is no longer passed via bpf_redirect_info, the
map lookup is only done in the BPF helper. This means that the
bpf_clear_redirect_map() function can be removed. The actual map item
is RCU protected.

The bpf_redirect_info flags member is not used by XDP any more, and is
no longer read or written. The map_id member is only written when
required/used, not unconditionally.
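Schematically, the resulting flow is (a condensed sketch of the diff
below):

  /* Helper side, __bpf_xdp_redirect_map(): record the pending redirect. */
  ri->tgt_value = lookup_elem(map, ifindex);
  ri->tgt_index = ifindex;
  ri->tgt_type = type;       /* e.g. XDP_REDIR_XSK_MAP */
  ri->map_id = map->id;      /* only needed for tracing */

  /* Consumer side, xdp_do_redirect(): consume it, no map access needed. */
  enum xdp_redirect_type type = ri->tgt_type;

  ri->tgt_type = XDP_REDIR_UNSET;
  switch (type) {
  case XDP_REDIR_DEV_MAP:
          err = dev_map_enqueue(ri->tgt_value, xdp, dev);
          break;
  /* ... */
  }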
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
include/linux/filter.h | 20 ++++--
include/trace/events/xdp.h | 66 ++++++++++-------
kernel/bpf/cpumap.c | 4 +-
kernel/bpf/devmap.c | 7 +-
net/core/filter.c | 144 +++++++++++++++----------------------
net/xdp/xskmap.c | 4 +-
6 files changed, 121 insertions(+), 124 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 008691fd3b58..a7752badc2ec 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -646,11 +646,20 @@ struct bpf_redirect_info {
u32 flags;
u32 tgt_index;
void *tgt_value;
- struct bpf_map *map;
+ u32 map_id;
+ u32 tgt_type;
u32 kern_flags;
struct bpf_nh_params nh;
};
+enum xdp_redirect_type {
+ XDP_REDIR_UNSET,
+ XDP_REDIR_DEV_IFINDEX,
+ XDP_REDIR_DEV_MAP,
+ XDP_REDIR_CPU_MAP,
+ XDP_REDIR_XSK_MAP,
+};
+
DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);
/* flags for bpf_redirect_info kern_flags */
@@ -1473,7 +1482,8 @@ static inline bool bpf_sk_lookup_run_v6(struct net *net, int protocol,
#endif /* IS_ENABLED(CONFIG_IPV6) */
static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifindex, u64 flags,
- void *lookup_elem(struct bpf_map *map, u32 key))
+ void *lookup_elem(struct bpf_map *map, u32 key),
+ enum xdp_redirect_type type)
{
struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
@@ -1488,13 +1498,13 @@ static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifind
* performs multiple lookups, the last one always takes
* precedence.
*/
- WRITE_ONCE(ri->map, NULL);
+ ri->tgt_type = XDP_REDIR_UNSET;
return flags;
}
- ri->flags = flags;
ri->tgt_index = ifindex;
- WRITE_ONCE(ri->map, map);
+ ri->tgt_type = type;
+ ri->map_id = map->id;
return XDP_REDIRECT;
}
diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
index 76a97176ab81..538321735447 100644
--- a/include/trace/events/xdp.h
+++ b/include/trace/events/xdp.h
@@ -86,19 +86,15 @@ struct _bpf_dtab_netdev {
};
#endif /* __DEVMAP_OBJ_TYPE */
-#define devmap_ifindex(tgt, map) \
- (((map->map_type == BPF_MAP_TYPE_DEVMAP || \
- map->map_type == BPF_MAP_TYPE_DEVMAP_HASH)) ? \
- ((struct _bpf_dtab_netdev *)tgt)->dev->ifindex : 0)
-
DECLARE_EVENT_CLASS(xdp_redirect_template,
TP_PROTO(const struct net_device *dev,
const struct bpf_prog *xdp,
const void *tgt, int err,
- const struct bpf_map *map, u32 index),
+ enum xdp_redirect_type type,
+ const struct bpf_redirect_info *ri),
- TP_ARGS(dev, xdp, tgt, err, map, index),
+ TP_ARGS(dev, xdp, tgt, err, type, ri),
TP_STRUCT__entry(
__field(int, prog_id)
@@ -111,14 +107,30 @@ DECLARE_EVENT_CLASS(xdp_redirect_template,
),
TP_fast_assign(
+ u32 ifindex = 0, map_id = 0, index = ri->tgt_index;
+
+ switch (type) {
+ case XDP_REDIR_DEV_MAP:
+ ifindex = ((struct _bpf_dtab_netdev *)tgt)->dev->ifindex;
+ fallthrough;
+ case XDP_REDIR_CPU_MAP:
+ case XDP_REDIR_XSK_MAP:
+ map_id = ri->map_id;
+ break;
+ case XDP_REDIR_DEV_IFINDEX:
+ ifindex = (u32)(long)tgt;
+ break;
+ default:
+ break;
+ }
+
__entry->prog_id = xdp->aux->id;
__entry->act = XDP_REDIRECT;
__entry->ifindex = dev->ifindex;
__entry->err = err;
- __entry->to_ifindex = map ? devmap_ifindex(tgt, map) :
- index;
- __entry->map_id = map ? map->id : 0;
- __entry->map_index = map ? index : 0;
+ __entry->to_ifindex = ifindex;
+ __entry->map_id = map_id;
+ __entry->map_index = index;
),
TP_printk("prog_id=%d action=%s ifindex=%d to_ifindex=%d err=%d"
@@ -133,45 +145,49 @@ DEFINE_EVENT(xdp_redirect_template, xdp_redirect,
TP_PROTO(const struct net_device *dev,
const struct bpf_prog *xdp,
const void *tgt, int err,
- const struct bpf_map *map, u32 index),
- TP_ARGS(dev, xdp, tgt, err, map, index)
+ enum xdp_redirect_type type,
+ const struct bpf_redirect_info *ri),
+ TP_ARGS(dev, xdp, tgt, err, type, ri)
);
DEFINE_EVENT(xdp_redirect_template, xdp_redirect_err,
TP_PROTO(const struct net_device *dev,
const struct bpf_prog *xdp,
const void *tgt, int err,
- const struct bpf_map *map, u32 index),
- TP_ARGS(dev, xdp, tgt, err, map, index)
+ enum xdp_redirect_type type,
+ const struct bpf_redirect_info *ri),
+ TP_ARGS(dev, xdp, tgt, err, type, ri)
);
#define _trace_xdp_redirect(dev, xdp, to) \
- trace_xdp_redirect(dev, xdp, NULL, 0, NULL, to)
+ trace_xdp_redirect(dev, xdp, NULL, 0, XDP_REDIR_DEV_IFINDEX, NULL)
#define _trace_xdp_redirect_err(dev, xdp, to, err) \
- trace_xdp_redirect_err(dev, xdp, NULL, err, NULL, to)
+ trace_xdp_redirect_err(dev, xdp, NULL, err, XDP_REDIR_DEV_IFINDEX, NULL)
-#define _trace_xdp_redirect_map(dev, xdp, to, map, index) \
- trace_xdp_redirect(dev, xdp, to, 0, map, index)
+#define _trace_xdp_redirect_map(dev, xdp, to, type, ri) \
+ trace_xdp_redirect(dev, xdp, to, 0, type, ri)
-#define _trace_xdp_redirect_map_err(dev, xdp, to, map, index, err) \
- trace_xdp_redirect_err(dev, xdp, to, err, map, index)
+#define _trace_xdp_redirect_map_err(dev, xdp, to, type, ri, err) \
+ trace_xdp_redirect_err(dev, xdp, to, err, type, ri)
/* not used anymore, but kept around so as not to break old programs */
DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map,
TP_PROTO(const struct net_device *dev,
const struct bpf_prog *xdp,
const void *tgt, int err,
- const struct bpf_map *map, u32 index),
- TP_ARGS(dev, xdp, tgt, err, map, index)
+ enum xdp_redirect_type type,
+ const struct bpf_redirect_info *ri),
+ TP_ARGS(dev, xdp, tgt, err, type, ri)
);
DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map_err,
TP_PROTO(const struct net_device *dev,
const struct bpf_prog *xdp,
const void *tgt, int err,
- const struct bpf_map *map, u32 index),
- TP_ARGS(dev, xdp, tgt, err, map, index)
+ enum xdp_redirect_type type,
+ const struct bpf_redirect_info *ri),
+ TP_ARGS(dev, xdp, tgt, err, type, ri)
);
TRACE_EVENT(xdp_cpumap_kthread,
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 7352d4160b7f..01b333e594d0 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -543,7 +543,6 @@ static void cpu_map_free(struct bpf_map *map)
* complete.
*/
- bpf_clear_redirect_map(map);
synchronize_rcu();
/* For cpu_map the remote CPUs can still be using the entries
@@ -602,7 +601,8 @@ static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
static int cpu_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
{
- return __bpf_xdp_redirect_map(map, ifindex, flags, __cpu_map_lookup_elem);
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __cpu_map_lookup_elem,
+ XDP_REDIR_CPU_MAP);
}
static int cpu_map_btf_id;
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index f7f42448259f..99f5670f7273 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -197,7 +197,6 @@ static void dev_map_free(struct bpf_map *map)
list_del_rcu(&dtab->list);
spin_unlock(&dev_map_lock);
- bpf_clear_redirect_map(map);
synchronize_rcu();
/* Make sure prior __dev_map_entry_free() have completed. */
@@ -737,12 +736,14 @@ static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
static int dev_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
{
- return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_lookup_elem);
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_lookup_elem,
+ XDP_REDIR_DEV_MAP);
}
static int dev_hash_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
{
- return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_hash_lookup_elem);
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __dev_map_hash_lookup_elem,
+ XDP_REDIR_DEV_MAP);
}
static int dev_map_btf_id;
diff --git a/net/core/filter.c b/net/core/filter.c
index 960299a3744f..cb6a6df3318b 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3909,23 +3909,6 @@ static const struct bpf_func_proto bpf_xdp_adjust_meta_proto = {
.arg2_type = ARG_ANYTHING,
};
-static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd,
- struct bpf_map *map, struct xdp_buff *xdp)
-{
- switch (map->map_type) {
- case BPF_MAP_TYPE_DEVMAP:
- case BPF_MAP_TYPE_DEVMAP_HASH:
- return dev_map_enqueue(fwd, xdp, dev_rx);
- case BPF_MAP_TYPE_CPUMAP:
- return cpu_map_enqueue(fwd, xdp, dev_rx);
- case BPF_MAP_TYPE_XSKMAP:
- return __xsk_map_redirect(fwd, xdp);
- default:
- return -EBADRQC;
- }
- return 0;
-}
-
void xdp_do_flush(void)
{
__dev_flush();
@@ -3934,55 +3917,45 @@ void xdp_do_flush(void)
}
EXPORT_SYMBOL_GPL(xdp_do_flush);
-void bpf_clear_redirect_map(struct bpf_map *map)
-{
- struct bpf_redirect_info *ri;
- int cpu;
-
- for_each_possible_cpu(cpu) {
- ri = per_cpu_ptr(&bpf_redirect_info, cpu);
- /* Avoid polluting remote cacheline due to writes if
- * not needed. Once we pass this test, we need the
- * cmpxchg() to make sure it hasn't been changed in
- * the meantime by remote CPU.
- */
- if (unlikely(READ_ONCE(ri->map) == map))
- cmpxchg(&ri->map, map, NULL);
- }
-}
-
int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
struct bpf_prog *xdp_prog)
{
struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
- struct bpf_map *map = READ_ONCE(ri->map);
- u32 index = ri->tgt_index;
+ enum xdp_redirect_type type = ri->tgt_type;
void *fwd = ri->tgt_value;
int err;
- ri->tgt_index = 0;
- ri->tgt_value = NULL;
- WRITE_ONCE(ri->map, NULL);
+ ri->tgt_type = XDP_REDIR_UNSET;
- if (unlikely(!map)) {
- fwd = dev_get_by_index_rcu(dev_net(dev), index);
+ switch (type) {
+ case XDP_REDIR_DEV_IFINDEX:
+ fwd = dev_get_by_index_rcu(dev_net(dev), (u32)(long)fwd);
if (unlikely(!fwd)) {
err = -EINVAL;
- goto err;
+ break;
}
-
err = dev_xdp_enqueue(fwd, xdp, dev);
- } else {
- err = __bpf_tx_xdp_map(dev, fwd, map, xdp);
+ break;
+ case XDP_REDIR_DEV_MAP:
+ err = dev_map_enqueue(fwd, xdp, dev);
+ break;
+ case XDP_REDIR_CPU_MAP:
+ err = cpu_map_enqueue(fwd, xdp, dev);
+ break;
+ case XDP_REDIR_XSK_MAP:
+ err = __xsk_map_redirect(fwd, xdp);
+ break;
+ default:
+ err = -EBADRQC;
}
if (unlikely(err))
goto err;
- _trace_xdp_redirect_map(dev, xdp_prog, fwd, map, index);
+ _trace_xdp_redirect_map(dev, xdp_prog, fwd, type, ri);
return 0;
err:
- _trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map, index, err);
+ _trace_xdp_redirect_map_err(dev, xdp_prog, fwd, type, ri, err);
return err;
}
EXPORT_SYMBOL_GPL(xdp_do_redirect);
@@ -3991,41 +3964,37 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
struct sk_buff *skb,
struct xdp_buff *xdp,
struct bpf_prog *xdp_prog,
- struct bpf_map *map)
+ void *fwd,
+ enum xdp_redirect_type type)
{
struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
- u32 index = ri->tgt_index;
- void *fwd = ri->tgt_value;
- int err = 0;
-
- ri->tgt_index = 0;
- ri->tgt_value = NULL;
- WRITE_ONCE(ri->map, NULL);
-
- if (map->map_type == BPF_MAP_TYPE_DEVMAP ||
- map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
- struct bpf_dtab_netdev *dst = fwd;
+ int err;
- err = dev_map_generic_redirect(dst, skb, xdp_prog);
+ switch (type) {
+ case XDP_REDIR_DEV_MAP:
+ err = dev_map_generic_redirect(fwd, skb, xdp_prog);
if (unlikely(err))
goto err;
- } else if (map->map_type == BPF_MAP_TYPE_XSKMAP) {
+ break;
+ case XDP_REDIR_XSK_MAP: {
struct xdp_sock *xs = fwd;
err = xsk_generic_rcv(xs, xdp);
if (err)
goto err;
consume_skb(skb);
- } else {
+ break;
+ }
+ default:
/* TODO: Handle BPF_MAP_TYPE_CPUMAP */
err = -EBADRQC;
goto err;
}
- _trace_xdp_redirect_map(dev, xdp_prog, fwd, map, index);
+ _trace_xdp_redirect_map(dev, xdp_prog, fwd, type, ri);
return 0;
err:
- _trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map, index, err);
+ _trace_xdp_redirect_map_err(dev, xdp_prog, fwd, type, ri, err);
return err;
}
@@ -4033,29 +4002,31 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
struct xdp_buff *xdp, struct bpf_prog *xdp_prog)
{
struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
- struct bpf_map *map = READ_ONCE(ri->map);
- u32 index = ri->tgt_index;
- struct net_device *fwd;
+ enum xdp_redirect_type type = ri->tgt_type;
+ void *fwd = ri->tgt_value;
int err = 0;
- if (map)
- return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog,
- map);
- ri->tgt_index = 0;
- fwd = dev_get_by_index_rcu(dev_net(dev), index);
- if (unlikely(!fwd)) {
- err = -EINVAL;
- goto err;
- }
+ ri->tgt_type = XDP_REDIR_UNSET;
+ ri->tgt_value = NULL;
- err = xdp_ok_fwd_dev(fwd, skb->len);
- if (unlikely(err))
- goto err;
+ if (type == XDP_REDIR_DEV_IFINDEX) {
+ fwd = dev_get_by_index_rcu(dev_net(dev), (u32)(long)fwd);
+ if (unlikely(!fwd)) {
+ err = -EINVAL;
+ goto err;
+ }
- skb->dev = fwd;
- _trace_xdp_redirect(dev, xdp_prog, index);
- generic_xdp_tx(skb, xdp_prog);
- return 0;
+ err = xdp_ok_fwd_dev(fwd, skb->len);
+ if (unlikely(err))
+ goto err;
+
+ skb->dev = fwd;
+ _trace_xdp_redirect(dev, xdp_prog, index);
+ generic_xdp_tx(skb, xdp_prog);
+ return 0;
+ }
+
+ return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, type);
err:
_trace_xdp_redirect_err(dev, xdp_prog, index, err);
return err;
@@ -4068,10 +4039,9 @@ BPF_CALL_2(bpf_xdp_redirect, u32, ifindex, u64, flags)
if (unlikely(flags))
return XDP_ABORTED;
- ri->flags = flags;
- ri->tgt_index = ifindex;
- ri->tgt_value = NULL;
- WRITE_ONCE(ri->map, NULL);
+ ri->tgt_type = XDP_REDIR_DEV_IFINDEX;
+ ri->tgt_index = 0;
+ ri->tgt_value = (void *)(long)ifindex;
return XDP_REDIRECT;
}
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 711acb3636b3..2c58d88aa69d 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -87,7 +87,6 @@ static void xsk_map_free(struct bpf_map *map)
{
struct xsk_map *m = container_of(map, struct xsk_map, map);
- bpf_clear_redirect_map(map);
synchronize_net();
bpf_map_area_free(m);
}
@@ -229,7 +228,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
{
- return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem);
+ return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem,
+ XDP_REDIR_XSK_MAP);
}
void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
--
2.27.0
* Re: [PATCH bpf-next v5 2/2] bpf, xdp: restructure redirect actions
From: Daniel Borkmann @ 2021-03-05 15:44 UTC
To: Björn Töpel, ast, netdev, bpf
Cc: Björn Töpel, maciej.fijalkowski, hawk, toke,
magnus.karlsson, john.fastabend, kuba, davem,
Jesper Dangaard Brouer
On 2/27/21 1:21 PM, Björn Töpel wrote:
[...]
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 008691fd3b58..a7752badc2ec 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -646,11 +646,20 @@ struct bpf_redirect_info {
> u32 flags;
> u32 tgt_index;
> void *tgt_value;
> - struct bpf_map *map;
> + u32 map_id;
> + u32 tgt_type;
> u32 kern_flags;
> struct bpf_nh_params nh;
> };
>
> +enum xdp_redirect_type {
> + XDP_REDIR_UNSET,
> + XDP_REDIR_DEV_IFINDEX,
[...]
> + XDP_REDIR_DEV_MAP,
> + XDP_REDIR_CPU_MAP,
> + XDP_REDIR_XSK_MAP,
Did you eval whether for these maps we can avoid the redundant def above by just
passing in map->map_type as ri->tgt_type and inferring the XDP_REDIR_UNSET from
invalid map_id of 0 (given the idr will never allocate such)?
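Roughly something like this, that is, an untested sketch of the idea:

  /* enum bpf_map_type is reused directly; the idr never hands out a
   * map id of 0, so map_id == 0 can stand in for "no redirect pending".
   */
  ri->tgt_type = map->map_type;   /* e.g. BPF_MAP_TYPE_XSKMAP */
  ri->map_id = map->id;

  /* ... consumer side ... */
  if (!ri->map_id)                /* nothing set by the helper */
          return -EINVAL;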
[...]
> @@ -4068,10 +4039,9 @@ BPF_CALL_2(bpf_xdp_redirect, u32, ifindex, u64, flags)
> if (unlikely(flags))
> return XDP_ABORTED;
>
> - ri->flags = flags;
> - ri->tgt_index = ifindex;
> - ri->tgt_value = NULL;
> - WRITE_ONCE(ri->map, NULL);
> + ri->tgt_type = XDP_REDIR_DEV_IFINDEX;
> + ri->tgt_index = 0;
> + ri->tgt_value = (void *)(long)ifindex;
nit: Bit ugly to pass this in / read this out this way; maybe a union
if we cannot use tgt_index?
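E.g. something along these lines (hypothetical, not compile-tested):

  struct bpf_redirect_info {
          /* ... */
          union {
                  void *tgt_value;        /* map redirects */
                  u32 tgt_ifindex;        /* plain dev redirect */
          };
          /* ... */
  };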
> return XDP_REDIRECT;
> }
> diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
> index 711acb3636b3..2c58d88aa69d 100644
> --- a/net/xdp/xskmap.c
> +++ b/net/xdp/xskmap.c
> @@ -87,7 +87,6 @@ static void xsk_map_free(struct bpf_map *map)
> {
> struct xsk_map *m = container_of(map, struct xsk_map, map);
>
> - bpf_clear_redirect_map(map);
> synchronize_net();
> bpf_map_area_free(m);
> }
> @@ -229,7 +228,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
>
> static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
> {
> - return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem);
> + return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem,
> + XDP_REDIR_XSK_MAP);
> }
>
> void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
>
* Re: [PATCH bpf-next v5 1/2] bpf, xdp: make bpf_redirect_map() a map operation
From: Daniel Borkmann @ 2021-03-05 15:55 UTC
To: Björn Töpel, ast, netdev, bpf
Cc: Björn Töpel, maciej.fijalkowski, hawk, toke,
magnus.karlsson, john.fastabend, kuba, davem,
Jesper Dangaard Brouer
On 2/27/21 1:21 PM, Björn Töpel wrote:
[...]
Looks good. Small nits inline that I had originally fixed up locally before glancing at 2/2:
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 4c730863fa77..3d3e89a37e62 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -118,6 +118,9 @@ struct bpf_map_ops {
> void *owner, u32 size);
> struct bpf_local_storage __rcu ** (*map_owner_storage_ptr)(void *owner);
>
> + /* XDP helpers.*/
If this really needs a comment, I'd say 'Misc helpers' since we might later also
add implementations for tc and everything can be inferred from the code anyway.
> + int (*map_redirect)(struct bpf_map *map, u32 ifindex, u64 flags);
> +
[...]
> static void dev_map_hash_remove_netdev(struct bpf_dtab *dtab,
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 9fe90ce52a65..b6c44b85e960 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5582,7 +5582,8 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
> func_id != BPF_FUNC_map_push_elem &&
> func_id != BPF_FUNC_map_pop_elem &&
> func_id != BPF_FUNC_map_peek_elem &&
> - func_id != BPF_FUNC_for_each_map_elem)
> + func_id != BPF_FUNC_for_each_map_elem &&
> + func_id != BPF_FUNC_redirect_map)
> return 0;
>
> if (map == NULL) {
> @@ -12017,7 +12018,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> insn->imm == BPF_FUNC_map_delete_elem ||
> insn->imm == BPF_FUNC_map_push_elem ||
> insn->imm == BPF_FUNC_map_pop_elem ||
> - insn->imm == BPF_FUNC_map_peek_elem)) {
> + insn->imm == BPF_FUNC_map_peek_elem ||
> + insn->imm == BPF_FUNC_redirect_map)) {
> aux = &env->insn_aux_data[i + delta];
> if (bpf_map_ptr_poisoned(aux))
> goto patch_call_imm;
> @@ -12059,6 +12061,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> (int (*)(struct bpf_map *map, void *value))NULL));
> BUILD_BUG_ON(!__same_type(ops->map_peek_elem,
> (int (*)(struct bpf_map *map, void *value))NULL));
> + BUILD_BUG_ON(!__same_type(ops->map_redirect,
> + (int (*)(struct bpf_map *map, u32 ifindex, u64 flags))NULL));
I added a linebreak here.
> patch_map_ops_generic:
> switch (insn->imm) {
> case BPF_FUNC_map_lookup_elem:
> @@ -12085,6 +12089,9 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> insn->imm = BPF_CAST_CALL(ops->map_peek_elem) -
> __bpf_call_base;
> continue;
> + case BPF_FUNC_redirect_map:
> + insn->imm = BPF_CAST_CALL(ops->map_redirect) - __bpf_call_base;
Ditto so it matches the rest.
> + continue;
> }
>
> goto patch_call_imm;
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 13bcf248ee7b..960299a3744f 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -3934,22 +3934,6 @@ void xdp_do_flush(void)
> }
> EXPORT_SYMBOL_GPL(xdp_do_flush);
>
> -static inline void *__xdp_map_lookup_elem(struct bpf_map *map, u32 index)
> -{
> - switch (map->map_type) {
> - case BPF_MAP_TYPE_DEVMAP:
> - return __dev_map_lookup_elem(map, index);
> - case BPF_MAP_TYPE_DEVMAP_HASH:
> - return __dev_map_hash_lookup_elem(map, index);
> - case BPF_MAP_TYPE_CPUMAP:
> - return __cpu_map_lookup_elem(map, index);
> - case BPF_MAP_TYPE_XSKMAP:
> - return __xsk_map_lookup_elem(map, index);
> - default:
> - return NULL;
> - }
> -}
> -
> void bpf_clear_redirect_map(struct bpf_map *map)
> {
> struct bpf_redirect_info *ri;
> @@ -4103,28 +4087,7 @@ static const struct bpf_func_proto bpf_xdp_redirect_proto = {
> BPF_CALL_3(bpf_xdp_redirect_map, struct bpf_map *, map, u32, ifindex,
> u64, flags)
> {
> - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
> -
> - /* Lower bits of the flags are used as return code on lookup failure */
> - if (unlikely(flags > XDP_TX))
> - return XDP_ABORTED;
> -
> - ri->tgt_value = __xdp_map_lookup_elem(map, ifindex);
> - if (unlikely(!ri->tgt_value)) {
> - /* If the lookup fails we want to clear out the state in the
> - * redirect_info struct completely, so that if an eBPF program
> - * performs multiple lookups, the last one always takes
> - * precedence.
> - */
> - WRITE_ONCE(ri->map, NULL);
> - return flags;
> - }
> -
> - ri->flags = flags;
> - ri->tgt_index = ifindex;
> - WRITE_ONCE(ri->map, map);
> -
> - return XDP_REDIRECT;
> + return map->ops->map_redirect(map, ifindex, flags);
> }
>
> static const struct bpf_func_proto bpf_xdp_redirect_map_proto = {
> diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
> index 113fd9017203..711acb3636b3 100644
> --- a/net/xdp/xskmap.c
> +++ b/net/xdp/xskmap.c
> @@ -125,6 +125,18 @@ static int xsk_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
> return insn - insn_buf;
> }
>
> +static void *__xsk_map_lookup_elem(struct bpf_map *map, u32 key)
> +{
> + struct xsk_map *m = container_of(map, struct xsk_map, map);
> + struct xdp_sock *xs;
> +
> + if (key >= map->max_entries)
> + return NULL;
> +
> + xs = READ_ONCE(m->xsk_map[key]);
Just 'return READ_ONCE(m->xsk_map[key]);'
> + return xs;
> +}
> +
> static void *xsk_map_lookup_elem(struct bpf_map *map, void *key)
> {
> WARN_ON_ONCE(!rcu_read_lock_held());
> @@ -215,6 +227,11 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
> return 0;
> }
>
> +static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
> +{
> + return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem);
> +}
> +
> void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
> struct xdp_sock **map_entry)
> {
> @@ -247,4 +264,5 @@ const struct bpf_map_ops xsk_map_ops = {
> .map_check_btf = map_check_no_btf,
> .map_btf_name = "xsk_map",
> .map_btf_id = &xsk_map_btf_id,
> + .map_redirect = xsk_map_redirect,
> };
>
* Re: [PATCH bpf-next v5 1/2] bpf, xdp: make bpf_redirect_map() a map operation
From: Björn Töpel @ 2021-03-05 17:11 UTC
To: Daniel Borkmann, Björn Töpel, ast, netdev, bpf
Cc: maciej.fijalkowski, hawk, toke, magnus.karlsson, john.fastabend,
kuba, davem, Jesper Dangaard Brouer
On 2021-03-05 16:55, Daniel Borkmann wrote:
> On 2/27/21 1:21 PM, Björn Töpel wrote:
> [...]
>
> Look good. Small nits inline I had originally fixed up locally before
> glancing at 2/2:
>
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 4c730863fa77..3d3e89a37e62 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -118,6 +118,9 @@ struct bpf_map_ops {
>> void *owner, u32 size);
>> struct bpf_local_storage __rcu ** (*map_owner_storage_ptr)(void
>> *owner);
>> + /* XDP helpers.*/
>
> If this really needs a comment, I'd say 'Misc helpers' since we might
> later also
> add implementations for tc and everything can be inferred from the code
> anyway.
>
ACK!
>> + int (*map_redirect)(struct bpf_map *map, u32 ifindex, u64 flags);
>> +
> [...]
>> static void dev_map_hash_remove_netdev(struct bpf_dtab *dtab,
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 9fe90ce52a65..b6c44b85e960 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -5582,7 +5582,8 @@ record_func_map(struct bpf_verifier_env *env,
>> struct bpf_call_arg_meta *meta,
>> func_id != BPF_FUNC_map_push_elem &&
>> func_id != BPF_FUNC_map_pop_elem &&
>> func_id != BPF_FUNC_map_peek_elem &&
>> - func_id != BPF_FUNC_for_each_map_elem)
>> + func_id != BPF_FUNC_for_each_map_elem &&
>> + func_id != BPF_FUNC_redirect_map)
>> return 0;
>> if (map == NULL) {
>> @@ -12017,7 +12018,8 @@ static int do_misc_fixups(struct
>> bpf_verifier_env *env)
>> insn->imm == BPF_FUNC_map_delete_elem ||
>> insn->imm == BPF_FUNC_map_push_elem ||
>> insn->imm == BPF_FUNC_map_pop_elem ||
>> - insn->imm == BPF_FUNC_map_peek_elem)) {
>> + insn->imm == BPF_FUNC_map_peek_elem ||
>> + insn->imm == BPF_FUNC_redirect_map)) {
>> aux = &env->insn_aux_data[i + delta];
>> if (bpf_map_ptr_poisoned(aux))
>> goto patch_call_imm;
>> @@ -12059,6 +12061,8 @@ static int do_misc_fixups(struct
>> bpf_verifier_env *env)
>> (int (*)(struct bpf_map *map, void *value))NULL));
>> BUILD_BUG_ON(!__same_type(ops->map_peek_elem,
>> (int (*)(struct bpf_map *map, void *value))NULL));
>> + BUILD_BUG_ON(!__same_type(ops->map_redirect,
>> + (int (*)(struct bpf_map *map, u32 ifindex, u64
>> flags))NULL));
>
> I added a linebreak here.
>
Ok!
>> patch_map_ops_generic:
>> switch (insn->imm) {
>> case BPF_FUNC_map_lookup_elem:
>> @@ -12085,6 +12089,9 @@ static int do_misc_fixups(struct
>> bpf_verifier_env *env)
>> insn->imm = BPF_CAST_CALL(ops->map_peek_elem) -
>> __bpf_call_base;
>> continue;
>> + case BPF_FUNC_redirect_map:
>> + insn->imm = BPF_CAST_CALL(ops->map_redirect) -
>> __bpf_call_base;
>
> Ditto so it matches the rest.
>
Fair enough; I guess my love for the 100 chars lines is bigger than
conformity. :-P
>> + continue;
>> }
>> goto patch_call_imm;
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index 13bcf248ee7b..960299a3744f 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -3934,22 +3934,6 @@ void xdp_do_flush(void)
>> }
>> EXPORT_SYMBOL_GPL(xdp_do_flush);
>> -static inline void *__xdp_map_lookup_elem(struct bpf_map *map, u32
>> index)
>> -{
>> - switch (map->map_type) {
>> - case BPF_MAP_TYPE_DEVMAP:
>> - return __dev_map_lookup_elem(map, index);
>> - case BPF_MAP_TYPE_DEVMAP_HASH:
>> - return __dev_map_hash_lookup_elem(map, index);
>> - case BPF_MAP_TYPE_CPUMAP:
>> - return __cpu_map_lookup_elem(map, index);
>> - case BPF_MAP_TYPE_XSKMAP:
>> - return __xsk_map_lookup_elem(map, index);
>> - default:
>> - return NULL;
>> - }
>> -}
>> -
>> void bpf_clear_redirect_map(struct bpf_map *map)
>> {
>> struct bpf_redirect_info *ri;
>> @@ -4103,28 +4087,7 @@ static const struct bpf_func_proto
>> bpf_xdp_redirect_proto = {
>> BPF_CALL_3(bpf_xdp_redirect_map, struct bpf_map *, map, u32, ifindex,
>> u64, flags)
>> {
>> - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
>> -
>> - /* Lower bits of the flags are used as return code on lookup
>> failure */
>> - if (unlikely(flags > XDP_TX))
>> - return XDP_ABORTED;
>> -
>> - ri->tgt_value = __xdp_map_lookup_elem(map, ifindex);
>> - if (unlikely(!ri->tgt_value)) {
>> - /* If the lookup fails we want to clear out the state in the
>> - * redirect_info struct completely, so that if an eBPF program
>> - * performs multiple lookups, the last one always takes
>> - * precedence.
>> - */
>> - WRITE_ONCE(ri->map, NULL);
>> - return flags;
>> - }
>> -
>> - ri->flags = flags;
>> - ri->tgt_index = ifindex;
>> - WRITE_ONCE(ri->map, map);
>> -
>> - return XDP_REDIRECT;
>> + return map->ops->map_redirect(map, ifindex, flags);
>> }
>> static const struct bpf_func_proto bpf_xdp_redirect_map_proto = {
>> diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
>> index 113fd9017203..711acb3636b3 100644
>> --- a/net/xdp/xskmap.c
>> +++ b/net/xdp/xskmap.c
>> @@ -125,6 +125,18 @@ static int xsk_map_gen_lookup(struct bpf_map
>> *map, struct bpf_insn *insn_buf)
>> return insn - insn_buf;
>> }
>> +static void *__xsk_map_lookup_elem(struct bpf_map *map, u32 key)
>> +{
>> + struct xsk_map *m = container_of(map, struct xsk_map, map);
>> + struct xdp_sock *xs;
>> +
>> + if (key >= map->max_entries)
>> + return NULL;
>> +
>> + xs = READ_ONCE(m->xsk_map[key]);
>
> Just 'return READ_ONCE(m->xsk_map[key]);'
>
Indeed.
I'll make sure to include the fixups in v6.
Björn
>> + return xs;
>> +}
>> +
>> static void *xsk_map_lookup_elem(struct bpf_map *map, void *key)
>> {
>> WARN_ON_ONCE(!rcu_read_lock_held());
>> @@ -215,6 +227,11 @@ static int xsk_map_delete_elem(struct bpf_map
>> *map, void *key)
>> return 0;
>> }
>> +static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
>> +{
>> + return __bpf_xdp_redirect_map(map, ifindex, flags,
>> __xsk_map_lookup_elem);
>> +}
>> +
>> void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
>> struct xdp_sock **map_entry)
>> {
>> @@ -247,4 +264,5 @@ const struct bpf_map_ops xsk_map_ops = {
>> .map_check_btf = map_check_no_btf,
>> .map_btf_name = "xsk_map",
>> .map_btf_id = &xsk_map_btf_id,
>> + .map_redirect = xsk_map_redirect,
>> };
>>
>
* Re: [PATCH bpf-next v5 2/2] bpf, xdp: restructure redirect actions
From: Björn Töpel @ 2021-03-05 17:11 UTC
To: Daniel Borkmann, Björn Töpel, ast, netdev, bpf
Cc: maciej.fijalkowski, hawk, toke, magnus.karlsson, john.fastabend,
kuba, davem, Jesper Dangaard Brouer
On 2021-03-05 16:44, Daniel Borkmann wrote:
> On 2/27/21 1:21 PM, Björn Töpel wrote:
> [...]
>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>> index 008691fd3b58..a7752badc2ec 100644
>> --- a/include/linux/filter.h
>> +++ b/include/linux/filter.h
>> @@ -646,11 +646,20 @@ struct bpf_redirect_info {
>> u32 flags;
>> u32 tgt_index;
>> void *tgt_value;
>> - struct bpf_map *map;
>> + u32 map_id;
>> + u32 tgt_type;
>> u32 kern_flags;
>> struct bpf_nh_params nh;
>> };
>> +enum xdp_redirect_type {
>> + XDP_REDIR_UNSET,
>> + XDP_REDIR_DEV_IFINDEX,
>
> [...]
>
>> + XDP_REDIR_DEV_MAP,
>> + XDP_REDIR_CPU_MAP,
>> + XDP_REDIR_XSK_MAP,
>
> Did you eval whether for these maps we can avoid the redundant def above
> by just
> passing in map->map_type as ri->tgt_type and inferring the
> XDP_REDIR_UNSET from
> invalid map_id of 0 (given the idr will never allocate such)?
>
I'll take a stab at it!
> [...]
>> @@ -4068,10 +4039,9 @@ BPF_CALL_2(bpf_xdp_redirect, u32, ifindex, u64,
>> flags)
>> if (unlikely(flags))
>> return XDP_ABORTED;
>> - ri->flags = flags;
>> - ri->tgt_index = ifindex;
>> - ri->tgt_value = NULL;
>> - WRITE_ONCE(ri->map, NULL);
>> + ri->tgt_type = XDP_REDIR_DEV_IFINDEX;
>> + ri->tgt_index = 0;
>> + ri->tgt_value = (void *)(long)ifindex;
>
> nit: Bit ugly to pass this in /read out this way, maybe union if we
> cannot use
> tgt_index?
>
Ditto!
Thanks for the input! I'll get back with a v6!
Björn
>> return XDP_REDIRECT;
>> }
>> diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
>> index 711acb3636b3..2c58d88aa69d 100644
>> --- a/net/xdp/xskmap.c
>> +++ b/net/xdp/xskmap.c
>> @@ -87,7 +87,6 @@ static void xsk_map_free(struct bpf_map *map)
>> {
>> struct xsk_map *m = container_of(map, struct xsk_map, map);
>> - bpf_clear_redirect_map(map);
>> synchronize_net();
>> bpf_map_area_free(m);
>> }
>> @@ -229,7 +228,8 @@ static int xsk_map_delete_elem(struct bpf_map
>> *map, void *key)
>> static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64
>> flags)
>> {
>> - return __bpf_xdp_redirect_map(map, ifindex, flags,
>> __xsk_map_lookup_elem);
>> + return __bpf_xdp_redirect_map(map, ifindex, flags,
>> __xsk_map_lookup_elem,
>> + XDP_REDIR_XSK_MAP);
>> }
>> void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
>>
>
* Re: [PATCH bpf-next v5 2/2] bpf, xdp: restructure redirect actions
From: Daniel Borkmann @ 2021-03-05 22:56 UTC
To: Björn Töpel, Björn Töpel, ast, netdev, bpf
Cc: maciej.fijalkowski, hawk, toke, magnus.karlsson, john.fastabend,
kuba, davem, Jesper Dangaard Brouer
On 3/5/21 6:11 PM, Björn Töpel wrote:
> On 2021-03-05 16:44, Daniel Borkmann wrote:
>> On 2/27/21 1:21 PM, Björn Töpel wrote:
>> [...]
>>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>>> index 008691fd3b58..a7752badc2ec 100644
>>> --- a/include/linux/filter.h
>>> +++ b/include/linux/filter.h
>>> @@ -646,11 +646,20 @@ struct bpf_redirect_info {
>>> u32 flags;
>>> u32 tgt_index;
>>> void *tgt_value;
>>> - struct bpf_map *map;
>>> + u32 map_id;
>>> + u32 tgt_type;
>>> u32 kern_flags;
>>> struct bpf_nh_params nh;
>>> };
>>> +enum xdp_redirect_type {
>>> + XDP_REDIR_UNSET,
>>> + XDP_REDIR_DEV_IFINDEX,
>>
>> [...]
>>
>>> + XDP_REDIR_DEV_MAP,
>>> + XDP_REDIR_CPU_MAP,
>>> + XDP_REDIR_XSK_MAP,
>>
>> Did you eval whether for these maps we can avoid the redundant def above by just
>> passing in map->map_type as ri->tgt_type and inferring the XDP_REDIR_UNSET from
>> invalid map_id of 0 (given the idr will never allocate such)?
>>
>
> I'll take a stab at it!
Sounds good, thanks! If it doesn't simplify or gets worse, we can always stick to
the one here.
>> [...]
>>> @@ -4068,10 +4039,9 @@ BPF_CALL_2(bpf_xdp_redirect, u32, ifindex, u64, flags)
>>> if (unlikely(flags))
>>> return XDP_ABORTED;
>>> - ri->flags = flags;
>>> - ri->tgt_index = ifindex;
>>> - ri->tgt_value = NULL;
>>> - WRITE_ONCE(ri->map, NULL);
>>> + ri->tgt_type = XDP_REDIR_DEV_IFINDEX;
>>> + ri->tgt_index = 0;
>>> + ri->tgt_value = (void *)(long)ifindex;
>>
>> nit: Bit ugly to pass this in /read out this way, maybe union if we cannot use
>> tgt_index?
>>
>
> Dito!
>
>
> Thanks for the input! I'll get back with a v6!
>
>
> Björn
>
>
>>> return XDP_REDIRECT;
>>> }
>>> diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
>>> index 711acb3636b3..2c58d88aa69d 100644
>>> --- a/net/xdp/xskmap.c
>>> +++ b/net/xdp/xskmap.c
>>> @@ -87,7 +87,6 @@ static void xsk_map_free(struct bpf_map *map)
>>> {
>>> struct xsk_map *m = container_of(map, struct xsk_map, map);
>>> - bpf_clear_redirect_map(map);
>>> synchronize_net();
>>> bpf_map_area_free(m);
>>> }
>>> @@ -229,7 +228,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
>>> static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
>>> {
>>> - return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem);
>>> + return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem,
>>> + XDP_REDIR_XSK_MAP);
>>> }
>>> void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
>>>
>>