bpf.vger.kernel.org archive mirror
* [PATCH net-next v5 0/5] Generic XDP improvements
@ 2021-07-01  0:27 Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 1/5] net: core: split out code to run generic XDP prog Kumar Kartikeya Dwivedi
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-07-01  0:27 UTC (permalink / raw)
  To: netdev
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Toke Høiland-Jørgensen,
	Jesper Dangaard Brouer, David S. Miller, Jakub Kicinski,
	John Fastabend, Martin KaFai Lau, bpf

This small series makes some improvements to generic XDP mode and brings it
closer to native XDP. Patch 1 splits out generic XDP processing into reusable
parts, patch 2 adds pointer-friendly wrappers for bitops (so we don't have to
cast the address of a local pointer back and forth to unsigned long *), patch 3
implements generic cpumap support (details in the commit message), and patch 4
allows devmap bpf prog execution before generic_xdp_tx is called.

Patch 5 just updates a couple of selftests to adapt to the changed behavior
(specifying a devmap/cpumap prog fd in generic mode is now allowed).

Changelog:
----------
v4 -> v5
v4: https://lore.kernel.org/bpf/20210628114746.129669-1-memxor@gmail.com
 * Add comments and examples for new bitops macros (Alexei)

v3 -> v4
v3: https://lore.kernel.org/bpf/20210622202835.1151230-1-memxor@gmail.com
 * Add detach now that attach of XDP program succeeds (Toke)
 * Clean up the test to use new ASSERT macros

v2 -> v3
v2: https://lore.kernel.org/bpf/20210622195527.1110497-1-memxor@gmail.com
 * list_for_each_entry -> list_for_each_entry_safe (due to deletion of skb)

v1 -> v2
v1: https://lore.kernel.org/bpf/20210620233200.855534-1-memxor@gmail.com
 * Move __ptr_{set,clear,test}_bit to bitops.h (Toke)
   Also changed the argument order to match the bit ops they wrap.
 * Remove map value size checking functions for cpumap/devmap (Toke)
 * Rework prog run for skb in cpu_map_kthread_run (Toke)
 * Set skb->dev to dst->dev after devmap prog has run
 * Don't set xdp rxq that will be overwritten in cpumap prog run

Kumar Kartikeya Dwivedi (5):
  net: core: split out code to run generic XDP prog
  bitops: add non-atomic bitops for pointers
  bpf: cpumap: implement generic cpumap
  bpf: devmap: implement devmap prog execution for generic XDP
  bpf: tidy xdp attach selftests

 include/linux/bitops.h                        |  50 ++++++++
 include/linux/bpf.h                           |  10 +-
 include/linux/netdevice.h                     |   2 +
 include/linux/skbuff.h                        |  10 +-
 include/linux/typecheck.h                     |   9 ++
 kernel/bpf/cpumap.c                           | 115 +++++++++++++++---
 kernel/bpf/devmap.c                           |  49 ++++++--
 net/core/dev.c                                | 103 ++++++++--------
 net/core/filter.c                             |   6 +-
 .../bpf/prog_tests/xdp_cpumap_attach.c        |  43 +++----
 .../bpf/prog_tests/xdp_devmap_attach.c        |  39 +++---
 11 files changed, 299 insertions(+), 137 deletions(-)

-- 
2.31.1



* [PATCH net-next v5 1/5] net: core: split out code to run generic XDP prog
  2021-07-01  0:27 [PATCH net-next v5 0/5] Generic XDP improvements Kumar Kartikeya Dwivedi
@ 2021-07-01  0:27 ` Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 2/5] bitops: add non-atomic bitops for pointers Kumar Kartikeya Dwivedi
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-07-01  0:27 UTC (permalink / raw)
  To: netdev
  Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jesper Dangaard Brouer, David S. Miller, Jakub Kicinski,
	John Fastabend, Martin KaFai Lau, bpf

This helper can later be utilized in code that runs cpumap and devmap
programs in generic redirect mode, to adjust the skb based on changes made
to the xdp_buff.

When returning XDP_REDIRECT/XDP_TX, the helper invokes __skb_push, so
whenever a generic redirect path subsequently invokes a devmap/cpumap prog
(if set), it must __skb_pull again, as we expect the mac header to be pulled.

The patch also drops the skb_reset_mac_len call after do_xdp_generic, as the
mac_header and network_header are advanced by the same offset, so the
difference (mac_len) remains constant.
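
As a rough sketch (not part of this patch; error handling elided), a caller
that reuses the skb after running the helper is expected to follow this
contract:

	struct xdp_buff xdp;
	u32 act;

	act = bpf_prog_run_generic_xdp(skb, &xdp, xdp_prog);
	switch (act) {
	case XDP_REDIRECT:
	case XDP_TX:
		/* the helper did __skb_push(), so skb->data points at the
		 * mac header again; pull it before invoking another prog
		 * that expects the mac header to be pulled
		 */
		__skb_pull(skb, skb->mac_len);
		break;
	case XDP_DROP:
		/* skb lifetime is the caller's responsibility */
		kfree_skb(skb);
		break;
	}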

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/netdevice.h |  2 +
 net/core/dev.c            | 84 ++++++++++++++++++++++++---------------
 2 files changed, 55 insertions(+), 31 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index be1dcceda5e4..90472ea70db2 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3984,6 +3984,8 @@ static inline void dev_consume_skb_any(struct sk_buff *skb)
 	__dev_kfree_skb_any(skb, SKB_REASON_CONSUMED);
 }
 
+u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
+			     struct bpf_prog *xdp_prog);
 void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog);
 int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff *skb);
 int netif_rx(struct sk_buff *skb);
diff --git a/net/core/dev.c b/net/core/dev.c
index 991d09b67bd9..ad5ab33cbd39 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4740,45 +4740,18 @@ static struct netdev_rx_queue *netif_get_rxqueue(struct sk_buff *skb)
 	return rxqueue;
 }
 
-static u32 netif_receive_generic_xdp(struct sk_buff *skb,
-				     struct xdp_buff *xdp,
-				     struct bpf_prog *xdp_prog)
+u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
+			     struct bpf_prog *xdp_prog)
 {
 	void *orig_data, *orig_data_end, *hard_start;
 	struct netdev_rx_queue *rxqueue;
-	u32 metalen, act = XDP_DROP;
 	bool orig_bcast, orig_host;
 	u32 mac_len, frame_sz;
 	__be16 orig_eth_type;
 	struct ethhdr *eth;
+	u32 metalen, act;
 	int off;
 
-	/* Reinjected packets coming from act_mirred or similar should
-	 * not get XDP generic processing.
-	 */
-	if (skb_is_redirected(skb))
-		return XDP_PASS;
-
-	/* XDP packets must be linear and must have sufficient headroom
-	 * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
-	 * native XDP provides, thus we need to do it here as well.
-	 */
-	if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
-	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-		int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
-		int troom = skb->tail + skb->data_len - skb->end;
-
-		/* In case we have to go down the path and also linearize,
-		 * then lets do the pskb_expand_head() work just once here.
-		 */
-		if (pskb_expand_head(skb,
-				     hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0,
-				     troom > 0 ? troom + 128 : 0, GFP_ATOMIC))
-			goto do_drop;
-		if (skb_linearize(skb))
-			goto do_drop;
-	}
-
 	/* The XDP program wants to see the packet starting at the MAC
 	 * header.
 	 */
@@ -4833,6 +4806,13 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 		skb->protocol = eth_type_trans(skb, skb->dev);
 	}
 
+	/* Redirect/Tx gives L2 packet, code that will reuse skb must __skb_pull
+	 * before calling us again on redirect path. We do not call do_redirect
+	 * as we leave that up to the caller.
+	 *
+	 * Caller is responsible for managing lifetime of skb (i.e. calling
+	 * kfree_skb in response to actions it cannot handle/XDP_DROP).
+	 */
 	switch (act) {
 	case XDP_REDIRECT:
 	case XDP_TX:
@@ -4843,6 +4823,49 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 		if (metalen)
 			skb_metadata_set(skb, metalen);
 		break;
+	}
+
+	return act;
+}
+
+static u32 netif_receive_generic_xdp(struct sk_buff *skb,
+				     struct xdp_buff *xdp,
+				     struct bpf_prog *xdp_prog)
+{
+	u32 act = XDP_DROP;
+
+	/* Reinjected packets coming from act_mirred or similar should
+	 * not get XDP generic processing.
+	 */
+	if (skb_is_redirected(skb))
+		return XDP_PASS;
+
+	/* XDP packets must be linear and must have sufficient headroom
+	 * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
+	 * native XDP provides, thus we need to do it here as well.
+	 */
+	if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
+	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
+		int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
+		int troom = skb->tail + skb->data_len - skb->end;
+
+		/* In case we have to go down the path and also linearize,
+		 * then lets do the pskb_expand_head() work just once here.
+		 */
+		if (pskb_expand_head(skb,
+				     hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0,
+				     troom > 0 ? troom + 128 : 0, GFP_ATOMIC))
+			goto do_drop;
+		if (skb_linearize(skb))
+			goto do_drop;
+	}
+
+	act = bpf_prog_run_generic_xdp(skb, xdp, xdp_prog);
+	switch (act) {
+	case XDP_REDIRECT:
+	case XDP_TX:
+	case XDP_PASS:
+		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
 		fallthrough;
@@ -5308,7 +5331,6 @@ static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
 			ret = NET_RX_DROP;
 			goto out;
 		}
-		skb_reset_mac_len(skb);
 	}
 
 	if (eth_type_vlan(skb->protocol)) {
-- 
2.31.1



* [PATCH net-next v5 2/5] bitops: add non-atomic bitops for pointers
  2021-07-01  0:27 [PATCH net-next v5 0/5] Generic XDP improvements Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 1/5] net: core: split out code to run generic XDP prog Kumar Kartikeya Dwivedi
@ 2021-07-01  0:27 ` Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 3/5] bpf: cpumap: implement generic cpumap Kumar Kartikeya Dwivedi
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-07-01  0:27 UTC (permalink / raw)
  To: netdev
  Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jesper Dangaard Brouer, David S. Miller, Jakub Kicinski,
	John Fastabend, Martin KaFai Lau, bpf

cpumap needs to set, clear, and test the lowest bit of the skb pointer in
various places. To make these checks less noisy, add pointer-friendly
bitop macros that also do some typechecking to sanitize the argument.

These wrap the non-atomic bitops __set_bit, __clear_bit, and test_bit
for pointer arguments. The pointer's address has to be passed in, and it
is treated as an unsigned long *, since the width and representation of
a pointer and an unsigned long match on the targets Linux supports. The
macros are prefixed with a double underscore to indicate the lack of
atomicity.
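
As an illustration (not part of the patch), the typecheck rejects
non-pointer arguments at compile time:

	void *p = NULL;
	unsigned long v = 0;

	__ptr_set_bit(0, &p);	/* ok: *(&p) is a pointer */
	__ptr_set_bit(0, &v);	/* build error: *(&v) is not a pointer */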

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bitops.h    | 50 +++++++++++++++++++++++++++++++++++++++
 include/linux/typecheck.h |  9 +++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index 26bf15e6cd35..5e62e2383b7f 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -4,6 +4,7 @@
 
 #include <asm/types.h>
 #include <linux/bits.h>
+#include <linux/typecheck.h>
 
 #include <uapi/linux/kernel.h>
 
@@ -253,6 +254,55 @@ static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
 		__clear_bit(nr, addr);
 }
 
+/**
+ * __ptr_set_bit - Set bit in a pointer's value
+ * @nr: the bit to set
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *	void *p = foo();
+ *	__ptr_set_bit(bit, &p);
+ */
+#define __ptr_set_bit(nr, addr)                         \
+	({                                              \
+		typecheck_pointer(*(addr));             \
+		__set_bit(nr, (unsigned long *)(addr)); \
+	})
+
+/**
+ * __ptr_clear_bit - Clear bit in a pointer's value
+ * @nr: the bit to clear
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *	void *p = foo();
+ *	__ptr_clear_bit(bit, &p);
+ */
+#define __ptr_clear_bit(nr, addr)                         \
+	({                                                \
+		typecheck_pointer(*(addr));               \
+		__clear_bit(nr, (unsigned long *)(addr)); \
+	})
+
+/**
+ * __ptr_test_bit - Test bit in a pointer's value
+ * @nr: the bit to test
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *	void *p = foo();
+ *	if (__ptr_test_bit(bit, &p)) {
+ *	        ...
+ *	} else {
+ *		...
+ *	}
+ */
+#define __ptr_test_bit(nr, addr)                       \
+	({                                             \
+		typecheck_pointer(*(addr));            \
+		test_bit(nr, (unsigned long *)(addr)); \
+	})
+
 #ifdef __KERNEL__
 
 #ifndef set_mask_bits
diff --git a/include/linux/typecheck.h b/include/linux/typecheck.h
index 20d310331eb5..46b15e2aaefb 100644
--- a/include/linux/typecheck.h
+++ b/include/linux/typecheck.h
@@ -22,4 +22,13 @@
 	(void)__tmp; \
 })
 
+/*
+ * Check at compile time that something is a pointer type.
+ */
+#define typecheck_pointer(x) \
+({	typeof(x) __dummy; \
+	(void)sizeof(*__dummy); \
+	1; \
+})
+
 #endif		/* TYPECHECK_H_INCLUDED */
-- 
2.31.1



* [PATCH net-next v5 3/5] bpf: cpumap: implement generic cpumap
  2021-07-01  0:27 [PATCH net-next v5 0/5] Generic XDP improvements Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 1/5] net: core: split out code to run generic XDP prog Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 2/5] bitops: add non-atomic bitops for pointers Kumar Kartikeya Dwivedi
@ 2021-07-01  0:27 ` Kumar Kartikeya Dwivedi
  2021-07-01  9:16   ` Jesper Dangaard Brouer
  2021-07-01  0:27 ` [PATCH net-next v5 4/5] bpf: devmap: implement devmap prog execution for generic XDP Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 5/5] bpf: tidy xdp attach selftests Kumar Kartikeya Dwivedi
  4 siblings, 1 reply; 8+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-07-01  0:27 UTC (permalink / raw)
  To: netdev
  Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jesper Dangaard Brouer, David S. Miller, Jakub Kicinski,
	John Fastabend, Martin KaFai Lau, bpf

This change implements CPUMAP redirect support for generic XDP programs.
The idea is to reuse the cpu map entry's queue, which is used to push
native xdp frames, for redirecting skbs to a different CPU. This matches
native XDP behavior (in that RPS is invoked again for packets reinjected
into the networking stack).

To be able to determine whether the incoming skb is from the driver or
cpumap, we reuse the skb->redirected bit, which skips generic XDP
processing when it is set. To always be able to make use of this, the
CONFIG_NET_REDIRECT guard on the bit has been lifted, so it is always
available.

From the redirect side, we add the skb to the ptr_ring with its lowest
bit set to 1.  This should be safe, as an skb is never 1-byte aligned,
so the lowest pointer bit is otherwise always zero. This allows the
kthread to discern between xdp_frames and sk_buffs. On consumption of
the ptr_ring item, the lowest bit is unset again.

In the end, the skb is simply added to the list that the kthread already
maintains for xdp_frames converted to skbs, and is then received again
using netif_receive_skb_list.
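
A rough sketch of the tagging scheme described above (illustrative only,
simplified from the diff below):

	/* enqueue side: tag bit 0 so the kthread can tell this entry is
	 * an skb rather than an xdp_frame
	 */
	__ptr_set_bit(0, &skb);
	ptr_ring_produce(rcpu->queue, skb);

	/* kthread side: f is a void * consumed from the ring; check the
	 * tag and unset it before use
	 */
	if (__ptr_test_bit(0, &f)) {
		struct sk_buff *skb = f;

		__ptr_clear_bit(0, &skb);
		list_add_tail(&skb->list, &list);
	}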

Bulking optimization for generic cpumap is left as an exercise for a
future patch.

Since cpumap entry progs are now supported, also remove the cpumap check
in generic_xdp_install.

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h    |   9 +++-
 include/linux/skbuff.h |  10 +---
 kernel/bpf/cpumap.c    | 115 +++++++++++++++++++++++++++++++++++------
 net/core/dev.c         |   3 +-
 net/core/filter.c      |   6 ++-
 5 files changed, 115 insertions(+), 28 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f309fc1509f2..095aaa104c56 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1513,7 +1513,8 @@ bool dev_map_can_have_prog(struct bpf_map *map);
 void __cpu_map_flush(void);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
 		    struct net_device *dev_rx);
-bool cpu_map_prog_allowed(struct bpf_map *map);
+int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
+			     struct sk_buff *skb);
 
 /* Return map's numa specified by userspace */
 static inline int bpf_map_attr_numa_node(const union bpf_attr *attr)
@@ -1710,6 +1711,12 @@ static inline int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu,
 	return 0;
 }
 
+static inline int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
+					   struct sk_buff *skb)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline bool cpu_map_prog_allowed(struct bpf_map *map)
 {
 	return false;
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b2db9cd9a73f..f19190820e63 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -863,8 +863,8 @@ struct sk_buff {
 	__u8			tc_skip_classify:1;
 	__u8			tc_at_ingress:1;
 #endif
-#ifdef CONFIG_NET_REDIRECT
 	__u8			redirected:1;
+#ifdef CONFIG_NET_REDIRECT
 	__u8			from_ingress:1;
 #endif
 #ifdef CONFIG_TLS_DEVICE
@@ -4664,17 +4664,13 @@ static inline __wsum lco_csum(struct sk_buff *skb)
 
 static inline bool skb_is_redirected(const struct sk_buff *skb)
 {
-#ifdef CONFIG_NET_REDIRECT
 	return skb->redirected;
-#else
-	return false;
-#endif
 }
 
 static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
 {
-#ifdef CONFIG_NET_REDIRECT
 	skb->redirected = 1;
+#ifdef CONFIG_NET_REDIRECT
 	skb->from_ingress = from_ingress;
 	if (skb->from_ingress)
 		skb->tstamp = 0;
@@ -4683,9 +4679,7 @@ static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
 
 static inline void skb_reset_redirect(struct sk_buff *skb)
 {
-#ifdef CONFIG_NET_REDIRECT
 	skb->redirected = 0;
-#endif
 }
 
 static inline bool skb_csum_is_sctp(struct sk_buff *skb)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index a1a0c4e791c6..274353e2cd70 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -16,6 +16,7 @@
  * netstack, and assigning dedicated CPUs for this stage.  This
  * basically allows for 10G wirespeed pre-filtering via bpf.
  */
+#include <linux/bitops.h>
 #include <linux/bpf.h>
 #include <linux/filter.h>
 #include <linux/ptr_ring.h>
@@ -168,6 +169,49 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
 	}
 }
 
+static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
+				     struct list_head *listp,
+				     struct xdp_cpumap_stats *stats)
+{
+	struct sk_buff *skb, *tmp;
+	struct xdp_buff xdp;
+	u32 act;
+	int err;
+
+	if (!rcpu->prog)
+		return;
+
+	list_for_each_entry_safe(skb, tmp, listp, list) {
+		act = bpf_prog_run_generic_xdp(skb, &xdp, rcpu->prog);
+		switch (act) {
+		case XDP_PASS:
+			break;
+		case XDP_REDIRECT:
+			skb_list_del_init(skb);
+			err = xdp_do_generic_redirect(skb->dev, skb, &xdp,
+						      rcpu->prog);
+			if (unlikely(err)) {
+				kfree_skb(skb);
+				stats->drop++;
+			} else {
+				stats->redirect++;
+			}
+			return;
+		default:
+			bpf_warn_invalid_xdp_action(act);
+			fallthrough;
+		case XDP_ABORTED:
+			trace_xdp_exception(skb->dev, rcpu->prog, act);
+			fallthrough;
+		case XDP_DROP:
+			skb_list_del_init(skb);
+			kfree_skb(skb);
+			stats->drop++;
+			return;
+		}
+	}
+}
+
 static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 				    void **frames, int n,
 				    struct xdp_cpumap_stats *stats)
@@ -179,8 +223,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 	if (!rcpu->prog)
 		return n;
 
-	rcu_read_lock_bh();
-
 	xdp_set_return_frame_no_direct();
 	xdp.rxq = &rxq;
 
@@ -227,17 +269,34 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 		}
 	}
 
+	xdp_clear_return_frame_no_direct();
+
+	return nframes;
+}
+
+#define CPUMAP_BATCH 8
+
+static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+				int xdp_n, struct xdp_cpumap_stats *stats,
+				struct list_head *list)
+{
+	int nframes;
+
+	rcu_read_lock_bh();
+
+	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats);
+
 	if (stats->redirect)
-		xdp_do_flush_map();
+		xdp_do_flush();
 
-	xdp_clear_return_frame_no_direct();
+	if (unlikely(!list_empty(list)))
+		cpu_map_bpf_prog_run_skb(rcpu, list, stats);
 
-	rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
+	rcu_read_unlock_bh();
 
 	return nframes;
 }
 
-#define CPUMAP_BATCH 8
 
 static int cpu_map_kthread_run(void *data)
 {
@@ -254,9 +313,9 @@ static int cpu_map_kthread_run(void *data)
 		struct xdp_cpumap_stats stats = {}; /* zero stats */
 		unsigned int kmem_alloc_drops = 0, sched = 0;
 		gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
+		int i, n, m, nframes, xdp_n;
 		void *frames[CPUMAP_BATCH];
 		void *skbs[CPUMAP_BATCH];
-		int i, n, m, nframes;
 		LIST_HEAD(list);
 
 		/* Release CPU reschedule checks */
@@ -280,9 +339,20 @@ static int cpu_map_kthread_run(void *data)
 		 */
 		n = __ptr_ring_consume_batched(rcpu->queue, frames,
 					       CPUMAP_BATCH);
-		for (i = 0; i < n; i++) {
+		for (i = 0, xdp_n = 0; i < n; i++) {
 			void *f = frames[i];
-			struct page *page = virt_to_page(f);
+			struct page *page;
+
+			if (unlikely(__ptr_test_bit(0, &f))) {
+				struct sk_buff *skb = f;
+
+				__ptr_clear_bit(0, &skb);
+				list_add_tail(&skb->list, &list);
+				continue;
+			}
+
+			frames[xdp_n++] = f;
+			page = virt_to_page(f);
 
 			/* Bring struct page memory area to curr CPU. Read by
 			 * build_skb_around via page_is_pfmemalloc(), and when
@@ -292,7 +362,7 @@ static int cpu_map_kthread_run(void *data)
 		}
 
 		/* Support running another XDP prog on this CPU */
-		nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, n, &stats);
+		nframes = cpu_map_bpf_prog_run(rcpu, frames, xdp_n, &stats, &list);
 		if (nframes) {
 			m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, nframes, skbs);
 			if (unlikely(m == 0)) {
@@ -330,12 +400,6 @@ static int cpu_map_kthread_run(void *data)
 	return 0;
 }
 
-bool cpu_map_prog_allowed(struct bpf_map *map)
-{
-	return map->map_type == BPF_MAP_TYPE_CPUMAP &&
-	       map->value_size != offsetofend(struct bpf_cpumap_val, qsize);
-}
-
 static int __cpu_map_load_bpf_program(struct bpf_cpu_map_entry *rcpu, int fd)
 {
 	struct bpf_prog *prog;
@@ -696,6 +760,25 @@ int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
 	return 0;
 }
 
+int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
+			     struct sk_buff *skb)
+{
+	int ret;
+
+	__skb_pull(skb, skb->mac_len);
+	skb_set_redirected(skb, false);
+	__ptr_set_bit(0, &skb);
+
+	ret = ptr_ring_produce(rcpu->queue, skb);
+	if (ret < 0)
+		goto trace;
+
+	wake_up_process(rcpu->kthread);
+trace:
+	trace_xdp_cpumap_enqueue(rcpu->map_id, !ret, !!ret, rcpu->cpu);
+	return ret;
+}
+
 void __cpu_map_flush(void)
 {
 	struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
diff --git a/net/core/dev.c b/net/core/dev.c
index ad5ab33cbd39..8521936414f2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5665,8 +5665,7 @@ static int generic_xdp_install(struct net_device *dev, struct netdev_bpf *xdp)
 		 * have a bpf_prog installed on an entry
 		 */
 		for (i = 0; i < new->aux->used_map_cnt; i++) {
-			if (dev_map_can_have_prog(new->aux->used_maps[i]) ||
-			    cpu_map_prog_allowed(new->aux->used_maps[i])) {
+			if (dev_map_can_have_prog(new->aux->used_maps[i])) {
 				mutex_unlock(&new->aux->used_maps_mutex);
 				return -EINVAL;
 			}
diff --git a/net/core/filter.c b/net/core/filter.c
index 0b13d8157a8f..4a21fde3028f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4038,8 +4038,12 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
 			goto err;
 		consume_skb(skb);
 		break;
+	case BPF_MAP_TYPE_CPUMAP:
+		err = cpu_map_generic_redirect(fwd, skb);
+		if (unlikely(err))
+			goto err;
+		break;
 	default:
-		/* TODO: Handle BPF_MAP_TYPE_CPUMAP */
 		err = -EBADRQC;
 		goto err;
 	}
-- 
2.31.1



* [PATCH net-next v5 4/5] bpf: devmap: implement devmap prog execution for generic XDP
  2021-07-01  0:27 [PATCH net-next v5 0/5] Generic XDP improvements Kumar Kartikeya Dwivedi
                   ` (2 preceding siblings ...)
  2021-07-01  0:27 ` [PATCH net-next v5 3/5] bpf: cpumap: implement generic cpumap Kumar Kartikeya Dwivedi
@ 2021-07-01  0:27 ` Kumar Kartikeya Dwivedi
  2021-07-01  0:27 ` [PATCH net-next v5 5/5] bpf: tidy xdp attach selftests Kumar Kartikeya Dwivedi
  4 siblings, 0 replies; 8+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-07-01  0:27 UTC (permalink / raw)
  To: netdev
  Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jesper Dangaard Brouer, David S. Miller, Jakub Kicinski,
	John Fastabend, Martin KaFai Lau, bpf

This lifts the restriction on running devmap BPF progs in generic
redirect mode. To match native XDP behavior, the devmap prog is invoked
right before generic_xdp_tx is called, and only the XDP_PASS/XDP_ABORTED/
XDP_DROP actions are supported.

We also return 0 even if the devmap program drops the packet, as
semantically the redirect has already succeeded: the devmap prog is the
last point before TX of the packet to the device, where it can deliver a
verdict on the packet.

This also means the devmap prog run must take care of freeing the skb,
as callers of xdp_do_generic_redirect only do that in case an error is
returned.
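
For illustration (hypothetical caller, not part of this patch), the
resulting contract at the xdp_do_generic_redirect call site is:

	err = xdp_do_generic_redirect(skb->dev, skb, &xdp, xdp_prog);
	if (err)
		kfree_skb(skb);	/* callers free only on error */
	/* on success (err == 0) the skb has been handed off, even if a
	 * devmap prog dropped it internally
	 */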

Since devmap entry progs are now supported, remove the check in
generic_xdp_install entirely.

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h |  1 -
 kernel/bpf/devmap.c | 49 ++++++++++++++++++++++++++++++++++++---------
 net/core/dev.c      | 18 -----------------
 3 files changed, 39 insertions(+), 29 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 095aaa104c56..4afbff308ca3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1508,7 +1508,6 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
 			   struct bpf_prog *xdp_prog, struct bpf_map *map,
 			   bool exclude_ingress);
-bool dev_map_can_have_prog(struct bpf_map *map);
 
 void __cpu_map_flush(void);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 2a75e6c2d27d..49f03e8e5561 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -318,16 +318,6 @@ static int dev_map_hash_get_next_key(struct bpf_map *map, void *key,
 	return -ENOENT;
 }
 
-bool dev_map_can_have_prog(struct bpf_map *map)
-{
-	if ((map->map_type == BPF_MAP_TYPE_DEVMAP ||
-	     map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) &&
-	    map->value_size != offsetofend(struct bpf_devmap_val, ifindex))
-		return true;
-
-	return false;
-}
-
 static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog,
 				struct xdp_frame **frames, int n,
 				struct net_device *dev)
@@ -499,6 +489,37 @@ static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
 	return 0;
 }
 
+static u32 dev_map_bpf_prog_run_skb(struct sk_buff *skb, struct bpf_dtab_netdev *dst)
+{
+	struct xdp_txq_info txq = { .dev = dst->dev };
+	struct xdp_buff xdp;
+	u32 act;
+
+	if (!dst->xdp_prog)
+		return XDP_PASS;
+
+	__skb_pull(skb, skb->mac_len);
+	xdp.txq = &txq;
+
+	act = bpf_prog_run_generic_xdp(skb, &xdp, dst->xdp_prog);
+	switch (act) {
+	case XDP_PASS:
+		__skb_push(skb, skb->mac_len);
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		fallthrough;
+	case XDP_ABORTED:
+		trace_xdp_exception(dst->dev, dst->xdp_prog, act);
+		fallthrough;
+	case XDP_DROP:
+		kfree_skb(skb);
+		break;
+	}
+
+	return act;
+}
+
 int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
 		    struct net_device *dev_rx)
 {
@@ -614,6 +635,14 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 	err = xdp_ok_fwd_dev(dst->dev, skb->len);
 	if (unlikely(err))
 		return err;
+
+	/* Redirect has already succeeded semantically at this point, so we just
+	 * return 0 even if packet is dropped. Helper below takes care of
+	 * freeing skb.
+	 */
+	if (dev_map_bpf_prog_run_skb(skb, dst) != XDP_PASS)
+		return 0;
+
 	skb->dev = dst->dev;
 	generic_xdp_tx(skb, xdp_prog);
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 8521936414f2..c674fe191e8a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5656,24 +5656,6 @@ static int generic_xdp_install(struct net_device *dev, struct netdev_bpf *xdp)
 	struct bpf_prog *new = xdp->prog;
 	int ret = 0;
 
-	if (new) {
-		u32 i;
-
-		mutex_lock(&new->aux->used_maps_mutex);
-
-		/* generic XDP does not work with DEVMAPs that can
-		 * have a bpf_prog installed on an entry
-		 */
-		for (i = 0; i < new->aux->used_map_cnt; i++) {
-			if (dev_map_can_have_prog(new->aux->used_maps[i])) {
-				mutex_unlock(&new->aux->used_maps_mutex);
-				return -EINVAL;
-			}
-		}
-
-		mutex_unlock(&new->aux->used_maps_mutex);
-	}
-
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		rcu_assign_pointer(dev->xdp_prog, new);
-- 
2.31.1



* [PATCH net-next v5 5/5] bpf: tidy xdp attach selftests
  2021-07-01  0:27 [PATCH net-next v5 0/5] Generic XDP improvements Kumar Kartikeya Dwivedi
                   ` (3 preceding siblings ...)
  2021-07-01  0:27 ` [PATCH net-next v5 4/5] bpf: devmap: implement devmap prog execution for generic XDP Kumar Kartikeya Dwivedi
@ 2021-07-01  0:27 ` Kumar Kartikeya Dwivedi
  4 siblings, 0 replies; 8+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-07-01  0:27 UTC (permalink / raw)
  To: netdev
  Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Jesper Dangaard Brouer, David S. Miller, Jakub Kicinski,
	John Fastabend, Martin KaFai Lau, bpf

Support for cpumap and devmap entry progs in the previous commits means
the tests need to be updated for the new semantics. Also take this
opportunity to convert them from CHECK macros to the new ASSERT macros.

Since xdp_cpumap_attach has no subtest, put the sole test inside the
test_xdp_cpumap_attach function.

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 .../bpf/prog_tests/xdp_cpumap_attach.c        | 43 +++++++------------
 .../bpf/prog_tests/xdp_devmap_attach.c        | 39 +++++++----------
 2 files changed, 32 insertions(+), 50 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
index 0176573fe4e7..8755effd80b0 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
@@ -7,64 +7,53 @@
 
 #define IFINDEX_LO	1
 
-void test_xdp_with_cpumap_helpers(void)
+void test_xdp_cpumap_attach(void)
 {
 	struct test_xdp_with_cpumap_helpers *skel;
 	struct bpf_prog_info info = {};
+	__u32 len = sizeof(info);
 	struct bpf_cpumap_val val = {
 		.qsize = 192,
 	};
-	__u32 duration = 0, idx = 0;
-	__u32 len = sizeof(info);
 	int err, prog_fd, map_fd;
+	__u32 idx = 0;
 
 	skel = test_xdp_with_cpumap_helpers__open_and_load();
-	if (CHECK_FAIL(!skel)) {
-		perror("test_xdp_with_cpumap_helpers__open_and_load");
+	if (!ASSERT_OK_PTR(skel, "test_xdp_with_cpumap_helpers__open_and_load"))
 		return;
-	}
 
-	/* can not attach program with cpumaps that allow programs
-	 * as xdp generic
-	 */
 	prog_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Generic attach of program with 8-byte CPUMAP",
-	      "should have failed\n");
+	if (!ASSERT_OK(err, "Generic attach of program with 8-byte CPUMAP"))
+		goto out_close;
+
+	err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+	ASSERT_OK(err, "XDP program detach");
 
 	prog_fd = bpf_program__fd(skel->progs.xdp_dummy_cm);
 	map_fd = bpf_map__fd(skel->maps.cpu_map);
 	err = bpf_obj_get_info_by_fd(prog_fd, &info, &len);
-	if (CHECK_FAIL(err))
+	if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
 		goto out_close;
 
 	val.bpf_prog.fd = prog_fd;
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err, "Add program to cpumap entry", "err %d errno %d\n",
-	      err, errno);
+	ASSERT_OK(err, "Add program to cpumap entry");
 
 	err = bpf_map_lookup_elem(map_fd, &idx, &val);
-	CHECK(err, "Read cpumap entry", "err %d errno %d\n", err, errno);
-	CHECK(info.id != val.bpf_prog.id, "Expected program id in cpumap entry",
-	      "expected %u read %u\n", info.id, val.bpf_prog.id);
+	ASSERT_OK(err, "Read cpumap entry");
+	ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to cpumap entry prog_id");
 
 	/* can not attach BPF_XDP_CPUMAP program to a device */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Attach of BPF_XDP_CPUMAP program",
-	      "should have failed\n");
+	if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_CPUMAP program"))
+		bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
 
 	val.qsize = 192;
 	val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err == 0, "Add non-BPF_XDP_CPUMAP program to cpumap entry",
-	      "should have failed\n");
+	ASSERT_NEQ(err, 0, "Add non-BPF_XDP_CPUMAP program to cpumap entry");
 
 out_close:
 	test_xdp_with_cpumap_helpers__destroy(skel);
 }
-
-void test_xdp_cpumap_attach(void)
-{
-	if (test__start_subtest("cpumap_with_progs"))
-		test_xdp_with_cpumap_helpers();
-}
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
index 88ef3ec8ac4c..c72af030ff10 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
@@ -16,50 +16,45 @@ void test_xdp_with_devmap_helpers(void)
 		.ifindex = IFINDEX_LO,
 	};
 	__u32 len = sizeof(info);
-	__u32 duration = 0, idx = 0;
 	int err, dm_fd, map_fd;
+	__u32 idx = 0;
 
 
 	skel = test_xdp_with_devmap_helpers__open_and_load();
-	if (CHECK_FAIL(!skel)) {
-		perror("test_xdp_with_devmap_helpers__open_and_load");
+	if (!ASSERT_OK_PTR(skel, "test_xdp_with_devmap_helpers__open_and_load"))
 		return;
-	}
 
-	/* can not attach program with DEVMAPs that allow programs
-	 * as xdp generic
-	 */
 	dm_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Generic attach of program with 8-byte devmap",
-	      "should have failed\n");
+	if (!ASSERT_OK(err, "Generic attach of program with 8-byte devmap"))
+		goto out_close;
+
+	err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+	ASSERT_OK(err, "XDP program detach");
 
 	dm_fd = bpf_program__fd(skel->progs.xdp_dummy_dm);
 	map_fd = bpf_map__fd(skel->maps.dm_ports);
 	err = bpf_obj_get_info_by_fd(dm_fd, &info, &len);
-	if (CHECK_FAIL(err))
+	if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
 		goto out_close;
 
 	val.bpf_prog.fd = dm_fd;
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err, "Add program to devmap entry",
-	      "err %d errno %d\n", err, errno);
+	ASSERT_OK(err, "Add program to devmap entry");
 
 	err = bpf_map_lookup_elem(map_fd, &idx, &val);
-	CHECK(err, "Read devmap entry", "err %d errno %d\n", err, errno);
-	CHECK(info.id != val.bpf_prog.id, "Expected program id in devmap entry",
-	      "expected %u read %u\n", info.id, val.bpf_prog.id);
+	ASSERT_OK(err, "Read devmap entry");
+	ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to devmap entry prog_id");
 
 	/* can not attach BPF_XDP_DEVMAP program to a device */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Attach of BPF_XDP_DEVMAP program",
-	      "should have failed\n");
+	if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_DEVMAP program"))
+		bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
 
 	val.ifindex = 1;
 	val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err == 0, "Add non-BPF_XDP_DEVMAP program to devmap entry",
-	      "should have failed\n");
+	ASSERT_NEQ(err, 0, "Add non-BPF_XDP_DEVMAP program to devmap entry");
 
 out_close:
 	test_xdp_with_devmap_helpers__destroy(skel);
@@ -68,12 +63,10 @@ void test_xdp_with_devmap_helpers(void)
 void test_neg_xdp_devmap_helpers(void)
 {
 	struct test_xdp_devmap_helpers *skel;
-	__u32 duration = 0;
 
 	skel = test_xdp_devmap_helpers__open_and_load();
-	if (CHECK(skel,
-		  "Load of XDP program accessing egress ifindex without attach type",
-		  "should have failed\n")) {
+	if (!ASSERT_EQ(skel, NULL,
+		    "Load of XDP program accessing egress ifindex without attach type")) {
 		test_xdp_devmap_helpers__destroy(skel);
 	}
 }
-- 
2.31.1



* Re: [PATCH net-next v5 3/5] bpf: cpumap: implement generic cpumap
  2021-07-01  0:27 ` [PATCH net-next v5 3/5] bpf: cpumap: implement generic cpumap Kumar Kartikeya Dwivedi
@ 2021-07-01  9:16   ` Jesper Dangaard Brouer
  2021-07-02 11:18     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 8+ messages in thread
From: Jesper Dangaard Brouer @ 2021-07-01  9:16 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi, netdev
  Cc: Toke Høiland-Jørgensen, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, David S. Miller,
	Jakub Kicinski, John Fastabend, Martin KaFai Lau, bpf,
	Eric Leblond

(Cc. Eric Leblond as he needed this for Suricata.)

On 01/07/2021 02.27, Kumar Kartikeya Dwivedi wrote:
> [...]
>
> Bulking optimization for generic cpumap is left as an exercise for a
> future patch.
Fine by me, I hope bulking is added later, as I think with bulking this
will be a faster alternative to RPS.
>
> Since cpumap entry progs are now supported, also remove the cpumap check
> in generic_xdp_install.
>
> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
> [...]
>
> +static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
> +				     struct list_head *listp,
> +				     struct xdp_cpumap_stats *stats)
> +{
> +	struct sk_buff *skb, *tmp;
> +	struct xdp_buff xdp;
> +	u32 act;
> +	int err;
> +
> +	if (!rcpu->prog)
> +		return;

Move this check one level out; the reason is explained below.


> [...]
>
>   static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>   				    void **frames, int n,
>   				    struct xdp_cpumap_stats *stats)
> @@ -179,8 +223,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>   	if (!rcpu->prog)
>   		return n;
>   
> -	rcu_read_lock_bh();
> -

Notice the return before doing rcu_read_lock_bh().

Here we try to avoid the extra call to do_softirq() when calling
rcu_read_unlock_bh().

When RX-napi and cpumap share/run on the same CPU, activating
do_softirq() twice in the cpumap kthread causes RX-napi to get more
CPU time to enqueue more packets into cpumap.  Thus, cpumap can more
easily get overloaded.


>   	xdp_set_return_frame_no_direct();
>   	xdp.rxq = &rxq;
>   
> @@ -227,17 +269,34 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>   		}
>   	}
>   
> +	xdp_clear_return_frame_no_direct();
> +
> +	return nframes;
> +}
> +
> +#define CPUMAP_BATCH 8
> +
> +static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
> +				int xdp_n, struct xdp_cpumap_stats *stats,
> +				struct list_head *list)
> +{
> +	int nframes;
> +
> +	rcu_read_lock_bh();
> +
> +	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats);
> +
>   	if (stats->redirect)
> -		xdp_do_flush_map();
> +		xdp_do_flush();
>   
> -	xdp_clear_return_frame_no_direct();
> +	if (unlikely(!list_empty(list)))
> +		cpu_map_bpf_prog_run_skb(rcpu, list, stats);
>   
> -	rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
> +	rcu_read_unlock_bh();

I would like to keep this comment, to help people
troubleshoot/understand why RX-napi gets more CPU time than the kthread.


> [...]


I like the rest :-)

Thanks for working on this!

--Jesper



* Re: [PATCH net-next v5 3/5] bpf: cpumap: implement generic cpumap
  2021-07-01  9:16   ` Jesper Dangaard Brouer
@ 2021-07-02 11:18     ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 8+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-07-02 11:18 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: netdev, Toke Høiland-Jørgensen, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, David S. Miller,
	Jakub Kicinski, John Fastabend, Martin KaFai Lau, bpf,
	Eric Leblond

On Thu, Jul 01, 2021 at 02:46:05PM IST, Jesper Dangaard Brouer wrote:
> (Cc. Eric Leblond as he needed this for Suricata.)
>
> [...]
>
> > +static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
> > +				     struct list_head *listp,
> > +				     struct xdp_cpumap_stats *stats)
> > +{
> > +	struct sk_buff *skb, *tmp;
> > +	struct xdp_buff xdp;
> > +	u32 act;
> > +	int err;
> > +
> > +	if (!rcpu->prog)
> > +		return;
>
> Move this check one level out; the reason is explained below.
>
>
> [...]
>
> >   static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
> >   				    void **frames, int n,
> >   				    struct xdp_cpumap_stats *stats)
> > @@ -179,8 +223,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
> >   	if (!rcpu->prog)
> >   		return n;
> > -	rcu_read_lock_bh();
> > -
>
> Notice the return before doing rcu_read_lock_bh().
>
> Here we try to avoid the extra call to do_softirq() when calling
> rcu_read_unlock_bh().
>
> When RX-napi and cpumap share/run on the same CPU, activating do_softirq()
> twice in the cpumap kthread causes RX-napi to get more CPU time to enqueue
> more packets into cpumap.  Thus, cpumap can more easily get overloaded.
>
>
> [...]
>
> > -	rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
> > +	rcu_read_unlock_bh();
>
> I would like to keep this comment, to help people troubleshoot/understand
> why RX-napi gets more CPU time than the kthread.
>
>
> [...]
>
>
> I like the rest :-)
>
> Thanks for working on this!
>
> --Jesper
>

Hopefully addressed both points in the respin, thanks!

--
Kartikeya


end of thread, other threads:[~2021-07-02 11:20 UTC | newest]

Thread overview: 8+ messages
2021-07-01  0:27 [PATCH net-next v5 0/5] Generic XDP improvements Kumar Kartikeya Dwivedi
2021-07-01  0:27 ` [PATCH net-next v5 1/5] net: core: split out code to run generic XDP prog Kumar Kartikeya Dwivedi
2021-07-01  0:27 ` [PATCH net-next v5 2/5] bitops: add non-atomic bitops for pointers Kumar Kartikeya Dwivedi
2021-07-01  0:27 ` [PATCH net-next v5 3/5] bpf: cpumap: implement generic cpumap Kumar Kartikeya Dwivedi
2021-07-01  9:16   ` Jesper Dangaard Brouer
2021-07-02 11:18     ` Kumar Kartikeya Dwivedi
2021-07-01  0:27 ` [PATCH net-next v5 4/5] bpf: devmap: implement devmap prog execution for generic XDP Kumar Kartikeya Dwivedi
2021-07-01  0:27 ` [PATCH net-next v5 5/5] bpf: tidy xdp attach selftests Kumar Kartikeya Dwivedi
