* [net-next v10 0/2] net: sched: allow user to select txqueue
@ 2022-03-14 14:15 xiangxia.m.yue
  2022-03-14 14:15 ` [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue xiangxia.m.yue
  2022-03-14 14:15 ` [net-next v10 2/2] net: sched: support hash selecting " xiangxia.m.yue
  0 siblings, 2 replies; 10+ messages in thread
From: xiangxia.m.yue @ 2022-03-14 14:15 UTC (permalink / raw)
  To: netdev
  Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
	David S. Miller, Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
	Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
	Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
	Antoine Tenart, Wei Wang, Arnd Bergmann

From: Tonghao Zhang <xiangxia.m.yue@gmail.com>

Patch 1 allows the user to select the tx queue from the clsact hook.
Patch 2 adds support for selecting the tx queue by skb hash.

Tonghao Zhang (2):
  net: sched: use queue_mapping to pick tx queue
  net: sched: support hash selecting tx queue

 include/linux/netdevice.h              |  3 ++
 include/linux/rtnetlink.h              |  1 +
 include/net/tc_act/tc_skbedit.h        |  1 +
 include/uapi/linux/tc_act/tc_skbedit.h |  2 +
 net/core/dev.c                         | 31 ++++++++++++++-
 net/sched/act_skbedit.c                | 55 ++++++++++++++++++++++++--
 6 files changed, 88 insertions(+), 5 deletions(-)

-- 
v10:
* 2/2 remove the cpuid/classid hash types.
v9:
* 2/2 expand the commit message.
v8:
* 1/2 remove the inline keyword.
v7:
* 1/2 fix a build warning; move tx queue picking under
  egress_needed_key to simplify the code.
v6:
* 1/2 use a static key; compile only when CONFIG_NET_EGRESS is
  configured.
v5:
* 1/2 merge netdev_xmit_reset_txqueue(void) and
  netdev_xmit_skip_txqueue(void) into netdev_xmit_skip_txqueue(bool skip).
v4:
* 1/2 introduce netdev_xmit_reset_txqueue() and invoke it in
  __dev_queue_xmit(), so xmit.skip_txqueue will not affect
  tx queue selection for the next netdev or the next packets.
  See the commit log for more details.
* 2/2 fix the coding style; rename:
  SKBEDIT_F_QUEUE_MAPPING_HASH -> SKBEDIT_F_TXQ_SKBHASH
  SKBEDIT_F_QUEUE_MAPPING_CLASSID -> SKBEDIT_F_TXQ_CLASSID
  SKBEDIT_F_QUEUE_MAPPING_CPUID -> SKBEDIT_F_TXQ_CPUID
* 2/2 refactor tcf_skbedit_hash: if no hash type is specified, use
  queue_mapping directly, since hash % mapping_mod == 0 in "case 0:"
* 2/2 merge the checks and add extack messages
v3:
* 2/2 fix a warning; add the cpuid hash type.
v2:
* 1/2 change skb->tc_skip_txqueue to a per-cpu variable; expand the
  commit message.
* 2/2 optimize the code.

Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Lobakin <alobakin@pm.me>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Talal Ahmad <talalahmad@google.com>
Cc: Kevin Hao <haokexin@gmail.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Antoine Tenart <atenart@kernel.org>
Cc: Wei Wang <weiwan@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
--
2.27.0



* [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue
  2022-03-14 14:15 [net-next v10 0/2] net: sched: allow user to select txqueue xiangxia.m.yue
@ 2022-03-14 14:15 ` xiangxia.m.yue
  2022-03-14 16:38   ` Jamal Hadi Salim
  2022-03-14 21:59   ` Daniel Borkmann
  2022-03-14 14:15 ` [net-next v10 2/2] net: sched: support hash selecting " xiangxia.m.yue
  1 sibling, 2 replies; 10+ messages in thread
From: xiangxia.m.yue @ 2022-03-14 14:15 UTC (permalink / raw)
  To: netdev
  Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
	David S. Miller, Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
	Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
	Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
	Antoine Tenart, Wei Wang, Arnd Bergmann

From: Tonghao Zhang <xiangxia.m.yue@gmail.com>

This patch fixes an issue:
* If we install tc filters with act_skbedit on the clsact hook,
  they don't work, because netdev_core_pick_tx() overwrites
  queue_mapping.

  $ tc filter ... action skbedit queue_mapping 1

And this patch is useful:
* We can use FQ + EDT to implement efficient policies. Tx queues
  are picked by xps, by the netdev driver's ndo_select_queue, or by
  the skb hash in netdev_core_pick_tx(). In fact, the netdev driver
  and the skb hash are _not_ under our control. xps uses the CPU map
  to select Tx queues, but in most cases we can't figure out which
  pod/container task_struct is running on a given cpu. We can
  instead use clsact filters to classify one pod/container's traffic
  to one Tx queue. Why?

  In a container networking environment, there are two kinds of pod/
  container/net-namespace. For one kind (e.g. P1, P2), high
  throughput is key for the applications, but to avoid exhausting
  network resources, the outbound traffic of these pods is limited:
  they use or share one dedicated Tx queue with an HTB/TBF/FQ Qdisc
  attached. For the other kind of pods (e.g. Pn), low latency of
  data access is key, and their traffic is not limited; these pods
  use or share other dedicated Tx queues with a FIFO Qdisc attached.
  This choice provides two benefits. First, contention on the HTB/FQ
  Qdisc lock is significantly reduced since fewer CPUs contend for
  the same queue. More importantly, Qdisc contention can be
  eliminated completely if each CPU has its own FIFO Qdisc for the
  second kind of pods.

  There must be a mechanism in place to support classifying traffic
  from pods/containers to different Tx queues. Note that clsact is
  outside of Qdisc, while a Qdisc can run a classifier to select a
  sub-queue under its lock.

  In general, recording the decision in the skb seems a little heavy
  handed, so this patch introduces a per-CPU variable instead, as
  suggested by Eric.

  The xmit.skip_txqueue flag is first cleared in __dev_queue_xmit():
  - A Tx Qdisc may have skbedit actions installed; in that case the
    xmit.skip_txqueue flag is set in qdisc->enqueue() even though
    the tx queue has already been selected by
    netdev_tx_queue_mapping() or netdev_core_pick_tx(). Clearing the
    flag first in __dev_queue_xmit() is useful because:
  - It avoids picking the Tx queue with netdev_tx_queue_mapping() in
    the next netdev of a chain such as eth0 macvlan - eth0.3 vlan -
    eth0 ixgbe-phy. For example, eth0 (a macvlan in a pod), whose
    root Qdisc installs skbedit queue_mapping, sends packets to
    eth0.3 (a vlan in the host). In __dev_queue_xmit() of eth0.3,
    the flag is cleared, so the tx queue is not selected from
    skb->queue_mapping, because there are no filters in the clsact
    or tx Qdisc of this netdev. The same happens on eth0 (the ixgbe
    device) in the host.
  - It avoids picking the Tx queue for the next packet. If we set
    xmit.skip_txqueue in a tx Qdisc (qdisc->enqueue()), the proper
    way to clear it is to clear it in __dev_queue_xmit() when
    processing the next packets.

  For performance reasons, a static key is used; if CONFIG_NET_EGRESS
  is not configured, this code is not compiled in.
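
  A rough sketch of the resulting fast path in __dev_queue_xmit(),
  assuming the existing egress_needed_key static key and omitting
  the nf_skip_egress() calls (illustrative only, not the exact diff):

      #ifdef CONFIG_NET_EGRESS
              if (static_branch_unlikely(&egress_needed_key)) {
                      netdev_xmit_skip_txqueue(false);

                      skb = sch_handle_egress(skb, &rc, dev);
                      if (!skb)
                              goto out;

                      /* honor the queue_mapping set by skbedit */
                      if (netdev_xmit_txqueue_skipped())
                              txq = netdev_tx_queue_mapping(dev, skb);
              }
      #endif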

  +----+      +----+      +----+
  | P1 |      | P2 |      | Pn |
  +----+      +----+      +----+
    |           |           |
    +-----------+-----------+
                |
                | clsact/skbedit
                |      MQ
                v
    +-----------+-----------+
    | q0        | q1        | qn
    v           v           v
  HTB/FQ      HTB/FQ  ...  FIFO
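
  For illustration, pinning one pod's egress traffic to a dedicated
  Tx queue could then look like this (interface name and address
  are hypothetical):

  $ tc qdisc add dev eth0 clsact
  $ tc filter add dev eth0 egress protocol ip prio 1 \
          flower src_ip 2.2.2.100 action skbedit queue_mapping 1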

Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Lobakin <alobakin@pm.me>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Talal Ahmad <talalahmad@google.com>
Cc: Kevin Hao <haokexin@gmail.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Antoine Tenart <atenart@kernel.org>
Cc: Wei Wang <weiwan@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
---
 include/linux/netdevice.h |  3 +++
 include/linux/rtnetlink.h |  1 +
 net/core/dev.c            | 31 +++++++++++++++++++++++++++++--
 net/sched/act_skbedit.c   |  6 +++++-
 4 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0d994710b335..f33fb2d6712a 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3065,6 +3065,9 @@ struct softnet_data {
 	struct {
 		u16 recursion;
 		u8  more;
+#ifdef CONFIG_NET_EGRESS
+		u8  skip_txqueue;
+#endif
 	} xmit;
 #ifdef CONFIG_RPS
 	/* input_queue_head should be written by cpu owning this struct,
diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index 7f970b16da3a..ae2c6a3cec5d 100644
--- a/include/linux/rtnetlink.h
+++ b/include/linux/rtnetlink.h
@@ -100,6 +100,7 @@ void net_dec_ingress_queue(void);
 #ifdef CONFIG_NET_EGRESS
 void net_inc_egress_queue(void);
 void net_dec_egress_queue(void);
+void netdev_xmit_skip_txqueue(bool skip);
 #endif
 
 void rtnetlink_init(void);
diff --git a/net/core/dev.c b/net/core/dev.c
index 75bab5b0dbae..8e83b7099977 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3908,6 +3908,25 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
 
 	return skb;
 }
+
+static struct netdev_queue *
+netdev_tx_queue_mapping(struct net_device *dev, struct sk_buff *skb)
+{
+	int qm = skb_get_queue_mapping(skb);
+
+	return netdev_get_tx_queue(dev, netdev_cap_txqueue(dev, qm));
+}
+
+static bool netdev_xmit_txqueue_skipped(void)
+{
+	return __this_cpu_read(softnet_data.xmit.skip_txqueue);
+}
+
+void netdev_xmit_skip_txqueue(bool skip)
+{
+	__this_cpu_write(softnet_data.xmit.skip_txqueue, skip);
+}
+EXPORT_SYMBOL_GPL(netdev_xmit_skip_txqueue);
 #endif /* CONFIG_NET_EGRESS */
 
 #ifdef CONFIG_XPS
@@ -4078,7 +4097,7 @@ struct netdev_queue *netdev_core_pick_tx(struct net_device *dev,
 static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 {
 	struct net_device *dev = skb->dev;
-	struct netdev_queue *txq;
+	struct netdev_queue *txq = NULL;
 	struct Qdisc *q;
 	int rc = -ENOMEM;
 	bool again = false;
@@ -4106,11 +4125,17 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 			if (!skb)
 				goto out;
 		}
+
+		netdev_xmit_skip_txqueue(false);
+
 		nf_skip_egress(skb, true);
 		skb = sch_handle_egress(skb, &rc, dev);
 		if (!skb)
 			goto out;
 		nf_skip_egress(skb, false);
+
+		if (netdev_xmit_txqueue_skipped())
+			txq = netdev_tx_queue_mapping(dev, skb);
 	}
 #endif
 	/* If device/qdisc don't need skb->dst, release it right now while
@@ -4121,7 +4146,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 	else
 		skb_dst_force(skb);
 
-	txq = netdev_core_pick_tx(dev, skb, sb_dev);
+	if (likely(!txq))
+		txq = netdev_core_pick_tx(dev, skb, sb_dev);
+
 	q = rcu_dereference_bh(txq->qdisc);
 
 	trace_net_dev_queue(skb);
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index ceba11b198bb..d5799b4fc499 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -58,8 +58,12 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
 		}
 	}
 	if (params->flags & SKBEDIT_F_QUEUE_MAPPING &&
-	    skb->dev->real_num_tx_queues > params->queue_mapping)
+	    skb->dev->real_num_tx_queues > params->queue_mapping) {
+#ifdef CONFIG_NET_EGRESS
+		netdev_xmit_skip_txqueue(true);
+#endif
 		skb_set_queue_mapping(skb, params->queue_mapping);
+	}
 	if (params->flags & SKBEDIT_F_MARK) {
 		skb->mark &= ~params->mask;
 		skb->mark |= params->mark & params->mask;
-- 
2.27.0



* [net-next v10 2/2] net: sched: support hash selecting tx queue
  2022-03-14 14:15 [net-next v10 0/2] net: sched: allow user to select txqueue xiangxia.m.yue
  2022-03-14 14:15 ` [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue xiangxia.m.yue
@ 2022-03-14 14:15 ` xiangxia.m.yue
  2022-03-14 16:38   ` Jamal Hadi Salim
  1 sibling, 1 reply; 10+ messages in thread
From: xiangxia.m.yue @ 2022-03-14 14:15 UTC (permalink / raw)
  To: netdev
  Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
	David S. Miller, Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
	Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
	Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
	Antoine Tenart, Wei Wang, Arnd Bergmann

From: Tonghao Zhang <xiangxia.m.yue@gmail.com>

This patch allows users to pick a queue_mapping range, from A to B,
so that packets can be load-balanced across tx queues A to B. The
range bounds are unsigned 16-bit values in decimal format.

$ tc filter ... action skbedit queue_mapping skbhash A B

"skbedit queue_mapping QUEUE_MAPPING" (from "man 8 tc-skbedit")
is extended with a new flag: SKBEDIT_F_TXQ_SKBHASH.

  +----+      +----+      +----+
  | P1 |      | P2 |      | Pn |
  +----+      +----+      +----+
    |           |           |
    +-----------+-----------+
                |
                | clsact/skbedit
                |      MQ
                v
    +-----------+-----------+
    | q0        | qn        | qm
    v           v           v
  HTB/FQ       FIFO   ...  FIFO

For example:
If P1 sends packets to different Pods on another host, and we want
to distribute its flows across queues qn - qm, we can use skb->hash
as the hash.
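
The queue is computed as in tcf_skbedit_hash() below; e.g. with the
2 - 6 range used in the setup, roughly:

  queue_mapping = 2 + skb_get_hash(skb) % (6 - 2 + 1); /* 2..6 */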

setup commands:
$ NETDEV=eth0
$ ip netns add n1
$ ip link add ipv1 link $NETDEV type ipvlan mode l2
$ ip link set ipv1 netns n1
$ ip netns exec n1 ifconfig ipv1 2.2.2.100/24 up

$ tc qdisc add dev $NETDEV clsact
$ tc filter add dev $NETDEV egress protocol ip prio 1 \
        flower skip_hw src_ip 2.2.2.100 action skbedit queue_mapping skbhash 2 6
$ tc qdisc add dev $NETDEV handle 1: root mq
$ tc qdisc add dev $NETDEV parent 1:1 handle 2: htb
$ tc class add dev $NETDEV parent 2: classid 2:1 htb rate 100kbit
$ tc class add dev $NETDEV parent 2: classid 2:2 htb rate 200kbit
$ tc qdisc add dev $NETDEV parent 1:2 tbf rate 100mbit burst 100mb latency 1
$ tc qdisc add dev $NETDEV parent 1:3 pfifo
$ tc qdisc add dev $NETDEV parent 1:4 pfifo
$ tc qdisc add dev $NETDEV parent 1:5 pfifo
$ tc qdisc add dev $NETDEV parent 1:6 pfifo
$ tc qdisc add dev $NETDEV parent 1:7 pfifo

$ ip netns exec n1 iperf3 -c 2.2.2.1 -i 1 -t 10 -P 10

Tx queues are picked from the 2 - 6 range:
$ ethtool -S $NETDEV | grep -i tx_queue_[0-9]_bytes
     tx_queue_0_bytes: 42
     tx_queue_1_bytes: 0
     tx_queue_2_bytes: 11442586444
     tx_queue_3_bytes: 7383615334
     tx_queue_4_bytes: 3981365579
     tx_queue_5_bytes: 3983235051
     tx_queue_6_bytes: 6706236461
     tx_queue_7_bytes: 42
     tx_queue_8_bytes: 0
     tx_queue_9_bytes: 0

Tx queues 2 - 6 are mapped to classids 1:3 - 1:7:
$ tc -s class show dev $NETDEV
...
class mq 1:3 root leaf 8002:
 Sent 11949133672 bytes 7929798 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
class mq 1:4 root leaf 8003:
 Sent 7710449050 bytes 5117279 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
class mq 1:5 root leaf 8004:
 Sent 4157648675 bytes 2758990 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
class mq 1:6 root leaf 8005:
 Sent 4159632195 bytes 2759990 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
class mq 1:7 root leaf 8006:
 Sent 7003169603 bytes 4646912 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
...

Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Lobakin <alobakin@pm.me>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Talal Ahmad <talalahmad@google.com>
Cc: Kevin Hao <haokexin@gmail.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Antoine Tenart <atenart@kernel.org>
Cc: Wei Wang <weiwan@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
---
 include/net/tc_act/tc_skbedit.h        |  1 +
 include/uapi/linux/tc_act/tc_skbedit.h |  2 ++
 net/sched/act_skbedit.c                | 49 ++++++++++++++++++++++++--
 3 files changed, 50 insertions(+), 2 deletions(-)

diff --git a/include/net/tc_act/tc_skbedit.h b/include/net/tc_act/tc_skbedit.h
index 00bfee70609e..ee96e0fa6566 100644
--- a/include/net/tc_act/tc_skbedit.h
+++ b/include/net/tc_act/tc_skbedit.h
@@ -17,6 +17,7 @@ struct tcf_skbedit_params {
 	u32 mark;
 	u32 mask;
 	u16 queue_mapping;
+	u16 mapping_mod;
 	u16 ptype;
 	struct rcu_head rcu;
 };
diff --git a/include/uapi/linux/tc_act/tc_skbedit.h b/include/uapi/linux/tc_act/tc_skbedit.h
index 800e93377218..6cb6101208d0 100644
--- a/include/uapi/linux/tc_act/tc_skbedit.h
+++ b/include/uapi/linux/tc_act/tc_skbedit.h
@@ -29,6 +29,7 @@
 #define SKBEDIT_F_PTYPE			0x8
 #define SKBEDIT_F_MASK			0x10
 #define SKBEDIT_F_INHERITDSFIELD	0x20
+#define SKBEDIT_F_TXQ_SKBHASH		0x40
 
 struct tc_skbedit {
 	tc_gen;
@@ -45,6 +46,7 @@ enum {
 	TCA_SKBEDIT_PTYPE,
 	TCA_SKBEDIT_MASK,
 	TCA_SKBEDIT_FLAGS,
+	TCA_SKBEDIT_QUEUE_MAPPING_MAX,
 	__TCA_SKBEDIT_MAX
 };
 #define TCA_SKBEDIT_MAX (__TCA_SKBEDIT_MAX - 1)
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index d5799b4fc499..2634c725bc75 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -23,6 +23,20 @@
 static unsigned int skbedit_net_id;
 static struct tc_action_ops act_skbedit_ops;
 
+static u16 tcf_skbedit_hash(struct tcf_skbedit_params *params,
+			    struct sk_buff *skb)
+{
+	u16 queue_mapping = params->queue_mapping;
+
+	if (params->flags & SKBEDIT_F_TXQ_SKBHASH) {
+		u32 hash = skb_get_hash(skb);
+
+		queue_mapping += hash % params->mapping_mod;
+	}
+
+	return netdev_cap_txqueue(skb->dev, queue_mapping);
+}
+
 static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
 			   struct tcf_result *res)
 {
@@ -62,7 +76,7 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
 #ifdef CONFIG_NET_EGRESS
 		netdev_xmit_skip_txqueue(true);
 #endif
-		skb_set_queue_mapping(skb, params->queue_mapping);
+		skb_set_queue_mapping(skb, tcf_skbedit_hash(params, skb));
 	}
 	if (params->flags & SKBEDIT_F_MARK) {
 		skb->mark &= ~params->mask;
@@ -96,6 +110,7 @@ static const struct nla_policy skbedit_policy[TCA_SKBEDIT_MAX + 1] = {
 	[TCA_SKBEDIT_PTYPE]		= { .len = sizeof(u16) },
 	[TCA_SKBEDIT_MASK]		= { .len = sizeof(u32) },
 	[TCA_SKBEDIT_FLAGS]		= { .len = sizeof(u64) },
+	[TCA_SKBEDIT_QUEUE_MAPPING_MAX]	= { .len = sizeof(u16) },
 };
 
 static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
@@ -112,6 +127,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	struct tcf_skbedit *d;
 	u32 flags = 0, *priority = NULL, *mark = NULL, *mask = NULL;
 	u16 *queue_mapping = NULL, *ptype = NULL;
+	u16 mapping_mod = 1;
 	bool exists = false;
 	int ret = 0, err;
 	u32 index;
@@ -157,6 +173,25 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	if (tb[TCA_SKBEDIT_FLAGS] != NULL) {
 		u64 *pure_flags = nla_data(tb[TCA_SKBEDIT_FLAGS]);
 
+		if (*pure_flags & SKBEDIT_F_TXQ_SKBHASH) {
+			u16 *queue_mapping_max;
+
+			if (!tb[TCA_SKBEDIT_QUEUE_MAPPING] ||
+			    !tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]) {
+				NL_SET_ERR_MSG_MOD(extack, "Missing required range of queue_mapping.");
+				return -EINVAL;
+			}
+
+			queue_mapping_max =
+				nla_data(tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]);
+			if (*queue_mapping_max < *queue_mapping) {
+				NL_SET_ERR_MSG_MOD(extack, "The range of queue_mapping is invalid, max < min.");
+				return -EINVAL;
+			}
+
+			mapping_mod = *queue_mapping_max - *queue_mapping + 1;
+			flags |= SKBEDIT_F_TXQ_SKBHASH;
+		}
 		if (*pure_flags & SKBEDIT_F_INHERITDSFIELD)
 			flags |= SKBEDIT_F_INHERITDSFIELD;
 	}
@@ -208,8 +243,10 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	params_new->flags = flags;
 	if (flags & SKBEDIT_F_PRIORITY)
 		params_new->priority = *priority;
-	if (flags & SKBEDIT_F_QUEUE_MAPPING)
+	if (flags & SKBEDIT_F_QUEUE_MAPPING) {
 		params_new->queue_mapping = *queue_mapping;
+		params_new->mapping_mod = mapping_mod;
+	}
 	if (flags & SKBEDIT_F_MARK)
 		params_new->mark = *mark;
 	if (flags & SKBEDIT_F_PTYPE)
@@ -276,6 +313,13 @@ static int tcf_skbedit_dump(struct sk_buff *skb, struct tc_action *a,
 		goto nla_put_failure;
 	if (params->flags & SKBEDIT_F_INHERITDSFIELD)
 		pure_flags |= SKBEDIT_F_INHERITDSFIELD;
+	if (params->flags & SKBEDIT_F_TXQ_SKBHASH) {
+		if (nla_put_u16(skb, TCA_SKBEDIT_QUEUE_MAPPING_MAX,
+				params->queue_mapping + params->mapping_mod - 1))
+			goto nla_put_failure;
+
+		pure_flags |= SKBEDIT_F_TXQ_SKBHASH;
+	}
 	if (pure_flags != 0 &&
 	    nla_put(skb, TCA_SKBEDIT_FLAGS, sizeof(pure_flags), &pure_flags))
 		goto nla_put_failure;
@@ -325,6 +369,7 @@ static size_t tcf_skbedit_get_fill_size(const struct tc_action *act)
 	return nla_total_size(sizeof(struct tc_skbedit))
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_PRIORITY */
 		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING */
+		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING_MAX */
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MARK */
 		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_PTYPE */
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MASK */
-- 
2.27.0



* Re: [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue
  2022-03-14 14:15 ` [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue xiangxia.m.yue
@ 2022-03-14 16:38   ` Jamal Hadi Salim
  2022-03-14 21:59   ` Daniel Borkmann
  1 sibling, 0 replies; 10+ messages in thread
From: Jamal Hadi Salim @ 2022-03-14 16:38 UTC (permalink / raw)
  To: xiangxia.m.yue, netdev
  Cc: Cong Wang, Jiri Pirko, David S. Miller, Jakub Kicinski,
	Jonathan Lemon, Eric Dumazet, Alexander Lobakin, Paolo Abeni,
	Talal Ahmad, Kevin Hao, Ilias Apalodimas, Kees Cook,
	Kumar Kartikeya Dwivedi, Antoine Tenart, Wei Wang, Arnd Bergmann

On 2022-03-14 10:15, xiangxia.m.yue@gmail.com wrote:
> From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> 
[...]

Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>

cheers,
jamal


* Re: [net-next v10 2/2] net: sched: support hash selecting tx queue
  2022-03-14 14:15 ` [net-next v10 2/2] net: sched: support hash selecting " xiangxia.m.yue
@ 2022-03-14 16:38   ` Jamal Hadi Salim
  0 siblings, 0 replies; 10+ messages in thread
From: Jamal Hadi Salim @ 2022-03-14 16:38 UTC (permalink / raw)
  To: xiangxia.m.yue, netdev
  Cc: Cong Wang, Jiri Pirko, David S. Miller, Jakub Kicinski,
	Jonathan Lemon, Eric Dumazet, Alexander Lobakin, Paolo Abeni,
	Talal Ahmad, Kevin Hao, Ilias Apalodimas, Kees Cook,
	Kumar Kartikeya Dwivedi, Antoine Tenart, Wei Wang, Arnd Bergmann

On 2022-03-14 10:15, xiangxia.m.yue@gmail.com wrote:
> From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> 

Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>

cheers,
jamal


* Re: [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue
  2022-03-14 14:15 ` [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue xiangxia.m.yue
  2022-03-14 16:38   ` Jamal Hadi Salim
@ 2022-03-14 21:59   ` Daniel Borkmann
  2022-03-15 12:48     ` Tonghao Zhang
  1 sibling, 1 reply; 10+ messages in thread
From: Daniel Borkmann @ 2022-03-14 21:59 UTC (permalink / raw)
  To: xiangxia.m.yue, netdev
  Cc: Jamal Hadi Salim, Cong Wang, Jiri Pirko, David S. Miller,
	Jakub Kicinski, Jonathan Lemon, Eric Dumazet, Alexander Lobakin,
	Paolo Abeni, Talal Ahmad, Kevin Hao, ast, bpf

On 3/14/22 3:15 PM, xiangxia.m.yue@gmail.com wrote:
[...]
>   include/linux/netdevice.h |  3 +++
>   include/linux/rtnetlink.h |  1 +
>   net/core/dev.c            | 31 +++++++++++++++++++++++++++++--
>   net/sched/act_skbedit.c   |  6 +++++-
>   4 files changed, 38 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 0d994710b335..f33fb2d6712a 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -3065,6 +3065,9 @@ struct softnet_data {
>   	struct {
>   		u16 recursion;
>   		u8  more;
> +#ifdef CONFIG_NET_EGRESS
> +		u8  skip_txqueue;
> +#endif
>   	} xmit;
>   #ifdef CONFIG_RPS
>   	/* input_queue_head should be written by cpu owning this struct,
> diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
> index 7f970b16da3a..ae2c6a3cec5d 100644
> --- a/include/linux/rtnetlink.h
> +++ b/include/linux/rtnetlink.h
> @@ -100,6 +100,7 @@ void net_dec_ingress_queue(void);
>   #ifdef CONFIG_NET_EGRESS
>   void net_inc_egress_queue(void);
>   void net_dec_egress_queue(void);
> +void netdev_xmit_skip_txqueue(bool skip);
>   #endif
>   
>   void rtnetlink_init(void);
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 75bab5b0dbae..8e83b7099977 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -3908,6 +3908,25 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
>   
>   	return skb;
>   }
> +
> +static struct netdev_queue *
> +netdev_tx_queue_mapping(struct net_device *dev, struct sk_buff *skb)
> +{
> +	int qm = skb_get_queue_mapping(skb);
> +
> +	return netdev_get_tx_queue(dev, netdev_cap_txqueue(dev, qm));
> +}
> +
> +static bool netdev_xmit_txqueue_skipped(void)
> +{
> +	return __this_cpu_read(softnet_data.xmit.skip_txqueue);
> +}
> +
> +void netdev_xmit_skip_txqueue(bool skip)
> +{
> +	__this_cpu_write(softnet_data.xmit.skip_txqueue, skip);
> +}
> +EXPORT_SYMBOL_GPL(netdev_xmit_skip_txqueue);
>   #endif /* CONFIG_NET_EGRESS */
>   
>   #ifdef CONFIG_XPS
> @@ -4078,7 +4097,7 @@ struct netdev_queue *netdev_core_pick_tx(struct net_device *dev,
>   static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
>   {
>   	struct net_device *dev = skb->dev;
> -	struct netdev_queue *txq;
> +	struct netdev_queue *txq = NULL;
>   	struct Qdisc *q;
>   	int rc = -ENOMEM;
>   	bool again = false;
> @@ -4106,11 +4125,17 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
>   			if (!skb)
>   				goto out;
>   		}
> +
> +		netdev_xmit_skip_txqueue(false);
> +
>   		nf_skip_egress(skb, true);
>   		skb = sch_handle_egress(skb, &rc, dev);
>   		if (!skb)
>   			goto out;
>   		nf_skip_egress(skb, false);
> +
> +		if (netdev_xmit_txqueue_skipped())
> +			txq = netdev_tx_queue_mapping(dev, skb);
>   	}
>   #endif
>   	/* If device/qdisc don't need skb->dst, release it right now while
> @@ -4121,7 +4146,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
>   	else
>   		skb_dst_force(skb);
>   
> -	txq = netdev_core_pick_tx(dev, skb, sb_dev);
> +	if (likely(!txq))

nit: Drop likely(). If the feature is used from sch_handle_egress(), then this would always be the case.

> +		txq = netdev_core_pick_tx(dev, skb, sb_dev);
> +
>   	q = rcu_dereference_bh(txq->qdisc);

How will the `netdev_xmit_skip_txqueue(true)` be usable from BPF side (see bpf_convert_ctx_access() ->
queue_mapping)?




* Re: [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue
  2022-03-14 21:59   ` Daniel Borkmann
@ 2022-03-15 12:48     ` Tonghao Zhang
  2022-03-17  8:20       ` Paolo Abeni
  0 siblings, 1 reply; 10+ messages in thread
From: Tonghao Zhang @ 2022-03-15 12:48 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Linux Kernel Network Developers, Jamal Hadi Salim, Cong Wang,
	Jiri Pirko, David S. Miller, Jakub Kicinski, Jonathan Lemon,
	Eric Dumazet, Alexander Lobakin, Paolo Abeni, Talal Ahmad,
	Kevin Hao, Alexei Starovoitov, bpf

On Tue, Mar 15, 2022 at 5:59 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 3/14/22 3:15 PM, xiangxia.m.yue@gmail.com wrote:
> [...]
> > [...]
> > @@ -4121,7 +4146,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
> >       else
> >               skb_dst_force(skb);
> >
> > -     txq = netdev_core_pick_tx(dev, skb, sb_dev);
> > +     if (likely(!txq))
>
> nit: Drop likely(). If the feature is used from sch_handle_egress(), then this would always be the case.
Hi Daniel
I think in most cases we don't use skbedit queue_mapping in
sch_handle_egress(), so I added likely() for the fast path.
> > +             txq = netdev_core_pick_tx(dev, skb, sb_dev);
> > +
> >       q = rcu_dereference_bh(txq->qdisc);
>
> How will the `netdev_xmit_skip_txqueue(true)` be usable from BPF side (see bpf_convert_ctx_access() ->
> queue_mapping)?
Good question. In another patch I introduce bpf_netdev_skip_txqueue(),
so we can use netdev_xmit_skip_txqueue() from the BPF side.

Not an official patch (I will post it once it is ready):
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 4eebea830613..ef147a1a2d62 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5117,6 +5117,21 @@ union bpf_attr {
  *		0 on success.
  *		**-EINVAL** for invalid input
  *		**-EOPNOTSUPP** for unsupported delivery_time_type and protocol
+ *
+ * void bpf_netdev_skip_txqueue(u32 skip)
+ *	Description
+ *		Set or clear the per-CPU *skip txqueue* flag. While the
+ *		flag is set, __dev_queue_xmit() uses the queue_mapping
+ *		already stored in the skb instead of calling
+ *		netdev_core_pick_tx(). The *skip* argument is treated
+ *		as a boolean.
+ *	Return
+ *		Nothing. Always succeeds.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5312,6 +5327,7 @@ union bpf_attr {
 	FN(xdp_store_bytes),		\
 	FN(copy_from_user_task),	\
 	FN(skb_set_delivery_time),	\
+	FN(netdev_skip_txqueue),	\
 	/* */

 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/net/core/filter.c b/net/core/filter.c
index 88767f7da150..5845b4632b6b 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2517,6 +2517,19 @@ static const struct bpf_func_proto bpf_redirect_peer_proto = {
 	.arg2_type      = ARG_ANYTHING,
 };

+BPF_CALL_1(bpf_netdev_skip_txqueue, u32, skip)
+{
+	netdev_xmit_skip_txqueue(!!skip);
+	return 0;
+};
+
+static const struct bpf_func_proto bpf_netdev_skip_txqueue_proto = {
+	.func           = bpf_netdev_skip_txqueue,
+	.gpl_only       = false,
+	.ret_type       = RET_VOID,
+	.arg1_type      = ARG_ANYTHING,
+};
+
 BPF_CALL_4(bpf_redirect_neigh, u32, ifindex, struct bpf_redir_neigh *, params,
 	   int, plen, u64, flags)
 {
@@ -7721,6 +7734,8 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_redirect_proto;
 	case BPF_FUNC_redirect_neigh:
 		return &bpf_redirect_neigh_proto;
+	case BPF_FUNC_netdev_skip_txqueue:
+		return &bpf_netdev_skip_txqueue_proto;
 	case BPF_FUNC_redirect_peer:
 		return &bpf_redirect_peer_proto;
 	case BPF_FUNC_get_route_realm:
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 4eebea830613..ef147a1a2d62 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
(same helper documentation and FN() hunks as in
include/uapi/linux/bpf.h above)



-- 
Best regards, Tonghao


* Re: [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue
  2022-03-15 12:48     ` Tonghao Zhang
@ 2022-03-17  8:20       ` Paolo Abeni
  2022-03-18 13:36         ` Daniel Borkmann
  0 siblings, 1 reply; 10+ messages in thread
From: Paolo Abeni @ 2022-03-17  8:20 UTC (permalink / raw)
  To: Tonghao Zhang, Daniel Borkmann
  Cc: Linux Kernel Network Developers, Jamal Hadi Salim, Cong Wang,
	Jiri Pirko, David S. Miller, Jakub Kicinski, Jonathan Lemon,
	Eric Dumazet, Alexander Lobakin, Talal Ahmad, Kevin Hao,
	Alexei Starovoitov, bpf

On Tue, 2022-03-15 at 20:48 +0800, Tonghao Zhang wrote:
> On Tue, Mar 15, 2022 at 5:59 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
> > 
> > On 3/14/22 3:15 PM, xiangxia.m.yue@gmail.com wrote:
> > [...]
> > > [...]
> > > @@ -4121,7 +4146,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
> > >       else
> > >               skb_dst_force(skb);
> > > 
> > > -     txq = netdev_core_pick_tx(dev, skb, sb_dev);
> > > +     if (likely(!txq))
> > 
> > nit: Drop likely(). If the feature is used from sch_handle_egress(), then this would always be the case.
> Hi Daniel
> I think in most cases we don't use skbedit queue_mapping in
> sch_handle_egress(), so I added likely() for the fast path.
> > > +             txq = netdev_core_pick_tx(dev, skb, sb_dev);
> > > +
> > >       q = rcu_dereference_bh(txq->qdisc);
> > 
> > How will the `netdev_xmit_skip_txqueue(true)` be usable from BPF side (see bpf_convert_ctx_access() ->
> > queue_mapping)?
> Good question. In another patch I introduce bpf_netdev_skip_txqueue(),
> so we can use netdev_xmit_skip_txqueue() from the BPF side.

@Daniel: are you ok with the above explanation?

Thanks!

Paolo



* Re: [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue
  2022-03-17  8:20       ` Paolo Abeni
@ 2022-03-18 13:36         ` Daniel Borkmann
  2022-03-19 13:40           ` Tonghao Zhang
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Borkmann @ 2022-03-18 13:36 UTC (permalink / raw)
  To: Paolo Abeni, Tonghao Zhang
  Cc: Linux Kernel Network Developers, Jamal Hadi Salim, Cong Wang,
	Jiri Pirko, David S. Miller, Jakub Kicinski, Jonathan Lemon,
	Eric Dumazet, Alexander Lobakin, Talal Ahmad, Kevin Hao,
	Alexei Starovoitov, bpf

On 3/17/22 9:20 AM, Paolo Abeni wrote:
> On Tue, 2022-03-15 at 20:48 +0800, Tonghao Zhang wrote:
>> On Tue, Mar 15, 2022 at 5:59 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>> On 3/14/22 3:15 PM, xiangxia.m.yue@gmail.com wrote:
>>> [...]
>>>> [...]
>>>> @@ -4121,7 +4146,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
>>>>        else
>>>>                skb_dst_force(skb);
>>>>
>>>> -     txq = netdev_core_pick_tx(dev, skb, sb_dev);
>>>> +     if (likely(!txq))
>>>
>>> nit: Drop likely(). If the feature is used from sch_handle_egress(), then this would always be the case.
>> Hi Daniel
>> I think in most cases we don't use skbedit queue_mapping in
>> sch_handle_egress(), so I added likely() for the fast path.

Yeah, but then let the branch predictor do its work? We can still change and drop the
likely() once we add support for BPF though.

>>>> +             txq = netdev_core_pick_tx(dev, skb, sb_dev);
>>>> +
>>>>        q = rcu_dereference_bh(txq->qdisc);
>>>
>>> How will the `netdev_xmit_skip_txqueue(true)` be usable from BPF side (see bpf_convert_ctx_access() ->
>>> queue_mapping)?
>> Good question. In another patch I introduce bpf_netdev_skip_txqueue(),
>> so we can use netdev_xmit_skip_txqueue() from the BPF side.

Yeah, that bpf_netdev_skip_txqueue() won't fly. It's basically a helper doing a quirk for
an implementation detail (aka calling netdev_xmit_skip_txqueue()). I was hoping you had
something better we could use along with the context rewrite of __sk_buff's queue_mapping,
but worst case we need to rework a bit for BPF. :/

Thanks,
Daniel


* Re: [net-next v10 1/2] net: sched: use queue_mapping to pick tx queue
  2022-03-18 13:36         ` Daniel Borkmann
@ 2022-03-19 13:40           ` Tonghao Zhang
  0 siblings, 0 replies; 10+ messages in thread
From: Tonghao Zhang @ 2022-03-19 13:40 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Paolo Abeni, Linux Kernel Network Developers, Jamal Hadi Salim,
	Cong Wang, Jiri Pirko, David S. Miller, Jakub Kicinski,
	Jonathan Lemon, Eric Dumazet, Alexander Lobakin, Talal Ahmad,
	Kevin Hao, Alexei Starovoitov, bpf

On Fri, Mar 18, 2022 at 9:36 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 3/17/22 9:20 AM, Paolo Abeni wrote:
> > On Tue, 2022-03-15 at 20:48 +0800, Tonghao Zhang wrote:
> >> On Tue, Mar 15, 2022 at 5:59 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>> On 3/14/22 3:15 PM, xiangxia.m.yue@gmail.com wrote:
> >>> [...]
> >>>> [...]
> >>>> @@ -4121,7 +4146,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
> >>>>        else
> >>>>                skb_dst_force(skb);
> >>>>
> >>>> -     txq = netdev_core_pick_tx(dev, skb, sb_dev);
> >>>> +     if (likely(!txq))
> >>>
> >>> nit: Drop likely(). If the feature is used from sch_handle_egress(), then this would always be the case.
> >> Hi Daniel
> >> I think in most cases we don't use skbedit queue_mapping in
> >> sch_handle_egress(), so I added likely() for the fast path.
>
> Yeah, but then let the branch predictor do its work? We can still change and drop the
> likely() once we add support for BPF though.
Hi,
if you are ok with introducing the bpf helper shown below, I will drop
the likely() in the next patch.
>
> >>>> +             txq = netdev_core_pick_tx(dev, skb, sb_dev);
> >>>> +
> >>>>        q = rcu_dereference_bh(txq->qdisc);
> >>>
> >>> How will the `netdev_xmit_skip_txqueue(true)` be usable from BPF side (see bpf_convert_ctx_access() ->
> >>> queue_mapping)?
> >> Good question. In another patch I introduce bpf_netdev_skip_txqueue(),
> >> so we can use netdev_xmit_skip_txqueue() from the BPF side.
>
> Yeah, that bpf_netdev_skip_txqueue() won't fly. It's basically a helper doing a quirk for
> an implementation detail (aka calling netdev_xmit_skip_txqueue()). I was hoping you had
> something better we could use along with the context rewrite of __sk_buff's queue_mapping,
Hi Daniel
I reviewed the bpf code; we already introduce a lot of helpers that
change skb fields:
skb_change_proto
skb_change_type
skb_change_tail
skb_pull_data
skb_change_head
skb_ecn_set_ce
skb_cgroup_classid
skb_vlan_push
skb_set_tunnel_key

Did you mean that introducing bpf_skb_set_queue_mapping is better
than bpf_netdev_skip_txqueue? For example:
BPF_CALL_2(bpf_skb_set_queue_mapping, struct sk_buff *, skb, u32, txq)
{
        skb->queue_mapping = txq;
        netdev_xmit_skip_txqueue(true);
        return 0;
};
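
e.g. a tc BPF program could then pin a flow like this (sketch only;
bpf_skb_set_queue_mapping() is the proposal above and does not exist
upstream):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int select_txq(struct __sk_buff *skb)
{
	/* pin this flow to tx queue 3; per the proposal, equivalent to
	 * writing skb->queue_mapping and setting the skip-txqueue flag
	 */
	bpf_skb_set_queue_mapping(skb, 3);
	return TC_ACT_OK;
}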

> but worst case we need to rework a bit for BPF. :/
> Thanks,
> Daniel



-- 
Best regards, Tonghao
