* [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct
@ 2023-01-27 18:38 Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 1/7] net: flow_offload: provision conntrack info in ct_metadata Vlad Buslov
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

Currently only bidirectional established connections can be offloaded
via act_ct. This approach hardcodes a lot of assumptions into the
act_ct, flow_table and flow_offload intermediate layer code. In order
to enable offloading of unidirectional UDP NEW connections, start by
incrementally changing the following assumptions:

- Drivers assume that only established connections are offloaded and
  don't support updating existing connections. Extract the ctinfo from
  the meta action cookie and refuse offloading of new connections in the
  drivers.

- Fix the flow_table offload fixup algorithm to calculate the flow
  timeout according to the current connection state instead of the
  hardcoded "established" value.

- Add a new flow_table flow flag that designates bidirectional
  connections instead of assuming it and hardcoding hardware offload of
  every flow in both directions.

- Add a new flow_table flow "ext_data" field and use it in act_ct to
  track the ctinfo of offloaded flows instead of assuming that it is
  always "established".

With all the necessary infrastructure in place, modify act_ct to offload
UDP NEW as a unidirectional connection. Pass reply direction traffic to
CT and promote the connection to bidirectional when the UDP connection
state changes to "assured". Rely on the refresh mechanism to propagate
the connection state change to supporting drivers.
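
For reference, the series relies on the existing ct_metadata cookie
layout to carry the connection state to the drivers: the ctinfo value is
packed into the low bits of the conntrack pointer, the same way the
kernel packs skb->_nfct. Below is a minimal userspace sketch of that
encoding; NFCT_INFOMASK and the ctinfo values are copied from the kernel
headers, everything else is local to the sketch.

#include <assert.h>
#include <stdio.h>

#define NFCT_INFOMASK	7UL

enum ip_conntrack_info {
	IP_CT_ESTABLISHED	= 0,
	IP_CT_RELATED		= 1,
	IP_CT_NEW		= 2,
	IP_CT_ESTABLISHED_REPLY	= 3,
};

int main(void)
{
	/* Stand-in for a struct nf_conn pointer; kernel conntrack objects
	 * are aligned well past the three bits covered by NFCT_INFOMASK.
	 */
	static long ct_object __attribute__((aligned(8)));
	unsigned long ct = (unsigned long)&ct_object;

	/* What act_ct stores in entry->ct_metadata.cookie ... */
	unsigned long cookie = ct | IP_CT_NEW;

	/* ... and what the driver callbacks extract from it. */
	enum ip_conntrack_info ctinfo = cookie & NFCT_INFOMASK;

	assert(ctinfo == IP_CT_NEW);
	assert((cookie & ~NFCT_INFOMASK) == ct);
	printf("ctinfo=%d -> %s\n", (int)ctinfo,
	       ctinfo == IP_CT_NEW ? "rejected by current drivers" :
				     "eligible for offload");
	return 0;
}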

Note that the early drop algorithm, which is designed to free up some
space in the connection tracking table when it becomes full (by randomly
deleting up to 5% of non-established connections), currently ignores
connections marked as "offloaded". With UDP NEW connections now becoming
"offloaded", a malicious user could perform a DoS attack by filling the
table with non-droppable UDP NEW connections, sending just one packet
per flow in a single direction. To prevent such a scenario, change the
early drop algorithm to also consider "offloaded" connections for
deletion.

Vlad Buslov (7):
  net: flow_offload: provision conntrack info in ct_metadata
  netfilter: flowtable: fixup UDP timeout depending on ct state
  netfilter: flowtable: allow unidirectional rules
  netfilter: flowtable: save ctinfo in flow_offload
  net/sched: act_ct: set ctinfo in meta action depending on ct state
  net/sched: act_ct: offload UDP NEW connections
  netfilter: nf_conntrack: allow early drop of offloaded UDP conns

 .../ethernet/mellanox/mlx5/core/en/tc_ct.c    |  4 +
 .../ethernet/netronome/nfp/flower/conntrack.c | 24 +++++
 include/net/netfilter/nf_flow_table.h         | 14 ++-
 net/netfilter/nf_conntrack_core.c             | 11 ++-
 net/netfilter/nf_flow_table_core.c            | 40 +++++---
 net/netfilter/nf_flow_table_inet.c            |  2 +-
 net/netfilter/nf_flow_table_ip.c              | 17 ++--
 net/netfilter/nf_flow_table_offload.c         | 18 ++--
 net/sched/act_ct.c                            | 99 +++++++++++++++----
 9 files changed, 174 insertions(+), 55 deletions(-)

-- 
2.38.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH net-next v5 1/7] net: flow_offload: provision conntrack info in ct_metadata
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
@ 2023-01-27 18:38 ` Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state Vlad Buslov
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

In order to offload connections in states other than "established", the
driver offload callbacks need access to the connection's conntrack info.
The flow offload intermediate representation data structure already
contains that data encoded in the 'cookie' field, so just reuse it in the
drivers.

Reject offloading of IP_CT_NEW connections for now by returning an error
in the relevant driver callbacks based on the value of ctinfo. Support
for offloading such connections will need to be added to the drivers
afterwards.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---

Notes:
    Changes V3 -> V4:
    
    - Only obtain ctinfo in mlx5 after checking the meta action pointer.
    
    Changes V2 -> V3:
    
    - Reuse existing meta action 'cookie' field to obtain ctinfo instead of
    introducing a new field as suggested by Marcelo.
    
    Changes V1 -> V2:
    
    - Add missing include that caused compilation errors on certain configs.
    
    - Change naming in nfp driver as suggested by Simon and Baowen.

 .../ethernet/mellanox/mlx5/core/en/tc_ct.c    |  4 ++++
 .../ethernet/netronome/nfp/flower/conntrack.c | 24 +++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index 313df8232db7..193562c14c44 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -1073,12 +1073,16 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
 	struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv;
 	struct flow_action_entry *meta_action;
 	unsigned long cookie = flow->cookie;
+	enum ip_conntrack_info ctinfo;
 	struct mlx5_ct_entry *entry;
 	int err;
 
 	meta_action = mlx5_tc_ct_get_ct_metadata_action(flow_rule);
 	if (!meta_action)
 		return -EOPNOTSUPP;
+	ctinfo = meta_action->ct_metadata.cookie & NFCT_INFOMASK;
+	if (ctinfo == IP_CT_NEW)
+		return -EOPNOTSUPP;
 
 	spin_lock_bh(&ct_priv->ht_lock);
 	entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params);
diff --git a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
index f693119541d5..d23830b5bcb8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
@@ -1964,6 +1964,27 @@ int nfp_fl_ct_stats(struct flow_cls_offload *flow,
 	return 0;
 }
 
+static bool
+nfp_fl_ct_offload_nft_supported(struct flow_cls_offload *flow)
+{
+	struct flow_rule *flow_rule = flow->rule;
+	struct flow_action *flow_action =
+		&flow_rule->action;
+	struct flow_action_entry *act;
+	int i;
+
+	flow_action_for_each(i, act, flow_action) {
+		if (act->id == FLOW_ACTION_CT_METADATA) {
+			enum ip_conntrack_info ctinfo =
+				act->ct_metadata.cookie & NFCT_INFOMASK;
+
+			return ctinfo != IP_CT_NEW;
+		}
+	}
+
+	return false;
+}
+
 static int
 nfp_fl_ct_offload_nft_flow(struct nfp_fl_ct_zone_entry *zt, struct flow_cls_offload *flow)
 {
@@ -1976,6 +1997,9 @@ nfp_fl_ct_offload_nft_flow(struct nfp_fl_ct_zone_entry *zt, struct flow_cls_offl
 	extack = flow->common.extack;
 	switch (flow->command) {
 	case FLOW_CLS_REPLACE:
+		if (!nfp_fl_ct_offload_nft_supported(flow))
+			return -EOPNOTSUPP;
+
 		/* Netfilter can request offload multiple times for the same
 		 * flow - protect against adding duplicates.
 		 */
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 1/7] net: flow_offload: provision conntrack info in ct_metadata Vlad Buslov
@ 2023-01-27 18:38 ` Vlad Buslov
  2023-01-28 15:27   ` Pablo Neira Ayuso
  2023-01-27 18:38 ` [PATCH net-next v5 3/7] netfilter: flowtable: allow unidirectional rules Vlad Buslov
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

Currently the flow_offload_fixup_ct() function assumes that only replied
UDP connections can be offloaded and hardcodes the UDP_CT_REPLIED timeout
value. Allow users to modify the timeout calculation by implementing the
new flowtable type callback 'timeout', and use the existing algorithm
otherwise.

To enable UDP NEW connection offload in the following patches, implement
the 'timeout' callback in the flowtable_ct type of act_ct, which extracts
the actual connection state from ct->status and sets the timeout
according to it.
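
To make the intended behaviour concrete, here is a small, self-contained
model of the UDP branch of that callback: the timeout bucket is chosen
from the connection state instead of always using UDP_CT_REPLIED, and the
offload grace period is subtracted as before. The numeric values are the
usual nf_conntrack/flowtable sysctl defaults and are an assumption of
this sketch, not something the patch changes.

#include <stdbool.h>
#include <stdio.h>

enum udp_conntrack { UDP_CT_UNREPLIED, UDP_CT_REPLIED };

static const int udp_timeouts[] = {
	[UDP_CT_UNREPLIED]	= 30,	/* nf_conntrack_udp_timeout default */
	[UDP_CT_REPLIED]	= 120,	/* nf_conntrack_udp_timeout_stream default */
};
static const int offload_timeout = 30;	/* nf_flowtable_udp_timeout default */

/* Models the UDP branch of tcf_ct_flow_table_get_timeout(): pick the
 * bucket from IPS_SEEN_REPLY instead of hardcoding UDP_CT_REPLIED.
 */
static int udp_fixup_timeout(bool seen_reply)
{
	enum udp_conntrack state = seen_reply ? UDP_CT_REPLIED : UDP_CT_UNREPLIED;
	int timeout = udp_timeouts[state] - offload_timeout;

	return timeout < 0 ? 0 : timeout;	/* clamped as in flow_offload_fixup_ct() */
}

int main(void)
{
	printf("unreplied (NEW) flow torn down: ct expires in %ds\n",
	       udp_fixup_timeout(false));
	printf("replied flow torn down:         ct expires in %ds\n",
	       udp_fixup_timeout(true));
	return 0;
}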

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---

Notes:
    Changes V3 -> V4:
    
    - Rework the patch to decouple netfilter and act_ct timeout fixup
    algorithms.

 include/net/netfilter/nf_flow_table.h |  6 +++-
 net/netfilter/nf_flow_table_core.c    | 40 +++++++++++++++++++--------
 net/netfilter/nf_flow_table_ip.c      | 17 ++++++------
 net/sched/act_ct.c                    | 35 ++++++++++++++++++++++-
 4 files changed, 76 insertions(+), 22 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index cd982f4a0f50..a3e4b5127ad0 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -61,6 +61,9 @@ struct nf_flowtable_type {
 						  enum flow_offload_tuple_dir dir,
 						  struct nf_flow_rule *flow_rule);
 	void				(*free)(struct nf_flowtable *ft);
+	bool				(*timeout)(struct nf_flowtable *ft,
+						   struct flow_offload *flow,
+						   s32 *val);
 	nf_hookfn			*hook;
 	struct module			*owner;
 };
@@ -278,7 +281,8 @@ void nf_flow_table_cleanup(struct net_device *dev);
 int nf_flow_table_init(struct nf_flowtable *flow_table);
 void nf_flow_table_free(struct nf_flowtable *flow_table);
 
-void flow_offload_teardown(struct flow_offload *flow);
+void flow_offload_teardown(struct nf_flowtable *flow_table,
+			   struct flow_offload *flow);
 
 void nf_flow_snat_port(const struct flow_offload *flow,
 		       struct sk_buff *skb, unsigned int thoff,
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index 81c26a96c30b..e3eeea349c8d 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -178,28 +178,43 @@ static void flow_offload_fixup_tcp(struct ip_ct_tcp *tcp)
 	tcp->seen[1].td_maxwin = 0;
 }
 
-static void flow_offload_fixup_ct(struct nf_conn *ct)
+static bool flow_offload_timeout_default(struct nf_conn *ct, s32 *timeout)
 {
 	struct net *net = nf_ct_net(ct);
 	int l4num = nf_ct_protonum(ct);
-	s32 timeout;
 
 	if (l4num == IPPROTO_TCP) {
 		struct nf_tcp_net *tn = nf_tcp_pernet(net);
 
 		flow_offload_fixup_tcp(&ct->proto.tcp);
 
-		timeout = tn->timeouts[ct->proto.tcp.state];
-		timeout -= tn->offload_timeout;
+		*timeout = tn->timeouts[ct->proto.tcp.state];
+		*timeout -= tn->offload_timeout;
 	} else if (l4num == IPPROTO_UDP) {
 		struct nf_udp_net *tn = nf_udp_pernet(net);
 
-		timeout = tn->timeouts[UDP_CT_REPLIED];
-		timeout -= tn->offload_timeout;
+		*timeout = tn->timeouts[UDP_CT_REPLIED];
+		*timeout -= tn->offload_timeout;
 	} else {
-		return;
+		return false;
 	}
 
+	return true;
+}
+
+static void flow_offload_fixup_ct(struct nf_flowtable *flow_table,
+				  struct flow_offload *flow)
+{
+	struct nf_conn *ct = flow->ct;
+	bool needs_fixup;
+	s32 timeout;
+
+	needs_fixup = flow_table->type->timeout ?
+		flow_table->type->timeout(flow_table, flow, &timeout) :
+		flow_offload_timeout_default(ct, &timeout);
+	if (!needs_fixup)
+		return;
+
 	if (timeout < 0)
 		timeout = 0;
 
@@ -348,11 +363,12 @@ static void flow_offload_del(struct nf_flowtable *flow_table,
 	flow_offload_free(flow);
 }
 
-void flow_offload_teardown(struct flow_offload *flow)
+void flow_offload_teardown(struct nf_flowtable *flow_table,
+			   struct flow_offload *flow)
 {
 	clear_bit(IPS_OFFLOAD_BIT, &flow->ct->status);
 	set_bit(NF_FLOW_TEARDOWN, &flow->flags);
-	flow_offload_fixup_ct(flow->ct);
+	flow_offload_fixup_ct(flow_table, flow);
 }
 EXPORT_SYMBOL_GPL(flow_offload_teardown);
 
@@ -421,7 +437,7 @@ static void nf_flow_offload_gc_step(struct nf_flowtable *flow_table,
 {
 	if (nf_flow_has_expired(flow) ||
 	    nf_ct_is_dying(flow->ct))
-		flow_offload_teardown(flow);
+		flow_offload_teardown(flow_table, flow);
 
 	if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {
 		if (test_bit(NF_FLOW_HW, &flow->flags)) {
@@ -569,14 +585,14 @@ static void nf_flow_table_do_cleanup(struct nf_flowtable *flow_table,
 	struct net_device *dev = data;
 
 	if (!dev) {
-		flow_offload_teardown(flow);
+		flow_offload_teardown(flow_table, flow);
 		return;
 	}
 
 	if (net_eq(nf_ct_net(flow->ct), dev_net(dev)) &&
 	    (flow->tuplehash[0].tuple.iifidx == dev->ifindex ||
 	     flow->tuplehash[1].tuple.iifidx == dev->ifindex))
-		flow_offload_teardown(flow);
+		flow_offload_teardown(flow_table, flow);
 }
 
 void nf_flow_table_gc_cleanup(struct nf_flowtable *flowtable,
diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index 19efba1e51ef..9c97b9994a96 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -18,7 +18,8 @@
 #include <linux/tcp.h>
 #include <linux/udp.h>
 
-static int nf_flow_state_check(struct flow_offload *flow, int proto,
+static int nf_flow_state_check(struct nf_flowtable *flow_table,
+			       struct flow_offload *flow, int proto,
 			       struct sk_buff *skb, unsigned int thoff)
 {
 	struct tcphdr *tcph;
@@ -28,7 +29,7 @@ static int nf_flow_state_check(struct flow_offload *flow, int proto,
 
 	tcph = (void *)(skb_network_header(skb) + thoff);
 	if (unlikely(tcph->fin || tcph->rst)) {
-		flow_offload_teardown(flow);
+		flow_offload_teardown(flow_table, flow);
 		return -1;
 	}
 
@@ -373,11 +374,11 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 
 	iph = (struct iphdr *)(skb_network_header(skb) + offset);
 	thoff = (iph->ihl * 4) + offset;
-	if (nf_flow_state_check(flow, iph->protocol, skb, thoff))
+	if (nf_flow_state_check(flow_table, flow, iph->protocol, skb, thoff))
 		return NF_ACCEPT;
 
 	if (!nf_flow_dst_check(&tuplehash->tuple)) {
-		flow_offload_teardown(flow);
+		flow_offload_teardown(flow_table, flow);
 		return NF_ACCEPT;
 	}
 
@@ -419,7 +420,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 	case FLOW_OFFLOAD_XMIT_DIRECT:
 		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IP);
 		if (ret == NF_DROP)
-			flow_offload_teardown(flow);
+			flow_offload_teardown(flow_table, flow);
 		break;
 	default:
 		WARN_ON_ONCE(1);
@@ -639,11 +640,11 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 
 	ip6h = (struct ipv6hdr *)(skb_network_header(skb) + offset);
 	thoff = sizeof(*ip6h) + offset;
-	if (nf_flow_state_check(flow, ip6h->nexthdr, skb, thoff))
+	if (nf_flow_state_check(flow_table, flow, ip6h->nexthdr, skb, thoff))
 		return NF_ACCEPT;
 
 	if (!nf_flow_dst_check(&tuplehash->tuple)) {
-		flow_offload_teardown(flow);
+		flow_offload_teardown(flow_table, flow);
 		return NF_ACCEPT;
 	}
 
@@ -684,7 +685,7 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 	case FLOW_OFFLOAD_XMIT_DIRECT:
 		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IPV6);
 		if (ret == NF_DROP)
-			flow_offload_teardown(flow);
+			flow_offload_teardown(flow_table, flow);
 		break;
 	default:
 		WARN_ON_ONCE(1);
diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index 0ca2bb8ed026..861305c9c079 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -274,8 +274,41 @@ static int tcf_ct_flow_table_fill_actions(struct net *net,
 	return err;
 }
 
+static bool tcf_ct_flow_table_get_timeout(struct nf_flowtable *ft,
+					  struct flow_offload *flow,
+					  s32 *val)
+{
+	struct nf_conn *ct = flow->ct;
+	int l4num =
+		nf_ct_protonum(ct);
+	struct net *net =
+		nf_ct_net(ct);
+
+	if (l4num == IPPROTO_TCP) {
+		struct nf_tcp_net *tn = nf_tcp_pernet(net);
+
+		ct->proto.tcp.seen[0].td_maxwin = 0;
+		ct->proto.tcp.seen[1].td_maxwin = 0;
+		*val = tn->timeouts[ct->proto.tcp.state];
+		*val -= tn->offload_timeout;
+	} else if (l4num == IPPROTO_UDP) {
+		struct nf_udp_net *tn = nf_udp_pernet(net);
+		enum udp_conntrack state =
+			test_bit(IPS_SEEN_REPLY_BIT, &ct->status) ?
+			UDP_CT_REPLIED : UDP_CT_UNREPLIED;
+
+		*val = tn->timeouts[state];
+		*val -= tn->offload_timeout;
+	} else {
+		return false;
+	}
+
+	return true;
+}
+
 static struct nf_flowtable_type flowtable_ct = {
 	.action		= tcf_ct_flow_table_fill_actions,
+	.timeout	= tcf_ct_flow_table_get_timeout,
 	.owner		= THIS_MODULE,
 };
 
@@ -622,7 +655,7 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
 	ct = flow->ct;
 
 	if (tcph && (unlikely(tcph->fin || tcph->rst))) {
-		flow_offload_teardown(flow);
+		flow_offload_teardown(nf_ft, flow);
 		return false;
 	}
 
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v5 3/7] netfilter: flowtable: allow unidirectional rules
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 1/7] net: flow_offload: provision conntrack info in ct_metadata Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state Vlad Buslov
@ 2023-01-27 18:38 ` Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 4/7] netfilter: flowtable: save ctinfo in flow_offload Vlad Buslov
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

Modify flow table offload to support unidirectional connections by
extending enum nf_flow_flags with a new "NF_FLOW_HW_BIDIRECTIONAL" flag.
Only offload the reply direction when the flag is set. This
infrastructure change is necessary to support offloading UDP NEW
connections in the original direction in the following patches in the
series.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---

Notes:
    Changes V2 -> V3:
    
    - Fix error in commit message (spotted by Marcelo).

 include/net/netfilter/nf_flow_table.h |  1 +
 net/netfilter/nf_flow_table_offload.c | 12 ++++++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index a3e4b5127ad0..103798ae10fc 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -167,6 +167,7 @@ enum nf_flow_flags {
 	NF_FLOW_HW_DYING,
 	NF_FLOW_HW_DEAD,
 	NF_FLOW_HW_PENDING,
+	NF_FLOW_HW_BIDIRECTIONAL,
 };
 
 enum flow_offload_type {
diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
index 4d9b99abe37d..8b852f10fab4 100644
--- a/net/netfilter/nf_flow_table_offload.c
+++ b/net/netfilter/nf_flow_table_offload.c
@@ -895,8 +895,9 @@ static int flow_offload_rule_add(struct flow_offload_work *offload,
 
 	ok_count += flow_offload_tuple_add(offload, flow_rule[0],
 					   FLOW_OFFLOAD_DIR_ORIGINAL);
-	ok_count += flow_offload_tuple_add(offload, flow_rule[1],
-					   FLOW_OFFLOAD_DIR_REPLY);
+	if (test_bit(NF_FLOW_HW_BIDIRECTIONAL, &offload->flow->flags))
+		ok_count += flow_offload_tuple_add(offload, flow_rule[1],
+						   FLOW_OFFLOAD_DIR_REPLY);
 	if (ok_count == 0)
 		return -ENOENT;
 
@@ -926,7 +927,8 @@ static void flow_offload_work_del(struct flow_offload_work *offload)
 {
 	clear_bit(IPS_HW_OFFLOAD_BIT, &offload->flow->ct->status);
 	flow_offload_tuple_del(offload, FLOW_OFFLOAD_DIR_ORIGINAL);
-	flow_offload_tuple_del(offload, FLOW_OFFLOAD_DIR_REPLY);
+	if (test_bit(NF_FLOW_HW_BIDIRECTIONAL, &offload->flow->flags))
+		flow_offload_tuple_del(offload, FLOW_OFFLOAD_DIR_REPLY);
 	set_bit(NF_FLOW_HW_DEAD, &offload->flow->flags);
 }
 
@@ -946,7 +948,9 @@ static void flow_offload_work_stats(struct flow_offload_work *offload)
 	u64 lastused;
 
 	flow_offload_tuple_stats(offload, FLOW_OFFLOAD_DIR_ORIGINAL, &stats[0]);
-	flow_offload_tuple_stats(offload, FLOW_OFFLOAD_DIR_REPLY, &stats[1]);
+	if (test_bit(NF_FLOW_HW_BIDIRECTIONAL, &offload->flow->flags))
+		flow_offload_tuple_stats(offload, FLOW_OFFLOAD_DIR_REPLY,
+					 &stats[1]);
 
 	lastused = max_t(u64, stats[0].lastused, stats[1].lastused);
 	offload->flow->timeout = max_t(u64, offload->flow->timeout,
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v5 4/7] netfilter: flowtable: save ctinfo in flow_offload
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
                   ` (2 preceding siblings ...)
  2023-01-27 18:38 ` [PATCH net-next v5 3/7] netfilter: flowtable: allow unidirectional rules Vlad Buslov
@ 2023-01-27 18:38 ` Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 5/7] net/sched: act_ct: set ctinfo in meta action depending on ct state Vlad Buslov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

Extend struct flow_offload with a generic 'ext_data' field. Use the field
in act_ct to cache the last ctinfo value that was used to update the
hardware offload when generating the actions. This is used to optimize
the flow refresh algorithm in the following patches.
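
As an illustration of how a ctinfo value fits into a pointer-sized field,
a small userspace model follows. The struct and names here are local to
the sketch; the kernel code uses WRITE_ONCE()/READ_ONCE() on the real
flow->ext_data field.

#include <assert.h>

enum ip_conntrack_info { IP_CT_ESTABLISHED = 0, IP_CT_NEW = 2 };

struct flow_model {
	void *ext_data;	/* last ctinfo the hardware offload was generated with */
};

int main(void)
{
	struct flow_model flow = { .ext_data = 0 };

	/* fill_actions() time: remember which state the offload encodes. */
	flow.ext_data = (void *)(unsigned long)IP_CT_NEW;

	/* datapath lookup time: only schedule a refresh if the offload has
	 * not been updated to ESTABLISHED yet.
	 */
	assert((unsigned long)flow.ext_data != IP_CT_ESTABLISHED);
	return 0;
}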

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---

Notes:
    Changes V3 -> V4:
    
    - New patch replaces gc async update that is no longer needed after
    refactoring of following act_ct patches.

 include/net/netfilter/nf_flow_table.h |  7 ++++---
 net/netfilter/nf_flow_table_inet.c    |  2 +-
 net/netfilter/nf_flow_table_offload.c |  6 +++---
 net/sched/act_ct.c                    | 12 +++++++-----
 4 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index 103798ae10fc..6f3250624d49 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -57,7 +57,7 @@ struct nf_flowtable_type {
 						 struct net_device *dev,
 						 enum flow_block_command cmd);
 	int				(*action)(struct net *net,
-						  const struct flow_offload *flow,
+						  struct flow_offload *flow,
 						  enum flow_offload_tuple_dir dir,
 						  struct nf_flow_rule *flow_rule);
 	void				(*free)(struct nf_flowtable *ft);
@@ -178,6 +178,7 @@ enum flow_offload_type {
 struct flow_offload {
 	struct flow_offload_tuple_rhash		tuplehash[FLOW_OFFLOAD_DIR_MAX];
 	struct nf_conn				*ct;
+	void					*ext_data;
 	unsigned long				flags;
 	u16					type;
 	u32					timeout;
@@ -317,10 +318,10 @@ void nf_flow_table_offload_flush_cleanup(struct nf_flowtable *flowtable);
 int nf_flow_table_offload_setup(struct nf_flowtable *flowtable,
 				struct net_device *dev,
 				enum flow_block_command cmd);
-int nf_flow_rule_route_ipv4(struct net *net, const struct flow_offload *flow,
+int nf_flow_rule_route_ipv4(struct net *net, struct flow_offload *flow,
 			    enum flow_offload_tuple_dir dir,
 			    struct nf_flow_rule *flow_rule);
-int nf_flow_rule_route_ipv6(struct net *net, const struct flow_offload *flow,
+int nf_flow_rule_route_ipv6(struct net *net, struct flow_offload *flow,
 			    enum flow_offload_tuple_dir dir,
 			    struct nf_flow_rule *flow_rule);
 
diff --git a/net/netfilter/nf_flow_table_inet.c b/net/netfilter/nf_flow_table_inet.c
index 0ccabf3fa6aa..9505f9d188ff 100644
--- a/net/netfilter/nf_flow_table_inet.c
+++ b/net/netfilter/nf_flow_table_inet.c
@@ -39,7 +39,7 @@ nf_flow_offload_inet_hook(void *priv, struct sk_buff *skb,
 }
 
 static int nf_flow_rule_route_inet(struct net *net,
-				   const struct flow_offload *flow,
+				   struct flow_offload *flow,
 				   enum flow_offload_tuple_dir dir,
 				   struct nf_flow_rule *flow_rule)
 {
diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
index 8b852f10fab4..1c26f03fc661 100644
--- a/net/netfilter/nf_flow_table_offload.c
+++ b/net/netfilter/nf_flow_table_offload.c
@@ -679,7 +679,7 @@ nf_flow_rule_route_common(struct net *net, const struct flow_offload *flow,
 	return 0;
 }
 
-int nf_flow_rule_route_ipv4(struct net *net, const struct flow_offload *flow,
+int nf_flow_rule_route_ipv4(struct net *net, struct flow_offload *flow,
 			    enum flow_offload_tuple_dir dir,
 			    struct nf_flow_rule *flow_rule)
 {
@@ -704,7 +704,7 @@ int nf_flow_rule_route_ipv4(struct net *net, const struct flow_offload *flow,
 }
 EXPORT_SYMBOL_GPL(nf_flow_rule_route_ipv4);
 
-int nf_flow_rule_route_ipv6(struct net *net, const struct flow_offload *flow,
+int nf_flow_rule_route_ipv6(struct net *net, struct flow_offload *flow,
 			    enum flow_offload_tuple_dir dir,
 			    struct nf_flow_rule *flow_rule)
 {
@@ -735,7 +735,7 @@ nf_flow_offload_rule_alloc(struct net *net,
 {
 	const struct nf_flowtable *flowtable = offload->flowtable;
 	const struct flow_offload_tuple *tuple, *other_tuple;
-	const struct flow_offload *flow = offload->flow;
+	struct flow_offload *flow = offload->flow;
 	struct dst_entry *other_dst = NULL;
 	struct nf_flow_rule *flow_rule;
 	int err = -ENOMEM;
diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index 861305c9c079..48b88c96de86 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -170,11 +170,11 @@ tcf_ct_flow_table_add_action_nat_udp(const struct nf_conntrack_tuple *tuple,
 
 static void tcf_ct_flow_table_add_action_meta(struct nf_conn *ct,
 					      enum ip_conntrack_dir dir,
+					      enum ip_conntrack_info ctinfo,
 					      struct flow_action *action)
 {
 	struct nf_conn_labels *ct_labels;
 	struct flow_action_entry *entry;
-	enum ip_conntrack_info ctinfo;
 	u32 *act_ct_labels;
 
 	entry = tcf_ct_flow_table_flow_action_get_next(action);
@@ -182,8 +182,6 @@ static void tcf_ct_flow_table_add_action_meta(struct nf_conn *ct,
 #if IS_ENABLED(CONFIG_NF_CONNTRACK_MARK)
 	entry->ct_metadata.mark = READ_ONCE(ct->mark);
 #endif
-	ctinfo = dir == IP_CT_DIR_ORIGINAL ? IP_CT_ESTABLISHED :
-					     IP_CT_ESTABLISHED_REPLY;
 	/* aligns with the CT reference on the SKB nf_ct_set */
 	entry->ct_metadata.cookie = (unsigned long)ct | ctinfo;
 	entry->ct_metadata.orig_dir = dir == IP_CT_DIR_ORIGINAL;
@@ -237,22 +235,26 @@ static int tcf_ct_flow_table_add_action_nat(struct net *net,
 }
 
 static int tcf_ct_flow_table_fill_actions(struct net *net,
-					  const struct flow_offload *flow,
+					  struct flow_offload *flow,
 					  enum flow_offload_tuple_dir tdir,
 					  struct nf_flow_rule *flow_rule)
 {
 	struct flow_action *action = &flow_rule->rule->action;
 	int num_entries = action->num_entries;
 	struct nf_conn *ct = flow->ct;
+	enum ip_conntrack_info ctinfo;
 	enum ip_conntrack_dir dir;
 	int i, err;
 
 	switch (tdir) {
 	case FLOW_OFFLOAD_DIR_ORIGINAL:
 		dir = IP_CT_DIR_ORIGINAL;
+		ctinfo = IP_CT_ESTABLISHED;
+		WRITE_ONCE(flow->ext_data, (void *)ctinfo);
 		break;
 	case FLOW_OFFLOAD_DIR_REPLY:
 		dir = IP_CT_DIR_REPLY;
+		ctinfo = IP_CT_ESTABLISHED_REPLY;
 		break;
 	default:
 		return -EOPNOTSUPP;
@@ -262,7 +264,7 @@ static int tcf_ct_flow_table_fill_actions(struct net *net,
 	if (err)
 		goto err_nat;
 
-	tcf_ct_flow_table_add_action_meta(ct, dir, action);
+	tcf_ct_flow_table_add_action_meta(ct, dir, ctinfo, action);
 	return 0;
 
 err_nat:
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v5 5/7] net/sched: act_ct: set ctinfo in meta action depending on ct state
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
                   ` (3 preceding siblings ...)
  2023-01-27 18:38 ` [PATCH net-next v5 4/7] netfilter: flowtable: save ctinfo in flow_offload Vlad Buslov
@ 2023-01-27 18:38 ` Vlad Buslov
  2023-01-27 18:38 ` [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections Vlad Buslov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

Currently the tcf_ct_flow_table_add_action_meta() function assumes that
only established connections can be offloaded and always sets ctinfo to
either IP_CT_ESTABLISHED or IP_CT_ESTABLISHED_REPLY strictly based on the
direction, without checking the actual connection state. To enable UDP
NEW connection offload, set the ctinfo and metadata cookie based on the
ct->status value.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---
 net/sched/act_ct.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index 48b88c96de86..2b81a7898662 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -249,7 +249,8 @@ static int tcf_ct_flow_table_fill_actions(struct net *net,
 	switch (tdir) {
 	case FLOW_OFFLOAD_DIR_ORIGINAL:
 		dir = IP_CT_DIR_ORIGINAL;
-		ctinfo = IP_CT_ESTABLISHED;
+		ctinfo = test_bit(IPS_SEEN_REPLY_BIT, &ct->status) ?
+			IP_CT_ESTABLISHED : IP_CT_NEW;
 		WRITE_ONCE(flow->ext_data, (void *)ctinfo);
 		break;
 	case FLOW_OFFLOAD_DIR_REPLY:
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
                   ` (4 preceding siblings ...)
  2023-01-27 18:38 ` [PATCH net-next v5 5/7] net/sched: act_ct: set ctinfo in meta action depending on ct state Vlad Buslov
@ 2023-01-27 18:38 ` Vlad Buslov
  2023-01-28 15:26   ` Pablo Neira Ayuso
  2023-01-27 18:38 ` [PATCH net-next v5 7/7] netfilter: nf_conntrack: allow early drop of offloaded UDP conns Vlad Buslov
  2023-01-28 15:51 ` [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Pablo Neira Ayuso
  7 siblings, 1 reply; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

Modify the offload algorithm of UDP connections to the following:

- Offload a NEW connection as unidirectional.

- When the connection state changes to ESTABLISHED, also update the
hardware flow. However, in order to prevent act_ct from spamming the
offload add wq for every packet coming in the reply direction in this
state, verify whether the connection has already been updated to
ESTABLISHED in the drivers. If that is the case, then skip the flow_table
and let conntrack handle such packets, which will also allow conntrack to
potentially promote the connection to ASSURED.

- When the connection state changes to ASSURED, set the flow_table flow
NF_FLOW_HW_BIDIRECTIONAL flag, which will cause the refresh mechanism to
offload the reply direction.

All other protocols have their offload algorithm preserved and are always
offloaded as bidirectional.

Note that this change tries to minimize the load on the flow_table add
workqueue. First, it tracks the last ctinfo that was offloaded by using
the new flow 'ext_data' field and doesn't schedule the refresh for reply
direction packets when the offload has already been updated with the
current ctinfo. Second, when the 'add' task executes on the workqueue it
always updates the offload with the current flow state (by checking the
'bidirectional' flow flag and obtaining the actual ctinfo/cookie through
the meta action instead of caching any of these from the moment of
scheduling the 'add' work), preventing the need to schedule more updates
if the state changed concurrently while the 'add' work was pending on the
workqueue.
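
The reply-direction handling described above can be summarized by the
following decision model. It is only a sketch: the names are local to it
and the real logic lives in tcf_ct_flow_table_lookup().

#include <stdbool.h>
#include <stdio.h>

enum reply_verdict {
	REPLY_REFRESH,		/* keep using/refreshing the offloaded flow */
	REPLY_PROMOTE,		/* set NF_FLOW_HW_BIDIRECTIONAL, then refresh */
	REPLY_PASS_TO_CT,	/* skip flow_table, let conntrack see the packet */
};

static enum reply_verdict handle_udp_reply(bool flow_bidirectional,
					   bool ct_assured,
					   bool offload_already_established)
{
	if (flow_bidirectional)
		return REPLY_REFRESH;		/* both directions already offloaded */
	if (ct_assured)
		return REPLY_PROMOTE;		/* assured: offload reply direction too */
	if (offload_already_established)
		return REPLY_PASS_TO_CT;	/* avoid spamming the add workqueue */
	return REPLY_REFRESH;			/* first reply: update hw to ESTABLISHED */
}

int main(void)
{
	printf("first reply packet: %d\n", handle_udp_reply(false, false, false));
	printf("subsequent replies: %d\n", handle_udp_reply(false, false, true));
	printf("connection assured: %d\n", handle_udp_reply(false, true, false));
	return 0;
}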

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---

Notes:
    Changes V4 -> V5:
    
    - Make clang happy.
    
    Changes V3 -> V4:
    
    - Refactor the patch to leverage the refresh code and new flow 'ext_data'
    field in order to change the offload state instead of relying on async gc
    update.

 net/sched/act_ct.c | 51 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 39 insertions(+), 12 deletions(-)

diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index 2b81a7898662..5107f4149474 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -401,7 +401,7 @@ static void tcf_ct_flow_tc_ifidx(struct flow_offload *entry,
 
 static void tcf_ct_flow_table_add(struct tcf_ct_flow_table *ct_ft,
 				  struct nf_conn *ct,
-				  bool tcp)
+				  bool tcp, bool bidirectional)
 {
 	struct nf_conn_act_ct_ext *act_ct_ext;
 	struct flow_offload *entry;
@@ -420,6 +420,8 @@ static void tcf_ct_flow_table_add(struct tcf_ct_flow_table *ct_ft,
 		ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
 		ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
 	}
+	if (bidirectional)
+		__set_bit(NF_FLOW_HW_BIDIRECTIONAL, &entry->flags);
 
 	act_ct_ext = nf_conn_act_ct_ext_find(ct);
 	if (act_ct_ext) {
@@ -443,26 +445,34 @@ static void tcf_ct_flow_table_process_conn(struct tcf_ct_flow_table *ct_ft,
 					   struct nf_conn *ct,
 					   enum ip_conntrack_info ctinfo)
 {
-	bool tcp = false;
-
-	if ((ctinfo != IP_CT_ESTABLISHED && ctinfo != IP_CT_ESTABLISHED_REPLY) ||
-	    !test_bit(IPS_ASSURED_BIT, &ct->status))
-		return;
+	bool tcp = false, bidirectional = true;
 
 	switch (nf_ct_protonum(ct)) {
 	case IPPROTO_TCP:
-		tcp = true;
-		if (ct->proto.tcp.state != TCP_CONNTRACK_ESTABLISHED)
+		if ((ctinfo != IP_CT_ESTABLISHED &&
+		     ctinfo != IP_CT_ESTABLISHED_REPLY) ||
+		    !test_bit(IPS_ASSURED_BIT, &ct->status) ||
+		    ct->proto.tcp.state != TCP_CONNTRACK_ESTABLISHED)
 			return;
+
+		tcp = true;
 		break;
 	case IPPROTO_UDP:
+		if (!nf_ct_is_confirmed(ct))
+			return;
+		if (!test_bit(IPS_ASSURED_BIT, &ct->status))
+			bidirectional = false;
 		break;
 #ifdef CONFIG_NF_CT_PROTO_GRE
 	case IPPROTO_GRE: {
 		struct nf_conntrack_tuple *tuple;
 
-		if (ct->status & IPS_NAT_MASK)
+		if ((ctinfo != IP_CT_ESTABLISHED &&
+		     ctinfo != IP_CT_ESTABLISHED_REPLY) ||
+		    !test_bit(IPS_ASSURED_BIT, &ct->status) ||
+		    ct->status & IPS_NAT_MASK)
 			return;
+
 		tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
 		/* No support for GRE v1 */
 		if (tuple->src.u.gre.key || tuple->dst.u.gre.key)
@@ -478,7 +488,7 @@ static void tcf_ct_flow_table_process_conn(struct tcf_ct_flow_table *ct_ft,
 	    ct->status & IPS_SEQ_ADJUST)
 		return;
 
-	tcf_ct_flow_table_add(ct_ft, ct, tcp);
+	tcf_ct_flow_table_add(ct_ft, ct, tcp, bidirectional);
 }
 
 static bool
@@ -657,13 +667,30 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 	ct = flow->ct;
 
+	if (dir == FLOW_OFFLOAD_DIR_REPLY &&
+	    !test_bit(NF_FLOW_HW_BIDIRECTIONAL, &flow->flags)) {
+		/* Only offload reply direction after connection became
+		 * assured.
+		 */
+		if (test_bit(IPS_ASSURED_BIT, &ct->status))
+			set_bit(NF_FLOW_HW_BIDIRECTIONAL, &flow->flags);
+		else if (READ_ONCE(flow->ext_data) == IP_CT_ESTABLISHED)
+			/* If flow_table flow has already been updated to the
+			 * established state, then don't refresh.
+			 */
+			return false;
+	}
+
 	if (tcph && (unlikely(tcph->fin || tcph->rst))) {
 		flow_offload_teardown(nf_ft, flow);
 		return false;
 	}
 
-	ctinfo = dir == FLOW_OFFLOAD_DIR_ORIGINAL ? IP_CT_ESTABLISHED :
-						    IP_CT_ESTABLISHED_REPLY;
+	if (dir == FLOW_OFFLOAD_DIR_ORIGINAL)
+		ctinfo = test_bit(IPS_SEEN_REPLY_BIT, &ct->status) ?
+			IP_CT_ESTABLISHED : IP_CT_NEW;
+	else
+		ctinfo = IP_CT_ESTABLISHED_REPLY;
 
 	flow_offload_refresh(nf_ft, flow);
 	nf_conntrack_get(&ct->ct_general);
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next v5 7/7] netfilter: nf_conntrack: allow early drop of offloaded UDP conns
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
                   ` (5 preceding siblings ...)
  2023-01-27 18:38 ` [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections Vlad Buslov
@ 2023-01-27 18:38 ` Vlad Buslov
  2023-01-28 15:51 ` [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Pablo Neira Ayuso
  7 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-27 18:38 UTC (permalink / raw)
  To: davem, kuba, pabeni, pablo
  Cc: netdev, netfilter-devel, jhs, xiyou.wangcong, jiri, ozsh,
	marcelo.leitner, simon.horman, Vlad Buslov

Both the synchronous early drop algorithm and the asynchronous gc worker
completely ignore connections with the IPS_OFFLOAD_BIT status bit set.
With the new functionality that enables UDP NEW connection offload in
act_ct, a malicious user can flood the conntrack table with offloaded UDP
connections by sending just a single packet per 5-tuple, because such
connections can no longer be deleted by the early drop algorithm.

To mitigate the issue, allow both early drop and gc to consider offloaded
UDP connections for deletion.
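
A simplified model of the relaxed eligibility check follows. The
per-protocol ->can_early_drop() hook that the real
gc_worker_can_early_drop() consults for assured connections is omitted;
the function below is only a sketch.

#include <netinet/in.h>
#include <stdbool.h>
#include <stdio.h>

static bool can_early_drop(unsigned int protonum, bool offloaded, bool assured)
{
	/* Offloaded entries are no longer exempt unconditionally; only
	 * offloaded non-UDP entries keep the exemption.
	 */
	if (offloaded && protonum != IPPROTO_UDP)
		return false;
	return !assured;
}

int main(void)
{
	printf("offloaded UDP NEW:         %s\n",
	       can_early_drop(IPPROTO_UDP, true, false) ? "droppable" : "exempt");
	printf("offloaded TCP established: %s\n",
	       can_early_drop(IPPROTO_TCP, true, true) ? "droppable" : "exempt");
	return 0;
}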

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---
 net/netfilter/nf_conntrack_core.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 496c4920505b..52b824a60176 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -1374,9 +1374,6 @@ static unsigned int early_drop_list(struct net *net,
 	hlist_nulls_for_each_entry_rcu(h, n, head, hnnode) {
 		tmp = nf_ct_tuplehash_to_ctrack(h);
 
-		if (test_bit(IPS_OFFLOAD_BIT, &tmp->status))
-			continue;
-
 		if (nf_ct_is_expired(tmp)) {
 			nf_ct_gc_expired(tmp);
 			continue;
@@ -1446,11 +1443,14 @@ static bool gc_worker_skip_ct(const struct nf_conn *ct)
 static bool gc_worker_can_early_drop(const struct nf_conn *ct)
 {
 	const struct nf_conntrack_l4proto *l4proto;
+	u8 protonum = nf_ct_protonum(ct);
 
+	if (test_bit(IPS_OFFLOAD_BIT, &ct->status) && protonum != IPPROTO_UDP)
+		return false;
 	if (!test_bit(IPS_ASSURED_BIT, &ct->status))
 		return true;
 
-	l4proto = nf_ct_l4proto_find(nf_ct_protonum(ct));
+	l4proto = nf_ct_l4proto_find(protonum);
 	if (l4proto->can_early_drop && l4proto->can_early_drop(ct))
 		return true;
 
@@ -1507,7 +1507,8 @@ static void gc_worker(struct work_struct *work)
 
 			if (test_bit(IPS_OFFLOAD_BIT, &tmp->status)) {
 				nf_ct_offload_timeout(tmp);
-				continue;
+				if (!nf_conntrack_max95)
+					continue;
 			}
 
 			if (expired_count > GC_SCAN_EXPIRED_MAX) {
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections
  2023-01-27 18:38 ` [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections Vlad Buslov
@ 2023-01-28 15:26   ` Pablo Neira Ayuso
  2023-01-28 15:31     ` Vlad Buslov
  0 siblings, 1 reply; 18+ messages in thread
From: Pablo Neira Ayuso @ 2023-01-28 15:26 UTC (permalink / raw)
  To: Vlad Buslov
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman

Hi Vlad,

On Fri, Jan 27, 2023 at 07:38:44PM +0100, Vlad Buslov wrote:
> Modify the offload algorithm of UDP connections to the following:
> 
> - Offload NEW connection as unidirectional.
> 
> - When connection state changes to ESTABLISHED also update the hardware
> flow. However, in order to prevent act_ct from spamming offload add wq for
> every packet coming in reply direction in this state verify whether
> connection has already been updated to ESTABLISHED in the drivers. If that
> it the case, then skip flow_table and let conntrack handle such packets
> which will also allow conntrack to potentially promote the connection to
> ASSURED.
> 
> - When connection state changes to ASSURED set the flow_table flow
> NF_FLOW_HW_BIDIRECTIONAL flag which will cause refresh mechanism to offload
> the reply direction.
> 
> All other protocols have their offload algorithm preserved and are always
> offloaded as bidirectional.
> 
> Note that this change tries to minimize the load on flow_table add
> workqueue. First, it tracks the last ctinfo that was offloaded by using new
> flow 'ext_data' field and doesn't schedule the refresh for reply direction
> packets when the offloads have already been updated with current ctinfo.
> Second, when 'add' task executes on workqueue it always update the offload
> with current flow state (by checking 'bidirectional' flow flag and
> obtaining actual ctinfo/cookie through meta action instead of caching any
> of these from the moment of scheduling the 'add' work) preventing the need
> from scheduling more updates if state changed concurrently while the 'add'
> work was pending on workqueue.

Could you use a flag to achieve what you need instead of this ext_data
field? Better this ext_data and the flag, I prefer the flags.

Thanks

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state
  2023-01-27 18:38 ` [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state Vlad Buslov
@ 2023-01-28 15:27   ` Pablo Neira Ayuso
  2023-01-28 16:03     ` Vlad Buslov
  0 siblings, 1 reply; 18+ messages in thread
From: Pablo Neira Ayuso @ 2023-01-28 15:27 UTC (permalink / raw)
  To: Vlad Buslov
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman

Hi,

On Fri, Jan 27, 2023 at 07:38:40PM +0100, Vlad Buslov wrote:
> Currently flow_offload_fixup_ct() function assumes that only replied UDP
> connections can be offloaded and hardcodes UDP_CT_REPLIED timeout value.
> Allow users to modify timeout calculation by implementing new flowtable
> type callback 'timeout' and use the existing algorithm otherwise.
> 
> To enable UDP NEW connection offload in following patches implement
> 'timeout' callback in flowtable_ct of act_ct which extracts the actual
> connections state from ct->status and set the timeout according to it.

I found a way to fix the netfilter flowtable after your original v3
update.

Could you use your original patch in v3 for this fixup?

Thanks.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections
  2023-01-28 15:26   ` Pablo Neira Ayuso
@ 2023-01-28 15:31     ` Vlad Buslov
  2023-01-28 19:09       ` Pablo Neira Ayuso
  0 siblings, 1 reply; 18+ messages in thread
From: Vlad Buslov @ 2023-01-28 15:31 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman


On Sat 28 Jan 2023 at 16:26, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> Hi Vlad,
>
> On Fri, Jan 27, 2023 at 07:38:44PM +0100, Vlad Buslov wrote:
>> Modify the offload algorithm of UDP connections to the following:
>> 
>> - Offload NEW connection as unidirectional.
>> 
>> - When connection state changes to ESTABLISHED also update the hardware
>> flow. However, in order to prevent act_ct from spamming offload add wq for
>> every packet coming in reply direction in this state verify whether
>> connection has already been updated to ESTABLISHED in the drivers. If that
>> it the case, then skip flow_table and let conntrack handle such packets
>> which will also allow conntrack to potentially promote the connection to
>> ASSURED.
>> 
>> - When connection state changes to ASSURED set the flow_table flow
>> NF_FLOW_HW_BIDIRECTIONAL flag which will cause refresh mechanism to offload
>> the reply direction.
>> 
>> All other protocols have their offload algorithm preserved and are always
>> offloaded as bidirectional.
>> 
>> Note that this change tries to minimize the load on flow_table add
>> workqueue. First, it tracks the last ctinfo that was offloaded by using new
>> flow 'ext_data' field and doesn't schedule the refresh for reply direction
>> packets when the offloads have already been updated with current ctinfo.
>> Second, when 'add' task executes on workqueue it always update the offload
>> with current flow state (by checking 'bidirectional' flow flag and
>> obtaining actual ctinfo/cookie through meta action instead of caching any
>> of these from the moment of scheduling the 'add' work) preventing the need
>> from scheduling more updates if state changed concurrently while the 'add'
>> work was pending on workqueue.
>
> Could you use a flag to achieve what you need instead of this ext_data
> field? Better this ext_data and the flag, I prefer the flags.

Sure, np. Do you prefer the functionality to be offloaded to gc (as in
earlier versions of this series) or leverage 'refresh' code as in
versions 4-5?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct
  2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
                   ` (6 preceding siblings ...)
  2023-01-27 18:38 ` [PATCH net-next v5 7/7] netfilter: nf_conntrack: allow early drop of offloaded UDP conns Vlad Buslov
@ 2023-01-28 15:51 ` Pablo Neira Ayuso
  2023-01-28 16:04   ` Vlad Buslov
  7 siblings, 1 reply; 18+ messages in thread
From: Pablo Neira Ayuso @ 2023-01-28 15:51 UTC (permalink / raw)
  To: Vlad Buslov
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman

On Fri, Jan 27, 2023 at 07:38:38PM +0100, Vlad Buslov wrote:
> Currently only bidirectional established connections can be offloaded
> via act_ct. Such approach allows to hardcode a lot of assumptions into
> act_ct, flow_table and flow_offload intermediate layer codes. In order
> to enabled offloading of unidirectional UDP NEW connections start with
> incrementally changing the following assumptions:
> 
> - Drivers assume that only established connections are offloaded and
>   don't support updating existing connections. Extract ctinfo from meta
>   action cookie and refuse offloading of new connections in the drivers.
> 
> - Fix flow_table offload fixup algorithm to calculate flow timeout
>   according to current connection state instead of hardcoded
>   "established" value.
> 
> - Add new flow_table flow flag that designates bidirectional connections
>   instead of assuming it and hardcoding hardware offload of every flow
>   in both directions.
> 
> - Add new flow_table flow "ext_data" field and use it in act_ct to track
>   the ctinfo of offloaded flows instead of assuming that it is always
>   "established".
> 
> With all the necessary infrastructure in place modify act_ct to offload
> UDP NEW as unidirectional connection. Pass reply direction traffic to CT
> and promote connection to bidirectional when UDP connection state
> changes to "assured". Rely on refresh mechanism to propagate connection
> state change to supporting drivers.
> 
> Note that early drop algorithm that is designed to free up some space in
> connection tracking table when it becomes full (by randomly deleting up
> to 5% of non-established connections) currently ignores connections
> marked as "offloaded". Now, with UDP NEW connections becoming
> "offloaded" it could allow malicious user to perform DoS attack by
> filling the table with non-droppable UDP NEW connections by sending just
> one packet in single direction. To prevent such scenario change early
> drop algorithm to also consider "offloaded" connections for deletion.

If the two changes I propose are doable, then I am OK with this.

I would really like to explore my proposal to turn the workqueue into
a "scanner" that iterates over the entries searching for flows that
need to be offloaded (or updated to bidirectional, like in this new
case). I think it is not too far from what there is in the flowtable
codebase.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state
  2023-01-28 15:27   ` Pablo Neira Ayuso
@ 2023-01-28 16:03     ` Vlad Buslov
  2023-01-28 19:09       ` Pablo Neira Ayuso
  0 siblings, 1 reply; 18+ messages in thread
From: Vlad Buslov @ 2023-01-28 16:03 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman


On Sat 28 Jan 2023 at 16:27, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> Hi,
>
> On Fri, Jan 27, 2023 at 07:38:40PM +0100, Vlad Buslov wrote:
>> Currently flow_offload_fixup_ct() function assumes that only replied UDP
>> connections can be offloaded and hardcodes UDP_CT_REPLIED timeout value.
>> Allow users to modify timeout calculation by implementing new flowtable
>> type callback 'timeout' and use the existing algorithm otherwise.
>> 
>> To enable UDP NEW connection offload in following patches implement
>> 'timeout' callback in flowtable_ct of act_ct which extracts the actual
>> connections state from ct->status and set the timeout according to it.
>
> I found a way to fix the netfilter flowtable after your original v3
> update.
>
> Could you use your original patch in v3 for this fixup?

Sure, please send me the fixup.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct
  2023-01-28 15:51 ` [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Pablo Neira Ayuso
@ 2023-01-28 16:04   ` Vlad Buslov
  0 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-28 16:04 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman


On Sat 28 Jan 2023 at 16:51, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> On Fri, Jan 27, 2023 at 07:38:38PM +0100, Vlad Buslov wrote:
>> Currently only bidirectional established connections can be offloaded
>> via act_ct. Such approach allows to hardcode a lot of assumptions into
>> act_ct, flow_table and flow_offload intermediate layer codes. In order
>> to enabled offloading of unidirectional UDP NEW connections start with
>> incrementally changing the following assumptions:
>> 
>> - Drivers assume that only established connections are offloaded and
>>   don't support updating existing connections. Extract ctinfo from meta
>>   action cookie and refuse offloading of new connections in the drivers.
>> 
>> - Fix flow_table offload fixup algorithm to calculate flow timeout
>>   according to current connection state instead of hardcoded
>>   "established" value.
>> 
>> - Add new flow_table flow flag that designates bidirectional connections
>>   instead of assuming it and hardcoding hardware offload of every flow
>>   in both directions.
>> 
>> - Add new flow_table flow "ext_data" field and use it in act_ct to track
>>   the ctinfo of offloaded flows instead of assuming that it is always
>>   "established".
>> 
>> With all the necessary infrastructure in place modify act_ct to offload
>> UDP NEW as unidirectional connection. Pass reply direction traffic to CT
>> and promote connection to bidirectional when UDP connection state
>> changes to "assured". Rely on refresh mechanism to propagate connection
>> state change to supporting drivers.
>> 
>> Note that early drop algorithm that is designed to free up some space in
>> connection tracking table when it becomes full (by randomly deleting up
>> to 5% of non-established connections) currently ignores connections
>> marked as "offloaded". Now, with UDP NEW connections becoming
>> "offloaded" it could allow malicious user to perform DoS attack by
>> filling the table with non-droppable UDP NEW connections by sending just
>> one packet in single direction. To prevent such scenario change early
>> drop algorithm to also consider "offloaded" connections for deletion.
>
> If the two changes I propose are doable, then I am OK with this.
>
> I would really like to explore my proposal to turn the workqueue into
> a "scanner" that iterates over the entries searching for flows that
> need to be offloaded (or updated to bidirectional, like in this new
> case). I think it is not too far from what there is in the flowtable
> codebase.

I'm not sure I'm following. In order to accommodate your suggestions
I've already coded the algorithm in v4 in a way that always updates the
flow to its current actual state according to conntrack atomic flags and
doesn't require any follow-up updates if the state had been changed
concurrently. What else is missing?


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections
  2023-01-28 15:31     ` Vlad Buslov
@ 2023-01-28 19:09       ` Pablo Neira Ayuso
  2023-01-28 19:28         ` Vlad Buslov
  0 siblings, 1 reply; 18+ messages in thread
From: Pablo Neira Ayuso @ 2023-01-28 19:09 UTC (permalink / raw)
  To: Vlad Buslov
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman

On Sat, Jan 28, 2023 at 05:31:40PM +0200, Vlad Buslov wrote:
> 
> On Sat 28 Jan 2023 at 16:26, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > Hi Vlad,
> >
> > On Fri, Jan 27, 2023 at 07:38:44PM +0100, Vlad Buslov wrote:
> >> Modify the offload algorithm of UDP connections to the following:
> >> 
> >> - Offload NEW connection as unidirectional.
> >> 
> >> - When connection state changes to ESTABLISHED also update the hardware
> >> flow. However, in order to prevent act_ct from spamming offload add wq for
> >> every packet coming in reply direction in this state verify whether
> >> connection has already been updated to ESTABLISHED in the drivers. If that
> >> it the case, then skip flow_table and let conntrack handle such packets
> >> which will also allow conntrack to potentially promote the connection to
> >> ASSURED.
> >> 
> >> - When connection state changes to ASSURED set the flow_table flow
> >> NF_FLOW_HW_BIDIRECTIONAL flag which will cause refresh mechanism to offload
> >> the reply direction.
> >> 
> >> All other protocols have their offload algorithm preserved and are always
> >> offloaded as bidirectional.
> >> 
> >> Note that this change tries to minimize the load on flow_table add
> >> workqueue. First, it tracks the last ctinfo that was offloaded by using new
> >> flow 'ext_data' field and doesn't schedule the refresh for reply direction
> >> packets when the offloads have already been updated with current ctinfo.
> >> Second, when 'add' task executes on workqueue it always update the offload
> >> with current flow state (by checking 'bidirectional' flow flag and
> >> obtaining actual ctinfo/cookie through meta action instead of caching any
> >> of these from the moment of scheduling the 'add' work) preventing the need
> >> from scheduling more updates if state changed concurrently while the 'add'
> >> work was pending on workqueue.
> >
> > Could you use a flag to achieve what you need instead of this ext_data
> > field? Better this ext_data and the flag, I prefer the flags.
> 
> Sure, np. Do you prefer the functionality to be offloaded to gc (as in
> earlier versions of this series) or leverage 'refresh' code as in
> versions 4-5?

No, I prefer that the generic gc/refresh mechanism is not used for this.

What I mean is: could you replace this new ->ext_data generic pointer
by a flag to annotate what you need? Between this generic pointer and
a flag, I prefer a flag.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state
  2023-01-28 16:03     ` Vlad Buslov
@ 2023-01-28 19:09       ` Pablo Neira Ayuso
  2023-01-28 19:30         ` Vlad Buslov
  0 siblings, 1 reply; 18+ messages in thread
From: Pablo Neira Ayuso @ 2023-01-28 19:09 UTC (permalink / raw)
  To: Vlad Buslov
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman

On Sat, Jan 28, 2023 at 06:03:37PM +0200, Vlad Buslov wrote:
> 
> On Sat 28 Jan 2023 at 16:27, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > Hi,
> >
> > On Fri, Jan 27, 2023 at 07:38:40PM +0100, Vlad Buslov wrote:
> >> Currently the flow_offload_fixup_ct() function assumes that only replied UDP
> >> connections can be offloaded and hardcodes the UDP_CT_REPLIED timeout value.
> >> Allow users to modify the timeout calculation by implementing the new
> >> flowtable type callback 'timeout', and use the existing algorithm otherwise.
> >> 
> >> To enable UDP NEW connection offload in the following patches, implement the
> >> 'timeout' callback in flowtable_ct of act_ct, which extracts the actual
> >> connection state from ct->status and sets the timeout according to it.
> >
> > I found a way to fix the netfilter flowtable after your original v3
> > update.
> >
> > Could you use your original patch in v3 for this fixup?
> 
> Sure, please send me the fixup.

What I mean is: could you use your original v3 2/7 for this
conntrack timeout fixup:

https://patchwork.ozlabs.org/project/netfilter-devel/patch/20230119195104.3371966-3-vladbu@nvidia.com/

I will send a patch for netfilter's flowtable datapath to address the
original issue I mentioned, so there is no need for this new timeout
callback.
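
For reference, the v3 approach linked above selects the UDP conntrack timeout
from the connection's actual state rather than hardcoding UDP_CT_REPLIED. A
condensed sketch of that logic, assuming it matches the linked patch (the
helper name here is illustrative; the original modifies flow_offload_fixup_ct()
directly):

static s32 flow_offload_udp_timeout(const struct nf_conn *ct)
{
	struct nf_udp_net *tn = nf_udp_pernet(nf_ct_net(ct));
	enum udp_conntrack state;

	/* An offloaded connection that never saw reply traffic must fall back
	 * to the shorter UDP_CT_UNREPLIED timeout instead of the hardcoded
	 * UDP_CT_REPLIED one.
	 */
	state = test_bit(IPS_SEEN_REPLY_BIT, &ct->status) ?
		UDP_CT_REPLIED : UDP_CT_UNREPLIED;

	return tn->timeouts[state];
}

flow_offload_fixup_ct() could then use this state-dependent value in its UDP
branch in place of the hardcoded tn->timeouts[UDP_CT_REPLIED].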

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections
  2023-01-28 19:09       ` Pablo Neira Ayuso
@ 2023-01-28 19:28         ` Vlad Buslov
  0 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-28 19:28 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman


On Sat 28 Jan 2023 at 20:09, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> On Sat, Jan 28, 2023 at 05:31:40PM +0200, Vlad Buslov wrote:
>> 
>> On Sat 28 Jan 2023 at 16:26, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
>> > Hi Vlad,
>> >
>> > On Fri, Jan 27, 2023 at 07:38:44PM +0100, Vlad Buslov wrote:
>> >> Modify the offload algorithm of UDP connections to the following:
>> >> 
>> >> - Offload NEW connections as unidirectional.
>> >> 
>> >> - When the connection state changes to ESTABLISHED, also update the hardware
>> >> flow. However, in order to prevent act_ct from spamming the offload 'add'
>> >> workqueue for every packet coming in the reply direction, in this state verify
>> >> whether the connection has already been updated to ESTABLISHED in the drivers.
>> >> If that is the case, then skip the flow_table and let conntrack handle such
>> >> packets, which will also allow conntrack to potentially promote the connection
>> >> to ASSURED.
>> >> 
>> >> - When the connection state changes to ASSURED, set the flow_table flow
>> >> NF_FLOW_HW_BIDIRECTIONAL flag, which will cause the refresh mechanism to
>> >> offload the reply direction.
>> >> 
>> >> All other protocols have their offload algorithm preserved and are always
>> >> offloaded as bidirectional.
>> >> 
>> >> Note that this change tries to minimize the load on the flow_table 'add'
>> >> workqueue. First, it tracks the last ctinfo that was offloaded by using the
>> >> new flow 'ext_data' field and doesn't schedule the refresh for reply direction
>> >> packets when the offloads have already been updated with the current ctinfo.
>> >> Second, when the 'add' task executes on the workqueue it always updates the
>> >> offload with the current flow state (by checking the 'bidirectional' flow flag
>> >> and obtaining the actual ctinfo/cookie through the meta action instead of
>> >> caching any of these from the moment of scheduling the 'add' work), preventing
>> >> the need to schedule more updates if the state changed concurrently while the
>> >> 'add' work was pending on the workqueue.
>> >
>> > Could you use a flag to achieve what you need instead of this ext_data
>> > field? Between this ext_data and a flag, I prefer the flag.
>> 
>> Sure, np. Do you prefer the functionality to be offloaded to gc (as in
>> earlier versions of this series) or leverage 'refresh' code as in
>> versions 4-5?
>
> No, I prefer that the generic gc/refresh mechanism is not used for this.
>
> What I mean is: could you replace this new ->ext_data generic pointer
> by a flag to annotate what you need?

Yes, will do.

> Between this generic pointer and a flag, I prefer a flag.

Got it.
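
To make the agreed direction concrete, a minimal sketch of the UDP datapath
decision described in the commit message above, assuming the ->ext_data
pointer is replaced by a flag; the helper name and the NF_FLOW_HW_ESTABLISHED
bit are the same hypothetical names used in the sketch earlier in the thread,
not code from the series:

static bool tcf_ct_udp_flow_wants_refresh(struct flow_offload *flow,
					  struct nf_conn *ct,
					  enum ip_conntrack_info ctinfo)
{
	/* Once conntrack marks the connection assured, flag the flow as
	 * bidirectional so the next refresh also offloads the reply direction.
	 */
	if (test_bit(IPS_ASSURED_BIT, &ct->status))
		set_bit(NF_FLOW_HW_BIDIRECTIONAL, &flow->flags);

	/* Reply traffic on a rule that still carries the NEW ctinfo: ask for a
	 * refresh so the drivers are updated to ESTABLISHED. Otherwise skip
	 * the flowtable and let conntrack see the packet, which may promote
	 * the connection to ASSURED.
	 */
	return ctinfo == IP_CT_ESTABLISHED_REPLY &&
	       !test_bit(NF_FLOW_HW_ESTABLISHED, &flow->flags);
}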


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state
  2023-01-28 19:09       ` Pablo Neira Ayuso
@ 2023-01-28 19:30         ` Vlad Buslov
  0 siblings, 0 replies; 18+ messages in thread
From: Vlad Buslov @ 2023-01-28 19:30 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: davem, kuba, pabeni, netdev, netfilter-devel, jhs,
	xiyou.wangcong, jiri, ozsh, marcelo.leitner, simon.horman


On Sat 28 Jan 2023 at 20:09, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> On Sat, Jan 28, 2023 at 06:03:37PM +0200, Vlad Buslov wrote:
>> 
>> On Sat 28 Jan 2023 at 16:27, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
>> > Hi,
>> >
>> > On Fri, Jan 27, 2023 at 07:38:40PM +0100, Vlad Buslov wrote:
>> >> Currently the flow_offload_fixup_ct() function assumes that only replied UDP
>> >> connections can be offloaded and hardcodes the UDP_CT_REPLIED timeout value.
>> >> Allow users to modify the timeout calculation by implementing the new
>> >> flowtable type callback 'timeout', and use the existing algorithm otherwise.
>> >> 
>> >> To enable UDP NEW connection offload in the following patches, implement the
>> >> 'timeout' callback in flowtable_ct of act_ct, which extracts the actual
>> >> connection state from ct->status and sets the timeout according to it.
>> >
>> > I found a way to fix the netfilter flowtable after your original v3
>> > update.
>> >
>> > Could you use your original patch in v3 for this fixup?
>> 
>> Sure, please send me the fixup.
>
> What I mean is: could you use your original v3 2/7 for this
> conntrack timeout fixup:
>
> https://patchwork.ozlabs.org/project/netfilter-devel/patch/20230119195104.3371966-3-vladbu@nvidia.com/
>
> I will send a patch for netfilter's flowtable datapath to address the
> original issue I mentioned, so there is no need for this new timeout
> callback.

Got it. Will restore the original version of this patch in v6.


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2023-01-28 19:31 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-27 18:38 [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Vlad Buslov
2023-01-27 18:38 ` [PATCH net-next v5 1/7] net: flow_offload: provision conntrack info in ct_metadata Vlad Buslov
2023-01-27 18:38 ` [PATCH net-next v5 2/7] netfilter: flowtable: fixup UDP timeout depending on ct state Vlad Buslov
2023-01-28 15:27   ` Pablo Neira Ayuso
2023-01-28 16:03     ` Vlad Buslov
2023-01-28 19:09       ` Pablo Neira Ayuso
2023-01-28 19:30         ` Vlad Buslov
2023-01-27 18:38 ` [PATCH net-next v5 3/7] netfilter: flowtable: allow unidirectional rules Vlad Buslov
2023-01-27 18:38 ` [PATCH net-next v5 4/7] netfilter: flowtable: save ctinfo in flow_offload Vlad Buslov
2023-01-27 18:38 ` [PATCH net-next v5 5/7] net/sched: act_ct: set ctinfo in meta action depending on ct state Vlad Buslov
2023-01-27 18:38 ` [PATCH net-next v5 6/7] net/sched: act_ct: offload UDP NEW connections Vlad Buslov
2023-01-28 15:26   ` Pablo Neira Ayuso
2023-01-28 15:31     ` Vlad Buslov
2023-01-28 19:09       ` Pablo Neira Ayuso
2023-01-28 19:28         ` Vlad Buslov
2023-01-27 18:38 ` [PATCH net-next v5 7/7] netfilter: nf_conntrack: allow early drop of offloaded UDP conns Vlad Buslov
2023-01-28 15:51 ` [PATCH net-next v5 0/7] Allow offloading of UDP NEW connections via act_ct Pablo Neira Ayuso
2023-01-28 16:04   ` Vlad Buslov
