* [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
@ 2019-05-15 19:39 Edward Cree
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 1/3] flow_offload: copy tcfa_index into flow_action_entry Edward Cree
                   ` (4 more replies)
  0 siblings, 5 replies; 32+ messages in thread
From: Edward Cree @ 2019-05-15 19:39 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

When the flow_offload infrastructure was added, per-action statistics,
 which were previously possible for drivers to support in TC offload,
 were not plumbed through, perhaps because the drivers in the tree did
 not implement them.
In TC (and in the previous offload API) statistics are per-action,
 though generally only on 'delivery' actions like mirred, ok and shot.
 Actions also have an index, which may be supplied by the user, which
 allows the sharing of entities such as counters between multiple rules.
 (These indices are per action type, so 'drop index 1' and 'mirred index
 1' are different actions.  However, this means that a mirror and a
 redirect action are in the same index namespace, since they're both
 mirred actions.)  The existing driver implementations did not support
 this, however, instead allocating a single counter per rule.  The
 flow_offload API did not support this either, as (a) the action index
 never reached the driver, and (b) the TC_CLSFLOWER_STATS callback was
 only able to return a single set of stats which were added to all
 counters for actions on the rule.  Patch #1 of this series fixes (a) by
 storing tcfa_index in a new action_index member of struct
 flow_action_entry, while patch #2 fixes (b) by changing the .stats
 member of struct tc_cls_flower_offload from a single set of stats to an
 array.  Existing drivers are changed to retain their current behaviour,
 by adding their one set of stats to every element of this array (note
 however that this will behave oddly if an action appears more than once
 on a rule; I'm not sure whether that's a problem).  I have not tested
 these driver changes on corresponding hardware.
Patch #3 ensures that flow_offload.h is buildable in isolation, rather
 than relying on linux/kernel.h having already been included elsewhere;
 this is logically separable from the rest of the series.

A point for discussion: would it be better if, instead of the tcfa_index
 (for which the driver has to know the rules about which flow_action
 types share a namespace), we had some kind of globally unique cookie?
 In the same way that rule->cookie is really a pointer, could we use the
 address of the TC-internal data structure representing the action?  Do
 rules that share an action all point to the same struct tc_action in
 their tcf_exts, for instance?

Changed in v2:
* use array instead of new callback for getting stats
* remove CVLAN patch (posted separately for net)
* add header inclusion fix

Edward Cree (3):
  flow_offload: copy tcfa_index into flow_action_entry
  flow_offload: restore ability to collect separate stats per action
  flow_offload: include linux/kernel.h from flow_offload.h

 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c     |  6 ++++--
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c | 10 ++++++----
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c  |  4 +++-
 .../ethernet/mellanox/mlxsw/spectrum_flower.c    |  5 +++--
 .../net/ethernet/netronome/nfp/flower/offload.c  |  8 ++++++--
 .../net/ethernet/netronome/nfp/flower/qos_conf.c |  6 ++++--
 include/net/flow_offload.h                       | 13 +++++++++++--
 include/net/pkt_cls.h                            | 13 +++++++++----
 net/core/flow_offload.c                          | 16 ++++++++++++++++
 net/sched/cls_api.c                              |  1 +
 net/sched/cls_flower.c                           |  9 ++++++---
 net/sched/cls_matchall.c                         |  7 +++++--
 12 files changed, 74 insertions(+), 24 deletions(-)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v2 net-next 1/3] flow_offload: copy tcfa_index into flow_action_entry
  2019-05-15 19:39 [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
@ 2019-05-15 19:42 ` Edward Cree
  2019-05-18 20:30   ` Jamal Hadi Salim
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action Edward Cree
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 32+ messages in thread
From: Edward Cree @ 2019-05-15 19:42 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

Required for support of shared counters (and possibly other shared per-
 action entities in future).
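
As a rough driver-side illustration (not part of this patch), counter
 bookkeeping could then be keyed on the action type plus this index; the
 my_priv/my_counter names and the my_counter_find_or_alloc() helper below
 are hypothetical:

static int my_parse_actions(struct my_priv *priv,
			    const struct flow_action *fa)
{
	unsigned int i;

	for (i = 0; i < fa->num_entries; i++) {
		const struct flow_action_entry *entry = &fa->entries[i];
		struct my_counter *cnt;

		/* Actions of the same type with the same index are the same
		 * TC action, so rules that reference them share one counter.
		 */
		cnt = my_counter_find_or_alloc(priv, entry->id,
					       entry->action_index);
		if (!cnt)
			return -ENOMEM;
		/* ... attach cnt to the hardware rule being built ... */
	}
	return 0;
}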

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/net/flow_offload.h | 1 +
 net/sched/cls_api.c        | 1 +
 2 files changed, 2 insertions(+)

diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index 6200900434e1..4ee0f68f2e8d 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -137,6 +137,7 @@ enum flow_action_mangle_base {
 
 struct flow_action_entry {
 	enum flow_action_id		id;
+	u32				action_index;
 	union {
 		u32			chain_index;	/* FLOW_ACTION_GOTO */
 		struct net_device	*dev;		/* FLOW_ACTION_REDIRECT */
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index d4699156974a..0d498c3815f5 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -3195,6 +3195,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
 		struct flow_action_entry *entry;
 
 		entry = &flow_action->entries[j];
+		entry->action_index = act->tcfa_index;
 		if (is_tcf_gact_ok(act)) {
 			entry->id = FLOW_ACTION_ACCEPT;
 		} else if (is_tcf_gact_shot(act)) {


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action
  2019-05-15 19:39 [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 1/3] flow_offload: copy tcfa_index into flow_action_entry Edward Cree
@ 2019-05-15 19:42 ` Edward Cree
  2019-05-18 20:35   ` Jamal Hadi Salim
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h Edward Cree
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 32+ messages in thread
From: Edward Cree @ 2019-05-15 19:42 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

In the TC_CLSFLOWER_STATS callback from fl_hw_update_stats(), pass an
 array of struct flow_stats_entry, one for each action in the flow rule.
Current drivers (which do not collect per-action stats, but rather per-
 rule) call flow_stats_update() in a loop with the same values for all
 actions; this is roughly what they did before 3b1903ef97c0, except that
 there is not a helper function (like tcf_exts_stats_update()) that does
 it for them, because we don't want to encourage future drivers to do
 the same thing (and there isn't a need for a preempt_disable() like in
 tcf_exts_stats_update()).
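
As a sketch of what a future per-action-stats driver could do instead
 (hypothetical names, not part of this patch), it would fill one entry per
 action rather than looping the same per-rule values:

static void my_update_stats(struct my_flow *flow, struct flow_stats *stats)
{
	unsigned int i;

	for (i = 0; i < stats->num_entries; i++) {
		/* flow->counters[i] is the hardware counter this driver
		 * attached to the i-th action of the rule at offload time.
		 */
		struct my_counter *cnt = flow->counters[i];
		u64 bytes, pkts, lastused;

		my_counter_read(cnt, &bytes, &pkts, &lastused);
		flow_stats_update(&stats->entries[i], bytes, pkts, lastused);
	}
}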

Also do the same in mall_stats_hw_filter()'s TC_CLSMATCHALL_STATS
 callback, since it also uses tcf_exts_stats_update().

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c     |  6 ++++--
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c | 10 ++++++----
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c  |  4 +++-
 .../ethernet/mellanox/mlxsw/spectrum_flower.c    |  5 +++--
 .../net/ethernet/netronome/nfp/flower/offload.c  |  8 ++++++--
 .../net/ethernet/netronome/nfp/flower/qos_conf.c |  6 ++++--
 include/net/flow_offload.h                       | 11 +++++++++--
 include/net/pkt_cls.h                            | 13 +++++++++----
 net/core/flow_offload.c                          | 16 ++++++++++++++++
 net/sched/cls_flower.c                           |  9 ++++++---
 net/sched/cls_matchall.c                         |  7 +++++--
 11 files changed, 71 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 44d6c5743fb9..911178825daf 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -1370,6 +1370,7 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp,
 	struct bnxt_tc_flow_node *flow_node;
 	struct bnxt_tc_flow *flow;
 	unsigned long lastused;
+	int i;
 
 	flow_node = rhashtable_lookup_fast(&tc_info->flow_table,
 					   &tc_flow_cmd->cookie,
@@ -1388,8 +1389,9 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp,
 	lastused = flow->lastused;
 	spin_unlock(&flow->stats_lock);
 
-	flow_stats_update(&tc_flow_cmd->stats, stats.bytes, stats.packets,
-			  lastused);
+	for (i = 0; i < tc_flow_cmd->stats->num_entries; i++)
+		flow_stats_update(&tc_flow_cmd->stats->entries[i],
+				  stats.bytes, stats.packets, lastused);
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
index 6e2d80008a79..cf19ca9f72c5 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
@@ -788,8 +788,8 @@ int cxgb4_tc_flower_stats(struct net_device *dev,
 	struct ch_tc_flower_stats *ofld_stats;
 	struct ch_tc_flower_entry *ch_flower;
 	u64 packets;
+	int ret, i;
 	u64 bytes;
-	int ret;
 
 	ch_flower = ch_flower_lookup(adap, cls->cookie);
 	if (!ch_flower) {
@@ -808,9 +808,11 @@ int cxgb4_tc_flower_stats(struct net_device *dev,
 	if (ofld_stats->packet_count != packets) {
 		if (ofld_stats->prev_packet_count != packets)
 			ofld_stats->last_used = jiffies;
-		flow_stats_update(&cls->stats, bytes - ofld_stats->byte_count,
-				  packets - ofld_stats->packet_count,
-				  ofld_stats->last_used);
+		for (i = 0; i < cls->stats->num_entries; i++)
+			flow_stats_update(&cls->stats->entries[i],
+					  bytes - ofld_stats->byte_count,
+					  packets - ofld_stats->packet_count,
+					  ofld_stats->last_used);
 
 		ofld_stats->packet_count = packets;
 		ofld_stats->byte_count = bytes;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 122f457091a2..fbad85d0d823 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3426,6 +3426,7 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
 	u64 lastuse = 0;
 	u64 packets = 0;
 	u64 bytes = 0;
+	int i;
 
 	flow = rhashtable_lookup_fast(tc_ht, &f->cookie, tc_ht_params);
 	if (!flow || !same_flow_direction(flow, flags))
@@ -3465,7 +3466,8 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
 no_peer_counter:
 	mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
 out:
-	flow_stats_update(&f->stats, bytes, packets, lastuse);
+	for (i = 0; i < f->stats->num_entries; i++)
+		flow_stats_update(&f->stats->entries[i], bytes, packets, lastuse);
 
 	return 0;
 }
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 15f804453cd6..ea65b4d832b4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -453,8 +453,8 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
 	struct mlxsw_sp_acl_rule *rule;
 	u64 packets;
 	u64 lastuse;
+	int err, i;
 	u64 bytes;
-	int err;
 
 	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
 					   f->common.chain_index,
@@ -471,7 +471,8 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
 	if (err)
 		goto err_rule_get_stats;
 
-	flow_stats_update(&f->stats, bytes, packets, lastuse);
+	for (i = 0; i < f->stats->num_entries; i++)
+		flow_stats_update(&f->stats->entries[i], bytes, packets, lastuse);
 
 	mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
 	return 0;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 1fbfeb43c538..889839bc11a5 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1132,6 +1132,7 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
 	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_fl_payload *nfp_flow;
 	u32 ctx_id;
+	int i;
 
 	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, netdev);
 	if (!nfp_flow)
@@ -1144,8 +1145,11 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
 	if (!list_empty(&nfp_flow->linked_flows))
 		nfp_flower_update_merge_stats(app, nfp_flow);
 
-	flow_stats_update(&flow->stats, priv->stats[ctx_id].bytes,
-			  priv->stats[ctx_id].pkts, priv->stats[ctx_id].used);
+	for (i = 0; i < flow->stats->num_entries; i++)
+		flow_stats_update(&flow->stats->entries[i],
+				  priv->stats[ctx_id].bytes,
+				  priv->stats[ctx_id].pkts,
+				  priv->stats[ctx_id].used);
 
 	priv->stats[ctx_id].pkts = 0;
 	priv->stats[ctx_id].bytes = 0;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c b/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
index 86e968cd5ffd..0ed202f315ee 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
@@ -297,6 +297,7 @@ nfp_flower_stats_rate_limiter(struct nfp_app *app, struct net_device *netdev,
 	struct nfp_stat_pair *prev_stats;
 	u64 diff_bytes, diff_pkts;
 	struct nfp_repr *repr;
+	int i;
 
 	if (!nfp_netdev_is_nfp_repr(netdev)) {
 		NL_SET_ERR_MSG_MOD(extack, "unsupported offload: qos rate limit offload not supported on higher level port");
@@ -319,8 +320,9 @@ nfp_flower_stats_rate_limiter(struct nfp_app *app, struct net_device *netdev,
 	prev_stats->bytes = curr_stats->bytes;
 	spin_unlock_bh(&fl_priv->qos_stats_lock);
 
-	flow_stats_update(&flow->stats, diff_bytes, diff_pkts,
-			  repr_priv->qos_table.last_update);
+	for (i = 0; i < flow->stats->num_entries; i++)
+		flow_stats_update(&flow->stats->entries[i], diff_bytes,
+				  diff_pkts, repr_priv->qos_table.last_update);
 	return 0;
 }
 
diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index 4ee0f68f2e8d..9e2c64eb6726 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -210,13 +210,20 @@ static inline bool flow_rule_match_key(const struct flow_rule *rule,
 	return dissector_uses_key(rule->match.dissector, key);
 }
 
-struct flow_stats {
+struct flow_stats_entry {
 	u64	pkts;
 	u64	bytes;
 	u64	lastused;
 };
 
-static inline void flow_stats_update(struct flow_stats *flow_stats,
+struct flow_stats {
+	unsigned int			num_entries;
+	struct flow_stats_entry		entries[0];
+};
+
+struct flow_stats *flow_stats_alloc(unsigned int num_actions);
+
+static inline void flow_stats_update(struct flow_stats_entry *flow_stats,
 				     u64 bytes, u64 pkts, u64 lastused)
 {
 	flow_stats->pkts	+= pkts;
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 514e3c80ecc1..0e17ea8ba302 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -344,7 +344,7 @@ static inline void tcf_exts_put_net(struct tcf_exts *exts)
 
 static inline void
 tcf_exts_stats_update(const struct tcf_exts *exts,
-		      u64 bytes, u64 packets, u64 lastuse)
+		      const struct flow_stats *stats)
 {
 #ifdef CONFIG_NET_CLS_ACT
 	int i;
@@ -352,9 +352,14 @@ tcf_exts_stats_update(const struct tcf_exts *exts,
 	preempt_disable();
 
 	for (i = 0; i < exts->nr_actions; i++) {
+		const struct flow_stats_entry *stats_entry;
 		struct tc_action *a = exts->actions[i];
 
-		tcf_action_stats_update(a, bytes, packets, lastuse, true);
+		if (i >= stats->num_entries)
+			break;
+		stats_entry = &stats->entries[i];
+		tcf_action_stats_update(a, stats_entry->bytes, stats_entry->pkts,
+					stats_entry->lastused, true);
 	}
 
 	preempt_enable();
@@ -752,7 +757,7 @@ struct tc_cls_flower_offload {
 	enum tc_fl_command command;
 	unsigned long cookie;
 	struct flow_rule *rule;
-	struct flow_stats stats;
+	struct flow_stats *stats;
 	u32 classid;
 };
 
@@ -772,7 +777,7 @@ struct tc_cls_matchall_offload {
 	struct tc_cls_common_offload common;
 	enum tc_matchall_command command;
 	struct flow_rule *rule;
-	struct flow_stats stats;
+	struct flow_stats *stats;
 	unsigned long cookie;
 };
 
diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
index c3a00eac4804..4c8aabebdb75 100644
--- a/net/core/flow_offload.c
+++ b/net/core/flow_offload.c
@@ -151,3 +151,19 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
 	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_OPTS, out);
 }
 EXPORT_SYMBOL(flow_rule_match_enc_opts);
+
+struct flow_stats *flow_stats_alloc(unsigned int num_actions)
+{
+	struct flow_stats *stats;
+
+	stats = kzalloc(sizeof(struct flow_stats) +
+		       sizeof(struct flow_stats_entry) * num_actions,
+		       GFP_KERNEL);
+	if (!stats)
+		return NULL;
+
+	stats->num_entries = num_actions;
+
+	return stats;
+}
+EXPORT_SYMBOL(flow_stats_alloc);
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index f6685fc53119..8775657fb03b 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -482,13 +482,16 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f,
 	cls_flower.command = TC_CLSFLOWER_STATS;
 	cls_flower.cookie = (unsigned long) f;
 	cls_flower.classid = f->res.classid;
+	cls_flower.stats = flow_stats_alloc(tcf_exts_num_actions(&f->exts));
+	if (!cls_flower.stats)
+		goto out_unlock;
 
 	tc_setup_cb_call(block, TC_SETUP_CLSFLOWER, &cls_flower, false);
 
-	tcf_exts_stats_update(&f->exts, cls_flower.stats.bytes,
-			      cls_flower.stats.pkts,
-			      cls_flower.stats.lastused);
+	tcf_exts_stats_update(&f->exts, cls_flower.stats);
+	kfree(cls_flower.stats);
 
+out_unlock:
 	if (!rtnl_held)
 		rtnl_unlock();
 }
diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
index db42d97a2006..b6b7b041fd6a 100644
--- a/net/sched/cls_matchall.c
+++ b/net/sched/cls_matchall.c
@@ -335,11 +335,14 @@ static void mall_stats_hw_filter(struct tcf_proto *tp,
 	tc_cls_common_offload_init(&cls_mall.common, tp, head->flags, NULL);
 	cls_mall.command = TC_CLSMATCHALL_STATS;
 	cls_mall.cookie = cookie;
+	cls_mall.stats = flow_stats_alloc(tcf_exts_num_actions(&head->exts));
+	if (!cls_mall.stats)
+		return;
 
 	tc_setup_cb_call(block, TC_SETUP_CLSMATCHALL, &cls_mall, false);
 
-	tcf_exts_stats_update(&head->exts, cls_mall.stats.bytes,
-			      cls_mall.stats.pkts, cls_mall.stats.lastused);
+	tcf_exts_stats_update(&head->exts, cls_mall.stats);
+	kfree(cls_mall.stats);
 }
 
 static int mall_dump(struct net *net, struct tcf_proto *tp, void *fh,


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC PATCH v2 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h
  2019-05-15 19:39 [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 1/3] flow_offload: copy tcfa_index into flow_action_entry Edward Cree
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action Edward Cree
@ 2019-05-15 19:42 ` Edward Cree
  2019-05-17 15:27 ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  2019-05-18 20:30 ` Jamal Hadi Salim
  4 siblings, 0 replies; 32+ messages in thread
From: Edward Cree @ 2019-05-15 19:42 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

flow_stats_update() uses max_t, so ensure we have that defined.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/net/flow_offload.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index 9e2c64eb6726..ccc280c5e0ac 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -1,6 +1,7 @@
 #ifndef _NET_FLOW_OFFLOAD_H
 #define _NET_FLOW_OFFLOAD_H
 
+#include <linux/kernel.h>
 #include <net/flow_dissector.h>
 
 struct flow_match {

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-15 19:39 [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
                   ` (2 preceding siblings ...)
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h Edward Cree
@ 2019-05-17 15:27 ` Edward Cree
  2019-05-17 17:14   ` Edward Cree
  2019-05-19  0:22   ` Pablo Neira Ayuso
  2019-05-18 20:30 ` Jamal Hadi Salim
  4 siblings, 2 replies; 32+ messages in thread
From: Edward Cree @ 2019-05-17 15:27 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 15/05/2019 20:39, Edward Cree wrote:
> A point for discussion: would it be better if, instead of the tcfa_index
>  (for which the driver has to know the rules about which flow_action
>  types share a namespace), we had some kind of globally unique cookie?
>  In the same way that rule->cookie is really a pointer, could we use the
>  address of the TC-internal data structure representing the action?  Do
>  rules that share an action all point to the same struct tc_action in
>  their tcf_exts, for instance?
A quick test showed that, indeed, they do; I'm now leaning towards the
 approach of adding "unsigned long cookie" to struct flow_action_entry
 and populating it with (unsigned long)act in tc_setup_flow_action().
Pablo, how do the two options interact with your netfilter offload?  I'm
 guessing it's easier for you to find a unique pointer than to generate
 a unique u32 action_index for each action.  I'm also assuming that
 netfilter doesn't have a notion of shared actions.

-Ed

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-17 15:27 ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
@ 2019-05-17 17:14   ` Edward Cree
  2019-05-18 20:39     ` Jamal Hadi Salim
  2019-05-19  0:22   ` Pablo Neira Ayuso
  1 sibling, 1 reply; 32+ messages in thread
From: Edward Cree @ 2019-05-17 17:14 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 17/05/2019 16:27, Edward Cree wrote:
> I'm now leaning towards the
>  approach of adding "unsigned long cookie" to struct flow_action_entry
>  and populating it with (unsigned long)act in tc_setup_flow_action().

For concreteness, here's what that looks like: patch 1 is replaced with
 the following, the other two are unchanged.
Drivers now have an easier job, as they can just use the cookie directly
 as a hashtable key, rather than worrying about which action types share
 indices.

--8<--

[RFC PATCH v2.5 net-next 1/3] flow_offload: add a cookie to flow_action_entry

Populated with the address of the struct tc_action from which it was made.
Required for support of shared counters (and possibly other shared per-
 action entities in future).

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/net/flow_offload.h | 1 +
 net/sched/cls_api.c        | 1 +
 2 files changed, 2 insertions(+)

diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index 6200900434e1..fb3278a2bd41 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -137,6 +137,7 @@ enum flow_action_mangle_base {
 
 struct flow_action_entry {
 	enum flow_action_id		id;
+	unsigned long			cookie;
 	union {
 		u32			chain_index;	/* FLOW_ACTION_GOTO */
 		struct net_device	*dev;		/* FLOW_ACTION_REDIRECT */
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index d4699156974a..5411cec17af5 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -3195,6 +3195,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
 		struct flow_action_entry *entry;
 
 		entry = &flow_action->entries[j];
+		entry->cookie = (unsigned long)act;
 		if (is_tcf_gact_ok(act)) {
 			entry->id = FLOW_ACTION_ACCEPT;
 		} else if (is_tcf_gact_shot(act)) {

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-15 19:39 [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
                   ` (3 preceding siblings ...)
  2019-05-17 15:27 ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
@ 2019-05-18 20:30 ` Jamal Hadi Salim
  4 siblings, 0 replies; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-18 20:30 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 2019-05-15 3:39 p.m., Edward Cree wrote:
[..]
> 
> A point for discussion: would it be better if, instead of the tcfa_index
>   (for which the driver has to know the rules about which flow_action
>   types share a namespace), we had some kind of globally unique cookie?
>   In the same way that rule->cookie is really a pointer, could we use the
>   address of the TC-internal data structure representing the action?

tcfa_index + action identifier seem to be sufficiently global, no?
Then we don't have a mismatch with the kernel (non-offloaded)
semantics.

Note: The kernel is free to generate the index (if the user doesn't
specify).

>  Do
>   rules that share an action all point to the same struct tc_action in
>   their tcf_exts, for instance?

Yes they do.

cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 1/3] flow_offload: copy tcfa_index into flow_action_entry
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 1/3] flow_offload: copy tcfa_index into flow_action_entry Edward Cree
@ 2019-05-18 20:30   ` Jamal Hadi Salim
  0 siblings, 0 replies; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-18 20:30 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 2019-05-15 3:42 p.m., Edward Cree wrote:
> Required for support of shared counters (and possibly other shared per-
>   action entities in future).
> 
> Signed-off-by: Edward Cree <ecree@solarflare.com>
> ---
>   include/net/flow_offload.h | 1 +
>   net/sched/cls_api.c        | 1 +
>   2 files changed, 2 insertions(+)
> 
> diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
> index 6200900434e1..4ee0f68f2e8d 100644
> --- a/include/net/flow_offload.h
> +++ b/include/net/flow_offload.h
> @@ -137,6 +137,7 @@ enum flow_action_mangle_base {
>   
>   struct flow_action_entry {
>   	enum flow_action_id		id;
> +	u32				action_index;
>   	union {
>   		u32			chain_index;	/* FLOW_ACTION_GOTO */
>   		struct net_device	*dev;		/* FLOW_ACTION_REDIRECT */
> diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
> index d4699156974a..0d498c3815f5 100644
> --- a/net/sched/cls_api.c
> +++ b/net/sched/cls_api.c
> @@ -3195,6 +3195,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
>   		struct flow_action_entry *entry;
>   
>   		entry = &flow_action->entries[j];
> +		entry->action_index = act->tcfa_index;
>   		if (is_tcf_gact_ok(act)) {
>   			entry->id = FLOW_ACTION_ACCEPT;
>   		} else if (is_tcf_gact_shot(act)) {


LGTM.

cheers,
jamal


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action
  2019-05-15 19:42 ` [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action Edward Cree
@ 2019-05-18 20:35   ` Jamal Hadi Salim
  2019-05-20 15:46     ` Edward Cree
  0 siblings, 1 reply; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-18 20:35 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 2019-05-15 3:42 p.m., Edward Cree wrote:
> In the TC_CLSFLOWER_STATS callback from fl_hw_update_stats(), pass an
>   array of struct flow_stats_entry, one for each action in the flow rule.
> Current drivers (which do not collect per-action stats, but rather per-
>   rule) call flow_stats_update() in a loop with the same values for all
>   actions; this is roughly what they did before 3b1903ef97c0, except that
>   there is not a helper function (like tcf_exts_stats_update()) that does
>   it for them, because we don't want to encourage future drivers to do
>   the same thing (and there isn't a need for a preempt_disable() like in
>   tcf_exts_stats_update()).
> 
> Also do the same in mall_stats_hw_filter()'s TC_CLSMATCHALL_STATS
>   callback, since it also uses tcf_exts_stats_update().
> 
> Signed-off-by: Edward Cree <ecree@solarflare.com>

Looks good to me.
Your patch doesn't cover u32. IIRC, I have seen stats on ixgbe with the
u32 classifier the last time I mucked around with it
(maybe Pablo's changes removed it?).

cheers,
jamal


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-17 17:14   ` Edward Cree
@ 2019-05-18 20:39     ` Jamal Hadi Salim
  2019-05-20 15:26       ` Edward Cree
  0 siblings, 1 reply; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-18 20:39 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 2019-05-17 1:14 p.m., Edward Cree wrote:
> On 17/05/2019 16:27, Edward Cree wrote:
>> I'm now leaning towards the
>>   approach of adding "unsigned long cookie" to struct flow_action_entry
>>   and populating it with (unsigned long)act in tc_setup_flow_action().
> 
> For concreteness, here's what that looks like: patch 1 is replaced with
>   the following, the other two are unchanged.
> Drivers now have an easier job, as they can just use the cookie directly
>   as a hashtable key, rather than worrying about which action types share
>   indices.

Per my other email, this will break tc semantics. It doesn't look
possible to specify an index from user space. Did I miss
something?


cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-17 15:27 ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  2019-05-17 17:14   ` Edward Cree
@ 2019-05-19  0:22   ` Pablo Neira Ayuso
  2019-05-20 15:37     ` Edward Cree
  1 sibling, 1 reply; 32+ messages in thread
From: Pablo Neira Ayuso @ 2019-05-19  0:22 UTC (permalink / raw)
  To: Edward Cree
  Cc: Jamal Hadi Salim, Jiri Pirko, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Jakub Kicinski, Michael Chan, Vishal Kulkarni

On Fri, May 17, 2019 at 04:27:29PM +0100, Edward Cree wrote:
> On 15/05/2019 20:39, Edward Cree wrote:
[...]
> Pablo, how do the two options interact with your netfilter offload?  I'm
>  guessing it's easier for you to find a unique pointer than to generate
>  a unique u32 action_index for each action.  I'm also assuming that
>  netfilter doesn't have a notion of shared actions.

It has that shared actions concept, see:

https://netfilter.org/projects/nfacct/

Have a look at 'nfacct' in the iptables-extensions(8) manpage.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-18 20:39     ` Jamal Hadi Salim
@ 2019-05-20 15:26       ` Edward Cree
  2019-05-20 15:38         ` Jamal Hadi Salim
  0 siblings, 1 reply; 32+ messages in thread
From: Edward Cree @ 2019-05-20 15:26 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 18/05/2019 21:39, Jamal Hadi Salim wrote:
> On 2019-05-17 1:14 p.m., Edward Cree wrote:
>> On 17/05/2019 16:27, Edward Cree wrote:
>>> I'm now leaning towards the
>>>   approach of adding "unsigned long cookie" to struct flow_action_entry
>>>   and populating it with (unsigned long)act in tc_setup_flow_action().
>>
>> For concreteness, here's what that looks like: patch 1 is replaced with
>>   the following, the other two are unchanged.
>> Drivers now have an easier job, as they can just use the cookie directly
>>   as a hashtable key, rather than worrying about which action types share
>>   indices.
>
> Per my other email, this will break tc semantics. It doesnt look
> possible to specify an index from user space. Did i miss
> something?
Unless *I* missed something, I'm not changing the TC<=>user-space API at
 all.  If user space specifies an index, then TC will either create a new
 action with that index, or find an existing one.  Then flow_offload turns
 that into a cookie; in the 'existing action' case it'll be the same
 cookie as any previous offloads of that action, in the 'new action' case
 it'll be a cookie distinct from any existing action.
Drivers aren't interested in the specific index value, only in "which
 other actions (counters) I've offloaded are shared with this one?", which
 the cookie gives them.

With my (unreleased) driver code, I've successfully tested this with e.g.
 the following rules:
tc filter add dev $vfrep parent ffff: protocol arp flower skip_sw \
    action vlan push id 100 protocol 802.1q \
    action mirred egress mirror dev $pf index 101 \
    action vlan pop \
    action drop index 104
tc filter add dev $vfrep parent ffff: protocol ipv4 flower skip_sw \
    action vlan push id 100 protocol 802.1q \
    action mirred egress mirror dev $pf index 102 \
    action vlan pop \
    action drop index 104

Then when viewing with `tc -stats filter show`, the mirreds count their
 traffic separately (and with an extra 4 bytes per packet for the VLAN),
 whereas the drops (index 104, shared) show the total count (and without
 the 4 bytes).

(From your other email)
> tcfa_index + action identifier seem to be sufficiently global, no?
The reason I don't like using the action identifier is that flow_offload
 slightly alters those: mirred gets split into two (FLOW_ACTION_REDIRECT
 and FLOW_ACTION_MIRRED (mirror)).  Technically it'll still work (a redirect
 and a mirror are different actions, so can't have the same index, so it
 doesn't matter if they're treated as the same action-type or not) but it
 feels like a kludge.

-Ed

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-19  0:22   ` Pablo Neira Ayuso
@ 2019-05-20 15:37     ` Edward Cree
  2019-05-20 15:40       ` Jamal Hadi Salim
  2019-05-20 15:44       ` Pablo Neira Ayuso
  0 siblings, 2 replies; 32+ messages in thread
From: Edward Cree @ 2019-05-20 15:37 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: Jamal Hadi Salim, Jiri Pirko, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Jakub Kicinski, Michael Chan, Vishal Kulkarni

On 19/05/2019 01:22, Pablo Neira Ayuso wrote:
> On Fri, May 17, 2019 at 04:27:29PM +0100, Edward Cree wrote:
>> On 15/05/2019 20:39, Edward Cree wrote:
> [...]
>> Pablo, how do the two options interact with your netfilter offload?  I'm
>>  guessing it's easier for you to find a unique pointer than to generate
>>  a unique u32 action_index for each action.  I'm also assuming that
>>  netfilter doesn't have a notion of shared actions.
> It has that shared actions concept, see:
>
> https://netfilter.org/projects/nfacct/
>
> Have a look at 'nfacct' in iptables-extensions(8) manpage.
Thanks.  Looking at net/netfilter/nfnetlink_acct.c, it looks as though you
 don't have a u32 index in there; for the cookie approach, would the
 address of the struct nf_acct (casted to unsigned long) work to uniquely
 identify actions that should be shared?
I'm not 100% sure how nf (or nfacct) offload is going to look, so I might
 be barking up the wrong tree here.  But it seems like the cookie method
 should work better for you — even if you did have an index, how would you
 avoid collisions with TC actions using the same indices if both are in
 use on a box?  Cookies OTOH are pointers, so guaranteed unique :)

-Ed

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 15:26       ` Edward Cree
@ 2019-05-20 15:38         ` Jamal Hadi Salim
  2019-05-20 16:10           ` Edward Cree
  0 siblings, 1 reply; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-20 15:38 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 2019-05-20 11:26 a.m., Edward Cree wrote:
> On 18/05/2019 21:39, Jamal Hadi Salim wrote:
>> On 2019-05-17 1:14 p.m., Edward Cree wrote:
>>> On 17/05/2019 16:27, Edward Cree wrote:

> Unless *I* missed something, I'm not changing the TC<=>user-space API at
>   all.  If user space specifies an index, then TC will either create a new
>   action with that index, or find an existing one.  Then flow_offload turns
>   that into a cookie; in the 'existing action' case it'll be the same
>   cookie as any previous offloads of that action, in the 'new action' case
>   it'll be a cookie distinct from any existing action.

That is fine then, if I could do:

tc actions add action drop index 104
then
followed by, for example, the two filters you show below.

Is your hardware not using explicit indices into a stats table?

> Drivers aren't interested in the specific index value, only in "which
>   other actions (counters) I've offloaded are shared with this one?", which
>   the cookie gives them.
> 
> With my (unreleased) driver code, I've successfully tested this with e.g.
>   the following rules:
> tc filter add dev $vfrep parent ffff: protocol arp flower skip_sw \
>      action vlan push id 100 protocol 802.1q \
>      action mirred egress mirror dev $pf index 101 \
>      action vlan pop \
>      action drop index 104
> tc filter add dev $vfrep parent ffff: protocol ipv4 flower skip_sw \
>      action vlan push id 100 protocol 802.1q \
>      action mirred egress mirror dev $pf index 102 \
>      action vlan pop \
>      action drop index 104
> 
> Then when viewing with `tc -stats filter show`, the mirreds count their
>   traffic separately (and with an extra 4 bytes per packet for the VLAN),
>   whereas the drops (index 104, shared) show the total count (and without
>   the 4 bytes).
>

Beauty.  Assuming the stats are being synced to the kernel?
Test 1:
What does "tc -s actions ls action drop index 104" show?
Test 2:
Delete one of the filters above then dump actions again as above.

> (From your other email)
>> tcfa_index + action identifier seem to be sufficiently global, no?
> The reason I don't like using the action identifier is because flow_offload
>   slightly alters those: mirred gets split into two (FLOW_ACTION_REDIRECT
>   and FLOW_ACTION_MIRRED (mirror)).  Technically it'll still work (a redirect
>   and a mirror are different actions, so can't have the same index, so it
>   doesn't matter if they're treated as the same action-type or not) but it
>   feels like a kludge.
> 

As long as uapi semantics are not broken (which you are demonstrating they
are not), we are good.

cheers,
jamal

> -Ed
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 15:37     ` Edward Cree
@ 2019-05-20 15:40       ` Jamal Hadi Salim
  2019-05-20 15:44       ` Pablo Neira Ayuso
  1 sibling, 0 replies; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-20 15:40 UTC (permalink / raw)
  To: Edward Cree, Pablo Neira Ayuso
  Cc: Jiri Pirko, David Miller, netdev, Cong Wang, Andy Gospodarek,
	Jakub Kicinski, Michael Chan, Vishal Kulkarni

On 2019-05-20 11:37 a.m., Edward Cree wrote:
> On 19/05/2019 01:22, Pablo Neira Ayuso wrote:
>> On Fri, May 17, 2019 at 04:27:29PM +0100, Edward Cree wrote:

> Thanks.  Looking at net/netfilter/nfnetlink_acct.c, it looks as though you
>   don't have a u32 index in there; for the cookie approach, would the
>   address of the struct nf_acct (casted to unsigned long) work to uniquely
>   identify actions that should be shared?
> I'm not 100% sure how nf (or nfacct) offload is going to look, so I might
>   be barking up the wrong tree here.  But it seems like the cookie method
>   should work better for you — even if you did have an index, how would you
>   avoid collisions with TC actions using the same indices if both are in
>   use on a box?  Cookies OTOH are pointers, so guaranteed unique :)

I am a little concerned here:
hopefully all of these can be manipulated by tc as well - otherwise we are
opening another big Pandora's box of two subsystems fighting each
other.

cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 15:37     ` Edward Cree
  2019-05-20 15:40       ` Jamal Hadi Salim
@ 2019-05-20 15:44       ` Pablo Neira Ayuso
  1 sibling, 0 replies; 32+ messages in thread
From: Pablo Neira Ayuso @ 2019-05-20 15:44 UTC (permalink / raw)
  To: Edward Cree
  Cc: Jamal Hadi Salim, Jiri Pirko, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Jakub Kicinski, Michael Chan, Vishal Kulkarni

On Mon, May 20, 2019 at 04:37:10PM +0100, Edward Cree wrote:
> On 19/05/2019 01:22, Pablo Neira Ayuso wrote:
> > On Fri, May 17, 2019 at 04:27:29PM +0100, Edward Cree wrote:
> >> On 15/05/2019 20:39, Edward Cree wrote:
> > [...]
> >> Pablo, how do the two options interact with your netfilter offload?  I'm
> >>  guessing it's easier for you to find a unique pointer than to generate
> >>  a unique u32 action_index for each action.  I'm also assuming that
> >>  netfilter doesn't have a notion of shared actions.
> > It has that shared actions concept, see:
> >
> > https://netfilter.org/projects/nfacct/
> >
> > Have a look at 'nfacct' in iptables-extensions(8) manpage.
>
> Thanks.  Looking at net/netfilter/nfnetlink_acct.c, it looks as though you
>  don't have a u32 index in there; for the cookie approach, would the
>  address of the struct nf_acct (casted to unsigned long) work to uniquely
>  identify actions that should be shared?
> I'm not 100% sure how nf (or nfacct) offload is going to look, so I might
>  be barking up the wrong tree here.  But it seems like the cookie method
>  should work better for you — even if you did have an index, how would you
>  avoid collisions with TC actions using the same indices if both are in
>  use on a box?  Cookies OTOH are pointers, so guaranteed unique :)

The per-action cookie approach looks fine to me; there is already a
cookie to identify the rule, so this feels natural.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action
  2019-05-18 20:35   ` Jamal Hadi Salim
@ 2019-05-20 15:46     ` Edward Cree
  2019-05-20 15:55       ` Jamal Hadi Salim
  0 siblings, 1 reply; 32+ messages in thread
From: Edward Cree @ 2019-05-20 15:46 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 18/05/2019 21:35, Jamal Hadi Salim wrote:
> Your patch doesnt have U32. IIRC, I have seen stats on ixgbe with the
> u32 classifier last time i mucked around with it
> (maybe Pablo's changes removed it?).
I can't see anything stats-offload related in net/sched/cls_u32.c (just
 SW stats dumping in u32_dump()) and it doesn't call
 tcf_exts_stats_update() either.  Looking through ixgbe code I also
 don't see any sign there of stats gathering for offloaded u32 rules.

-Ed

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action
  2019-05-20 15:46     ` Edward Cree
@ 2019-05-20 15:55       ` Jamal Hadi Salim
  0 siblings, 0 replies; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-20 15:55 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni, Amritha Nambiar

On 2019-05-20 11:46 a.m., Edward Cree wrote:

> I can't see anything stats-offload related in net/sched/cls_u32.c (just
>   SW stats dumping in u32_dump()) and it doesn't call
>   tcf_exts_stats_update() either.  Looking through ixgbe code I also
>   don't see any sign there of stats gathering for offloaded u32 rules.

Will look into it (and also Cc some of the Intel folks). I was sure I
had seen at least the flow stats updating when I last used it
to offload. They may be using a different path.

cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 15:38         ` Jamal Hadi Salim
@ 2019-05-20 16:10           ` Edward Cree
  2019-05-20 16:29             ` Jamal Hadi Salim
  0 siblings, 1 reply; 32+ messages in thread
From: Edward Cree @ 2019-05-20 16:10 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 20/05/2019 16:38, Jamal Hadi Salim wrote:
> That is fine then if i could do:
>
> tc actions add action drop index 104
> then
> followed by for example the two filters you show below..
That seems to work.

> Is your hardware not using explicit indices into a stats table?
No; we ask the HW to allocate a counter and it returns us a counter ID (which
 bears no relation to the action index).  So I have an rhashtable keyed on
 the cookie (or on the action-type & action_index, when using the other
 version of my patches) which stores the HW counter ID; and the entry in that
 hashtable is what I attach to the driver's action struct.
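
In rough outline (illustrative only, with made-up names, since the driver
 code is unreleased), the table entry and parameters look something like:

/* One entry per offloaded TC action, keyed by the flow_offload cookie,
 * holding the hardware counter ID so that rules which share an action
 * also share the counter.  Needs <linux/rhashtable.h>.
 */
struct my_shared_counter {
	struct rhash_head	node;
	unsigned long		cookie;		/* flow_action_entry cookie */
	u32			hw_counter_id;	/* allocated by the hardware */
	refcount_t		use_count;	/* rules currently using it */
};

static const struct rhashtable_params my_counter_ht_params = {
	.key_len	= sizeof(unsigned long),
	.key_offset	= offsetof(struct my_shared_counter, cookie),
	.head_offset	= offsetof(struct my_shared_counter, node),
};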

> Beauty.  Assuming the stats are being synced to the kernel?
> Test 1:
> What does "tc -s actions ls action drop index 104" show?
It produces no output, but
    `tc -s actions get action drop index 104`
or
    `tc -s actions list action gact index 104`
shows the same stats as `tc -s filter show ...` did for that action.
> Test 2:
> Delete one of the filters above then dump actions again as above.
Ok, that's weird: after I delete one, the other (in `tc -s filter show ...`)
 no longer shows the shared action.

# tc filter del dev $vfrep parent ffff: pref 49151
# tc -stats filter show dev $vfrep parent ffff:
filter protocol arp pref 49152 flower chain 0
filter protocol arp pref 49152 flower chain 0 handle 0x1
  eth_type arp
  skip_sw
  in_hw in_hw_count 1
        action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
         index 1 ref 1 bind 1 installed 180 sec used 180 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: mirred (Egress Mirror to device $pf) pipe
        index 101 ref 1 bind 1 installed 180 sec used 169 sec
        Action statistics:
        Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 256 bytes 4 pkt
        backlog 0b 0p requeues 0

        action order 3: vlan  pop pipe
         index 2 ref 1 bind 1 installed 180 sec used 180 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

#

Yet `tc -s actions get` still shows it...

# tc -s actions get action drop index 104
total acts 0

        action order 1: gact action drop
         random type none pass val 0
         index 104 ref 2 bind 1 installed 812 sec used 797 sec
        Action statistics:
        Sent 534 bytes 7 pkt (dropped 7, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 534 bytes 7 pkt
        backlog 0b 0p requeues 0
# tc filter show dev $vfrep parent ffff:
filter protocol arp pref 49152 flower chain 0
filter protocol arp pref 49152 flower chain 0 handle 0x1
  eth_type arp
  skip_sw
  in_hw in_hw_count 1
        action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
         index 1 ref 1 bind 1

        action order 3: vlan  pop pipe
         index 2 ref 1 bind 1

# tc -s actions get action mirred index 101
total acts 0

        action order 1: mirred (Egress Mirror to device $pf) pipe
        index 101 ref 1 bind 1 installed 796 sec used 785 sec
        Action statistics:
        Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 256 bytes 4 pkt
        backlog 0b 0p requeues 0
#

Curiouser and curiouser... it seems that after I delete one of the rules,
 TC starts to get very confused and actions start disappearing from rule
 dumps.  Yet those actions still exist according to `tc actions list`.
I don't *think* my changes can have caused this, but I'll try a test on a
 vanilla kernel just to make sure the same thing happens there.

-Ed

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 16:10           ` Edward Cree
@ 2019-05-20 16:29             ` Jamal Hadi Salim
  2019-05-20 16:32               ` Jamal Hadi Salim
                                 ` (2 more replies)
  0 siblings, 3 replies; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-20 16:29 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 2019-05-20 12:10 p.m., Edward Cree wrote:
> On 20/05/2019 16:38, Jamal Hadi Salim wrote:
>> That is fine then if i could do:
>>
>> tc actions add action drop index 104
>> then
>> followed by for example the two filters you show below..
> That seems to work.

nice.


>> Is your hardware not using explicit indices into a stats table?
> No; we ask the HW to allocate a counter and it returns us a counter ID (which
>   bears no relation to the action index).  So I have an rhashtable keyed on
>   the cookie (or on the action-type & action_index, when using the other
>   version of my patches) which stores the HW counter ID; and the entry in that
>   hashtable is what I attach to the driver's action struct.
> 
>> Beauty.  Assuming the stats are being synced to the kernel?
>> Test 1:
>> What does "tc -s actions ls action drop index 104" show?
> It produces no output, but
>      `tc -s actions get action drop index 104`
> or
>      `tc -s actions list action gact index 104`

Sorry, I meant that. Hopefully it shows accumulated stats from both
filter hits, and correct ref counts and bind counts.

> shows the same stats as `tc -s filter show ...` did for that action.
>> Test 2:
>> Delete one of the filters above then dump actions again as above.
> Ok, that's weird: after I delete one, the other (in `tc -s filter show ...`)
>   no longer shows the shared action.
> 

Sounds weird.

> # tc filter del dev $vfrep parent ffff: pref 49151
> # tc -stats filter show dev $vfrep parent ffff:
> filter protocol arp pref 49152 flower chain 0
> filter protocol arp pref 49152 flower chain 0 handle 0x1
>    eth_type arp
>    skip_sw
>    in_hw in_hw_count 1
>          action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
>           index 1 ref 1 bind 1 installed 180 sec used 180 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
>          action order 2: mirred (Egress Mirror to device $pf) pipe
>          index 101 ref 1 bind 1 installed 180 sec used 169 sec
>          Action statistics:
>          Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 256 bytes 4 pkt
>          backlog 0b 0p requeues 0
> 
>          action order 3: vlan  pop pipe
>           index 2 ref 1 bind 1 installed 180 sec used 180 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
> #
> 

Hold on, did you intentionally add that "protocol arp" there?
Or is that just a small development-time quirk?

> Yet `tc -s actions get` still shows it...
> 
> # tc -s actions get action drop index 104
> total acts 0
> 
>          action order 1: gact action drop
>           random type none pass val 0
>           index 104 ref 2 bind 1 installed 812 sec used 797 sec
>          Action statistics:
>          Sent 534 bytes 7 pkt (dropped 7, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 534 bytes 7 pkt
>          backlog 0b 0p requeues 0

Assuming you first created the action then bound it, the action
output looks correct. "ref 2" implies two refcounts, one by
the first action creation and the second by the filter binding.
Also "bind 1" implies only one filter is referencing it.
Before you deleted the filter, it should have been "ref 3" and "bind 2".

> # tc filter show dev $vfrep parent ffff:
> filter protocol arp pref 49152 flower chain 0
> filter protocol arp pref 49152 flower chain 0 handle 0x1
>    eth_type arp
>    skip_sw
>    in_hw in_hw_count 1
>          action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
>           index 1 ref 1 bind 1
> 
>          action order 3: vlan  pop pipe
>           index 2 ref 1 bind 1

I see the arp listing again.
Maybe just a bug.

> # tc -s actions get action mirred index 101
> total acts 0
> 
>          action order 1: mirred (Egress Mirror to device $pf) pipe
>          index 101 ref 1 bind 1 installed 796 sec used 785 sec
>          Action statistics:
>          Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 256 bytes 4 pkt
>          backlog 0b 0p requeues 0
> #

Assuming in this case you added the actions by value? Bind and ref
are 1 each.
Maybe dump after each step, and it will be easier to see what is
happening.

> Curiouser and curiouser... it seems that after I delete one of the rules,
>   TC starts to get very confused and actions start disappearing from rule
>   dumps.  Yet those actions still exist according to `tc actions list`.
> I don't *think* my changes can have caused this, but I'll try a test on a
>   vanilla kernel just to make sure the same thing happens there.
> 

Possibly an offload bug that was already in existence. Can you try the
same steps but without offloading, and see if you see the same behavior?

cheers,
jamal


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 16:29             ` Jamal Hadi Salim
@ 2019-05-20 16:32               ` Jamal Hadi Salim
  2019-05-20 17:58               ` Edward Cree
  2019-05-20 18:36               ` Edward Cree
  2 siblings, 0 replies; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-20 16:32 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 2019-05-20 12:29 p.m., Jamal Hadi Salim wrote:

> Assuming in this case you added by value the actions? 
To be clear on the terminology:

"By Value" implies you add the filter and action in the
same command line.
"By Reference" implies you first create the action then
create a filter which binds to the already created action.

cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 16:29             ` Jamal Hadi Salim
  2019-05-20 16:32               ` Jamal Hadi Salim
@ 2019-05-20 17:58               ` Edward Cree
  2019-05-20 18:36               ` Edward Cree
  2 siblings, 0 replies; 32+ messages in thread
From: Edward Cree @ 2019-05-20 17:58 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 20/05/2019 17:29, Jamal Hadi Salim wrote:
> On 2019-05-20 12:10 p.m., Edward Cree wrote:
>> # tc filter del dev $vfrep parent ffff: pref 49151
>> # tc -stats filter show dev $vfrep parent ffff:
>> filter protocol arp pref 49152 flower chain 0
>> filter protocol arp pref 49152 flower chain 0 handle 0x1
>>    eth_type arp
>>    skip_sw
>>    in_hw in_hw_count 1
>>          action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
>>           index 1 ref 1 bind 1 installed 180 sec used 180 sec
>>          Action statistics:
>>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>>          backlog 0b 0p requeues 0
>>
>>          action order 2: mirred (Egress Mirror to device $pf) pipe
>>          index 101 ref 1 bind 1 installed 180 sec used 169 sec
>>          Action statistics:
>>          Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
>>          Sent software 0 bytes 0 pkt
>>          Sent hardware 256 bytes 4 pkt
>>          backlog 0b 0p requeues 0
>>
>>          action order 3: vlan  pop pipe
>>           index 2 ref 1 bind 1 installed 180 sec used 180 sec
>>          Action statistics:
>>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>>          backlog 0b 0p requeues 0
>>
>> #
>>
>
> Hold on, did you intentionaly add that "protocol arp" there?
Yes; if you go back to my `tc filter add` commands, I had one matching
 'protocol arp' and the other 'protocol ipv4', and the latter is the one
 I then deleted.  I'm not 100% sure why `tc filter show` prints it twice
 ('protocol' and 'eth_type'), though.

>> # tc -s actions get action mirred index 101
>> total acts 0
>>
>>          action order 1: mirred (Egress Mirror to device $pf) pipe
>>          index 101 ref 1 bind 1 installed 796 sec used 785 sec
>>          Action statistics:
>>          Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
>>          Sent software 0 bytes 0 pkt
>>          Sent hardware 256 bytes 4 pkt
>>          backlog 0b 0p requeues 0
>> #
>
> Assuming in this case you added by value the actions? Bind and ref
> are 1 each.
Yes, the mirreds were added by value.

> Possible an offload bug that was already in existence. Can you try the
> same steps but without offloading and see if you see the same behavior?
Running with 'skip_hw' instead of 'skip_sw', I *don't* see the same
 behaviour.
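
For reference, the software-only variant is just the same rule with
 the flag flipped; roughly (a sketch, not the exact command I ran):

tc filter add dev $vfrep parent ffff: protocol arp flower skip_hw \
action vlan push id 100 protocol 802.1q \
action mirred egress mirror dev $pf index 101 \
action vlan pop \
action drop index 104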

-Ed

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 16:29             ` Jamal Hadi Salim
  2019-05-20 16:32               ` Jamal Hadi Salim
  2019-05-20 17:58               ` Edward Cree
@ 2019-05-20 18:36               ` Edward Cree
  2019-05-20 21:12                 ` Jamal Hadi Salim
  2 siblings, 1 reply; 32+ messages in thread
From: Edward Cree @ 2019-05-20 18:36 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

On 20/05/2019 17:29, Jamal Hadi Salim wrote:
> Maybe dump after each step and it will be easier to see what is
> happening.
>
>> I don't *think* my changes can have caused this, but I'll try a test on a
>>   vanilla kernel just to make sure the same thing happens there.
>
> Possible an offload bug that was already in existence.
I've now reproduced on vanilla net-next (63863ee8e2f6); the breakage occurs
 when I run `tc -s actions get action drop index 104`, even _without_ having
 previously deleted a rule.
And in a correction to my last email, it turns out this *does* reproduce
 without offloading (I evidently didn't hit the right sequence to tickle
 the bug before).

# tc -stats filter show dev $vfrep parent ffff:
filter protocol ip pref 49151 flower chain 0
filter protocol ip pref 49151 flower chain 0 handle 0x1
  eth_type ipv4
  skip_sw
  in_hw in_hw_count 1
        action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
         index 3 ref 1 bind 1 installed 7 sec used 7 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: mirred (Egress Mirror to device $pf) pipe
        index 102 ref 1 bind 1 installed 7 sec used 3 sec
        Action statistics:
        Sent 306 bytes 3 pkt (dropped 0, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 306 bytes 3 pkt
        backlog 0b 0p requeues 0

        action order 3: vlan  pop pipe
         index 4 ref 1 bind 1 installed 7 sec used 7 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 4: gact action drop
         random type none pass val 0
         index 104 ref 3 bind 2 installed 13 sec used 3 sec
        Action statistics:
        Sent 306 bytes 3 pkt (dropped 3, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 306 bytes 3 pkt
        backlog 0b 0p requeues 0

filter protocol arp pref 49152 flower chain 0
filter protocol arp pref 49152 flower chain 0 handle 0x1
  eth_type arp
  skip_sw
  in_hw in_hw_count 1
        action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
         index 1 ref 1 bind 1 installed 7 sec used 7 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: mirred (Egress Mirror to device $pf) pipe
        index 101 ref 1 bind 1 installed 7 sec used 5 sec
        Action statistics:
        Sent 64 bytes 1 pkt (dropped 0, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 64 bytes 1 pkt
        backlog 0b 0p requeues 0

        action order 3: vlan  pop pipe
         index 2 ref 1 bind 1 installed 7 sec used 7 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 4: gact action drop
         random type none pass val 0
         index 104 ref 3 bind 2 installed 13 sec used 3 sec
        Action statistics:
        Sent 370 bytes 4 pkt (dropped 4, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 370 bytes 4 pkt
        backlog 0b 0p requeues 0

# tc -s actions get action drop index 104
total acts 0

        action order 1: gact action drop
         random type none pass val 0
         index 104 ref 3 bind 2 installed 27 sec used 17 sec
        Action statistics:
        Sent 370 bytes 4 pkt (dropped 4, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 370 bytes 4 pkt
        backlog 0b 0p requeues 0
# tc -stats filter show dev $vfrep parent ffff:
filter protocol ip pref 49151 flower chain 0
filter protocol ip pref 49151 flower chain 0 handle 0x1
  eth_type ipv4
  skip_sw
  in_hw in_hw_count 1
        action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
         index 3 ref 1 bind 1 installed 26 sec used 26 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: mirred (Egress Mirror to device $pf) pipe
        index 102 ref 1 bind 1 installed 26 sec used 22 sec
        Action statistics:
        Sent 306 bytes 3 pkt (dropped 0, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 306 bytes 3 pkt
        backlog 0b 0p requeues 0

        action order 3: vlan  pop pipe
         index 4 ref 1 bind 1 installed 26 sec used 26 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

filter protocol arp pref 49152 flower chain 0
filter protocol arp pref 49152 flower chain 0 handle 0x1
  eth_type arp
  skip_sw
  in_hw in_hw_count 1
        action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
         index 1 ref 1 bind 1 installed 27 sec used 27 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: mirred (Egress Mirror to device $pf) pipe
        index 101 ref 1 bind 1 installed 27 sec used 17 sec
        Action statistics:
        Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 256 bytes 4 pkt
        backlog 0b 0p requeues 0

        action order 3: vlan  pop pipe
         index 2 ref 1 bind 1 installed 27 sec used 27 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

#

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 18:36               ` Edward Cree
@ 2019-05-20 21:12                 ` Jamal Hadi Salim
  2019-05-21 12:46                   ` Jamal Hadi Salim
  0 siblings, 1 reply; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-20 21:12 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni, Cong Wang, Vlad Buslov

On 2019-05-20 2:36 p.m., Edward Cree wrote:
> On 20/05/2019 17:29, Jamal Hadi Salim wrote:
>> Maybe dump after each step and it will be easier to see what is
>> happening.
>>
>>> I don't *think* my changes can have caused this, but I'll try a test on a
>>>    vanilla kernel just to make sure the same thing happens there.
>>
>> Possible an offload bug that was already in existence.
> I've now reproduced on vanilla net-next (63863ee8e2f6); the breakage occurs
>   when I run `tc -s actions get action drop index 104`, even _without_ having
>   previously deleted a rule.
> And in a correction to my last email, it turns out this *does* reproduce
>   without offloading (I evidently didn't hit the right sequence to tickle
>   the bug before).
> 

Ok, so the "get" does it. Will try to reproduce when I get some
cycles. In the meantime, CCing Cong and Vlad.

cheers,
jamal

> # tc -stats filter show dev $vfrep parent ffff:
> filter protocol ip pref 49151 flower chain 0
> filter protocol ip pref 49151 flower chain 0 handle 0x1
>    eth_type ipv4
>    skip_sw
>    in_hw in_hw_count 1
>          action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
>           index 3 ref 1 bind 1 installed 7 sec used 7 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
>          action order 2: mirred (Egress Mirror to device $pf) pipe
>          index 102 ref 1 bind 1 installed 7 sec used 3 sec
>          Action statistics:
>          Sent 306 bytes 3 pkt (dropped 0, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 306 bytes 3 pkt
>          backlog 0b 0p requeues 0
> 
>          action order 3: vlan  pop pipe
>           index 4 ref 1 bind 1 installed 7 sec used 7 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
>          action order 4: gact action drop
>           random type none pass val 0
>           index 104 ref 3 bind 2 installed 13 sec used 3 sec
>          Action statistics:
>          Sent 306 bytes 3 pkt (dropped 3, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 306 bytes 3 pkt
>          backlog 0b 0p requeues 0
> 
> filter protocol arp pref 49152 flower chain 0
> filter protocol arp pref 49152 flower chain 0 handle 0x1
>    eth_type arp
>    skip_sw
>    in_hw in_hw_count 1
>          action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
>           index 1 ref 1 bind 1 installed 7 sec used 7 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
>          action order 2: mirred (Egress Mirror to device $pf) pipe
>          index 101 ref 1 bind 1 installed 7 sec used 5 sec
>          Action statistics:
>          Sent 64 bytes 1 pkt (dropped 0, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 64 bytes 1 pkt
>          backlog 0b 0p requeues 0
> 
>          action order 3: vlan  pop pipe
>           index 2 ref 1 bind 1 installed 7 sec used 7 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
>          action order 4: gact action drop
>           random type none pass val 0
>           index 104 ref 3 bind 2 installed 13 sec used 3 sec
>          Action statistics:
>          Sent 370 bytes 4 pkt (dropped 4, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 370 bytes 4 pkt
>          backlog 0b 0p requeues 0
> 
> # tc -s actions get action drop index 104
> total acts 0
> 
>          action order 1: gact action drop
>           random type none pass val 0
>           index 104 ref 3 bind 2 installed 27 sec used 17 sec
>          Action statistics:
>          Sent 370 bytes 4 pkt (dropped 4, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 370 bytes 4 pkt
>          backlog 0b 0p requeues 0
> # tc -stats filter show dev $vfrep parent ffff:
> filter protocol ip pref 49151 flower chain 0
> filter protocol ip pref 49151 flower chain 0 handle 0x1
>    eth_type ipv4
>    skip_sw
>    in_hw in_hw_count 1
>          action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
>           index 3 ref 1 bind 1 installed 26 sec used 26 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
>          action order 2: mirred (Egress Mirror to device $pf) pipe
>          index 102 ref 1 bind 1 installed 26 sec used 22 sec
>          Action statistics:
>          Sent 306 bytes 3 pkt (dropped 0, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 306 bytes 3 pkt
>          backlog 0b 0p requeues 0
> 
>          action order 3: vlan  pop pipe
>           index 4 ref 1 bind 1 installed 26 sec used 26 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
> filter protocol arp pref 49152 flower chain 0
> filter protocol arp pref 49152 flower chain 0 handle 0x1
>    eth_type arp
>    skip_sw
>    in_hw in_hw_count 1
>          action order 1: vlan  push id 100 protocol 802.1Q priority 0 pipe
>           index 1 ref 1 bind 1 installed 27 sec used 27 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
>          action order 2: mirred (Egress Mirror to device $pf) pipe
>          index 101 ref 1 bind 1 installed 27 sec used 17 sec
>          Action statistics:
>          Sent 256 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
>          Sent software 0 bytes 0 pkt
>          Sent hardware 256 bytes 4 pkt
>          backlog 0b 0p requeues 0
> 
>          action order 3: vlan  pop pipe
>           index 2 ref 1 bind 1 installed 27 sec used 27 sec
>          Action statistics:
>          Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>          backlog 0b 0p requeues 0
> 
> #
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-20 21:12                 ` Jamal Hadi Salim
@ 2019-05-21 12:46                   ` Jamal Hadi Salim
  2019-05-21 13:23                     ` Vlad Buslov
  0 siblings, 1 reply; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-21 12:46 UTC (permalink / raw)
  To: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni, Vlad Buslov, Lucas Bates

[-- Attachment #1: Type: text/plain, Size: 435 bytes --]

On 2019-05-20 5:12 p.m., Jamal Hadi Salim wrote:
> On 2019-05-20 2:36 p.m., Edward Cree wrote:
>> On 20/05/2019 17:29, Jamal Hadi Salim wrote:

> Ok, so the "get" does it. Will try to reproduce when i get some
> cycles. Meantime CCing Cong and Vlad.
> 


I have reproduced it in a simpler setup. See attached. Vlad, this is
likely from your changes. Sorry, no cycles to dig deeper right now.
Lucas, can we add this to the testcases?


cheers,
jamal

[-- Attachment #2: ed-shared-actions --]
[-- Type: text/plain, Size: 714 bytes --]


sudo tc qdisc del dev lo ingress
sudo tc qdisc add dev lo ingress

sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
match ip dst 127.0.0.8/32 flowid 1:10 \
action vlan push id 100 protocol 802.1q \
action drop index 104

sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
match ip dst 127.0.0.10/32 flowid 1:10 \
action vlan push id 101 protocol 802.1q \
action drop index 104

#
sudo tc -s filter ls dev lo parent ffff: protocol ip

#this will now delete action gact index 104(drop) from display
sudo tc -s actions get action drop index 104

sudo tc -s filter ls dev lo parent ffff: protocol ip

#But you can still see it if you do this:
sudo tc -s actions get action drop index 104


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-21 12:46                   ` Jamal Hadi Salim
@ 2019-05-21 13:23                     ` Vlad Buslov
  2019-05-22 15:08                       ` Vlad Buslov
  0 siblings, 1 reply; 32+ messages in thread
From: Vlad Buslov @ 2019-05-21 13:23 UTC (permalink / raw)
  To: Jamal Hadi Salim
  Cc: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev,
	Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni, Vlad Buslov, Lucas Bates


On Tue 21 May 2019 at 15:46, Jamal Hadi Salim <jhs@mojatatu.com> wrote:
> On 2019-05-20 5:12 p.m., Jamal Hadi Salim wrote:
>> On 2019-05-20 2:36 p.m., Edward Cree wrote:
>>> On 20/05/2019 17:29, Jamal Hadi Salim wrote:
>
>> Ok, so the "get" does it. Will try to reproduce when i get some
>> cycles. Meantime CCing Cong and Vlad.
>> 
>
>
> I have reproduced it in a simpler setup. See attached. Vlad this is
> likely from your changes. Sorry no cycles to dig more.

Jamal, thanks for minimizing the reproduction. I'll look into it.

> Lucas, can we add this to the testcases?
>
>
> cheers,
> jamal
>
> sudo tc qdisc del dev lo ingress
> sudo tc qdisc add dev lo ingress
>
> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
> match ip dst 127.0.0.8/32 flowid 1:10 \
> action vlan push id 100 protocol 802.1q \
> action drop index 104
>
> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
> match ip dst 127.0.0.10/32 flowid 1:10 \
> action vlan push id 101 protocol 802.1q \
> action drop index 104
>
> #
> sudo tc -s filter ls dev lo parent ffff: protocol ip
>
> #this will now delete action gact index 104(drop) from display
> sudo tc -s actions get action drop index 104
>
> sudo tc -s filter ls dev lo parent ffff: protocol ip
>
> #But you can still see it if you do this:
> sudo tc -s actions get action drop index 104


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-21 13:23                     ` Vlad Buslov
@ 2019-05-22 15:08                       ` Vlad Buslov
  2019-05-22 15:13                         ` [RFC PATCH net-next] net: sched: don't use tc_action->order during action dump Vlad Buslov
  2019-05-22 17:24                         ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Jamal Hadi Salim
  0 siblings, 2 replies; 32+ messages in thread
From: Vlad Buslov @ 2019-05-22 15:08 UTC (permalink / raw)
  To: Jamal Hadi Salim, Cong Wang
  Cc: Vlad Buslov, Edward Cree, Jiri Pirko, Pablo Neira Ayuso,
	David Miller, netdev, Andy Gospodarek, Jakub Kicinski,
	Michael Chan, Vishal Kulkarni, Lucas Bates


On Tue 21 May 2019 at 16:23, Vlad Buslov <vladbu@mellanox.com> wrote:
> On Tue 21 May 2019 at 15:46, Jamal Hadi Salim <jhs@mojatatu.com> wrote:
>> On 2019-05-20 5:12 p.m., Jamal Hadi Salim wrote:
>>> On 2019-05-20 2:36 p.m., Edward Cree wrote:
>>>> On 20/05/2019 17:29, Jamal Hadi Salim wrote:
>>
>>> Ok, so the "get" does it. Will try to reproduce when i get some
>>> cycles. Meantime CCing Cong and Vlad.
>>>
>>
>>
>> I have reproduced it in a simpler setup. See attached. Vlad this is
>> likely from your changes. Sorry no cycles to dig more.
>
> Jamal, thanks for minimizing the reproduction. I'll look into it.
>
>> Lucas, can we add this to the testcases?
>>
>>
>> cheers,
>> jamal
>>
>> sudo tc qdisc del dev lo ingress
>> sudo tc qdisc add dev lo ingress
>>
>> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
>> match ip dst 127.0.0.8/32 flowid 1:10 \
>> action vlan push id 100 protocol 802.1q \
>> action drop index 104
>>
>> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
>> match ip dst 127.0.0.10/32 flowid 1:10 \
>> action vlan push id 101 protocol 802.1q \
>> action drop index 104
>>
>> #
>> sudo tc -s filter ls dev lo parent ffff: protocol ip
>>
>> #this will now delete action gact index 104(drop) from display
>> sudo tc -s actions get action drop index 104
>>
>> sudo tc -s filter ls dev lo parent ffff: protocol ip
>>
>> #But you can still see it if you do this:
>> sudo tc -s actions get action drop index 104

It seems that the culprit in this case is the tc_action->order field. It
is used as the nla attrtype when dumping actions. Initially it is set
according to the ordering of the actions on the filter that creates them.
However, it is overwritten in tca_action_gd() each time the action is
dumped through the action API (according to the action's position in the
tb array), and when a new filter is attached to a shared action (according
to the action's order on that filter). With this we have another way to
reproduce the bug:

sudo tc qdisc add dev lo ingress

#shared action is the first action (order 1)
sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
match ip dst 127.0.0.8/32 flowid 1:10 \
action drop index 104 \
action vlan push id 100 protocol 802.1q

#shared action is the second action (order 2)
sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
match ip dst 127.0.0.10/32 flowid 1:10 \
action vlan push id 101 protocol 802.1q \
action drop index 104

# Now action is only visible on one filter
sudo tc -s filter ls dev lo parent ffff: protocol ip

The usage of tc_action->order is inherently incorrect for shared actions,
and I don't really understand the reason for using it in the first place.
I'm sending an RFC patch that fixes the issue by simply using the action's
position in the tb array as the attrtype instead of the order field, and
it seems to solve both issues for me. Please check it out to verify that
I'm not breaking something. Also, please advise on the "Fixes" tag, since
this problem doesn't seem to be directly caused by my act API refactoring.
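
As a quick sanity check with the patch applied, re-running the
reproducers above should keep the shared action visible on both
filters, i.e. (expected outcome, sketched rather than pasted output):

sudo tc -s filter ls dev lo parent ffff: protocol ip
sudo tc -s actions get action drop index 104
sudo tc -s filter ls dev lo parent ffff: protocol ip  # index 104 still on both filters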

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH net-next] net: sched: don't use tc_action->order during action dump
  2019-05-22 15:08                       ` Vlad Buslov
@ 2019-05-22 15:13                         ` Vlad Buslov
  2019-05-22 17:24                         ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Jamal Hadi Salim
  1 sibling, 0 replies; 32+ messages in thread
From: Vlad Buslov @ 2019-05-22 15:13 UTC (permalink / raw)
  To: jhs, xiyou.wangcong
  Cc: ecree, jiri, pablo, davem, netdev, andy, jakub.kicinski,
	michael.chan, vishal, lucasb, Vlad Buslov

Function tcf_action_dump() relies on the tc_action->order field when
starting the nested nla used to send action data to userspace. This
approach breaks in several cases:

- When multiple filters point to the same shared action, the
  tc_action->order field is overwritten each time the action is attached
  to a filter. This causes the filter dump to output the action with an
  incorrect attribute for all filters that have the action in a different
  position (different order) from the last set tc_action->order value.

- When action data is displayed using the tc action API (RTM_GETACTION),
  the action order is overwritten by tca_action_gd() according to the
  action's position in the resulting array of nl attributes, which breaks
  the filter dump for all filters attached to that shared action that
  expect it to have a different order value.

Don't rely on tc_action->order when dumping actions. Set the nla attrtype
according to the action's position in the resulting array of actions
instead.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
---
 net/sched/act_api.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index 683fcc00da49..c42ecf4b3c10 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -800,7 +800,7 @@ int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[],
 
 	for (i = 0; i < TCA_ACT_MAX_PRIO && actions[i]; i++) {
 		a = actions[i];
-		nest = nla_nest_start_noflag(skb, a->order);
+		nest = nla_nest_start_noflag(skb, i + 1);
 		if (nest == NULL)
 			goto nla_put_failure;
 		err = tcf_action_dump_1(skb, a, bind, ref);
@@ -1303,7 +1303,6 @@ tca_action_gd(struct net *net, struct nlattr *nla, struct nlmsghdr *n,
 			ret = PTR_ERR(act);
 			goto err;
 		}
-		act->order = i;
 		attr_size += tcf_action_fill_size(act);
 		actions[i - 1] = act;
 	}
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-22 15:08                       ` Vlad Buslov
  2019-05-22 15:13                         ` [RFC PATCH net-next] net: sched: don't use tc_action->order during action dump Vlad Buslov
@ 2019-05-22 17:24                         ` Jamal Hadi Salim
  2019-05-22 17:49                           ` Vlad Buslov
  1 sibling, 1 reply; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-22 17:24 UTC (permalink / raw)
  To: Vlad Buslov, Cong Wang
  Cc: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev,
	Andy Gospodarek, Jakub Kicinski, Michael Chan, Vishal Kulkarni,
	Lucas Bates

On 2019-05-22 11:08 a.m., Vlad Buslov wrote:
> 
> On Tue 21 May 2019 at 16:23, Vlad Buslov <vladbu@mellanox.com> wrote:

> 
> It seems that culprit in this case is tc_action->order field. It is used
> as nla attrtype when dumping actions. Initially it is set according to
> ordering of actions of filter that creates them. However, it is
> overwritten in tca_action_gd() each time action is dumped through action
> API (according to action position in tb array) and when new filter is
> attached to shared action (according to action order on the filter).
> With this we have another way to reproduce the bug:
> 
> sudo tc qdisc add dev lo ingress
> 
> #shared action is the first action (order 1)
> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
> match ip dst 127.0.0.8/32 flowid 1:10 \
> action drop index 104 \
> action vlan push id 100 protocol 802.1q
> 
> #shared action is the the second action (order 2)
> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
> match ip dst 127.0.0.10/32 flowid 1:10 \
> action vlan push id 101 protocol 802.1q \
> action drop index 104
> 
> # Now action is only visible on one filter
> sudo tc -s filter ls dev lo parent ffff: protocol ip
> 

Ok, thanks for chasing this. A test case I had in mind is to
maybe have 3 actions: add the drop in the middle for one filter
and at the beginning for another, and see if both are visible
with the patch.
If you don't have time I can test tomorrow morning.

> The usage of tc_action->order is inherently incorrect for shared actions
> and I don't really understand the reason for using it in first place.
> I'm sending RFC patch that fixes the issue by just using action position
> in tb array for attrtype instead of order field, and it seems to solve
> both issues for me. Please check it out to verify that I'm not breaking
> something. Also, please advise on "fixes" tag since this problem doesn't
> seem to be directly caused by my act API refactoring.
> 

I don't know when this broke, then.
It seems to be working correctly on the machine I am on right now
(4.4) but broken on 4.19, so somewhere in between things broke.
I don't have other kernels to compare with at the moment.
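
If anyone has spare cycles, a straight bisect between those two points
should pinpoint it; roughly (illustrative only, using the reproducer
from earlier in the thread as the test):

git bisect start v4.19 v4.4
# at each step: build + boot the kernel, run the reproducer, then mark it:
git bisect good    # if the shared action is still listed on both filters
git bisect bad     # if it has disappeared from one of them
git bisect reset   # once the first bad commit has been identified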

cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-22 17:24                         ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Jamal Hadi Salim
@ 2019-05-22 17:49                           ` Vlad Buslov
  2019-05-22 18:23                             ` Jamal Hadi Salim
  0 siblings, 1 reply; 32+ messages in thread
From: Vlad Buslov @ 2019-05-22 17:49 UTC (permalink / raw)
  To: Jamal Hadi Salim
  Cc: Vlad Buslov, Cong Wang, Edward Cree, Jiri Pirko,
	Pablo Neira Ayuso, David Miller, netdev, Andy Gospodarek,
	Jakub Kicinski, Michael Chan, Vishal Kulkarni, Lucas Bates


On Wed 22 May 2019 at 20:24, Jamal Hadi Salim <jhs@mojatatu.com> wrote:
> On 2019-05-22 11:08 a.m., Vlad Buslov wrote:
>>
>> On Tue 21 May 2019 at 16:23, Vlad Buslov <vladbu@mellanox.com> wrote:
>
>>
>> It seems that culprit in this case is tc_action->order field. It is used
>> as nla attrtype when dumping actions. Initially it is set according to
>> ordering of actions of filter that creates them. However, it is
>> overwritten in tca_action_gd() each time action is dumped through action
>> API (according to action position in tb array) and when new filter is
>> attached to shared action (according to action order on the filter).
>> With this we have another way to reproduce the bug:
>>
>> sudo tc qdisc add dev lo ingress
>>
>> #shared action is the first action (order 1)
>> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
>> match ip dst 127.0.0.8/32 flowid 1:10 \
>> action drop index 104 \
>> action vlan push id 100 protocol 802.1q
>>
>> #shared action is the the second action (order 2)
>> sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
>> match ip dst 127.0.0.10/32 flowid 1:10 \
>> action vlan push id 101 protocol 802.1q \
>> action drop index 104
>>
>> # Now action is only visible on one filter
>> sudo tc -s filter ls dev lo parent ffff: protocol ip
>>
>
> Ok, thanks for chasing this. A test case i had in mind is to
> maybe have 3 actions. Add the drop in the middle for one
> and at the begging for another and see if they are visible
> with the patch.
> If you dont have time I can test maybe AM tommorow.

Works with my patch:

~$ sudo tc qdisc del dev lo ingress
~$ sudo tc qdisc add dev lo ingress
~$ sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
> match ip dst 127.0.0.8/32 flowid 1:10 \
> action drop index 104 \
> mirred egress redirect dev ens1f0 \
> action vlan push id 100 protocol 802.1q
~$ sudo tc filter add dev lo parent ffff: protocol ip prio 8 u32 \
> match ip dst 127.0.0.10/32 flowid 1:10 \
> action vlan push id 101 protocol 802.1q \
> action drop index 104 \
> mirred egress redirect dev ens1f0
~$ sudo tc -s filter ls dev lo parent ffff: protocol ip
filter pref 8 u32 chain 0
filter pref 8 u32 chain 0 fh 800: ht divisor 1
filter pref 8 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:10 not_in_hw  (rule hit 0 success 0)
  match 7f000008/ffffffff at 16 (success 0 )
        action order 1: gact action drop
         random type none pass val 0
         index 104 ref 2 bind 2 installed 8 sec used 8 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: mirred (Egress Redirect to device ens1f0) stolen
        index 1 ref 1 bind 1 installed 8 sec used 8 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 3: vlan  push id 100 protocol 802.1Q priority 0 pipe
         index 1 ref 1 bind 1 installed 8 sec used 8 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

filter pref 8 u32 chain 0 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:10 not_in_hw  (rule hit 0 success 0)
  match 7f00000a/ffffffff at 16 (success 0 )
        action order 1: vlan  push id 101 protocol 802.1Q priority 0 pipe
         index 2 ref 1 bind 1 installed 4 sec used 4 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 2: gact action drop
         random type none pass val 0
         index 104 ref 2 bind 2 installed 8 sec used 8 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0

        action order 3: mirred (Egress Redirect to device ens1f0) stolen
        index 2 ref 1 bind 1 installed 4 sec used 4 sec
        Action statistics:
        Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
        backlog 0b 0p requeues 0


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-22 17:49                           ` Vlad Buslov
@ 2019-05-22 18:23                             ` Jamal Hadi Salim
  2019-05-22 18:26                               ` Jamal Hadi Salim
  0 siblings, 1 reply; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-22 18:23 UTC (permalink / raw)
  To: Vlad Buslov
  Cc: Cong Wang, Edward Cree, Jiri Pirko, Pablo Neira Ayuso,
	David Miller, netdev, Andy Gospodarek, Jakub Kicinski,
	Michael Chan, Vishal Kulkarni, Lucas Bates

On 2019-05-22 1:49 p.m., Vlad Buslov wrote:
> 
> On Wed 22 May 2019 at 20:24, Jamal Hadi Salim <jhs@mojatatu.com> wrote:

>> Ok, thanks for chasing this. A test case i had in mind is to
>> maybe have 3 actions. Add the drop in the middle for one
>> and at the begging for another and see if they are visible
>> with the patch.
>> If you dont have time I can test maybe AM tommorow.
> 
> Works with my patch:
> 

Ok, thanks Vlad.

Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>

cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-22 18:23                             ` Jamal Hadi Salim
@ 2019-05-22 18:26                               ` Jamal Hadi Salim
  0 siblings, 0 replies; 32+ messages in thread
From: Jamal Hadi Salim @ 2019-05-22 18:26 UTC (permalink / raw)
  To: Vlad Buslov
  Cc: Cong Wang, Edward Cree, Jiri Pirko, Pablo Neira Ayuso,
	David Miller, netdev, Andy Gospodarek, Jakub Kicinski,
	Michael Chan, Vishal Kulkarni, Lucas Bates

On 2019-05-22 2:23 p.m., Jamal Hadi Salim wrote:
> On 2019-05-22 1:49 p.m., Vlad Buslov wrote:
>>
>> On Wed 22 May 2019 at 20:24, Jamal Hadi Salim <jhs@mojatatu.com> wrote:
> 
>>> Ok, thanks for chasing this. A test case i had in mind is to
>>> maybe have 3 actions. Add the drop in the middle for one
>>> and at the begging for another and see if they are visible
>>> with the patch.
>>> If you dont have time I can test maybe AM tommorow.
>>
>> Works with my patch:
>>
> 
> Ok, thanks Vlad.
> 
> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
> 
> cheers,
> jamal


Actually this is net/stable material.
Can you resubmit to net and add my ACK?

cheers,
jamal

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2019-05-22 18:26 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-15 19:39 [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
2019-05-15 19:42 ` [RFC PATCH v2 net-next 1/3] flow_offload: copy tcfa_index into flow_action_entry Edward Cree
2019-05-18 20:30   ` Jamal Hadi Salim
2019-05-15 19:42 ` [RFC PATCH v2 net-next 2/3] flow_offload: restore ability to collect separate stats per action Edward Cree
2019-05-18 20:35   ` Jamal Hadi Salim
2019-05-20 15:46     ` Edward Cree
2019-05-20 15:55       ` Jamal Hadi Salim
2019-05-15 19:42 ` [RFC PATCH v2 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h Edward Cree
2019-05-17 15:27 ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
2019-05-17 17:14   ` Edward Cree
2019-05-18 20:39     ` Jamal Hadi Salim
2019-05-20 15:26       ` Edward Cree
2019-05-20 15:38         ` Jamal Hadi Salim
2019-05-20 16:10           ` Edward Cree
2019-05-20 16:29             ` Jamal Hadi Salim
2019-05-20 16:32               ` Jamal Hadi Salim
2019-05-20 17:58               ` Edward Cree
2019-05-20 18:36               ` Edward Cree
2019-05-20 21:12                 ` Jamal Hadi Salim
2019-05-21 12:46                   ` Jamal Hadi Salim
2019-05-21 13:23                     ` Vlad Buslov
2019-05-22 15:08                       ` Vlad Buslov
2019-05-22 15:13                         ` [RFC PATCH net-next] net: sched: don't use tc_action->order during action dump Vlad Buslov
2019-05-22 17:24                         ` [RFC PATCH v2 net-next 0/3] flow_offload: Re-add per-action statistics Jamal Hadi Salim
2019-05-22 17:49                           ` Vlad Buslov
2019-05-22 18:23                             ` Jamal Hadi Salim
2019-05-22 18:26                               ` Jamal Hadi Salim
2019-05-19  0:22   ` Pablo Neira Ayuso
2019-05-20 15:37     ` Edward Cree
2019-05-20 15:40       ` Jamal Hadi Salim
2019-05-20 15:44       ` Pablo Neira Ayuso
2019-05-18 20:30 ` Jamal Hadi Salim
