* [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
@ 2019-05-22 21:37 Edward Cree
  2019-05-22 21:38 ` [PATCH v3 net-next 1/3] flow_offload: add a cookie to flow_action_entry Edward Cree
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-22 21:37 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

When the flow_offload infrastructure was added, per-action statistics,
 which were previously possible for drivers to support in TC offload,
 were not plumbed through, perhaps because the drivers in the tree did
 not implement them.
In TC (and in the previous offload API) statistics are per-action,
 though generally only on 'delivery' actions like mirred, ok and shot.
 Actions also have an index, which may be supplied by the user, which
 allows the sharing of entities such as counters between multiple rules.
 (These indices are per action type, so 'drop index 1' and 'mirred index
 1' are different actions.  However, this means that a mirror and a
 redirect action are in the same index namespace, since they're both
 mirred actions.)  The existing driver implementations did not support
 this, however, instead allocating a single counter per rule.  The
 flow_offload API did not support this either, as (a) the information
 that two actions were actually the same action never reached the
 driver, and (b) the TC_CLSFLOWER_STATS callback was only able to return
 a single set of stats which were added to all counters for actions on
 the rule.  Patch #1 of this series fixes (a) by adding a cookie member
 to struct flow_action_entry, while patch #2 fixes (b) by changing the
 .stats member of struct tc_cls_flower_offload from a single set of
 stats to an array.  Existing drivers are changed to retain their
 current behaviour, by adding their one set of stats to every element of
 this array (note however that this will behave oddly if an action
 appears more than once on a rule; I'm not sure whether that's a
 problem).  I have not tested these driver changes on corresponding
 hardware.
Patch #3 ensures that flow_offload.h is buildable in isolation, rather
 than relying on linux/kernel.h having already been included elsewhere;
 this is logically separable from the rest of the series.
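
For illustration only (no in-tree driver does this yet), a minimal sketch of
 how a driver could use the cookie plus the per-action stats array; all the
 my_*() types and helpers here are invented, only the flow_offload pieces
 come from this series:

static int my_offload_rule(struct my_priv *priv, struct my_rule *rule,
			   struct tc_cls_flower_offload *f)
{
	struct flow_action_entry *entry;
	int i;

	flow_action_for_each(i, entry, &f->rule->action) {
		/* Same cookie => same TC action => share one counter */
		struct my_counter *cnt = my_counter_get(priv, entry->cookie);

		if (!cnt)
			return -ENOMEM; /* sketch: no unwind shown */
		rule->counters[i] = cnt;
		rule->n_counters = i + 1;
		/* ... program the hardware action to use cnt ... */
	}
	return 0;
}

static void my_rule_stats(struct my_priv *priv,
			  struct tc_cls_flower_offload *f)
{
	struct my_rule *rule = my_rule_find(priv, f->cookie);
	int i;

	if (!rule)
		return;
	for (i = 0; i < rule->n_counters && i < f->stats->num_entries; i++) {
		struct my_counter *cnt = rule->counters[i];

		/* Fill only this action's slot, rather than the per-rule
		 * loop the existing drivers do.
		 */
		flow_stats_update(&f->stats->entries[i], cnt->bytes,
				  cnt->pkts, cnt->lastused);
	}
}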

Changed in v3:
* replaced action_index with cookie, so drivers don't have to look at
  flow_action_id to distinguish shared actions
* removed RFC tags

Changed in v2:
* use array instead of new callback for getting stats
* remove CVLAN patch (posted separately for net)
* add header inclusion fix

Edward Cree (3):
  flow_offload: add a cookie to flow_action_entry
  flow_offload: restore ability to collect separate stats per action
  flow_offload: include linux/kernel.h from flow_offload.h

 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c     |  6 ++++--
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c | 10 ++++++----
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c  |  4 +++-
 .../ethernet/mellanox/mlxsw/spectrum_flower.c    |  5 +++--
 .../net/ethernet/netronome/nfp/flower/offload.c  |  8 ++++++--
 .../net/ethernet/netronome/nfp/flower/qos_conf.c |  6 ++++--
 include/net/flow_offload.h                       | 13 +++++++++++--
 include/net/pkt_cls.h                            | 13 +++++++++----
 net/core/flow_offload.c                          | 16 ++++++++++++++++
 net/sched/cls_api.c                              |  1 +
 net/sched/cls_flower.c                           |  9 ++++++---
 net/sched/cls_matchall.c                         |  7 +++++--
 12 files changed, 74 insertions(+), 24 deletions(-)



* [PATCH v3 net-next 1/3] flow_offload: add a cookie to flow_action_entry
  2019-05-22 21:37 [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
@ 2019-05-22 21:38 ` Edward Cree
  2019-05-22 21:38 ` [PATCH v3 net-next 2/3] flow_offload: restore ability to collect separate stats per action Edward Cree
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-22 21:38 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

Populated with the address of the struct tc_action from which it was made.
Required for support of shared counters (and possibly other shared per-
 action entities in future).

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/net/flow_offload.h | 1 +
 net/sched/cls_api.c        | 1 +
 2 files changed, 2 insertions(+)

diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index a2df99f9b196..d5f4cc0b45d4 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -139,6 +139,7 @@ enum flow_action_mangle_base {
 
 struct flow_action_entry {
 	enum flow_action_id		id;
+	unsigned long			cookie;
 	union {
 		u32			chain_index;	/* FLOW_ACTION_GOTO */
 		struct net_device	*dev;		/* FLOW_ACTION_REDIRECT */
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index d4699156974a..5411cec17af5 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -3195,6 +3195,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
 		struct flow_action_entry *entry;
 
 		entry = &flow_action->entries[j];
+		entry->cookie = (unsigned long)act;
 		if (is_tcf_gact_ok(act)) {
 			entry->id = FLOW_ACTION_ACCEPT;
 		} else if (is_tcf_gact_shot(act)) {



* [PATCH v3 net-next 2/3] flow_offload: restore ability to collect separate stats per action
  2019-05-22 21:37 [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  2019-05-22 21:38 ` [PATCH v3 net-next 1/3] flow_offload: add a cookie to flow_action_entry Edward Cree
@ 2019-05-22 21:38 ` Edward Cree
  2019-05-22 21:38 ` [PATCH v3 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h Edward Cree
  2019-05-22 22:20 ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Jakub Kicinski
  3 siblings, 0 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-22 21:38 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

In the TC_CLSFLOWER_STATS callback from fl_hw_update_stats(), pass an
 array of struct flow_stats_entry, one for each action in the flow rule.
Current drivers (which do not collect per-action stats, but rather per-
 rule) call flow_stats_update() in a loop with the same values for all
 actions; this is roughly what they did before 3b1903ef97c0, except that
 there is not a helper function (like tcf_exts_stats_update()) that does
 it for them, because we don't want to encourage future drivers to do
 the same thing (and there isn't a need for a preempt_disable() like in
 tcf_exts_stats_update()).

Also do the same in mall_stats_hw_filter()'s TC_CLSMATCHALL_STATS
 callback, since it also uses tcf_exts_stats_update().

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c     |  6 ++++--
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c | 10 ++++++----
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c  |  4 +++-
 .../ethernet/mellanox/mlxsw/spectrum_flower.c    |  5 +++--
 .../net/ethernet/netronome/nfp/flower/offload.c  |  8 ++++++--
 .../net/ethernet/netronome/nfp/flower/qos_conf.c |  6 ++++--
 include/net/flow_offload.h                       | 11 +++++++++--
 include/net/pkt_cls.h                            | 13 +++++++++----
 net/core/flow_offload.c                          | 16 ++++++++++++++++
 net/sched/cls_flower.c                           |  9 ++++++---
 net/sched/cls_matchall.c                         |  7 +++++--
 11 files changed, 71 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 44d6c5743fb9..911178825daf 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -1370,6 +1370,7 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp,
 	struct bnxt_tc_flow_node *flow_node;
 	struct bnxt_tc_flow *flow;
 	unsigned long lastused;
+	int i;
 
 	flow_node = rhashtable_lookup_fast(&tc_info->flow_table,
 					   &tc_flow_cmd->cookie,
@@ -1388,8 +1389,9 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp,
 	lastused = flow->lastused;
 	spin_unlock(&flow->stats_lock);
 
-	flow_stats_update(&tc_flow_cmd->stats, stats.bytes, stats.packets,
-			  lastused);
+	for (i = 0; i < tc_flow_cmd->stats->num_entries; i++)
+		flow_stats_update(&tc_flow_cmd->stats->entries[i],
+				  stats.bytes, stats.packets, lastused);
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
index 6e2d80008a79..cf19ca9f72c5 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
@@ -788,8 +788,8 @@ int cxgb4_tc_flower_stats(struct net_device *dev,
 	struct ch_tc_flower_stats *ofld_stats;
 	struct ch_tc_flower_entry *ch_flower;
 	u64 packets;
+	int ret, i;
 	u64 bytes;
-	int ret;
 
 	ch_flower = ch_flower_lookup(adap, cls->cookie);
 	if (!ch_flower) {
@@ -808,9 +808,11 @@ int cxgb4_tc_flower_stats(struct net_device *dev,
 	if (ofld_stats->packet_count != packets) {
 		if (ofld_stats->prev_packet_count != packets)
 			ofld_stats->last_used = jiffies;
-		flow_stats_update(&cls->stats, bytes - ofld_stats->byte_count,
-				  packets - ofld_stats->packet_count,
-				  ofld_stats->last_used);
+		for (i = 0; i < cls->stats->num_entries; i++)
+			flow_stats_update(&cls->stats->entries[i],
+					  bytes - ofld_stats->byte_count,
+					  packets - ofld_stats->packet_count,
+					  ofld_stats->last_used);
 
 		ofld_stats->packet_count = packets;
 		ofld_stats->byte_count = bytes;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 31cd02f11499..e9fe98a90b66 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3439,6 +3439,7 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
 	u64 lastuse = 0;
 	u64 packets = 0;
 	u64 bytes = 0;
+	int i;
 
 	flow = rhashtable_lookup_fast(tc_ht, &f->cookie, tc_ht_params);
 	if (!flow || !same_flow_direction(flow, flags))
@@ -3478,7 +3479,8 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
 no_peer_counter:
 	mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
 out:
-	flow_stats_update(&f->stats, bytes, packets, lastuse);
+	for (i = 0; i < f->stats->num_entries; i++)
+		flow_stats_update(&f->stats->entries[i], bytes, packets, lastuse);
 
 	return 0;
 }
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 15f804453cd6..ea65b4d832b4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -453,8 +453,8 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
 	struct mlxsw_sp_acl_rule *rule;
 	u64 packets;
 	u64 lastuse;
+	int err, i;
 	u64 bytes;
-	int err;
 
 	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
 					   f->common.chain_index,
@@ -471,7 +471,8 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
 	if (err)
 		goto err_rule_get_stats;
 
-	flow_stats_update(&f->stats, bytes, packets, lastuse);
+	for (i = 0; i < f->stats->num_entries; i++)
+		flow_stats_update(&f->stats->entries[i], bytes, packets, lastuse);
 
 	mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
 	return 0;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 1fbfeb43c538..889839bc11a5 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1132,6 +1132,7 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
 	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_fl_payload *nfp_flow;
 	u32 ctx_id;
+	int i;
 
 	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, netdev);
 	if (!nfp_flow)
@@ -1144,8 +1145,11 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
 	if (!list_empty(&nfp_flow->linked_flows))
 		nfp_flower_update_merge_stats(app, nfp_flow);
 
-	flow_stats_update(&flow->stats, priv->stats[ctx_id].bytes,
-			  priv->stats[ctx_id].pkts, priv->stats[ctx_id].used);
+	for (i = 0; i < flow->stats->num_entries; i++)
+		flow_stats_update(&flow->stats->entries[i],
+				  priv->stats[ctx_id].bytes,
+				  priv->stats[ctx_id].pkts,
+				  priv->stats[ctx_id].used);
 
 	priv->stats[ctx_id].pkts = 0;
 	priv->stats[ctx_id].bytes = 0;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c b/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
index 86e968cd5ffd..0ed202f315ee 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/qos_conf.c
@@ -297,6 +297,7 @@ nfp_flower_stats_rate_limiter(struct nfp_app *app, struct net_device *netdev,
 	struct nfp_stat_pair *prev_stats;
 	u64 diff_bytes, diff_pkts;
 	struct nfp_repr *repr;
+	int i;
 
 	if (!nfp_netdev_is_nfp_repr(netdev)) {
 		NL_SET_ERR_MSG_MOD(extack, "unsupported offload: qos rate limit offload not supported on higher level port");
@@ -319,8 +320,9 @@ nfp_flower_stats_rate_limiter(struct nfp_app *app, struct net_device *netdev,
 	prev_stats->bytes = curr_stats->bytes;
 	spin_unlock_bh(&fl_priv->qos_stats_lock);
 
-	flow_stats_update(&flow->stats, diff_bytes, diff_pkts,
-			  repr_priv->qos_table.last_update);
+	for (i = 0; i < flow->stats->num_entries; i++)
+		flow_stats_update(&flow->stats->entries[i], diff_bytes,
+				  diff_pkts, repr_priv->qos_table.last_update);
 	return 0;
 }
 
diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index d5f4cc0b45d4..ae7cf27cd5e3 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -212,13 +212,20 @@ static inline bool flow_rule_match_key(const struct flow_rule *rule,
 	return dissector_uses_key(rule->match.dissector, key);
 }
 
-struct flow_stats {
+struct flow_stats_entry {
 	u64	pkts;
 	u64	bytes;
 	u64	lastused;
 };
 
-static inline void flow_stats_update(struct flow_stats *flow_stats,
+struct flow_stats {
+	unsigned int			num_entries;
+	struct flow_stats_entry		entries[0];
+};
+
+struct flow_stats *flow_stats_alloc(unsigned int num_actions);
+
+static inline void flow_stats_update(struct flow_stats_entry *flow_stats,
 				     u64 bytes, u64 pkts, u64 lastused)
 {
 	flow_stats->pkts	+= pkts;
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 514e3c80ecc1..0e17ea8ba302 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -344,7 +344,7 @@ static inline void tcf_exts_put_net(struct tcf_exts *exts)
 
 static inline void
 tcf_exts_stats_update(const struct tcf_exts *exts,
-		      u64 bytes, u64 packets, u64 lastuse)
+		      const struct flow_stats *stats)
 {
 #ifdef CONFIG_NET_CLS_ACT
 	int i;
@@ -352,9 +352,14 @@ tcf_exts_stats_update(const struct tcf_exts *exts,
 	preempt_disable();
 
 	for (i = 0; i < exts->nr_actions; i++) {
+		const struct flow_stats_entry *stats_entry;
 		struct tc_action *a = exts->actions[i];
 
-		tcf_action_stats_update(a, bytes, packets, lastuse, true);
+		if (i >= stats->num_entries)
+			break;
+		stats_entry = &stats->entries[i];
+		tcf_action_stats_update(a, stats_entry->bytes, stats_entry->pkts,
+					stats_entry->lastused, true);
 	}
 
 	preempt_enable();
@@ -752,7 +757,7 @@ struct tc_cls_flower_offload {
 	enum tc_fl_command command;
 	unsigned long cookie;
 	struct flow_rule *rule;
-	struct flow_stats stats;
+	struct flow_stats *stats;
 	u32 classid;
 };
 
@@ -772,7 +777,7 @@ struct tc_cls_matchall_offload {
 	struct tc_cls_common_offload common;
 	enum tc_matchall_command command;
 	struct flow_rule *rule;
-	struct flow_stats stats;
+	struct flow_stats *stats;
 	unsigned long cookie;
 };
 
diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
index 5ce7d47a960e..326a86f63714 100644
--- a/net/core/flow_offload.c
+++ b/net/core/flow_offload.c
@@ -158,3 +158,19 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
 	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_OPTS, out);
 }
 EXPORT_SYMBOL(flow_rule_match_enc_opts);
+
+struct flow_stats *flow_stats_alloc(unsigned int num_actions)
+{
+	struct flow_stats *stats;
+
+	stats = kzalloc(sizeof(struct flow_stats) +
+		       sizeof(struct flow_stats_entry) * num_actions,
+		       GFP_KERNEL);
+	if (!stats)
+		return NULL;
+
+	stats->num_entries = num_actions;
+
+	return stats;
+}
+EXPORT_SYMBOL(flow_stats_alloc);
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index f6685fc53119..8775657fb03b 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -482,13 +482,16 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f,
 	cls_flower.command = TC_CLSFLOWER_STATS;
 	cls_flower.cookie = (unsigned long) f;
 	cls_flower.classid = f->res.classid;
+	cls_flower.stats = flow_stats_alloc(tcf_exts_num_actions(&f->exts));
+	if (!cls_flower.stats)
+		goto out_unlock;
 
 	tc_setup_cb_call(block, TC_SETUP_CLSFLOWER, &cls_flower, false);
 
-	tcf_exts_stats_update(&f->exts, cls_flower.stats.bytes,
-			      cls_flower.stats.pkts,
-			      cls_flower.stats.lastused);
+	tcf_exts_stats_update(&f->exts, cls_flower.stats);
+	kfree(cls_flower.stats);
 
+out_unlock:
 	if (!rtnl_held)
 		rtnl_unlock();
 }
diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
index db42d97a2006..b6b7b041fd6a 100644
--- a/net/sched/cls_matchall.c
+++ b/net/sched/cls_matchall.c
@@ -335,11 +335,14 @@ static void mall_stats_hw_filter(struct tcf_proto *tp,
 	tc_cls_common_offload_init(&cls_mall.common, tp, head->flags, NULL);
 	cls_mall.command = TC_CLSMATCHALL_STATS;
 	cls_mall.cookie = cookie;
+	cls_mall.stats = flow_stats_alloc(tcf_exts_num_actions(&head->exts));
+	if (!cls_mall.stats)
+		return;
 
 	tc_setup_cb_call(block, TC_SETUP_CLSMATCHALL, &cls_mall, false);
 
-	tcf_exts_stats_update(&head->exts, cls_mall.stats.bytes,
-			      cls_mall.stats.pkts, cls_mall.stats.lastused);
+	tcf_exts_stats_update(&head->exts, cls_mall.stats);
+	kfree(cls_mall.stats);
 }
 
 static int mall_dump(struct net *net, struct tcf_proto *tp, void *fh,



* [PATCH v3 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h
  2019-05-22 21:37 [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  2019-05-22 21:38 ` [PATCH v3 net-next 1/3] flow_offload: add a cookie to flow_action_entry Edward Cree
  2019-05-22 21:38 ` [PATCH v3 net-next 2/3] flow_offload: restore ability to collect separate stats per action Edward Cree
@ 2019-05-22 21:38 ` Edward Cree
  2019-05-22 22:20 ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Jakub Kicinski
  3 siblings, 0 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-22 21:38 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

flow_stats_update() uses max_t, so ensure we have that defined.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/net/flow_offload.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index ae7cf27cd5e3..7554e88581d2 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -1,6 +1,7 @@
 #ifndef _NET_FLOW_OFFLOAD_H
 #define _NET_FLOW_OFFLOAD_H
 
+#include <linux/kernel.h>
 #include <net/flow_dissector.h>
 
 struct flow_match {


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-22 21:37 [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
                   ` (2 preceding siblings ...)
  2019-05-22 21:38 ` [PATCH v3 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h Edward Cree
@ 2019-05-22 22:20 ` Jakub Kicinski
  2019-05-23 13:19   ` Jamal Hadi Salim
  2019-05-23 16:21   ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  3 siblings, 2 replies; 24+ messages in thread
From: Jakub Kicinski @ 2019-05-22 22:20 UTC (permalink / raw)
  To: Edward Cree
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On Wed, 22 May 2019 22:37:16 +0100, Edward Cree wrote:
> * removed RFC tags

Why?  There is still no upstream user for this (my previous 
objections of this being only partially correct aside).


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-22 22:20 ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Jakub Kicinski
@ 2019-05-23 13:19   ` Jamal Hadi Salim
  2019-05-23 16:11     ` Jakub Kicinski
  2019-05-23 16:21   ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
  1 sibling, 1 reply; 24+ messages in thread
From: Jamal Hadi Salim @ 2019-05-23 13:19 UTC (permalink / raw)
  To: Jakub Kicinski, Edward Cree
  Cc: Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Michael Chan, Vishal Kulkarni

On 2019-05-22 6:20 p.m., Jakub Kicinski wrote:
> On Wed, 22 May 2019 22:37:16 +0100, Edward Cree wrote:
>> * removed RFC tags
> 
> Why?  There is still no upstream user for this (my previous
> objections of this being only partially correct aside).


IIRC your point was to get the dumping to work with RTM_GETACTION.

That would still work here, no? There will be some latency
based on the frequency of hardware->kernel stats updates.

cheers,
jamal


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-23 13:19   ` Jamal Hadi Salim
@ 2019-05-23 16:11     ` Jakub Kicinski
  2019-05-23 16:40       ` Edward Cree
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2019-05-23 16:11 UTC (permalink / raw)
  To: Jamal Hadi Salim
  Cc: Edward Cree, Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev,
	Cong Wang, Andy Gospodarek, Michael Chan, Vishal Kulkarni

On Thu, 23 May 2019 09:19:49 -0400, Jamal Hadi Salim wrote:
> On 2019-05-22 6:20 p.m., Jakub Kicinski wrote:
> > On Wed, 22 May 2019 22:37:16 +0100, Edward Cree wrote:  
> >> * removed RFC tags  
> > 
> > Why?  There is still no upstream user for this (my previous
> > objections of this being only partially correct aside).  
> 
> 
> IIRC your point was to get the dumping to work with RTM_GETACTION.

Yup.

> That would still work here, no? There will be some latency
> based on the frequency of hardware->kernel stats updates.

I don't think so, I think the stats are only updated on classifier
dumps in Ed's code.  But we can't be 100% sure without seeing driver
code.


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-22 22:20 ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Jakub Kicinski
  2019-05-23 13:19   ` Jamal Hadi Salim
@ 2019-05-23 16:21   ` Edward Cree
  2019-05-23 16:33     ` Jakub Kicinski
  1 sibling, 1 reply; 24+ messages in thread
From: Edward Cree @ 2019-05-23 16:21 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On 22/05/2019 23:20, Jakub Kicinski wrote:
> On Wed, 22 May 2019 22:37:16 +0100, Edward Cree wrote:
>> * removed RFC tags
> Why?  There is still no upstream user for this
Well, patch #2 updates drivers to the changed API, which is kinda an
 "upstream user" if you squint... admittedly patch #1 is a bit dubious
 in that regard; I was kinda hoping someone on one of the existing
 drivers would either drop in a patch or at least point me at the info
 needed to write one for their HW, but no luck :-(
My defence re patch #1 is that I'm not really adding a 'new feature'
 to the kernel, just restoring something that used to be there.  It's
 a grey area and I'm waiting for the maintainer(s) to say yea or nay
 (Jiri noped v1 but that had a much bigger unused interface (the
 TC_CLSFLOWER_STATS_BYINDEX callback) so I'm still hoping).
> (my previous objections of this being only partially correct aside).
I didn't realise you still had outstanding objections, sorry.
Regarding RTM_GETACTION dumping, it should be possible to add an offload
 callback based on this, which would just pass the action cookie to the
 driver.  Then the driver just has to look the cookie up in its stats
 tables to find the counter.  That would deal with the 'stale stats'
 issue.
But right now, that really _would_ be an API without a user; still, I
 might be able to do that as an RFC patch to prove it's possible, would
 that help this series to go in as-is?

-Ed


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-23 16:21   ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
@ 2019-05-23 16:33     ` Jakub Kicinski
  2019-05-23 17:09       ` Edward Cree
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2019-05-23 16:33 UTC (permalink / raw)
  To: Edward Cree
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On Thu, 23 May 2019 17:21:49 +0100, Edward Cree wrote:
> On 22/05/2019 23:20, Jakub Kicinski wrote:
> > On Wed, 22 May 2019 22:37:16 +0100, Edward Cree wrote:  
> >> * removed RFC tags  
> > Why?  There is still no upstream user for this  
> Well, patch #2 updates drivers to the changed API, which is kinda an
>  "upstream user" if you squint... admittedly patch #1 is a bit dubious
>  in that regard;

Both 1 and 2 are dubious if you ask me.  Complexity added for no
upstream gain.  3 is a good patch, perhaps worth posting it separately
rather than keeping it hostage to the rest of the series? :)

Side note - it's not clear why open code the loop in every driver rather
than having flow_stats_update() handle the looping?
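
(Roughly this, i.e. a helper so drivers keep a one-line call; just a sketch
using the structs from patch #2, and the name is made up:)

static inline void flow_stats_update_all(struct flow_stats *stats,
					 u64 bytes, u64 pkts, u64 lastused)
{
	int i;

	/* Spread the same per-rule numbers over every action's entry,
	 * as the converted drivers now open-code.
	 */
	for (i = 0; i < stats->num_entries; i++)
		flow_stats_update(&stats->entries[i], bytes, pkts, lastused);
}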


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-23 16:11     ` Jakub Kicinski
@ 2019-05-23 16:40       ` Edward Cree
  2019-05-23 17:25         ` Jakub Kicinski
  0 siblings, 1 reply; 24+ messages in thread
From: Edward Cree @ 2019-05-23 16:40 UTC (permalink / raw)
  To: Jakub Kicinski, Jamal Hadi Salim
  Cc: Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Michael Chan, Vishal Kulkarni

On 23/05/2019 17:11, Jakub Kicinski wrote:
> On Thu, 23 May 2019 09:19:49 -0400, Jamal Hadi Salim wrote:
>> That would still work here, no? There will be some latency
>> based on the frequency of hardware->kernel stats updates.
> I don't think so, I think the stats are only updated on classifier
> dumps in Ed's code.
Yep currently that's the case, but not as an inherent restriction (see
 my other mail).

> But we can't be 100% sure without seeing driver code.
Would it help if I posted my driver code to the list?  It's gonna be
 upstream eventually anyway, it's just that the driver as a whole
 isn't really in a shape to be merged just yet (mainly 'cos the
 hardware folks are planning some breaking changes).  But I can post
 my TC handling code, or even the whole driver, if demonstrating how
 these interfaces can be used will help matters.

-Ed


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-23 16:33     ` Jakub Kicinski
@ 2019-05-23 17:09       ` Edward Cree
  0 siblings, 0 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-23 17:09 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On 23/05/2019 17:33, Jakub Kicinski wrote:
> On Thu, 23 May 2019 17:21:49 +0100, Edward Cree wrote:
>> Well, patch #2 updates drivers to the changed API, which is kinda an
>>  "upstream user" if you squint... admittedly patch #1 is a bit dubious
>>  in that regard;
> Both 1 and 2 are dubious if you ask me.  Complexity added for no
> upstream gain.
Just to explain the pickle I'm in: when this hw finally ships, we intend
 to both submit a driver upstream and also provide an out-of-tree driver
 that builds on older kernels (same as we do with existing hardware).  Now
 there are already some kernel releases (5.1 and, when it comes out of -rc,
 5.2) on which we'll just have to say "yeah, shared actions don't work here
 and you'll get slightly-bogus stats if you use them".  But I'd like to
 minimise the set of kernels on which that's the case.
I realise that from a strict/absolutist perspective, that's "not the
 upstream kernel's problem"; but at the same time it's qualitatively
 different from, say, a vendor asking for hooks for a proprietary driver
 that will *never* be upstream.  Does a strict interpretation of the rules
 here really make for a better kernel?  There *will* be an upstream gain
 eventually (when our driver goes in), just not in this release; it's not
 like the kernel's never taken patches on that basis before (with a
 specific use in mind, i.e. distinct from "oh, there *might* be an
 upstream gain if someone uses this someday").

> 3 is a good patch, perhaps worth posting it separately
> rather than keeping it hostage to the rest of the series? :)

If the series gets a definitive nack or needs a respin then I'll do that;
 otherwise posting it separately just makes more work for everyone.
 

> Side note - it's not clear why open code the loop in every driver rather
> than having flow_stats_update() handle the looping?
Because (as I alluded in the patch description) we don't *want* people to do
 the looping, we want them to implement per-action (rather than per-rule)
 stats.  Having such a loop in flow_stats_update() would encourage them to
 think "there is a function to do this, other drivers call it, so it must be
 what you're meant to do".  Whereas if we make the Wrong Thing be a bit ugly,
 hopefully developers will do the Right Thing instead.
(Perhaps I should add some comments into the drivers, instead, basically
 saying "don't copy this, it's bogus"?)
IMHO the original (pre-5.1) tcf_exts_stats_update() never should have had a
 loop in the first place; it was a fundamentally broken API wrt TC semantics.

-Ed


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-23 16:40       ` Edward Cree
@ 2019-05-23 17:25         ` Jakub Kicinski
  2019-05-24 13:09           ` Edward Cree
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2019-05-23 17:25 UTC (permalink / raw)
  To: Edward Cree
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On Thu, 23 May 2019 17:40:08 +0100, Edward Cree wrote:
> On 23/05/2019 17:11, Jakub Kicinski wrote:
> > On Thu, 23 May 2019 09:19:49 -0400, Jamal Hadi Salim wrote:  
> >> That would still work here, no? There will be some latency
> >> based on the frequency of hardware->kernel stats updates.  
> > I don't think so, I think the stats are only updated on classifier
> > dumps in Ed's code.  
> Yep currently that's the case, but not as an inherent restriction (see
>  my other mail).

I think we can all agree that the current stats offload only reporting
up-to-date HW stats when classifiers are dumped makes slight mockery of
the kernel API guarantees.  I feel like HW vendors found a subset of
the ABI to poke things in and out of the hardware, and things work
correctly if you limit yourself to that very subset.  So you only get
up-to-date stats if you dump classifiers, if you dump actions - no dice.

Whether it's on you to fix this is debatable :)  Since you're diving
into actions and adding support for shared ones, I'd say it's time to
rectify the situation.

Let's look at it this way - if you fix the RTM_GETACTION you will
necessarily add the cookie and all the other stuff you need in your
upcoming driver :)

> > But we can't be 100% sure without seeing driver code.  
> Would it help if I posted my driver code to the list?  It's gonna be
>  upstream eventually anyway, it's just that the driver as a whole
>  isn't really in a shape to be merged just yet (mainly 'cos the
>  hardware folks are planning some breaking changes).  But I can post
>  my TC handling code, or even the whole driver, if demonstrating how
>  these interfaces can be used will help matters.

From my perspective - you answered the question so I'm at 100% now ;)


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-23 17:25         ` Jakub Kicinski
@ 2019-05-24 13:09           ` Edward Cree
  2019-05-24 13:57             ` Edward Cree
  2019-05-29 20:07             ` [RFC PATCH net-next 0/2] RTM_GETACTION stats offload Edward Cree
  0 siblings, 2 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-24 13:09 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On 23/05/2019 18:25, Jakub Kicinski wrote:
> Whether it's on you to fix this is debatable :) Since you're diving
> into actions and adding support for shared ones, I'd say it's time to
> rectify the situation.
>
> Let's look at it this way - if you fix the RTM_GETACTION you will
> necessarily add the cookie and all the other stuff you need in your
> upcoming driver :)
Yes, but then I don't have an upstream user for RTM_GETACTION offload!
I'd need to teach an existing driver to look up counters by action
 cookie, which would probably mean a hashtable, a bunch of locking and
 lifetime rules... not something I'm comfortable doing in someone else's
 driver that I don't know the innards of and probably don't have
 hardware to test on.

I'll put together an RFC patch, anyway, and maybe some other driver
 maintainer (hint, hint!) will follow up with a user ;-)
Should it be a new entry in enum tc_setup_type (TC_SETUP_ACTION), or in
 enum tc_fl_command (TC_CLSFLOWER_GETACTION)?  I'd imagine the former as
 we want this to (a) be generic beyond just flower and perhaps also (b)
 support creating and deleting actions as well (not just dumping them).

-Ed


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 13:09           ` Edward Cree
@ 2019-05-24 13:57             ` Edward Cree
  2019-05-24 14:44               ` Jamal Hadi Salim
  2019-05-24 17:03               ` Jakub Kicinski
  2019-05-29 20:07             ` [RFC PATCH net-next 0/2] RTM_GETACTION stats offload Edward Cree
  1 sibling, 2 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-24 13:57 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On 24/05/2019 14:09, Edward Cree wrote:
> I'll put together an RFC patch, anyway
Argh, there's a problem: an action doesn't have a (directly) associated
 block, and all the TC offload machinery nowadays is built around blocks.
Since this action might have been used in _any_ block (and afaik there's
 no way, from the action, to find which) we'd have to make callbacks on
 _every_ block in the system, which sounds like it'd perform even worse
 than the rule-dumping approach.
Any ideas?

-Ed


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 13:57             ` Edward Cree
@ 2019-05-24 14:44               ` Jamal Hadi Salim
  2019-05-24 15:09                 ` Edward Cree
  2019-05-24 17:03               ` Jakub Kicinski
  1 sibling, 1 reply; 24+ messages in thread
From: Jamal Hadi Salim @ 2019-05-24 14:44 UTC (permalink / raw)
  To: Edward Cree, Jakub Kicinski
  Cc: Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Michael Chan, Vishal Kulkarni

On 2019-05-24 9:57 a.m., Edward Cree wrote:
> On 24/05/2019 14:09, Edward Cree wrote:
>> I'll put together an RFC patch, anyway
> Argh, there's a problem: an action doesn't have a (directly) associated
>   block, and all the TC offload machinery nowadays is built around blocks.
> Since this action might have been used in _any_ block (and afaik there's
>   no way, from the action, to find which) we'd have to make callbacks on
>   _every_ block in the system, which sounds like it'd perform even worse
>   than the rule-dumping approach.
> Any ideas?

If you have the hardware just send the stats periodically to the kernel,
and manage to map each stat to an action type/index (which I think your
cookie seemed able to do), wouldn't this solve the problem?

cheers,
jamal



* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 14:44               ` Jamal Hadi Salim
@ 2019-05-24 15:09                 ` Edward Cree
  2019-05-24 17:59                   ` Jamal Hadi Salim
  0 siblings, 1 reply; 24+ messages in thread
From: Edward Cree @ 2019-05-24 15:09 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jakub Kicinski
  Cc: Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Michael Chan, Vishal Kulkarni

On 24/05/2019 15:44, Jamal Hadi Salim wrote:
> On 2019-05-24 9:57 a.m., Edward Cree wrote:
>> Argh, there's a problem: an action doesn't have a (directly) associated
>>  block, and all the TC offload machinery nowadays is built around blocks.
>> Since this action might have been used in _any_ block (and afaik there's
>>  no way, from the action, to find which) we'd have to make callbacks on
>>  _every_ block in the system, which sounds like it'd perform even worse
>>  than the rule-dumping approach.
>> Any ideas?
>
> If you have the hardware just send the stats periodically to the kernel,
> and manage to map each stat to an action type/index (which I think your
> cookie seemed able to do), wouldn't this solve the problem?
Oh, a push rather than pull model?
That could work, but I worry about the overhead in the case of very large
 numbers of rules (the OVS people talk about 1 million plus) each pushing
 a stats update to the kernel every few seconds.  I'd much rather only
 fetch stats when the kernel asks for them.  (Although admittedly drivers
 already have to periodically fetch at least the stats of encap actions
 so that they know whether to keepalive the ARP entries.)
Also, getting from the cookie back to the action wouldn't be trivial; if
 there were only TC it would (just cast cookie back to a struct
 tc_action *) but soon some of these will be NF actions instead.  So
 something slightly smarter will be required.
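
(For TC alone the naive mapping would be roughly the sketch below;
 my_push_stats() is an invented driver hook and this ignores locking and
 action lifetime entirely, which is rather the point:)

static void my_push_stats(unsigned long cookie, u64 bytes, u64 pkts,
			  u64 lastused)
{
	/* Only works while TC is the sole source of cookies, and only if
	 * the action is guaranteed still to be alive.
	 */
	struct tc_action *a = (struct tc_action *)cookie;

	tcf_action_stats_update(a, bytes, pkts, lastused, true);
}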

-Ed


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 13:57             ` Edward Cree
  2019-05-24 14:44               ` Jamal Hadi Salim
@ 2019-05-24 17:03               ` Jakub Kicinski
  2019-05-24 17:27                 ` Edward Cree
  1 sibling, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2019-05-24 17:03 UTC (permalink / raw)
  To: Edward Cree
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On Fri, 24 May 2019 14:57:24 +0100, Edward Cree wrote:
> On 24/05/2019 14:09, Edward Cree wrote:
> > I'll put together an RFC patch, anyway  
> Argh, there's a problem: an action doesn't have a (directly) associated
>  block, and all the TC offload machinery nowadays is built around blocks.
> Since this action might have been used in _any_ block (and afaik there's
>  no way, from the action, to find which) we'd have to make callbacks on
>  _every_ block in the system, which sounds like it'd perform even worse
>  than the rule-dumping approach.
> Any ideas?

Simplest would be to keep a list of offloaders per action, but maybe
something more clever would appear as one rummages through the code.

We're happy to help out with the driver side if you get stuck on that,
BTW.


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 17:03               ` Jakub Kicinski
@ 2019-05-24 17:27                 ` Edward Cree
  2019-05-24 17:44                   ` Jakub Kicinski
  0 siblings, 1 reply; 24+ messages in thread
From: Edward Cree @ 2019-05-24 17:27 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On 24/05/2019 18:03, Jakub Kicinski wrote:
> On Fri, 24 May 2019 14:57:24 +0100, Edward Cree wrote:
>> Argh, there's a problem: an action doesn't have a (directly) associated
>>  block, and all the TC offload machinery nowadays is built around blocks.
>> Since this action might have been used in _any_ block (and afaik there's
>>  no way, from the action, to find which) we'd have to make callbacks on
>>  _every_ block in the system, which sounds like it'd perform even worse
>>  than the rule-dumping approach.
>> Any ideas?
> Simplest would be to keep a list of offloaders per action, but maybe
> something more clever would appear as one rummages through the code.
Problem with that is where to put the list heads; you'd need something that
 was allocated per action x block, for those blocks on which at least one
 offloader handled the rule (in_hw_count > 0).
Then you'd also have to update that when a driver bound/unbound from a
 block (fl_reoffload() time).
Best I can think of is keeping the cls_flower.rule allocated in
 fl_hw_replace_filter() around instead of immediately freeing it, and
 having a list_head in each flow_action_entry.  But that really looks like
 an overcomplicated mess.
TBH I'm starting to wonder if just calling all tc blocks in existence is
 really all that bad.  Is there a plausible use case with huge numbers of
 bound blocks?

-Ed


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 17:27                 ` Edward Cree
@ 2019-05-24 17:44                   ` Jakub Kicinski
  2019-05-28 16:27                     ` Edward Cree
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2019-05-24 17:44 UTC (permalink / raw)
  To: Edward Cree
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On Fri, 24 May 2019 18:27:39 +0100, Edward Cree wrote:
> On 24/05/2019 18:03, Jakub Kicinski wrote:
> > On Fri, 24 May 2019 14:57:24 +0100, Edward Cree wrote:  
> >> Argh, there's a problem: an action doesn't have a (directly) associated
> >>  block, and all the TC offload machinery nowadays is built around blocks.
> >> Since this action might have been used in _any_ block (and afaik there's
> >>  no way, from the action, to find which) we'd have to make callbacks on
> >>  _every_ block in the system, which sounds like it'd perform even worse
> >>  than the rule-dumping approach.
> >> Any ideas?  
> > Simplest would be to keep a list of offloaders per action, but maybe
> > something more clever would appear as one rummages through the code.  
> Problem with that is where to put the list heads; you'd need something that
>  was allocated per action x block, for those blocks on which at least one
>  offloader handled the rule (in_hw_count > 0).

I was thinking of having the list per action, but I haven't looked at
the code TBH.  Driver would then request to be added to each action's
list..

> Then you'd also have to update that when a driver bound/unbound from a
>  block (fl_reoffload() time).
> Best I can think of is keeping the cls_flower.rule allocated in
>  fl_hw_replace_filter() around instead of immediately freeing it, and
>  having a list_head in each flow_action_entry.  But that really looks like
>  an overcomplicated mess.
> TBH I'm starting to wonder if just calling all tc blocks in existence is
>  really all that bad.  Is there a plausible use case with huge numbers of
>  bound blocks?

Once per RTM_GETACTION?  The simplicity of that has its allure..
It doesn't give you an upstream user for a cookie, though :S


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 15:09                 ` Edward Cree
@ 2019-05-24 17:59                   ` Jamal Hadi Salim
  0 siblings, 0 replies; 24+ messages in thread
From: Jamal Hadi Salim @ 2019-05-24 17:59 UTC (permalink / raw)
  To: Edward Cree, Jakub Kicinski
  Cc: Jiri Pirko, Pablo Neira Ayuso, David Miller, netdev, Cong Wang,
	Andy Gospodarek, Michael Chan, Vishal Kulkarni

On 2019-05-24 11:09 a.m., Edward Cree wrote:

> Oh, a push rather than pull model?
Right. I thought the switchdev (or what used to be called switchdev)
did a push of some of the tables periodically.

> That could work, but I worry about the overhead in the case of very large
>   numbers of rules (the OVS people talk about 1 million plus) each pushing
>   a stats update to the kernel every few seconds.  I'd much rather only
>   fetch stats when the kernel asks for them.  (Although admittedly drivers
>   already have to periodically fetch at least the stats of encap actions
>   so that they know whether to keepalive the ARP entries.)

So that is part of the challenge. We had this discussion in Prague in
the tc workshop. Andy (on Cc) presented and a discussion ensued
(recorded on the video, about minute 24).

If you are going to poll the hardware from the kernel then, if possible,
you should put a time filter for "last changed" and only get things
that have changed in the last delta.
If you are using a model where the hardware pushes then ability to set
a time filter for "last changed" would be needed.

Note, I've had to deal with this problem with tc actions from a kernel-user
point of view.
See commit e62e484df049 - the idea is to not retrieve the 1M+ stats
but rather only things that have changed since last attempt.
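
(On the driver side that could look something like this sketch; all the
my_*() bits are invented:)

static void my_poll_changed_counters(struct my_priv *priv,
				     unsigned long since)
{
	struct my_counter *cnt;

	list_for_each_entry(cnt, &priv->counters, list) {
		/* Skip anything the hardware hasn't touched since the
		 * last poll, so a million idle counters cost nothing.
		 */
		if (time_before(cnt->lastused, since))
			continue;
		my_report_counter(priv, cnt);
	}
}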

> Also, getting from the cookie back to the action wouldn't be trivial; if
>   there were only TC it would (just cast cookie back to a struct
>   tc_action *) but soon some of these will be NF actions instead.  So
>   something slightly smarter will be required.

Well - this is that issue i was raising earlier as a concern.
I think netfilter should have called into the tc api instead
of directly.

cheers,
jamal


* Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics
  2019-05-24 17:44                   ` Jakub Kicinski
@ 2019-05-28 16:27                     ` Edward Cree
  0 siblings, 0 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-28 16:27 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller,
	netdev, Cong Wang, Andy Gospodarek, Michael Chan,
	Vishal Kulkarni

On 24/05/2019 18:44, Jakub Kicinski wrote:
> On Fri, 24 May 2019 18:27:39 +0100, Edward Cree wrote:
>> On 24/05/2019 18:03, Jakub Kicinski wrote:
>>> Simplest would be to keep a list of offloaders per action, but maybe
>>> something more clever would appear as one rummages through the code.  
>> Problem with that is where to put the list heads; you'd need something that
>>  was allocated per action x block, for those blocks on which at least one
>>  offloader handled the rule (in_hw_count > 0).
> I was thinking of having the list per action, but I haven't looked at
> the code TBH.  Driver would then request to be added to each action's
> list..
The problem is not where the list goes, it's where the list_head for each
 item on the list goes.  I don't want the driver to have to do anything to
 make this happen, so the core would have to allocate something to hold a
 list_head each time a driver successfully offloads an action.

>> TBH I'm starting to wonder if just calling all tc blocks in existence is
>>  really all that bad.  Is there a plausible use case with huge numbers of
>>  bound blocks?
> Once per RTM_GETACTION?  The simplicity of that has it's allure..
OTOH I'm now finding that it's really quite hard to get "all tc blocks in
 existence" as a thing, so it's not as simple as it seemed, sadly.

> It doesn't give you an upstream user for a cookie, though :S
I don't think any of these approaches do; an upstream user necessarily
 involves an upstream driver that collects per-action stats, rather than
 the per-rule that they all do today.  RTM_GETACTION offload won't change
 that, because those drivers won't be able to support it either for the
 same reason.

-Ed


* [RFC PATCH net-next 0/2] RTM_GETACTION stats offload
  2019-05-24 13:09           ` Edward Cree
  2019-05-24 13:57             ` Edward Cree
@ 2019-05-29 20:07             ` Edward Cree
  2019-05-29 20:10               ` [RFC PATCH net-next 1/2] net/sched: add callback to get stats on an action from clsflower offload Edward Cree
  2019-05-29 20:11               ` [RFC PATCH net-next 2/2] net/sched: add action block binding to other classifiers Edward Cree
  1 sibling, 2 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-29 20:07 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

Additional patches on top of [v3] flow_offload: Re-add per-action statistics.
Each time we offload a rule to a block, if any hardware offload results
 (in_hw_count), we allocate a binding object for each action, and add it to a
 list on that action (hw_blocks); this is then updated on reoffload and freed
 on destroy.
Then, when handling RTM_GETACTION, just before copying the action stats into
 the netlink response we make a TC_SETUP_ACTION call to each block on the
 hw_blocks list, in tcf_action_update_stats().  The naming is slightly odd as
 tcf_action_update_stats() ends up calling tcf_action_stats_update(), but I
 couldn't think of something less confusing.
Patch #1 adds the machinery and hooks it from cls_flower; I have tested this
 but possibly not explored every possible sequence of events around binding
 and unbinding blocks from actions.
Patch #2 adds the hooks into the other classifiers with offload (matchall,
 u32 and bpf).  I have not tested these at all (except build testing).

There is nothing even remotely resembling an in-tree user of this (hence RFC).
 It does however prove it can be done, and thus that action cookies don't
 restrict our ability to 'fix RTM_GETACTION' once we _do_ have drivers using
 them.

I do somewhat wonder if we could go further, and instead of making, say, a
 TC_CLSFLOWER_STATS callback on a rule, the core could make TC_ACTION_STATS
 calls on each of its actions.  Initially we'd need to keep both callbacks,
 and drivers could choose to implement one or the other (but not both).  And
 of course we could elide the calls when a->ops->stats_update == NULL.
Maybe that's a crazy idea, I don't know.
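
For reference, a driver-side consumer of TC_SETUP_ACTION would look roughly
 like the sketch below; the my_*() helpers are invented and nothing like
 this exists in-tree yet:

static int my_action_stats(struct my_priv *priv,
			   struct tc_action_offload *offl)
{
	struct my_counter *cnt;

	if (offl->command != TC_ACTION_STATS)
		return -EOPNOTSUPP;
	/* Look the counter up by the cookie seen at rule-offload time */
	cnt = my_counter_find(priv, offl->cookie);
	if (!cnt)
		return -ENOENT;
	flow_stats_update(offl->stats, cnt->bytes, cnt->pkts, cnt->lastused);
	return 0;
}

static int my_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
				void *cb_priv)
{
	switch (type) {
	case TC_SETUP_ACTION:
		return my_action_stats(cb_priv, type_data);
	/* ... TC_SETUP_CLSFLOWER etc. handled as today ... */
	default:
		return -EOPNOTSUPP;
	}
}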

Edward Cree (2):
  net/sched: add callback to get stats on an action from clsflower
    offload
  net/sched: add action block binding to other classifiers

 include/linux/netdevice.h |  1 +
 include/net/act_api.h     |  2 +-
 include/net/pkt_cls.h     | 18 ++++++++++++++
 net/sched/act_api.c       | 51 +++++++++++++++++++++++++++++++++++++++
 net/sched/cls_bpf.c       | 10 +++++++-
 net/sched/cls_flower.c    |  7 ++++++
 net/sched/cls_matchall.c  |  7 ++++++
 net/sched/cls_u32.c       |  7 ++++++
 8 files changed, 101 insertions(+), 2 deletions(-)



* [RFC PATCH net-next 1/2] net/sched: add callback to get stats on an action from clsflower offload
  2019-05-29 20:07             ` [RFC PATCH net-next 0/2] RTM_GETACTION stats offload Edward Cree
@ 2019-05-29 20:10               ` Edward Cree
  2019-05-29 20:11               ` [RFC PATCH net-next 2/2] net/sched: add action block binding to other classifiers Edward Cree
  1 sibling, 0 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-29 20:10 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

When handling RTM_GETACTION (e.g. tc actions get/list), make a callback to
 blocks with hardware offload of the action to update stats from hardware.
In order to support this, track each action/block binding by allocating a
 struct tc_action_block_binding and adding it to a list on the action.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
 include/linux/netdevice.h |  1 +
 include/net/act_api.h     |  2 +-
 include/net/pkt_cls.h     | 18 ++++++++++++++
 net/sched/act_api.c       | 51 +++++++++++++++++++++++++++++++++++++++
 net/sched/cls_flower.c    |  7 ++++++
 5 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 44b47e9df94a..dee84954a1c7 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -849,6 +849,7 @@ enum tc_setup_type {
 	TC_SETUP_QDISC_ETF,
 	TC_SETUP_ROOT_QDISC,
 	TC_SETUP_QDISC_GRED,
+	TC_SETUP_ACTION,
 };
 
 /* These structures hold the attributes of bpf state that are being passed
diff --git a/include/net/act_api.h b/include/net/act_api.h
index c61a1bf4e3de..38d1769f279b 100644
--- a/include/net/act_api.h
+++ b/include/net/act_api.h
@@ -40,6 +40,7 @@ struct tc_action {
 	struct gnet_stats_queue __percpu *cpu_qstats;
 	struct tc_cookie	__rcu *act_cookie;
 	struct tcf_chain	__rcu *goto_chain;
+	struct list_head		hw_blocks;
 };
 #define tcf_index	common.tcfa_index
 #define tcf_refcnt	common.tcfa_refcnt
@@ -199,5 +200,4 @@ static inline void tcf_action_stats_update(struct tc_action *a, u64 bytes,
 #endif
 }
 
-
 #endif
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 0e17ea8ba302..5902131c1240 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -342,6 +342,14 @@ static inline void tcf_exts_put_net(struct tcf_exts *exts)
 	for (; 0; (void)(i), (void)(a), (void)(exts))
 #endif
 
+struct tc_action_block_binding {
+	struct list_head list;
+	struct tcf_block *block;
+};
+
+void tc_bind_action_blocks(struct tcf_exts *exts, struct tcf_block *block);
+void tc_unbind_action_blocks(struct tcf_exts *exts, struct tcf_block *block);
+
 static inline void
 tcf_exts_stats_update(const struct tcf_exts *exts,
 		      const struct flow_stats *stats)
@@ -958,4 +966,14 @@ struct tc_root_qopt_offload {
 	bool ingress;
 };
 
+enum tc_action_command {
+	TC_ACTION_STATS,
+};
+
+struct tc_action_offload {
+	enum tc_action_command command;
+	unsigned long cookie;
+	struct flow_stats_entry *stats;
+};
+
 #endif
diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index 683fcc00da49..79bd63000bd6 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -427,6 +427,7 @@ int tcf_idr_create(struct tc_action_net *tn, u32 index, struct nlattr *est,
 			goto err3;
 	}
 	spin_lock_init(&p->tcfa_lock);
+	INIT_LIST_HEAD(&p->hw_blocks);
 	p->tcfa_index = index;
 	p->tcfa_tm.install = jiffies;
 	p->tcfa_tm.lastuse = jiffies;
@@ -753,6 +754,21 @@ tcf_action_dump_old(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
 	return a->ops->dump(skb, a, bind, ref);
 }
 
+static void tcf_action_update_stats(struct tc_action *a)
+{
+	struct tc_action_block_binding *bind;
+	struct tc_action_offload offl = {};
+	struct flow_stats_entry stats = {};
+
+	offl.command = TC_ACTION_STATS;
+	offl.cookie = (unsigned long)a;
+	offl.stats = &stats;
+	ASSERT_RTNL();
+	list_for_each_entry(bind, &a->hw_blocks, list)
+		tc_setup_cb_call(bind->block, TC_SETUP_ACTION, &offl, false);
+	tcf_action_stats_update(a, stats.bytes, stats.pkts, stats.lastused, true);
+}
+
 int
 tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
 {
@@ -763,6 +779,7 @@ tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
 
 	if (nla_put_string(skb, TCA_KIND, a->ops->kind))
 		goto nla_put_failure;
+	tcf_action_update_stats(a);
 	if (tcf_action_copy_stats(skb, a, 0))
 		goto nla_put_failure;
 
@@ -1539,6 +1556,40 @@ static int tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb)
 	return skb->len;
 }
 
+/* Add a binding for @block to the hw_blocks list of each action in @exts */
+void tc_bind_action_blocks(struct tcf_exts *exts, struct tcf_block *block)
+{
+	struct tc_action_block_binding *bind;
+	struct tc_action *act;
+	int i;
+
+	tcf_exts_for_each_action(i, act, exts) {
+		bind = kzalloc(sizeof(*bind), GFP_KERNEL);
+		if (WARN_ON_ONCE(!bind))
+			continue; /* just skip it; stats won't be updated promptly */
+		bind->block = block;
+		list_add_tail(&bind->list, &act->hw_blocks);
+	}
+}
+EXPORT_SYMBOL(tc_bind_action_blocks);
+
+/* Remove one instance of @block from the binding list of each action in @exts */
+void tc_unbind_action_blocks(struct tcf_exts *exts, struct tcf_block *block)
+{
+	struct tc_action_block_binding *bind;
+	struct tc_action *act;
+	int i;
+
+	tcf_exts_for_each_action(i, act, exts)
+		list_for_each_entry(bind, &act->hw_blocks, list)
+			if (bind->block == block) {
+				list_del(&bind->list);
+				kfree(bind);
+				break;
+			}
+}
+EXPORT_SYMBOL(tc_unbind_action_blocks);
+
 static int __init tc_action_init(void)
 {
 	rtnl_register(PF_UNSPEC, RTM_NEWACTION, tc_ctl_action, NULL, 0);
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 8775657fb03b..87f27cd02ba4 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -394,6 +394,8 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f,
 	cls_flower.cookie = (unsigned long) f;
 
 	tc_setup_cb_call(block, TC_SETUP_CLSFLOWER, &cls_flower, false);
+	if (f->in_hw_count)
+		tc_unbind_action_blocks(&f->exts, block);
 	spin_lock(&tp->lock);
 	list_del_init(&f->hw_list);
 	tcf_block_offload_dec(block, &f->flags);
@@ -448,6 +450,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 		goto errout;
 	} else if (err > 0) {
 		f->in_hw_count = err;
+		tc_bind_action_blocks(&f->exts, block);
 		err = 0;
 		spin_lock(&tp->lock);
 		tcf_block_offload_inc(block, &f->flags);
@@ -1792,10 +1795,14 @@ static int fl_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
 			goto next_flow;
 		}
 
+		if (add && !f->in_hw_count)
+			tc_bind_action_blocks(&f->exts, block);
 		spin_lock(&tp->lock);
 		tc_cls_offload_cnt_update(block, &f->in_hw_count, &f->flags,
 					  add);
 		spin_unlock(&tp->lock);
+		if (!add && !f->in_hw_count)
+			tc_unbind_action_blocks(&f->exts, block);
 next_flow:
 		__fl_put(f);
 	}



* [RFC PATCH net-next 2/2] net/sched: add action block binding to other classifiers
  2019-05-29 20:07             ` [RFC PATCH net-next 0/2] RTM_GETACTION stats offload Edward Cree
  2019-05-29 20:10               ` [RFC PATCH net-next 1/2] net/sched: add callback to get stats on an action from clsflower offload Edward Cree
@ 2019-05-29 20:11               ` Edward Cree
  1 sibling, 0 replies; 24+ messages in thread
From: Edward Cree @ 2019-05-29 20:11 UTC (permalink / raw)
  To: Jamal Hadi Salim, Jiri Pirko, Pablo Neira Ayuso, David Miller
  Cc: netdev, Cong Wang, Andy Gospodarek, Jakub Kicinski, Michael Chan,
	Vishal Kulkarni

cls_matchall, cls_u32, and cls_bpf all have offloads as well, so they also
 need to bind actions to blocks for RTM_GETACTION stats collection; the
 common pattern is sketched below the '---'.

Signed-off-by: Edward Cree <ecree@solarflare.com>
---
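[Not part of the commit message: each classifier gets the same three hunks,
 mirroring what cls_flower got in patch 1/2 -- bind when the filter first
 lands in hardware, unbind when its last hardware instance goes away, and
 handle the zero/non-zero transitions on reoffload.  Schematically, where
 cls_foo, TC_SETUP_CLSFOO and f stand in for each classifier's own types:]

	/* install path: bind once the filter is in hardware */
	err = tc_setup_cb_call(block, TC_SETUP_CLSFOO, &cls_foo, skip_sw);
	if (err > 0) {
		f->in_hw_count = err;
		tc_bind_action_blocks(&f->exts, block);
		tcf_block_offload_inc(block, &f->flags);
	}

	/* destroy path: unbind if the filter was in hardware */
	tc_setup_cb_call(block, TC_SETUP_CLSFOO, &cls_foo, false);
	if (f->in_hw_count)
		tc_unbind_action_blocks(&f->exts, block);
	tcf_block_offload_dec(block, &f->flags);

	/* reoffload: bind before the count update if this will be the first
	 * HW instance, unbind after it if the last one has just gone
	 */
	if (add && !f->in_hw_count)
		tc_bind_action_blocks(&f->exts, block);
	tc_cls_offload_cnt_update(block, &f->in_hw_count, &f->flags, add);
	if (!add && !f->in_hw_count)
		tc_unbind_action_blocks(&f->exts, block);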
 net/sched/cls_bpf.c      | 10 +++++++++-
 net/sched/cls_matchall.c |  7 +++++++
 net/sched/cls_u32.c      |  7 +++++++
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index 27365ed3fe0b..c99e53cbf83d 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -165,8 +165,11 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 	cls_bpf.name = obj->bpf_name;
 	cls_bpf.exts_integrated = obj->exts_integrated;
 
-	if (oldprog)
+	if (oldprog) {
+		if (oldprog->in_hw_count)
+			tc_unbind_action_blocks(&oldprog->exts, block);
 		tcf_block_offload_dec(block, &oldprog->gen_flags);
+	}
 
 	err = tc_setup_cb_call(block, TC_SETUP_CLSBPF, &cls_bpf, skip_sw);
 	if (prog) {
@@ -175,6 +178,7 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 			return err;
 		} else if (err > 0) {
 			prog->in_hw_count = err;
+			tc_bind_action_blocks(&prog->exts, block);
 			tcf_block_offload_inc(block, &prog->gen_flags);
 		}
 	}
@@ -683,8 +687,12 @@ static int cls_bpf_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
 			continue;
 		}
 
+		if (add && !prog->in_hw_count)
+			tc_bind_action_blocks(&prog->exts, block);
 		tc_cls_offload_cnt_update(block, &prog->in_hw_count,
 					  &prog->gen_flags, add);
+		if (!add && !prog->in_hw_count)
+			tc_unbind_action_blocks(&prog->exts, block);
 	}
 
 	return 0;
diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
index b6b7b041fd6a..c65782cf2924 100644
--- a/net/sched/cls_matchall.c
+++ b/net/sched/cls_matchall.c
@@ -79,6 +79,8 @@ static void mall_destroy_hw_filter(struct tcf_proto *tp,
 	cls_mall.cookie = cookie;
 
 	tc_setup_cb_call(block, TC_SETUP_CLSMATCHALL, &cls_mall, false);
+	if (head->in_hw_count)
+		tc_unbind_action_blocks(&head->exts, block);
 	tcf_block_offload_dec(block, &head->flags);
 }
 
@@ -120,6 +122,7 @@ static int mall_replace_hw_filter(struct tcf_proto *tp,
 		return err;
 	} else if (err > 0) {
 		head->in_hw_count = err;
+		tc_bind_action_blocks(&head->exts, block);
 		tcf_block_offload_inc(block, &head->flags);
 	}
 
@@ -320,7 +323,11 @@ static int mall_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
 		return 0;
 	}
 
+	if (add && !head->in_hw_count)
+		tc_bind_action_blocks(&head->exts, block);
 	tc_cls_offload_cnt_update(block, &head->in_hw_count, &head->flags, add);
+	if (!add && !head->in_hw_count)
+		tc_unbind_action_blocks(&head->exts, block);
 
 	return 0;
 }
diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
index 4b8710a266cc..84f067d9b4a4 100644
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -534,6 +534,8 @@ static void u32_remove_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 	cls_u32.knode.handle = n->handle;
 
 	tc_setup_cb_call(block, TC_SETUP_CLSU32, &cls_u32, false);
+	if (n->in_hw_count)
+		tc_unbind_action_blocks(&n->exts, block);
 	tcf_block_offload_dec(block, &n->flags);
 }
 
@@ -569,6 +571,7 @@ static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 		return err;
 	} else if (err > 0) {
 		n->in_hw_count = err;
+		tc_bind_action_blocks(&n->exts, block);
 		tcf_block_offload_inc(block, &n->flags);
 	}
 
@@ -1223,7 +1226,11 @@ static int u32_reoffload_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 		return 0;
 	}
 
+	if (add && !n->in_hw_count)
+		tc_bind_action_blocks(&n->exts, block);
 	tc_cls_offload_cnt_update(block, &n->in_hw_count, &n->flags, add);
+	if (!add && !n->in_hw_count)
+		tc_unbind_action_blocks(&n->exts, block);
 
 	return 0;
 }


end of thread

Thread overview: 24+ messages
2019-05-22 21:37 [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
2019-05-22 21:38 ` [PATCH v3 net-next 1/3] flow_offload: add a cookie to flow_action_entry Edward Cree
2019-05-22 21:38 ` [PATCH v3 net-next 2/3] flow_offload: restore ability to collect separate stats per action Edward Cree
2019-05-22 21:38 ` [PATCH v3 net-next 3/3] flow_offload: include linux/kernel.h from flow_offload.h Edward Cree
2019-05-22 22:20 ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Jakub Kicinski
2019-05-23 13:19   ` Jamal Hadi Salim
2019-05-23 16:11     ` Jakub Kicinski
2019-05-23 16:40       ` Edward Cree
2019-05-23 17:25         ` Jakub Kicinski
2019-05-24 13:09           ` Edward Cree
2019-05-24 13:57             ` Edward Cree
2019-05-24 14:44               ` Jamal Hadi Salim
2019-05-24 15:09                 ` Edward Cree
2019-05-24 17:59                   ` Jamal Hadi Salim
2019-05-24 17:03               ` Jakub Kicinski
2019-05-24 17:27                 ` Edward Cree
2019-05-24 17:44                   ` Jakub Kicinski
2019-05-28 16:27                     ` Edward Cree
2019-05-29 20:07             ` [RFC PATCH net-next 0/2] RTM_GETACTION stats offload Edward Cree
2019-05-29 20:10               ` [RFC PATCH net-next 1/2] net/sched: add callback to get stats on an action from clsflower offload Edward Cree
2019-05-29 20:11               ` [RFC PATCH net-next 2/2] net/sched: add action block binding to other classifiers Edward Cree
2019-05-23 16:21   ` [PATCH v3 net-next 0/3] flow_offload: Re-add per-action statistics Edward Cree
2019-05-23 16:33     ` Jakub Kicinski
2019-05-23 17:09       ` Edward Cree
