* [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
@ 2017-10-19 13:50 Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 01/20] net: sched: add block bind/unbind notif. and extended block_get/put Jiri Pirko
                   ` (20 more replies)
  0 siblings, 21 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

This patchset is fairly large, but most of the patches make the same
change across multiple classifiers and drivers. I could squash some of
them, but I think they are better kept split.

This is another dependency on the way to the shared block
implementation. The goal is to remove the use of tp->q from classifier
code.

Also, this gives drivers the possibility to track the binding of blocks
to qdiscs. Legacy drivers, which do not support shared block offloading,
register one callback per binding. That maintains the current
functionality we have with ndo_setup_tc. Drivers which support shared
block offloading register one callback per block, which saves overhead.

Patches 1-4 introduce the binding notifications and per-block callbacks
Patches 5-8 add block callbacks calls to classifiers
Patches 9-17 convert classifier offloads in drivers from ndo_setup_tc
             calls to block callbacks
Patches 18-20 are cleanups

---
v1->v2:
- patch1:
  - move new enum value to the end

Jiri Pirko (20):
  net: sched: add block bind/unbind notif. and extended block_get/put
  net: sched: use extended variants of block_get/put in ingress and
    clsact qdiscs
  net: sched: introduce per-block callbacks
  net: sched: use tc_setup_cb_call to call per-block callbacks
  net: sched: cls_matchall: call block callbacks for offload
  net: sched: cls_u32: swap u32_remove_hw_knode and u32_remove_hw_hnode
  net: sched: cls_u32: call block callbacks for offload
  net: sched: cls_bpf: call block callbacks for offload
  mlxsw: spectrum: Convert ndo_setup_tc offloads to block callbacks
  mlx5e: Convert ndo_setup_tc offloads to block callbacks
  bnxt: Convert ndo_setup_tc offloads to block callbacks
  cxgb4: Convert ndo_setup_tc offloads to block callbacks
  ixgbe: Convert ndo_setup_tc offloads to block callbacks
  mlx5e_rep: Convert ndo_setup_tc offloads to block callbacks
  nfp: flower: Convert ndo_setup_tc offloads to block callbacks
  nfp: bpf: Convert ndo_setup_tc offloads to block callbacks
  dsa: Convert ndo_setup_tc offloads to block callbacks
  net: sched: avoid ndo_setup_tc calls for TC_SETUP_CLS*
  net: sched: remove unused classid field from tc_cls_common_offload
  net: sched: remove unused is_classid_clsact_ingress/egress helpers

 drivers/net/ethernet/broadcom/bnxt/bnxt.c          |  37 ++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c       |   3 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c      |  41 ++++-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c    |  42 ++++-
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c      |  45 ++++-
 drivers/net/ethernet/mellanox/mlx5/core/en.h       |   4 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |  45 ++++-
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   |  62 +++++--
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c     |  83 +++++++---
 drivers/net/ethernet/netronome/nfp/bpf/main.c      |  52 +++++-
 .../net/ethernet/netronome/nfp/flower/offload.c    |  54 +++++-
 include/linux/netdevice.h                          |   1 +
 include/net/pkt_cls.h                              | 129 ++++++++++++++-
 include/net/pkt_sched.h                            |  13 --
 include/net/sch_generic.h                          |   1 +
 net/dsa/slave.c                                    |  64 ++++++--
 net/sched/cls_api.c                                | 182 ++++++++++++++++++++-
 net/sched/cls_bpf.c                                |  28 +++-
 net/sched/cls_flower.c                             |  29 +---
 net/sched/cls_matchall.c                           |  58 +++----
 net/sched/cls_u32.c                                |  67 ++++----
 net/sched/sch_ingress.c                            |  36 +++-
 22 files changed, 849 insertions(+), 227 deletions(-)

-- 
2.9.5


* [patch net-next v2 01/20] net: sched: add block bind/unbind notif. and extended block_get/put
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 02/20] net: sched: use extended variants of block_get/put in ingress and clsact qdiscs Jiri Pirko
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Introduce a new type of ndo_setup_tc message to propagate the
binding/unbinding of a block to the driver. Call this ndo whenever a
qdisc gets/puts a block. Along with this, the binder type needs to be
propagated from the qdisc code down to the notifier, so introduce
extended variants of block_get/put in order to pass this info.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
v1->v2:
- move new enum value to the end
---
 include/linux/netdevice.h |  1 +
 include/net/pkt_cls.h     | 40 +++++++++++++++++++++++++++++++++
 net/sched/cls_api.c       | 56 ++++++++++++++++++++++++++++++++++++++++++++---
 3 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index bf014af..4de5b08 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -775,6 +775,7 @@ enum tc_setup_type {
 	TC_SETUP_CLSFLOWER,
 	TC_SETUP_CLSMATCHALL,
 	TC_SETUP_CLSBPF,
+	TC_SETUP_BLOCK,
 };
 
 /* These structures hold the attributes of xdp state that are being passed
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 49a143e..41bc7d7 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -17,13 +17,27 @@ struct tcf_walker {
 int register_tcf_proto_ops(struct tcf_proto_ops *ops);
 int unregister_tcf_proto_ops(struct tcf_proto_ops *ops);
 
+enum tcf_block_binder_type {
+	TCF_BLOCK_BINDER_TYPE_UNSPEC,
+};
+
+struct tcf_block_ext_info {
+	enum tcf_block_binder_type binder_type;
+};
+
 #ifdef CONFIG_NET_CLS
 struct tcf_chain *tcf_chain_get(struct tcf_block *block, u32 chain_index,
 				bool create);
 void tcf_chain_put(struct tcf_chain *chain);
 int tcf_block_get(struct tcf_block **p_block,
 		  struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q);
+int tcf_block_get_ext(struct tcf_block **p_block,
+		      struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
+		      struct tcf_block_ext_info *ei);
 void tcf_block_put(struct tcf_block *block);
+void tcf_block_put_ext(struct tcf_block *block,
+		       struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
+		       struct tcf_block_ext_info *ei);
 
 static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 {
@@ -46,10 +60,25 @@ int tcf_block_get(struct tcf_block **p_block,
 	return 0;
 }
 
+static inline
+int tcf_block_get_ext(struct tcf_block **p_block,
+		      struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
+		      struct tcf_block_ext_info *ei)
+{
+	return 0;
+}
+
 static inline void tcf_block_put(struct tcf_block *block)
 {
 }
 
+static inline
+void tcf_block_put_ext(struct tcf_block *block,
+		       struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
+		       struct tcf_block_ext_info *ei)
+{
+}
+
 static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 {
 	return NULL;
@@ -434,6 +463,17 @@ tcf_match_indev(struct sk_buff *skb, int ifindex)
 int tc_setup_cb_call(struct tcf_exts *exts, enum tc_setup_type type,
 		     void *type_data, bool err_stop);
 
+enum tc_block_command {
+	TC_BLOCK_BIND,
+	TC_BLOCK_UNBIND,
+};
+
+struct tc_block_offload {
+	enum tc_block_command command;
+	enum tcf_block_binder_type binder_type;
+	struct tcf_block *block;
+};
+
 struct tc_cls_common_offload {
 	u32 chain_index;
 	__be16 protocol;
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 2e8e87f..92dce26 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -240,8 +240,36 @@ tcf_chain_filter_chain_ptr_set(struct tcf_chain *chain,
 	chain->p_filter_chain = p_filter_chain;
 }
 
-int tcf_block_get(struct tcf_block **p_block,
-		  struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q)
+static void tcf_block_offload_cmd(struct tcf_block *block, struct Qdisc *q,
+				  struct tcf_block_ext_info *ei,
+				  enum tc_block_command command)
+{
+	struct net_device *dev = q->dev_queue->dev;
+	struct tc_block_offload bo = {};
+
+	if (!tc_can_offload(dev))
+		return;
+	bo.command = command;
+	bo.binder_type = ei->binder_type;
+	bo.block = block;
+	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo);
+}
+
+static void tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q,
+				   struct tcf_block_ext_info *ei)
+{
+	tcf_block_offload_cmd(block, q, ei, TC_BLOCK_BIND);
+}
+
+static void tcf_block_offload_unbind(struct tcf_block *block, struct Qdisc *q,
+				     struct tcf_block_ext_info *ei)
+{
+	tcf_block_offload_cmd(block, q, ei, TC_BLOCK_UNBIND);
+}
+
+int tcf_block_get_ext(struct tcf_block **p_block,
+		      struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
+		      struct tcf_block_ext_info *ei)
 {
 	struct tcf_block *block = kzalloc(sizeof(*block), GFP_KERNEL);
 	struct tcf_chain *chain;
@@ -259,6 +287,7 @@ int tcf_block_get(struct tcf_block **p_block,
 	tcf_chain_filter_chain_ptr_set(chain, p_filter_chain);
 	block->net = qdisc_net(q);
 	block->q = q;
+	tcf_block_offload_bind(block, q, ei);
 	*p_block = block;
 	return 0;
 
@@ -266,15 +295,28 @@ int tcf_block_get(struct tcf_block **p_block,
 	kfree(block);
 	return err;
 }
+EXPORT_SYMBOL(tcf_block_get_ext);
+
+int tcf_block_get(struct tcf_block **p_block,
+		  struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q)
+{
+	struct tcf_block_ext_info ei = {0, };
+
+	return tcf_block_get_ext(p_block, p_filter_chain, q, &ei);
+}
 EXPORT_SYMBOL(tcf_block_get);
 
-void tcf_block_put(struct tcf_block *block)
+void tcf_block_put_ext(struct tcf_block *block,
+		       struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
+		       struct tcf_block_ext_info *ei)
 {
 	struct tcf_chain *chain, *tmp;
 
 	if (!block)
 		return;
 
+	tcf_block_offload_unbind(block, q, ei);
+
 	/* XXX: Standalone actions are not allowed to jump to any chain, and
 	 * bound actions should be all removed after flushing. However,
 	 * filters are destroyed in RCU callbacks, we have to hold the chains
@@ -302,6 +344,14 @@ void tcf_block_put(struct tcf_block *block)
 		tcf_chain_put(chain);
 	kfree(block);
 }
+EXPORT_SYMBOL(tcf_block_put_ext);
+
+void tcf_block_put(struct tcf_block *block)
+{
+	struct tcf_block_ext_info ei = {0, };
+
+	tcf_block_put_ext(block, NULL, block->q, &ei);
+}
 EXPORT_SYMBOL(tcf_block_put);
 
 /* Main classifier routine: scans classifier chain attached
-- 
2.9.5


* [patch net-next v2 02/20] net: sched: use extended variants of block_get/put in ingress and clsact qdiscs
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 01/20] net: sched: add block bind/unbind notif. and extended block_get/put Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 03/20] net: sched: introduce per-block callbacks Jiri Pirko
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Use the previously introduced extended variants of the block get and put
functions. This allows specifying binder types specific to clsact
ingress/egress, which is useful for drivers to distinguish who actually
got the block.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/pkt_cls.h   |  2 ++
 net/sched/sch_ingress.c | 36 +++++++++++++++++++++++++++++-------
 2 files changed, 31 insertions(+), 7 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 41bc7d7..5c50af8 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -19,6 +19,8 @@ int unregister_tcf_proto_ops(struct tcf_proto_ops *ops);
 
 enum tcf_block_binder_type {
 	TCF_BLOCK_BINDER_TYPE_UNSPEC,
+	TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS,
+	TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS,
 };
 
 struct tcf_block_ext_info {
diff --git a/net/sched/sch_ingress.c b/net/sched/sch_ingress.c
index 9ccc1b8..b599db2 100644
--- a/net/sched/sch_ingress.c
+++ b/net/sched/sch_ingress.c
@@ -20,6 +20,7 @@
 
 struct ingress_sched_data {
 	struct tcf_block *block;
+	struct tcf_block_ext_info block_info;
 };
 
 static struct Qdisc *ingress_leaf(struct Qdisc *sch, unsigned long arg)
@@ -59,7 +60,10 @@ static int ingress_init(struct Qdisc *sch, struct nlattr *opt)
 	struct net_device *dev = qdisc_dev(sch);
 	int err;
 
-	err = tcf_block_get(&q->block, &dev->ingress_cl_list, sch);
+	q->block_info.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+
+	err = tcf_block_get_ext(&q->block, &dev->ingress_cl_list,
+				sch, &q->block_info);
 	if (err)
 		return err;
 
@@ -72,8 +76,10 @@ static int ingress_init(struct Qdisc *sch, struct nlattr *opt)
 static void ingress_destroy(struct Qdisc *sch)
 {
 	struct ingress_sched_data *q = qdisc_priv(sch);
+	struct net_device *dev = qdisc_dev(sch);
 
-	tcf_block_put(q->block);
+	tcf_block_put_ext(q->block, &dev->ingress_cl_list,
+			  sch, &q->block_info);
 	net_dec_ingress_queue();
 }
 
@@ -114,6 +120,8 @@ static struct Qdisc_ops ingress_qdisc_ops __read_mostly = {
 struct clsact_sched_data {
 	struct tcf_block *ingress_block;
 	struct tcf_block *egress_block;
+	struct tcf_block_ext_info ingress_block_info;
+	struct tcf_block_ext_info egress_block_info;
 };
 
 static unsigned long clsact_find(struct Qdisc *sch, u32 classid)
@@ -153,13 +161,19 @@ static int clsact_init(struct Qdisc *sch, struct nlattr *opt)
 	struct net_device *dev = qdisc_dev(sch);
 	int err;
 
-	err = tcf_block_get(&q->ingress_block, &dev->ingress_cl_list, sch);
+	q->ingress_block_info.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+
+	err = tcf_block_get_ext(&q->ingress_block, &dev->ingress_cl_list,
+				sch, &q->ingress_block_info);
 	if (err)
 		return err;
 
-	err = tcf_block_get(&q->egress_block, &dev->egress_cl_list, sch);
+	q->egress_block_info.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS;
+
+	err = tcf_block_get_ext(&q->egress_block, &dev->egress_cl_list,
+				sch, &q->egress_block_info);
 	if (err)
-		return err;
+		goto err_egress_block_get;
 
 	net_inc_ingress_queue();
 	net_inc_egress_queue();
@@ -167,14 +181,22 @@ static int clsact_init(struct Qdisc *sch, struct nlattr *opt)
 	sch->flags |= TCQ_F_CPUSTATS;
 
 	return 0;
+
+err_egress_block_get:
+	tcf_block_put_ext(q->ingress_block, &dev->ingress_cl_list,
+			  sch, &q->ingress_block_info);
+	return err;
 }
 
 static void clsact_destroy(struct Qdisc *sch)
 {
 	struct clsact_sched_data *q = qdisc_priv(sch);
+	struct net_device *dev = qdisc_dev(sch);
 
-	tcf_block_put(q->egress_block);
-	tcf_block_put(q->ingress_block);
+	tcf_block_put_ext(q->egress_block, &dev->egress_cl_list,
+			  sch, &q->egress_block_info);
+	tcf_block_put_ext(q->ingress_block, &dev->ingress_cl_list,
+			  sch, &q->ingress_block_info);
 
 	net_dec_ingress_queue();
 	net_dec_egress_queue();
-- 
2.9.5


* [patch net-next v2 03/20] net: sched: introduce per-block callbacks
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 01/20] net: sched: add block bind/unbind notif. and extended block_get/put Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 02/20] net: sched: use extended variants of block_get/put in ingress and clsact qdiscs Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 04/20] net: sched: use tc_setup_cb_call to call " Jiri Pirko
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Introduce infrastructure that allows drivers to register callbacks that
are called whenever tc offloads an inserted rule for a specific block.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/pkt_cls.h     |  81 +++++++++++++++++++++++++++++++++++
 include/net/sch_generic.h |   1 +
 net/sched/cls_api.c       | 105 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 187 insertions(+)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 5c50af8..4bc6b1c 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -27,6 +27,8 @@ struct tcf_block_ext_info {
 	enum tcf_block_binder_type binder_type;
 };
 
+struct tcf_block_cb;
+
 #ifdef CONFIG_NET_CLS
 struct tcf_chain *tcf_chain_get(struct tcf_block *block, u32 chain_index,
 				bool create);
@@ -51,6 +53,21 @@ static inline struct net_device *tcf_block_dev(struct tcf_block *block)
 	return tcf_block_q(block)->dev_queue->dev;
 }
 
+void *tcf_block_cb_priv(struct tcf_block_cb *block_cb);
+struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
+					 tc_setup_cb_t *cb, void *cb_ident);
+void tcf_block_cb_incref(struct tcf_block_cb *block_cb);
+unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb);
+struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
+					     tc_setup_cb_t *cb, void *cb_ident,
+					     void *cb_priv);
+int tcf_block_cb_register(struct tcf_block *block,
+			  tc_setup_cb_t *cb, void *cb_ident,
+			  void *cb_priv);
+void __tcf_block_cb_unregister(struct tcf_block_cb *block_cb);
+void tcf_block_cb_unregister(struct tcf_block *block,
+			     tc_setup_cb_t *cb, void *cb_ident);
+
 int tcf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 		 struct tcf_result *res, bool compat_mode);
 
@@ -91,6 +108,70 @@ static inline struct net_device *tcf_block_dev(struct tcf_block *block)
 	return NULL;
 }
 
+static inline
+int tc_setup_cb_block_register(struct tcf_block *block, tc_setup_cb_t *cb,
+			       void *cb_priv)
+{
+	return 0;
+}
+
+static inline
+void tc_setup_cb_block_unregister(struct tcf_block *block, tc_setup_cb_t *cb,
+				  void *cb_priv)
+{
+}
+
+static inline
+void *tcf_block_cb_priv(struct tcf_block_cb *block_cb)
+{
+	return NULL;
+}
+
+static inline
+struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
+					 tc_setup_cb_t *cb, void *cb_ident)
+{
+	return NULL;
+}
+
+static inline
+void tcf_block_cb_incref(struct tcf_block_cb *block_cb)
+{
+}
+
+static inline
+unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb)
+{
+	return 0;
+}
+
+static inline
+struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
+					     tc_setup_cb_t *cb, void *cb_ident,
+					     void *cb_priv)
+{
+	return NULL;
+}
+
+static inline
+int tcf_block_cb_register(struct tcf_block *block,
+			  tc_setup_cb_t *cb, void *cb_ident,
+			  void *cb_priv)
+{
+	return 0;
+}
+
+static inline
+void __tcf_block_cb_unregister(struct tcf_block_cb *block_cb)
+{
+}
+
+static inline
+void tcf_block_cb_unregister(struct tcf_block *block,
+			     tc_setup_cb_t *cb, void *cb_ident)
+{
+}
+
 static inline int tcf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 			       struct tcf_result *res, bool compat_mode)
 {
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 0aea9e2..031dffd 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -272,6 +272,7 @@ struct tcf_block {
 	struct list_head chain_list;
 	struct net *net;
 	struct Qdisc *q;
+	struct list_head cb_list;
 };
 
 static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 92dce26..b16c79c 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -278,6 +278,8 @@ int tcf_block_get_ext(struct tcf_block **p_block,
 	if (!block)
 		return -ENOMEM;
 	INIT_LIST_HEAD(&block->chain_list);
+	INIT_LIST_HEAD(&block->cb_list);
+
 	/* Create chain 0 by default, it has to be always present. */
 	chain = tcf_chain_create(block, 0);
 	if (!chain) {
@@ -354,6 +356,109 @@ void tcf_block_put(struct tcf_block *block)
 }
 EXPORT_SYMBOL(tcf_block_put);
 
+struct tcf_block_cb {
+	struct list_head list;
+	tc_setup_cb_t *cb;
+	void *cb_ident;
+	void *cb_priv;
+	unsigned int refcnt;
+};
+
+void *tcf_block_cb_priv(struct tcf_block_cb *block_cb)
+{
+	return block_cb->cb_priv;
+}
+EXPORT_SYMBOL(tcf_block_cb_priv);
+
+struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
+					 tc_setup_cb_t *cb, void *cb_ident)
+{
+	struct tcf_block_cb *block_cb;
+
+	list_for_each_entry(block_cb, &block->cb_list, list)
+		if (block_cb->cb == cb && block_cb->cb_ident == cb_ident)
+			return block_cb;
+	return NULL;
+}
+EXPORT_SYMBOL(tcf_block_cb_lookup);
+
+void tcf_block_cb_incref(struct tcf_block_cb *block_cb)
+{
+	block_cb->refcnt++;
+}
+EXPORT_SYMBOL(tcf_block_cb_incref);
+
+unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb)
+{
+	return --block_cb->refcnt;
+}
+EXPORT_SYMBOL(tcf_block_cb_decref);
+
+struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
+					     tc_setup_cb_t *cb, void *cb_ident,
+					     void *cb_priv)
+{
+	struct tcf_block_cb *block_cb;
+
+	block_cb = kzalloc(sizeof(*block_cb), GFP_KERNEL);
+	if (!block_cb)
+		return NULL;
+	block_cb->cb = cb;
+	block_cb->cb_ident = cb_ident;
+	block_cb->cb_priv = cb_priv;
+	list_add(&block_cb->list, &block->cb_list);
+	return block_cb;
+}
+EXPORT_SYMBOL(__tcf_block_cb_register);
+
+int tcf_block_cb_register(struct tcf_block *block,
+			  tc_setup_cb_t *cb, void *cb_ident,
+			  void *cb_priv)
+{
+	struct tcf_block_cb *block_cb;
+
+	block_cb = __tcf_block_cb_register(block, cb, cb_ident, cb_priv);
+	return block_cb ? 0 : -ENOMEM;
+}
+EXPORT_SYMBOL(tcf_block_cb_register);
+
+void __tcf_block_cb_unregister(struct tcf_block_cb *block_cb)
+{
+	list_del(&block_cb->list);
+	kfree(block_cb);
+}
+EXPORT_SYMBOL(__tcf_block_cb_unregister);
+
+void tcf_block_cb_unregister(struct tcf_block *block,
+			     tc_setup_cb_t *cb, void *cb_ident)
+{
+	struct tcf_block_cb *block_cb;
+
+	block_cb = tcf_block_cb_lookup(block, cb, cb_ident);
+	if (!block_cb)
+		return;
+	__tcf_block_cb_unregister(block_cb);
+}
+EXPORT_SYMBOL(tcf_block_cb_unregister);
+
+static int tcf_block_cb_call(struct tcf_block *block, enum tc_setup_type type,
+			     void *type_data, bool err_stop)
+{
+	struct tcf_block_cb *block_cb;
+	int ok_count = 0;
+	int err;
+
+	list_for_each_entry(block_cb, &block->cb_list, list) {
+		err = block_cb->cb(type, type_data, block_cb->cb_priv);
+		if (err) {
+			if (err_stop)
+				return err;
+		} else {
+			ok_count++;
+		}
+	}
+	return ok_count;
+}
+
 /* Main classifier routine: scans classifier chain attached
  * to this qdisc, (optionally) tests for protocol and asks
  * specific classifiers.
-- 
2.9.5


* [patch net-next v2 04/20] net: sched: use tc_setup_cb_call to call per-block callbacks
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (2 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 03/20] net: sched: introduce per-block callbacks Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 05/20] net: sched: cls_matchall: call block callbacks for offload Jiri Pirko
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Extend the tc_setup_cb_call entry point, originally used only for action
egress device callbacks, to call per-block callbacks as well.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/pkt_cls.h  |  4 ++--
 net/sched/cls_api.c    | 21 ++++++++++++++++++---
 net/sched/cls_flower.c |  9 ++++++---
 3 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 4bc6b1c..fcca5a9 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -543,8 +543,8 @@ tcf_match_indev(struct sk_buff *skb, int ifindex)
 }
 #endif /* CONFIG_NET_CLS_IND */
 
-int tc_setup_cb_call(struct tcf_exts *exts, enum tc_setup_type type,
-		     void *type_data, bool err_stop);
+int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
+		     enum tc_setup_type type, void *type_data, bool err_stop);
 
 enum tc_block_command {
 	TC_BLOCK_BIND,
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index b16c79c..cdfdc24 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -1206,10 +1206,25 @@ static int tc_exts_setup_cb_egdev_call(struct tcf_exts *exts,
 	return ok_count;
 }
 
-int tc_setup_cb_call(struct tcf_exts *exts, enum tc_setup_type type,
-		     void *type_data, bool err_stop)
+int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
+		     enum tc_setup_type type, void *type_data, bool err_stop)
 {
-	return tc_exts_setup_cb_egdev_call(exts, type, type_data, err_stop);
+	int ok_count;
+	int ret;
+
+	ret = tcf_block_cb_call(block, type, type_data, err_stop);
+	if (ret < 0)
+		return ret;
+	ok_count = ret;
+
+	if (!exts)
+		return ok_count;
+	ret = tc_exts_setup_cb_egdev_call(exts, type, type_data, err_stop);
+	if (ret < 0)
+		return ret;
+	ok_count += ret;
+
+	return ok_count;
 }
 EXPORT_SYMBOL(tc_setup_cb_call);
 
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 5b7bb96..76b4e0a 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -201,6 +201,7 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f)
 {
 	struct tc_cls_flower_offload cls_flower = {};
 	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tcf_block *block = tp->chain->block;
 
 	tc_cls_common_offload_init(&cls_flower.common, tp);
 	cls_flower.command = TC_CLSFLOWER_DESTROY;
@@ -209,7 +210,7 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f)
 	if (tc_can_offload(dev))
 		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSFLOWER,
 					      &cls_flower);
-	tc_setup_cb_call(&f->exts, TC_SETUP_CLSFLOWER,
+	tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			 &cls_flower, false);
 }
 
@@ -220,6 +221,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_flower_offload cls_flower = {};
+	struct tcf_block *block = tp->chain->block;
 	bool skip_sw = tc_skip_sw(f->flags);
 	int err;
 
@@ -242,7 +244,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 		}
 	}
 
-	err = tc_setup_cb_call(&f->exts, TC_SETUP_CLSFLOWER,
+	err = tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			       &cls_flower, skip_sw);
 	if (err < 0) {
 		fl_hw_destroy_filter(tp, f);
@@ -261,6 +263,7 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
 {
 	struct tc_cls_flower_offload cls_flower = {};
 	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tcf_block *block = tp->chain->block;
 
 	tc_cls_common_offload_init(&cls_flower.common, tp);
 	cls_flower.command = TC_CLSFLOWER_STATS;
@@ -270,7 +273,7 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
 	if (tc_can_offload(dev))
 		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSFLOWER,
 					      &cls_flower);
-	tc_setup_cb_call(&f->exts, TC_SETUP_CLSFLOWER,
+	tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			 &cls_flower, false);
 }
 
-- 
2.9.5


* [patch net-next v2 05/20] net: sched: cls_matchall: call block callbacks for offload
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (3 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 04/20] net: sched: use tc_setup_cb_call to call " Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 06/20] net: sched: cls_u32: swap u32_remove_hw_knode and u32_remove_hw_hnode Jiri Pirko
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Use the newly introduced callback infrastructure and call the block
callbacks alongside the existing per-netdev ndo_setup_tc.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 net/sched/cls_matchall.c | 72 ++++++++++++++++++++++++++++++------------------
 1 file changed, 45 insertions(+), 27 deletions(-)

diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
index eeac606..5278534 100644
--- a/net/sched/cls_matchall.c
+++ b/net/sched/cls_matchall.c
@@ -50,50 +50,73 @@ static void mall_destroy_rcu(struct rcu_head *rcu)
 	kfree(head);
 }
 
-static int mall_replace_hw_filter(struct tcf_proto *tp,
-				  struct cls_mall_head *head,
-				  unsigned long cookie)
+static void mall_destroy_hw_filter(struct tcf_proto *tp,
+				   struct cls_mall_head *head,
+				   unsigned long cookie)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_matchall_offload cls_mall = {};
-	int err;
+	struct tcf_block *block = tp->chain->block;
 
 	tc_cls_common_offload_init(&cls_mall.common, tp);
-	cls_mall.command = TC_CLSMATCHALL_REPLACE;
-	cls_mall.exts = &head->exts;
+	cls_mall.command = TC_CLSMATCHALL_DESTROY;
 	cls_mall.cookie = cookie;
 
-	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSMATCHALL,
-					    &cls_mall);
-	if (!err)
-		head->flags |= TCA_CLS_FLAGS_IN_HW;
-
-	return err;
+	if (tc_can_offload(dev))
+		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSMATCHALL,
+					      &cls_mall);
+	tc_setup_cb_call(block, NULL, TC_SETUP_CLSMATCHALL, &cls_mall, false);
 }
 
-static void mall_destroy_hw_filter(struct tcf_proto *tp,
-				   struct cls_mall_head *head,
-				   unsigned long cookie)
+static int mall_replace_hw_filter(struct tcf_proto *tp,
+				  struct cls_mall_head *head,
+				  unsigned long cookie)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_matchall_offload cls_mall = {};
+	struct tcf_block *block = tp->chain->block;
+	bool skip_sw = tc_skip_sw(head->flags);
+	int err;
 
 	tc_cls_common_offload_init(&cls_mall.common, tp);
-	cls_mall.command = TC_CLSMATCHALL_DESTROY;
+	cls_mall.command = TC_CLSMATCHALL_REPLACE;
+	cls_mall.exts = &head->exts;
 	cls_mall.cookie = cookie;
 
-	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSMATCHALL, &cls_mall);
+	if (tc_can_offload(dev)) {
+		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSMATCHALL,
+						    &cls_mall);
+		if (err) {
+			if (skip_sw)
+				return err;
+		} else {
+			head->flags |= TCA_CLS_FLAGS_IN_HW;
+		}
+	}
+
+	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSMATCHALL,
+			       &cls_mall, skip_sw);
+	if (err < 0) {
+		mall_destroy_hw_filter(tp, head, cookie);
+		return err;
+	} else if (err > 0) {
+		head->flags |= TCA_CLS_FLAGS_IN_HW;
+	}
+
+	if (skip_sw && !(head->flags & TCA_CLS_FLAGS_IN_HW))
+		return -EINVAL;
+
+	return 0;
 }
 
 static void mall_destroy(struct tcf_proto *tp)
 {
 	struct cls_mall_head *head = rtnl_dereference(tp->root);
-	struct net_device *dev = tp->q->dev_queue->dev;
 
 	if (!head)
 		return;
 
-	if (tc_should_offload(dev, head->flags))
+	if (!tc_skip_hw(head->flags))
 		mall_destroy_hw_filter(tp, head, (unsigned long) head);
 
 	call_rcu(&head->rcu, mall_destroy_rcu);
@@ -133,7 +156,6 @@ static int mall_change(struct net *net, struct sk_buff *in_skb,
 		       void **arg, bool ovr)
 {
 	struct cls_mall_head *head = rtnl_dereference(tp->root);
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct nlattr *tb[TCA_MATCHALL_MAX + 1];
 	struct cls_mall_head *new;
 	u32 flags = 0;
@@ -173,14 +195,10 @@ static int mall_change(struct net *net, struct sk_buff *in_skb,
 	if (err)
 		goto err_set_parms;
 
-	if (tc_should_offload(dev, flags)) {
+	if (!tc_skip_hw(new->flags)) {
 		err = mall_replace_hw_filter(tp, new, (unsigned long) new);
-		if (err) {
-			if (tc_skip_sw(flags))
-				goto err_replace_hw_filter;
-			else
-				err = 0;
-		}
+		if (err)
+			goto err_replace_hw_filter;
 	}
 
 	if (!tc_in_hw(new->flags))
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [patch net-next v2 06/20] net: sched: cls_u32: swap u32_remove_hw_knode and u32_remove_hw_hnode
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (4 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 05/20] net: sched: cls_matchall: call block callbacks for offload Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 07/20] net: sched: cls_u32: call block callbacks for offload Jiri Pirko
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Move u32_remove_hw_knode after u32_replace_hw_hnode so that the next
patch, which makes the replace helpers roll back through the remove
helpers, does not need forward declarations. No functional change.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 net/sched/cls_u32.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
index b6d4606..f407f13 100644
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -462,7 +462,7 @@ static int u32_delete_key(struct tcf_proto *tp, struct tc_u_knode *key)
 	return 0;
 }
 
-static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle)
+static void u32_clear_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_u32_offload cls_u32 = {};
@@ -471,8 +471,10 @@ static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle)
 		return;
 
 	tc_cls_common_offload_init(&cls_u32.common, tp);
-	cls_u32.command = TC_CLSU32_DELETE_KNODE;
-	cls_u32.knode.handle = handle;
+	cls_u32.command = TC_CLSU32_DELETE_HNODE;
+	cls_u32.hnode.divisor = h->divisor;
+	cls_u32.hnode.handle = h->handle;
+	cls_u32.hnode.prio = h->prio;
 
 	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
 }
@@ -500,7 +502,7 @@ static int u32_replace_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h,
 	return 0;
 }
 
-static void u32_clear_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h)
+static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_u32_offload cls_u32 = {};
@@ -509,10 +511,8 @@ static void u32_clear_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h)
 		return;
 
 	tc_cls_common_offload_init(&cls_u32.common, tp);
-	cls_u32.command = TC_CLSU32_DELETE_HNODE;
-	cls_u32.hnode.divisor = h->divisor;
-	cls_u32.hnode.handle = h->handle;
-	cls_u32.hnode.prio = h->prio;
+	cls_u32.command = TC_CLSU32_DELETE_KNODE;
+	cls_u32.knode.handle = handle;
 
 	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
 }
-- 
2.9.5


* [patch net-next v2 07/20] net: sched: cls_u32: call block callbacks for offload
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (5 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 06/20] net: sched: cls_u32: swap u32_remove_hw_knode and u32_remove_hw_hnode Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 08/20] net: sched: cls_bpf: " Jiri Pirko
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Use the newly introduced per-block callback infrastructure and call the
block callbacks alongside the existing per-netdev ndo_setup_tc.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 net/sched/cls_u32.c | 72 ++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 52 insertions(+), 20 deletions(-)

diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
index f407f13..24cc429 100644
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -465,39 +465,57 @@ static int u32_delete_key(struct tcf_proto *tp, struct tc_u_knode *key)
 static void u32_clear_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
 
-	if (!tc_should_offload(dev, 0))
-		return;
-
 	tc_cls_common_offload_init(&cls_u32.common, tp);
 	cls_u32.command = TC_CLSU32_DELETE_HNODE;
 	cls_u32.hnode.divisor = h->divisor;
 	cls_u32.hnode.handle = h->handle;
 	cls_u32.hnode.prio = h->prio;
 
-	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
+	if (tc_can_offload(dev))
+		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
+	tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, false);
 }
 
 static int u32_replace_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h,
 				u32 flags)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
+	bool skip_sw = tc_skip_sw(flags);
+	bool offloaded = false;
 	int err;
 
-	if (!tc_should_offload(dev, flags))
-		return tc_skip_sw(flags) ? -EINVAL : 0;
-
 	tc_cls_common_offload_init(&cls_u32.common, tp);
 	cls_u32.command = TC_CLSU32_NEW_HNODE;
 	cls_u32.hnode.divisor = h->divisor;
 	cls_u32.hnode.handle = h->handle;
 	cls_u32.hnode.prio = h->prio;
 
-	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
-	if (tc_skip_sw(flags))
+	if (tc_can_offload(dev)) {
+		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32,
+						    &cls_u32);
+		if (err) {
+			if (skip_sw)
+				return err;
+		} else {
+			offloaded = true;
+		}
+	}
+
+	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, skip_sw);
+	if (err < 0) {
+		u32_clear_hw_hnode(tp, h);
 		return err;
+	} else if (err > 0) {
+		offloaded = true;
+	}
+
+	if (skip_sw && !offloaded)
+		return -EINVAL;
 
 	return 0;
 }
@@ -505,28 +523,27 @@ static int u32_replace_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h,
 static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
 
-	if (!tc_should_offload(dev, 0))
-		return;
-
 	tc_cls_common_offload_init(&cls_u32.common, tp);
 	cls_u32.command = TC_CLSU32_DELETE_KNODE;
 	cls_u32.knode.handle = handle;
 
-	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
+	if (tc_can_offload(dev))
+		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
+	tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, false);
 }
 
 static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 				u32 flags)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
+	bool skip_sw = tc_skip_sw(flags);
 	int err;
 
-	if (!tc_should_offload(dev, flags))
-		return tc_skip_sw(flags) ? -EINVAL : 0;
-
 	tc_cls_common_offload_init(&cls_u32.common, tp);
 	cls_u32.command = TC_CLSU32_REPLACE_KNODE;
 	cls_u32.knode.handle = n->handle;
@@ -543,13 +560,28 @@ static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 	if (n->ht_down)
 		cls_u32.knode.link_handle = n->ht_down->handle;
 
-	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
 
-	if (!err)
-		n->flags |= TCA_CLS_FLAGS_IN_HW;
+	if (tc_can_offload(dev)) {
+		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32,
+						    &cls_u32);
+		if (err) {
+			if (skip_sw)
+				return err;
+		} else {
+			n->flags |= TCA_CLS_FLAGS_IN_HW;
+		}
+	}
 
-	if (tc_skip_sw(flags))
+	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, skip_sw);
+	if (err < 0) {
+		u32_remove_hw_knode(tp, n->handle);
 		return err;
+	} else if (err > 0) {
+		n->flags |= TCA_CLS_FLAGS_IN_HW;
+	}
+
+	if (skip_sw && !(n->flags & TCA_CLS_FLAGS_IN_HW))
+		return -EINVAL;
 
 	return 0;
 }
-- 
2.9.5


* [patch net-next v2 08/20] net: sched: cls_bpf: call block callbacks for offload
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (6 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 07/20] net: sched: cls_u32: call block callbacks for offload Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-11-01  0:44   ` Jakub Kicinski
  2017-10-19 13:50 ` [patch net-next v2 09/20] mlxsw: spectrum: Convert ndo_setup_tc offloads to block callbacks Jiri Pirko
                   ` (12 subsequent siblings)
  20 siblings, 1 reply; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Use the newly introduced per-block callback infrastructure and call the
block callbacks alongside the existing per-netdev ndo_setup_tc.

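The skip_sw/IN_HW gating below hinges on bitwise flag tests: TCA_CLS_FLAGS_* values are single bits inside a flags word, so membership must be checked with `&`; `flags && BIT` merely tests that both operands are nonzero. A small userspace illustration (the FL_* flag values here are illustrative, not the kernel's):

```c
#include <assert.h>

/* Illustrative bit flags (values chosen for this sketch). */
#define FL_SKIP_SW (1U << 1)
#define FL_IN_HW   (1U << 2)

/* Correct membership test: bitwise AND isolates the bit. */
static int flag_set(unsigned int flags, unsigned int bit)
{
	return (flags & bit) != 0;
}
```

With flags holding only FL_SKIP_SW, the logical expression `flags && FL_IN_HW` still evaluates to 1, because both operands are nonzero, even though the IN_HW bit is clear; the bitwise test gives the intended answer.
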
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 net/sched/cls_bpf.c | 40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index 6c6b21f..e379fdf 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -147,7 +147,10 @@ static bool cls_bpf_is_ebpf(const struct cls_bpf_prog *prog)
 static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 			       enum tc_clsbpf_command cmd)
 {
+	bool addorrep = cmd == TC_CLSBPF_ADD || cmd == TC_CLSBPF_REPLACE;
 	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tcf_block *block = tp->chain->block;
+	bool skip_sw = tc_skip_sw(prog->gen_flags);
 	struct tc_cls_bpf_offload cls_bpf = {};
 	int err;
 
@@ -159,17 +162,38 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 	cls_bpf.exts_integrated = prog->exts_integrated;
 	cls_bpf.gen_flags = prog->gen_flags;
 
-	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSBPF, &cls_bpf);
-	if (!err && (cmd == TC_CLSBPF_ADD || cmd == TC_CLSBPF_REPLACE))
-		prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
+	if (tc_can_offload(dev)) {
+		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSBPF,
+						    &cls_bpf);
+		if (addorrep) {
+			if (err) {
+				if (skip_sw)
+					return err;
+			} else {
+				prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
+			}
+		}
+	}
+
+	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSBPF, &cls_bpf, skip_sw);
+	if (addorrep) {
+		if (err < 0) {
+			cls_bpf_offload_cmd(tp, prog, TC_CLSBPF_DESTROY);
+			return err;
+		} else if (err > 0) {
+			prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
+		}
+	}
 
-	return err;
+	if (addorrep && skip_sw && !(prog->gen_flags & TCA_CLS_FLAGS_IN_HW))
+		return -EINVAL;
+
+	return 0;
 }
 
 static int cls_bpf_offload(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 			   struct cls_bpf_prog *oldprog)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct cls_bpf_prog *obj = prog;
 	enum tc_clsbpf_command cmd;
 	bool skip_sw;
@@ -179,7 +203,7 @@ static int cls_bpf_offload(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 		(oldprog && tc_skip_sw(oldprog->gen_flags));
 
 	if (oldprog && oldprog->offloaded) {
-		if (tc_should_offload(dev, prog->gen_flags)) {
+		if (!tc_skip_hw(prog->gen_flags)) {
 			cmd = TC_CLSBPF_REPLACE;
 		} else if (!tc_skip_sw(prog->gen_flags)) {
 			obj = oldprog;
@@ -188,14 +212,14 @@ static int cls_bpf_offload(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 			return -EINVAL;
 		}
 	} else {
-		if (!tc_should_offload(dev, prog->gen_flags))
+		if (tc_skip_hw(prog->gen_flags))
 			return skip_sw ? -EINVAL : 0;
 		cmd = TC_CLSBPF_ADD;
 	}
 
 	ret = cls_bpf_offload_cmd(tp, obj, cmd);
 	if (ret)
-		return skip_sw ? ret : 0;
+		return ret;
 
 	obj->offloaded = true;
 	if (oldprog)
-- 
2.9.5


* [patch net-next v2 09/20] mlxsw: spectrum: Convert ndo_setup_tc offloads to block callbacks
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (7 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 08/20] net: sched: cls_bpf: " Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 10/20] mlx5e: " Jiri Pirko
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for matchall and flower offloads to block
callbacks.

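The bind/unbind handling below keeps one registration per block, keyed by the callback function and a caller-chosen identity. A userspace model of the registry that tcf_block_cb_register()/tcf_block_cb_unregister() maintain (stand-in types; the kernel keeps these entries on a list in struct tcf_block):

```c
#include <assert.h>
#include <stddef.h>

typedef int (*cb_fn)(int type, void *type_data, void *cb_priv);

struct fake_block_cb {
	cb_fn cb;
	void *cb_ident;   /* identifies the registration for unregister */
	void *cb_priv;    /* passed back to the callback on each call */
	int used;
};

struct fake_block {
	struct fake_block_cb cbs[8];
};

/* Model of tcf_block_cb_register(): store (cb, ident, priv) on the block. */
static int model_cb_register(struct fake_block *b, cb_fn cb,
			     void *ident, void *priv)
{
	int i;

	for (i = 0; i < 8; i++) {
		if (!b->cbs[i].used) {
			b->cbs[i] = (struct fake_block_cb){ cb, ident, priv, 1 };
			return 0;
		}
	}
	return -12; /* -ENOMEM */
}

/* Model of tcf_block_cb_unregister(): drop the matching (cb, ident) entry. */
static void model_cb_unregister(struct fake_block *b, cb_fn cb, void *ident)
{
	int i;

	for (i = 0; i < 8; i++)
		if (b->cbs[i].used && b->cbs[i].cb == cb &&
		    b->cbs[i].cb_ident == ident)
			b->cbs[i].used = 0;
}

static int count_cbs(struct fake_block *b)
{
	int i, n = 0;

	for (i = 0; i < 8; i++)
		n += b->cbs[i].used;
	return n;
}

static int dummy_cb(int type, void *type_data, void *cb_priv) { return 0; }
```

In the mlxsw conversion, distinct ingress and egress callback functions are registered with the same port as both ident and priv, so unregistering matches on the (cb, ident) pair and removes only the binding being torn down.
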
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 82 +++++++++++++++++++-------
 1 file changed, 60 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index e1e11c7..08e321a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1697,17 +1697,9 @@ static void mlxsw_sp_port_del_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
 }
 
 static int mlxsw_sp_setup_tc_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
-					  struct tc_cls_matchall_offload *f)
+					  struct tc_cls_matchall_offload *f,
+					  bool ingress)
 {
-	bool ingress;
-
-	if (is_classid_clsact_ingress(f->common.classid))
-		ingress = true;
-	else if (is_classid_clsact_egress(f->common.classid))
-		ingress = false;
-	else
-		return -EOPNOTSUPP;
-
 	if (f->common.chain_index)
 		return -EOPNOTSUPP;
 
@@ -1725,17 +1717,9 @@ static int mlxsw_sp_setup_tc_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
 
 static int
 mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_port *mlxsw_sp_port,
-			     struct tc_cls_flower_offload *f)
+			     struct tc_cls_flower_offload *f,
+			     bool ingress)
 {
-	bool ingress;
-
-	if (is_classid_clsact_ingress(f->common.classid))
-		ingress = true;
-	else if (is_classid_clsact_egress(f->common.classid))
-		ingress = false;
-	else
-		return -EOPNOTSUPP;
-
 	switch (f->command) {
 	case TC_CLSFLOWER_REPLACE:
 		return mlxsw_sp_flower_replace(mlxsw_sp_port, ingress, f);
@@ -1749,6 +1733,59 @@ mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_port *mlxsw_sp_port,
 	}
 }
 
+static int mlxsw_sp_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+				      void *cb_priv, bool ingress)
+{
+	struct mlxsw_sp_port *mlxsw_sp_port = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSMATCHALL:
+		return mlxsw_sp_setup_tc_cls_matchall(mlxsw_sp_port, type_data,
+						      ingress);
+	case TC_SETUP_CLSFLOWER:
+		return mlxsw_sp_setup_tc_cls_flower(mlxsw_sp_port, type_data,
+						    ingress);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int mlxsw_sp_setup_tc_block_cb_ig(enum tc_setup_type type,
+					 void *type_data, void *cb_priv)
+{
+	return mlxsw_sp_setup_tc_block_cb(type, type_data, cb_priv, true);
+}
+
+static int mlxsw_sp_setup_tc_block_cb_eg(enum tc_setup_type type,
+					 void *type_data, void *cb_priv)
+{
+	return mlxsw_sp_setup_tc_block_cb(type, type_data, cb_priv, false);
+}
+
+static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
+				   struct tc_block_offload *f)
+{
+	tc_setup_cb_t *cb;
+
+	if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		cb = mlxsw_sp_setup_tc_block_cb_ig;
+	else if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
+		cb = mlxsw_sp_setup_tc_block_cb_eg;
+	else
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, cb, mlxsw_sp_port,
+					     mlxsw_sp_port);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static int mlxsw_sp_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			     void *type_data)
 {
@@ -1756,9 +1793,10 @@ static int mlxsw_sp_setup_tc(struct net_device *dev, enum tc_setup_type type,
 
 	switch (type) {
 	case TC_SETUP_CLSMATCHALL:
-		return mlxsw_sp_setup_tc_cls_matchall(mlxsw_sp_port, type_data);
 	case TC_SETUP_CLSFLOWER:
-		return mlxsw_sp_setup_tc_cls_flower(mlxsw_sp_port, type_data);
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return mlxsw_sp_setup_tc_block(mlxsw_sp_port, type_data);
 	default:
 		return -EOPNOTSUPP;
 	}
-- 
2.9.5


* [patch net-next v2 10/20] mlx5e: Convert ndo_setup_tc offloads to block callbacks
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (8 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 09/20] mlxsw: spectrum: Convert ndo_setup_tc offloads to block callbacks Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 11/20] bnxt: " Jiri Pirko
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h      |  4 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 45 ++++++++++++++++++++---
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c  | 24 +++++-------
 3 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index ca8845b..e613ce0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -1056,8 +1056,8 @@ int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv,
 int mlx5e_ethtool_flash_device(struct mlx5e_priv *priv,
 			       struct ethtool_flash *flash);
 
-int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type,
-		   void *type_data);
+int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+			    void *cb_priv);
 
 /* mlx5e generic netdev management API */
 struct net_device*
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 3a1969a..e810868 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3083,13 +3083,10 @@ static int mlx5e_setup_tc_mqprio(struct net_device *netdev,
 }
 
 #ifdef CONFIG_MLX5_ESWITCH
-static int mlx5e_setup_tc_cls_flower(struct net_device *dev,
+static int mlx5e_setup_tc_cls_flower(struct mlx5e_priv *priv,
 				     struct tc_cls_flower_offload *cls_flower)
 {
-	struct mlx5e_priv *priv = netdev_priv(dev);
-
-	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
-	    cls_flower->common.chain_index)
+	if (cls_flower->common.chain_index)
 		return -EOPNOTSUPP;
 
 	switch (cls_flower->command) {
@@ -3103,6 +3100,40 @@ static int mlx5e_setup_tc_cls_flower(struct net_device *dev,
 		return -EOPNOTSUPP;
 	}
 }
+
+int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+			    void *cb_priv)
+{
+	struct mlx5e_priv *priv = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSFLOWER:
+		return mlx5e_setup_tc_cls_flower(priv, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int mlx5e_setup_tc_block(struct net_device *dev,
+				struct tc_block_offload *f)
+{
+	struct mlx5e_priv *priv = netdev_priv(dev);
+
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, mlx5e_setup_tc_block_cb,
+					     priv, priv);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, mlx5e_setup_tc_block_cb,
+					priv);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
 #endif
 
 int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type,
@@ -3111,7 +3142,9 @@ int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	switch (type) {
 #ifdef CONFIG_MLX5_ESWITCH
 	case TC_SETUP_CLSFLOWER:
-		return mlx5e_setup_tc_cls_flower(dev, type_data);
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return mlx5e_setup_tc_block(dev, type_data);
 #endif
 	case TC_SETUP_MQPRIO:
 		return mlx5e_setup_tc_mqprio(dev, type_data);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 765fc74..4edd92d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -691,14 +691,6 @@ static int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	}
 }
 
-static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data,
-				 void *cb_priv)
-{
-	struct net_device *dev = cb_priv;
-
-	return mlx5e_setup_tc(dev, type, type_data);
-}
-
 bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv)
 {
 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
@@ -987,6 +979,7 @@ mlx5e_vport_rep_load(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep)
 {
 	struct mlx5e_rep_priv *rpriv;
 	struct net_device *netdev;
+	struct mlx5e_priv *upriv;
 	int err;
 
 	rpriv = kzalloc(sizeof(*rpriv), GFP_KERNEL);
@@ -1018,8 +1011,9 @@ mlx5e_vport_rep_load(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep)
 		goto err_detach_netdev;
 	}
 
-	err = tc_setup_cb_egdev_register(netdev, mlx5e_rep_setup_tc_cb,
-					 mlx5_eswitch_get_uplink_netdev(esw));
+	upriv = netdev_priv(mlx5_eswitch_get_uplink_netdev(esw));
+	err = tc_setup_cb_egdev_register(netdev, mlx5e_setup_tc_block_cb,
+					 upriv);
 	if (err)
 		goto err_neigh_cleanup;
 
@@ -1033,8 +1027,8 @@ mlx5e_vport_rep_load(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep)
 	return 0;
 
 err_egdev_cleanup:
-	tc_setup_cb_egdev_unregister(netdev, mlx5e_rep_setup_tc_cb,
-				     mlx5_eswitch_get_uplink_netdev(esw));
+	tc_setup_cb_egdev_unregister(netdev, mlx5e_setup_tc_block_cb,
+				     upriv);
 
 err_neigh_cleanup:
 	mlx5e_rep_neigh_cleanup(rpriv);
@@ -1055,10 +1049,12 @@ mlx5e_vport_rep_unload(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep)
 	struct mlx5e_priv *priv = netdev_priv(netdev);
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
 	void *ppriv = priv->ppriv;
+	struct mlx5e_priv *upriv;
 
 	unregister_netdev(rep->netdev);
-	tc_setup_cb_egdev_unregister(netdev, mlx5e_rep_setup_tc_cb,
-				     mlx5_eswitch_get_uplink_netdev(esw));
+	upriv = netdev_priv(mlx5_eswitch_get_uplink_netdev(esw));
+	tc_setup_cb_egdev_unregister(netdev, mlx5e_setup_tc_block_cb,
+				     upriv);
 	mlx5e_rep_neigh_cleanup(rpriv);
 	mlx5e_detach_netdev(priv);
 	mlx5e_destroy_netdev(priv);
-- 
2.9.5


* [patch net-next v2 11/20] bnxt: Convert ndo_setup_tc offloads to block callbacks
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (9 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 10/20] mlx5e: " Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 12/20] cxgb4: " Jiri Pirko
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 37 +++++++++++++++++++----
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c  |  3 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c | 43 +++++++++++++++++++++++++--
 3 files changed, 73 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 5ba4993..4dde2b8 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7295,15 +7295,40 @@ int bnxt_setup_mq_tc(struct net_device *dev, u8 tc)
 	return 0;
 }
 
-static int bnxt_setup_flower(struct net_device *dev,
-			     struct tc_cls_flower_offload *cls_flower)
+static int bnxt_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+				  void *cb_priv)
 {
-	struct bnxt *bp = netdev_priv(dev);
+	struct bnxt *bp = cb_priv;
 
 	if (BNXT_VF(bp))
 		return -EOPNOTSUPP;
 
-	return bnxt_tc_setup_flower(bp, bp->pf.fw_fid, cls_flower);
+	switch (type) {
+	case TC_SETUP_CLSFLOWER:
+		return bnxt_tc_setup_flower(bp, bp->pf.fw_fid, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int bnxt_setup_tc_block(struct net_device *dev,
+			       struct tc_block_offload *f)
+{
+	struct bnxt *bp = netdev_priv(dev);
+
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, bnxt_setup_tc_block_cb,
+					     bp, bp);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, bnxt_setup_tc_block_cb, bp);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
 }
 
 static int bnxt_setup_tc(struct net_device *dev, enum tc_setup_type type,
@@ -7311,7 +7336,9 @@ static int bnxt_setup_tc(struct net_device *dev, enum tc_setup_type type,
 {
 	switch (type) {
 	case TC_SETUP_CLSFLOWER:
-		return bnxt_setup_flower(dev, type_data);
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return bnxt_setup_tc_block(dev, type_data);
 	case TC_SETUP_MQPRIO: {
 		struct tc_mqprio_qopt *mqprio = type_data;
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 4730c04..a9cb653 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -748,8 +748,7 @@ int bnxt_tc_setup_flower(struct bnxt *bp, u16 src_fid,
 {
 	int rc = 0;
 
-	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
-	    cls_flower->common.chain_index)
+	if (cls_flower->common.chain_index)
 		return -EOPNOTSUPP;
 
 	switch (cls_flower->command) {
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
index e75db04..cc278d7 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
@@ -115,10 +115,11 @@ bnxt_vf_rep_get_stats64(struct net_device *dev,
 	stats->tx_bytes = vf_rep->tx_stats.bytes;
 }
 
-static int bnxt_vf_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
-				void *type_data)
+static int bnxt_vf_rep_setup_tc_block_cb(enum tc_setup_type type,
+					 void *type_data,
+					 void *cb_priv)
 {
-	struct bnxt_vf_rep *vf_rep = netdev_priv(dev);
+	struct bnxt_vf_rep *vf_rep = cb_priv;
 	struct bnxt *bp = vf_rep->bp;
 	int vf_fid = bp->pf.vf[vf_rep->vf_idx].fw_fid;
 
@@ -130,6 +131,42 @@ static int bnxt_vf_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	}
 }
 
+static int bnxt_vf_rep_setup_tc_block(struct net_device *dev,
+				      struct tc_block_offload *f)
+{
+	struct bnxt_vf_rep *vf_rep = netdev_priv(dev);
+
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block,
+					     bnxt_vf_rep_setup_tc_block_cb,
+					     vf_rep, vf_rep);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block,
+					bnxt_vf_rep_setup_tc_block_cb, vf_rep);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int bnxt_vf_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
+				void *type_data)
+{
+	switch (type) {
+	case TC_SETUP_CLSFLOWER:
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return bnxt_vf_rep_setup_tc_block(dev, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 struct net_device *bnxt_get_vf_rep(struct bnxt *bp, u16 cfa_code)
 {
 	u16 vf_idx;
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 41+ messages in thread
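
The bind/unbind pattern that every converted driver follows can be sketched as a small userspace model. All names below are simplified stand-ins, not the kernel API: the kernel's tcf_block_cb_register() additionally takes a cb_ident cookie (the drivers above pass their priv pointer for both), and unregister matches on that cookie.

```c
#include <assert.h>
#include <stddef.h>

enum model_block_command { MODEL_BLOCK_BIND, MODEL_BLOCK_UNBIND };

typedef int (*model_cb_t)(int type, void *type_data, void *cb_priv);

#define MODEL_MAX_CBS 8

struct model_block {
	model_cb_t cb[MODEL_MAX_CBS];
	void *cb_ident[MODEL_MAX_CBS];	/* identifies one registration */
	void *cb_priv[MODEL_MAX_CBS];	/* passed back to the callback */
	int n_cbs;
};

/* Models tcf_block_cb_register(): remember one callback per binding. */
int model_block_cb_register(struct model_block *block, model_cb_t cb,
			    void *cb_ident, void *cb_priv)
{
	if (block->n_cbs == MODEL_MAX_CBS)
		return -1;
	block->cb[block->n_cbs] = cb;
	block->cb_ident[block->n_cbs] = cb_ident;
	block->cb_priv[block->n_cbs] = cb_priv;
	block->n_cbs++;
	return 0;
}

/* Models tcf_block_cb_unregister(): drop the entry matching cb/ident. */
void model_block_cb_unregister(struct model_block *block, model_cb_t cb,
			       void *cb_ident)
{
	int i;

	for (i = 0; i < block->n_cbs; i++) {
		if (block->cb[i] == cb && block->cb_ident[i] == cb_ident) {
			block->n_cbs--;
			block->cb[i] = block->cb[block->n_cbs];
			block->cb_ident[i] = block->cb_ident[block->n_cbs];
			block->cb_priv[i] = block->cb_priv[block->n_cbs];
			return;
		}
	}
}

/* Example per-block callback; accepts only one classifier type. */
int model_dummy_cb(int type, void *type_data, void *cb_priv)
{
	(void)type_data;
	(void)cb_priv;
	return type == 1 ? 0 : -95; /* -EOPNOTSUPP stand-in */
}

/* The shape of each driver's TC_SETUP_BLOCK handler in the patches. */
int model_driver_setup_block(struct model_block *block,
			     enum model_block_command command,
			     model_cb_t cb, void *priv)
{
	switch (command) {
	case MODEL_BLOCK_BIND:
		return model_block_cb_register(block, cb, priv, priv);
	case MODEL_BLOCK_UNBIND:
		model_block_cb_unregister(block, cb, priv);
		return 0;
	default:
		return -95;
	}
}
```

A legacy driver registers one such callback per binding, which preserves the old per-netdev behaviour; a driver supporting shared blocks would register a single callback per block instead.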

* [patch net-next v2 12/20] cxgb4: Convert ndo_setup_tc offloads to block callbacks
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (10 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 11/20] bnxt: " Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 13/20] ixgbe: " Jiri Pirko
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower and u32 offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 45 +++++++++++++++++++++----
 1 file changed, 39 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 8d97ae6..ca0b96b 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -2884,8 +2884,7 @@ static int cxgb_set_tx_maxrate(struct net_device *dev, int index, u32 rate)
 static int cxgb_setup_tc_flower(struct net_device *dev,
 				struct tc_cls_flower_offload *cls_flower)
 {
-	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
-	    cls_flower->common.chain_index)
+	if (cls_flower->common.chain_index)
 		return -EOPNOTSUPP;
 
 	switch (cls_flower->command) {
@@ -2903,8 +2902,7 @@ static int cxgb_setup_tc_flower(struct net_device *dev,
 static int cxgb_setup_tc_cls_u32(struct net_device *dev,
 				 struct tc_cls_u32_offload *cls_u32)
 {
-	if (!is_classid_clsact_ingress(cls_u32->common.classid) ||
-	    cls_u32->common.chain_index)
+	if (cls_u32->common.chain_index)
 		return -EOPNOTSUPP;
 
 	switch (cls_u32->command) {
@@ -2918,9 +2916,10 @@ static int cxgb_setup_tc_cls_u32(struct net_device *dev,
 	}
 }
 
-static int cxgb_setup_tc(struct net_device *dev, enum tc_setup_type type,
-			 void *type_data)
+static int cxgb_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+				  void *cb_priv)
 {
+	struct net_device *dev = cb_priv;
 	struct port_info *pi = netdev2pinfo(dev);
 	struct adapter *adap = netdev2adap(dev);
 
@@ -2941,6 +2940,40 @@ static int cxgb_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	}
 }
 
+static int cxgb_setup_tc_block(struct net_device *dev,
+			       struct tc_block_offload *f)
+{
+	struct port_info *pi = netdev2pinfo(dev);
+
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, cxgb_setup_tc_block_cb,
+					     pi, dev);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, cxgb_setup_tc_block_cb, pi);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int cxgb_setup_tc(struct net_device *dev, enum tc_setup_type type,
+			 void *type_data)
+{
+	switch (type) {
+	case TC_SETUP_CLSU32:
+	case TC_SETUP_CLSFLOWER:
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return cxgb_setup_tc_block(dev, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static netdev_features_t cxgb_fix_features(struct net_device *dev,
 					   netdev_features_t features)
 {
-- 
2.9.5


* [patch net-next v2 13/20] ixgbe: Convert ndo_setup_tc offloads to block callbacks
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for u32 offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 45 +++++++++++++++++++++++----
 1 file changed, 39 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 3e83edd..38e01e0 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -9365,13 +9365,10 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
 	return err;
 }
 
-static int ixgbe_setup_tc_cls_u32(struct net_device *dev,
+static int ixgbe_setup_tc_cls_u32(struct ixgbe_adapter *adapter,
 				  struct tc_cls_u32_offload *cls_u32)
 {
-	struct ixgbe_adapter *adapter = netdev_priv(dev);
-
-	if (!is_classid_clsact_ingress(cls_u32->common.classid) ||
-	    cls_u32->common.chain_index)
+	if (cls_u32->common.chain_index)
 		return -EOPNOTSUPP;
 
 	switch (cls_u32->command) {
@@ -9390,6 +9387,40 @@ static int ixgbe_setup_tc_cls_u32(struct net_device *dev,
 	}
 }
 
+static int ixgbe_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+				   void *cb_priv)
+{
+	struct ixgbe_adapter *adapter = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSU32:
+		return ixgbe_setup_tc_cls_u32(adapter, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int ixgbe_setup_tc_block(struct net_device *dev,
+				struct tc_block_offload *f)
+{
+	struct ixgbe_adapter *adapter = netdev_priv(dev);
+
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, ixgbe_setup_tc_block_cb,
+					     adapter, adapter);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, ixgbe_setup_tc_block_cb,
+					adapter);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static int ixgbe_setup_tc_mqprio(struct net_device *dev,
 				 struct tc_mqprio_qopt *mqprio)
 {
@@ -9402,7 +9433,9 @@ static int __ixgbe_setup_tc(struct net_device *dev, enum tc_setup_type type,
 {
 	switch (type) {
 	case TC_SETUP_CLSU32:
-		return ixgbe_setup_tc_cls_u32(dev, type_data);
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return ixgbe_setup_tc_block(dev, type_data);
 	case TC_SETUP_MQPRIO:
 		return ixgbe_setup_tc_mqprio(dev, type_data);
 	default:
-- 
2.9.5


* [patch net-next v2 14/20] mlx5e_rep: Convert ndo_setup_tc offloads to block callbacks
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 44 ++++++++++++++++++++----
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 4edd92d..f59d81a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -659,13 +659,10 @@ static int mlx5e_rep_get_phys_port_name(struct net_device *dev,
 }
 
 static int
-mlx5e_rep_setup_tc_cls_flower(struct net_device *dev,
+mlx5e_rep_setup_tc_cls_flower(struct mlx5e_priv *priv,
 			      struct tc_cls_flower_offload *cls_flower)
 {
-	struct mlx5e_priv *priv = netdev_priv(dev);
-
-	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
-	    cls_flower->common.chain_index)
+	if (cls_flower->common.chain_index)
 		return -EOPNOTSUPP;
 
 	switch (cls_flower->command) {
@@ -680,12 +677,47 @@ mlx5e_rep_setup_tc_cls_flower(struct net_device *dev,
 	}
 }
 
+static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data,
+				 void *cb_priv)
+{
+	struct mlx5e_priv *priv = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSFLOWER:
+		return mlx5e_rep_setup_tc_cls_flower(priv, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int mlx5e_rep_setup_tc_block(struct net_device *dev,
+				    struct tc_block_offload *f)
+{
+	struct mlx5e_priv *priv = netdev_priv(dev);
+
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, mlx5e_rep_setup_tc_cb,
+					     priv, priv);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, mlx5e_rep_setup_tc_cb, priv);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			      void *type_data)
 {
 	switch (type) {
 	case TC_SETUP_CLSFLOWER:
-		return mlx5e_rep_setup_tc_cls_flower(dev, type_data);
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return mlx5e_rep_setup_tc_block(dev, type_data);
 	default:
 		return -EOPNOTSUPP;
 	}
-- 
2.9.5


* [patch net-next v2 15/20] nfp: flower: Convert ndo_setup_tc offloads to block callbacks
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 .../net/ethernet/netronome/nfp/flower/offload.c    | 56 ++++++++++++++++++----
 1 file changed, 48 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 6f239c2..f8523df 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -449,6 +449,10 @@ static int
 nfp_flower_repr_offload(struct nfp_app *app, struct net_device *netdev,
 			struct tc_cls_flower_offload *flower)
 {
+	if (!eth_proto_is_802_3(flower->common.protocol) ||
+	    flower->common.chain_index)
+		return -EOPNOTSUPP;
+
 	switch (flower->command) {
 	case TC_CLSFLOWER_REPLACE:
 		return nfp_flower_add_offload(app, netdev, flower);
@@ -461,16 +465,52 @@ nfp_flower_repr_offload(struct nfp_app *app, struct net_device *netdev,
 	return -EOPNOTSUPP;
 }
 
-int nfp_flower_setup_tc(struct nfp_app *app, struct net_device *netdev,
-			enum tc_setup_type type, void *type_data)
+static int nfp_flower_setup_tc_block_cb(enum tc_setup_type type,
+					void *type_data, void *cb_priv)
+{
+	struct nfp_net *nn = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSFLOWER:
+		return nfp_flower_repr_offload(nn->app, nn->port->netdev,
+					       type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int nfp_flower_setup_tc_block(struct net_device *netdev,
+				     struct tc_block_offload *f)
 {
-	struct tc_cls_flower_offload *cls_flower = type_data;
+	struct nfp_net *nn = netdev_priv(netdev);
 
-	if (type != TC_SETUP_CLSFLOWER ||
-	    !is_classid_clsact_ingress(cls_flower->common.classid) ||
-	    !eth_proto_is_802_3(cls_flower->common.protocol) ||
-	    cls_flower->common.chain_index)
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
 		return -EOPNOTSUPP;
 
-	return nfp_flower_repr_offload(app, netdev, cls_flower);
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block,
+					     nfp_flower_setup_tc_block_cb,
+					     nn, nn);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block,
+					nfp_flower_setup_tc_block_cb,
+					nn);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+int nfp_flower_setup_tc(struct nfp_app *app, struct net_device *netdev,
+			enum tc_setup_type type, void *type_data)
+{
+	switch (type) {
+	case TC_SETUP_CLSFLOWER:
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return nfp_flower_setup_tc_block(netdev, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
 }
-- 
2.9.5


* [patch net-next v2 16/20] nfp: bpf: Convert ndo_setup_tc offloads to block callbacks
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for bpf offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/netronome/nfp/bpf/main.c | 54 ++++++++++++++++++++++-----
 1 file changed, 45 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
index 6e74f8d..64f97b3 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
@@ -114,22 +114,58 @@ static void nfp_bpf_vnic_free(struct nfp_app *app, struct nfp_net *nn)
 	kfree(nn->app_priv);
 }
 
-static int nfp_bpf_setup_tc(struct nfp_app *app, struct net_device *netdev,
-			    enum tc_setup_type type, void *type_data)
+static int nfp_bpf_setup_tc_block_cb(enum tc_setup_type type,
+				     void *type_data, void *cb_priv)
 {
 	struct tc_cls_bpf_offload *cls_bpf = type_data;
+	struct nfp_net *nn = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSBPF:
+		if (!nfp_net_ebpf_capable(nn) ||
+		    cls_bpf->common.protocol != htons(ETH_P_ALL) ||
+		    cls_bpf->common.chain_index)
+			return -EOPNOTSUPP;
+		return nfp_net_bpf_offload(nn, cls_bpf);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int nfp_bpf_setup_tc_block(struct net_device *netdev,
+				  struct tc_block_offload *f)
+{
 	struct nfp_net *nn = netdev_priv(netdev);
 
-	if (type != TC_SETUP_CLSBPF || !nfp_net_ebpf_capable(nn) ||
-	    !is_classid_clsact_ingress(cls_bpf->common.classid) ||
-	    cls_bpf->common.protocol != htons(ETH_P_ALL) ||
-	    cls_bpf->common.chain_index)
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
 		return -EOPNOTSUPP;
 
-	if (nn->dp.bpf_offload_xdp)
-		return -EBUSY;
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block,
+					     nfp_bpf_setup_tc_block_cb,
+					     nn, nn);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block,
+					nfp_bpf_setup_tc_block_cb,
+					nn);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
 
-	return nfp_net_bpf_offload(nn, cls_bpf);
+static int nfp_bpf_setup_tc(struct nfp_app *app, struct net_device *netdev,
+			    enum tc_setup_type type, void *type_data)
+{
+	switch (type) {
+	case TC_SETUP_CLSBPF:
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return nfp_bpf_setup_tc_block(netdev, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
 }
 
 static bool nfp_bpf_tc_busy(struct nfp_app *app, struct nfp_net *nn)
-- 
2.9.5


* [patch net-next v2 17/20] dsa: Convert ndo_setup_tc offloads to block callbacks
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)

From: Jiri Pirko <jiri@mellanox.com>

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for matchall offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 net/dsa/slave.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 53 insertions(+), 11 deletions(-)

diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 6906de0..8014291 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -777,17 +777,9 @@ static void dsa_slave_del_cls_matchall(struct net_device *dev,
 }
 
 static int dsa_slave_setup_tc_cls_matchall(struct net_device *dev,
-					   struct tc_cls_matchall_offload *cls)
+					   struct tc_cls_matchall_offload *cls,
+					   bool ingress)
 {
-	bool ingress;
-
-	if (is_classid_clsact_ingress(cls->common.classid))
-		ingress = true;
-	else if (is_classid_clsact_egress(cls->common.classid))
-		ingress = false;
-	else
-		return -EOPNOTSUPP;
-
 	if (cls->common.chain_index)
 		return -EOPNOTSUPP;
 
@@ -802,12 +794,62 @@ static int dsa_slave_setup_tc_cls_matchall(struct net_device *dev,
 	}
 }
 
+static int dsa_slave_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+				       void *cb_priv, bool ingress)
+{
+	struct net_device *dev = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSMATCHALL:
+		return dsa_slave_setup_tc_cls_matchall(dev, type_data, ingress);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int dsa_slave_setup_tc_block_cb_ig(enum tc_setup_type type,
+					  void *type_data, void *cb_priv)
+{
+	return dsa_slave_setup_tc_block_cb(type, type_data, cb_priv, true);
+}
+
+static int dsa_slave_setup_tc_block_cb_eg(enum tc_setup_type type,
+					  void *type_data, void *cb_priv)
+{
+	return dsa_slave_setup_tc_block_cb(type, type_data, cb_priv, false);
+}
+
+static int dsa_slave_setup_tc_block(struct net_device *dev,
+				    struct tc_block_offload *f)
+{
+	tc_setup_cb_t *cb;
+
+	if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		cb = dsa_slave_setup_tc_block_cb_ig;
+	else if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
+		cb = dsa_slave_setup_tc_block_cb_eg;
+	else
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, cb, dev, dev);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, cb, dev);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static int dsa_slave_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			      void *type_data)
 {
 	switch (type) {
 	case TC_SETUP_CLSMATCHALL:
-		return dsa_slave_setup_tc_cls_matchall(dev, type_data);
+		return 0; /* will be removed after conversion from ndo */
+	case TC_SETUP_BLOCK:
+		return dsa_slave_setup_tc_block(dev, type_data);
 	default:
 		return -EOPNOTSUPP;
 	}
-- 
2.9.5

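
The dsa conversion differs from the single-direction drivers earlier in the series: a clsact block can bind as either ingress or egress, and the common block-callback signature carries no direction argument, so the patch registers one of two thin wrappers per binding. A minimal sketch of that wrapper trick (illustrative names and type codes, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Shared handler; the direction is a plain argument, as in
 * dsa_slave_setup_tc_block_cb() in the patch above. */
int model_matchall_cb(int type, void *type_data, void *cb_priv,
		      bool ingress)
{
	(void)type_data;
	(void)cb_priv;
	if (type != 2)		/* stand-in for TC_SETUP_CLSMATCHALL */
		return -95;	/* stand-in for -EOPNOTSUPP */
	return ingress ? 1 : 2;	/* report which direction handled it */
}

/* Thin wrappers with the common callback signature.  The block layer
 * sees two distinct function pointers, so the ingress and egress
 * bindings can be registered and unregistered independently even
 * though they share all of their logic. */
int model_cb_ig(int type, void *type_data, void *cb_priv)
{
	return model_matchall_cb(type, type_data, cb_priv, true);
}

int model_cb_eg(int type, void *type_data, void *cb_priv)
{
	return model_matchall_cb(type, type_data, cb_priv, false);
}
```

The binder_type check in dsa_slave_setup_tc_block() simply selects which of the two wrappers gets registered for the binding.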

* [patch net-next v2 18/20] net: sched: avoid ndo_setup_tc calls for TC_SETUP_CLS*
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)

From: Jiri Pirko <jiri@mellanox.com>

All drivers are now converted to use block callbacks for TC_SETUP_CLS*,
so it is safe to remove the ndo_setup_tc calls for these types from the
cls_* classifiers.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
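
The removal is safe because each cls_* module already reaches the drivers through tc_setup_cb_call(), which iterates every callback registered on the block and applies the skip_sw policy. A rough userspace model of that iteration (simplified; the real helper also deals with exts and block state, so treat this only as a sketch of the control flow):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef int (*model_cb_t)(int type, void *type_data, void *cb_priv);

struct model_block {
	model_cb_t cb[8];
	void *cb_priv[8];
	int n_cbs;
};

/* Walk every callback registered on the block.  With err_stop set
 * (the skip_sw case) the first hardware failure aborts; otherwise
 * failures are tolerated and the number of successful offloads is
 * returned, so the caller can set TCA_CLS_FLAGS_IN_HW when it is
 * positive. */
int model_setup_cb_call(struct model_block *block, int type,
			void *type_data, bool err_stop)
{
	int ok_count = 0;
	int i, err;

	for (i = 0; i < block->n_cbs; i++) {
		err = block->cb[i](type, type_data, block->cb_priv[i]);
		if (err) {
			if (err_stop)
				return err;
		} else {
			ok_count++;
		}
	}
	return ok_count;
}

/* Two example registered callbacks: one device offloads, one refuses. */
int model_cb_ok(int type, void *type_data, void *cb_priv)
{
	(void)type; (void)type_data; (void)cb_priv;
	return 0;
}

int model_cb_fail(int type, void *type_data, void *cb_priv)
{
	(void)type; (void)type_data; (void)cb_priv;
	return -95; /* -EOPNOTSUPP stand-in */
}
```

With skip_sw clear, a refusal from one device does not prevent other callbacks from offloading; with skip_sw set, any failure propagates back as an error, matching the behaviour the removed ndo_setup_tc branches implemented per device.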
 drivers/net/ethernet/broadcom/bnxt/bnxt.c          |  2 --
 drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c      |  2 --
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c    |  3 ---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c      |  2 --
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |  2 --
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   |  2 --
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c     |  3 ---
 drivers/net/ethernet/netronome/nfp/bpf/main.c      |  2 --
 .../net/ethernet/netronome/nfp/flower/offload.c    |  2 --
 net/dsa/slave.c                                    |  2 --
 net/sched/cls_bpf.c                                | 14 ----------
 net/sched/cls_flower.c                             | 20 --------------
 net/sched/cls_matchall.c                           | 16 -----------
 net/sched/cls_u32.c                                | 31 ----------------------
 14 files changed, 103 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 4dde2b8..22a94b1 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7335,8 +7335,6 @@ static int bnxt_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			 void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSFLOWER:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return bnxt_setup_tc_block(dev, type_data);
 	case TC_SETUP_MQPRIO: {
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
index cc278d7..6dff5aa 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
@@ -158,8 +158,6 @@ static int bnxt_vf_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
 				void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSFLOWER:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return bnxt_vf_rep_setup_tc_block(dev, type_data);
 	default:
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index ca0b96b..6edbf54 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -2964,9 +2964,6 @@ static int cxgb_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			 void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSU32:
-	case TC_SETUP_CLSFLOWER:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return cxgb_setup_tc_block(dev, type_data);
 	default:
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 38e01e0..7f503d3 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -9432,8 +9432,6 @@ static int __ixgbe_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			    void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSU32:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return ixgbe_setup_tc_block(dev, type_data);
 	case TC_SETUP_MQPRIO:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index e810868..560b208 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3141,8 +3141,6 @@ int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type,
 {
 	switch (type) {
 #ifdef CONFIG_MLX5_ESWITCH
-	case TC_SETUP_CLSFLOWER:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return mlx5e_setup_tc_block(dev, type_data);
 #endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index f59d81a..0edb706 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -714,8 +714,6 @@ static int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			      void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSFLOWER:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return mlx5e_rep_setup_tc_block(dev, type_data);
 	default:
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 08e321a..9fe51a6 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1792,9 +1792,6 @@ static int mlxsw_sp_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
 
 	switch (type) {
-	case TC_SETUP_CLSMATCHALL:
-	case TC_SETUP_CLSFLOWER:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return mlxsw_sp_setup_tc_block(mlxsw_sp_port, type_data);
 	default:
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
index 64f97b3..fa0ac90 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
@@ -159,8 +159,6 @@ static int nfp_bpf_setup_tc(struct nfp_app *app, struct net_device *netdev,
 			    enum tc_setup_type type, void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSBPF:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return nfp_bpf_setup_tc_block(netdev, type_data);
 	default:
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index f8523df..c47753f 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -506,8 +506,6 @@ int nfp_flower_setup_tc(struct nfp_app *app, struct net_device *netdev,
 			enum tc_setup_type type, void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSFLOWER:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return nfp_flower_setup_tc_block(netdev, type_data);
 	default:
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 8014291..d0ae701 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -846,8 +846,6 @@ static int dsa_slave_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			      void *type_data)
 {
 	switch (type) {
-	case TC_SETUP_CLSMATCHALL:
-		return 0; /* will be removed after conversion from ndo */
 	case TC_SETUP_BLOCK:
 		return dsa_slave_setup_tc_block(dev, type_data);
 	default:
diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index e379fdf..0f8b510 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -148,7 +148,6 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 			       enum tc_clsbpf_command cmd)
 {
 	bool addorrep = cmd == TC_CLSBPF_ADD || cmd == TC_CLSBPF_REPLACE;
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tcf_block *block = tp->chain->block;
 	bool skip_sw = tc_skip_sw(prog->gen_flags);
 	struct tc_cls_bpf_offload cls_bpf = {};
@@ -162,19 +161,6 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 	cls_bpf.exts_integrated = prog->exts_integrated;
 	cls_bpf.gen_flags = prog->gen_flags;
 
-	if (tc_can_offload(dev)) {
-		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSBPF,
-						    &cls_bpf);
-		if (addorrep) {
-			if (err) {
-				if (skip_sw)
-					return err;
-			} else {
-				prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
-			}
-		}
-	}
-
 	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSBPF, &cls_bpf, skip_sw);
 	if (addorrep) {
 		if (err < 0) {
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 76b4e0a..16f58ab 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -200,16 +200,12 @@ static void fl_destroy_filter(struct rcu_head *head)
 static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f)
 {
 	struct tc_cls_flower_offload cls_flower = {};
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tcf_block *block = tp->chain->block;
 
 	tc_cls_common_offload_init(&cls_flower.common, tp);
 	cls_flower.command = TC_CLSFLOWER_DESTROY;
 	cls_flower.cookie = (unsigned long) f;
 
-	if (tc_can_offload(dev))
-		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSFLOWER,
-					      &cls_flower);
 	tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			 &cls_flower, false);
 }
@@ -219,7 +215,6 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 				struct fl_flow_key *mask,
 				struct cls_fl_filter *f)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_flower_offload cls_flower = {};
 	struct tcf_block *block = tp->chain->block;
 	bool skip_sw = tc_skip_sw(f->flags);
@@ -233,17 +228,6 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 	cls_flower.key = &f->mkey;
 	cls_flower.exts = &f->exts;
 
-	if (tc_can_offload(dev)) {
-		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSFLOWER,
-						    &cls_flower);
-		if (err) {
-			if (skip_sw)
-				return err;
-		} else {
-			f->flags |= TCA_CLS_FLAGS_IN_HW;
-		}
-	}
-
 	err = tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			       &cls_flower, skip_sw);
 	if (err < 0) {
@@ -262,7 +246,6 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
 {
 	struct tc_cls_flower_offload cls_flower = {};
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tcf_block *block = tp->chain->block;
 
 	tc_cls_common_offload_init(&cls_flower.common, tp);
@@ -270,9 +253,6 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
 	cls_flower.cookie = (unsigned long) f;
 	cls_flower.exts = &f->exts;
 
-	if (tc_can_offload(dev))
-		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSFLOWER,
-					      &cls_flower);
 	tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			 &cls_flower, false);
 }
diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
index 5278534..70e78d7 100644
--- a/net/sched/cls_matchall.c
+++ b/net/sched/cls_matchall.c
@@ -54,7 +54,6 @@ static void mall_destroy_hw_filter(struct tcf_proto *tp,
 				   struct cls_mall_head *head,
 				   unsigned long cookie)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_matchall_offload cls_mall = {};
 	struct tcf_block *block = tp->chain->block;
 
@@ -62,9 +61,6 @@ static void mall_destroy_hw_filter(struct tcf_proto *tp,
 	cls_mall.command = TC_CLSMATCHALL_DESTROY;
 	cls_mall.cookie = cookie;
 
-	if (tc_can_offload(dev))
-		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSMATCHALL,
-					      &cls_mall);
 	tc_setup_cb_call(block, NULL, TC_SETUP_CLSMATCHALL, &cls_mall, false);
 }
 
@@ -72,7 +68,6 @@ static int mall_replace_hw_filter(struct tcf_proto *tp,
 				  struct cls_mall_head *head,
 				  unsigned long cookie)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_matchall_offload cls_mall = {};
 	struct tcf_block *block = tp->chain->block;
 	bool skip_sw = tc_skip_sw(head->flags);
@@ -83,17 +78,6 @@ static int mall_replace_hw_filter(struct tcf_proto *tp,
 	cls_mall.exts = &head->exts;
 	cls_mall.cookie = cookie;
 
-	if (tc_can_offload(dev)) {
-		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSMATCHALL,
-						    &cls_mall);
-		if (err) {
-			if (skip_sw)
-				return err;
-		} else {
-			head->flags |= TCA_CLS_FLAGS_IN_HW;
-		}
-	}
-
 	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSMATCHALL,
 			       &cls_mall, skip_sw);
 	if (err < 0) {
diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
index 24cc429..0e4bd30 100644
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -464,7 +464,6 @@ static int u32_delete_key(struct tcf_proto *tp, struct tc_u_knode *key)
 
 static void u32_clear_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
 
@@ -474,15 +473,12 @@ static void u32_clear_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h)
 	cls_u32.hnode.handle = h->handle;
 	cls_u32.hnode.prio = h->prio;
 
-	if (tc_can_offload(dev))
-		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
 	tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, false);
 }
 
 static int u32_replace_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h,
 				u32 flags)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
 	bool skip_sw = tc_skip_sw(flags);
@@ -495,17 +491,6 @@ static int u32_replace_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h,
 	cls_u32.hnode.handle = h->handle;
 	cls_u32.hnode.prio = h->prio;
 
-	if (tc_can_offload(dev)) {
-		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32,
-						    &cls_u32);
-		if (err) {
-			if (skip_sw)
-				return err;
-		} else {
-			offloaded = true;
-		}
-	}
-
 	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, skip_sw);
 	if (err < 0) {
 		u32_clear_hw_hnode(tp, h);
@@ -522,7 +507,6 @@ static int u32_replace_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h,
 
 static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
 
@@ -530,15 +514,12 @@ static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle)
 	cls_u32.command = TC_CLSU32_DELETE_KNODE;
 	cls_u32.knode.handle = handle;
 
-	if (tc_can_offload(dev))
-		dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32, &cls_u32);
 	tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, false);
 }
 
 static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 				u32 flags)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tcf_block *block = tp->chain->block;
 	struct tc_cls_u32_offload cls_u32 = {};
 	bool skip_sw = tc_skip_sw(flags);
@@ -560,18 +541,6 @@ static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 	if (n->ht_down)
 		cls_u32.knode.link_handle = n->ht_down->handle;
 
-
-	if (tc_can_offload(dev)) {
-		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSU32,
-						    &cls_u32);
-		if (err) {
-			if (skip_sw)
-				return err;
-		} else {
-			n->flags |= TCA_CLS_FLAGS_IN_HW;
-		}
-	}
-
 	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, skip_sw);
 	if (err < 0) {
 		u32_remove_hw_knode(tp, n->handle);
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 41+ messages in thread
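The conversion repeated across the hunks above follows one pattern: the
classifier no longer calls ndo_setup_tc on the single device behind tp->q,
but lets tc_setup_cb_call dispatch to whatever callbacks are registered on
the filter block. A rough model of that dispatch, in plain Python rather
than kernel C (names and return-value details are illustrative, not the
exact kernel implementation):

```python
# Rough model of tc_setup_cb_call(): the offload request is handed to
# every callback registered on the filter block, instead of to the one
# netdev behind tp->q.
def tc_setup_cb_call(block_cbs, type_, type_data, err_stop):
    ok = 0
    for cb in block_cbs:
        err = cb(type_, type_data)
        if err:
            if err_stop:      # skip_sw filters: a HW failure is fatal
                return err
        else:
            ok += 1           # this callback accepted the request
    return ok                 # how many callbacks offloaded the filter

# A legacy driver registers one such callback per binding; a driver with
# shared-block support registers one callback per block.
accept = lambda t, data: 0
reject = lambda t, data: -95  # -EOPNOTSUPP
print(tc_setup_cb_call([accept], "TC_SETUP_CLSU32", {}, err_stop=False))  # 1
```

With err_stop set (the skip_sw case), the first callback error is
propagated, matching the `if (err < 0)` checks in the hunks above.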

* [patch net-next v2 19/20] net: sched: remove unused classid field from tc_cls_common_offload
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (17 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 18/20] net: sched: avoid ndo_setup_tc calls for TC_SETUP_CLS* Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-19 13:50 ` [patch net-next v2 20/20] net: sched: remove unused is_classid_clsact_ingress/egress helpers Jiri Pirko
  2017-10-21  2:04 ` [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks David Miller
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

It is no longer used by the drivers, so remove it.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/pkt_cls.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index fcca5a9..04caa24 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -561,7 +561,6 @@ struct tc_cls_common_offload {
 	u32 chain_index;
 	__be16 protocol;
 	u32 prio;
-	u32 classid;
 };
 
 static inline void
@@ -571,7 +570,6 @@ tc_cls_common_offload_init(struct tc_cls_common_offload *cls_common,
 	cls_common->chain_index = tp->chain->index;
 	cls_common->protocol = tp->protocol;
 	cls_common->prio = tp->prio;
-	cls_common->classid = tp->classid;
 }
 
 struct tc_cls_u32_knode {
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [patch net-next v2 20/20] net: sched: remove unused is_classid_clsact_ingress/egress helpers
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (18 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 19/20] net: sched: remove unused classid field from tc_cls_common_offload Jiri Pirko
@ 2017-10-19 13:50 ` Jiri Pirko
  2017-10-21  2:04 ` [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks David Miller
  20 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-19 13:50 UTC (permalink / raw)
  To: netdev
  Cc: davem, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@mellanox.com>

These helpers are no longer in use by drivers, so remove them.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
---
 include/net/pkt_sched.h | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 2d234af..b8ecafc 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -135,19 +135,6 @@ static inline unsigned int psched_mtu(const struct net_device *dev)
 	return dev->mtu + dev->hard_header_len;
 }
 
-static inline bool is_classid_clsact_ingress(u32 classid)
-{
-	/* This also returns true for ingress qdisc */
-	return TC_H_MAJ(classid) == TC_H_MAJ(TC_H_CLSACT) &&
-	       TC_H_MIN(classid) != TC_H_MIN(TC_H_MIN_EGRESS);
-}
-
-static inline bool is_classid_clsact_egress(u32 classid)
-{
-	return TC_H_MAJ(classid) == TC_H_MAJ(TC_H_CLSACT) &&
-	       TC_H_MIN(classid) == TC_H_MIN(TC_H_MIN_EGRESS);
-}
-
 static inline struct net *qdisc_net(struct Qdisc *q)
 {
 	return dev_net(q->dev_queue->dev);
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
                   ` (19 preceding siblings ...)
  2017-10-19 13:50 ` [patch net-next v2 20/20] net: sched: remove unused is_classid_clsact_ingress/egress helpers Jiri Pirko
@ 2017-10-21  2:04 ` David Miller
  2017-10-24 16:01   ` Alexander Duyck
  20 siblings, 1 reply; 41+ messages in thread
From: David Miller @ 2017-10-21  2:04 UTC (permalink / raw)
  To: jiri
  Cc: netdev, jhs, xiyou.wangcong, mlxsw, andrew, vivien.didelot,
	f.fainelli, michael.chan, ganeshgr, jeffrey.t.kirsher, saeedm,
	matanb, leonro, idosch, jakub.kicinski, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

From: Jiri Pirko <jiri@resnulli.us>
Date: Thu, 19 Oct 2017 15:50:28 +0200

> This patchset is a bit bigger, but most of the patches are doing the
> same changes in multiple classifiers and drivers. I could do some
> squashes, but I think it is better split.
> 
> This is another dependency on the way to shared block implementation.
> The goal is to remove use of tp->q in classifiers code.
> 
> Also, this gives drivers the possibility to track binding of blocks to
> qdiscs. Legacy drivers which do not support shared block offloading
> register one callback per binding; that maintains the current
> functionality we have with ndo_setup_tc. Drivers which support shared
> block offloading register one callback per block, which saves overhead.
> 
> Patches 1-4 introduce the binding notifications and per-block callbacks
> Patches 5-8 add block callbacks calls to classifiers
> Patches 9-17 do convert from ndo_setup_tc calls to block callbacks for
>              classifier offloads in drivers
> Patches 18-20 do cleanup

Series applied.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-21  2:04 ` [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks David Miller
@ 2017-10-24 16:01   ` Alexander Duyck
  2017-10-24 16:24     ` Alexander Duyck
  2017-10-25 12:15     ` Jiri Pirko
  0 siblings, 2 replies; 41+ messages in thread
From: Alexander Duyck @ 2017-10-24 16:01 UTC (permalink / raw)
  To: David Miller
  Cc: Jiri Pirko, Netdev, Jamal Hadi Salim, Cong Wang, mlxsw,
	Andrew Lunn, Vivien Didelot, Florian Fainelli, Michael Chan,
	Ganesh Goudar, Jeff Kirsher, Saeed Mahameed, Matan Barak,
	Leon Romanovsky, idosch, Jakub Kicinski, Alexei Starovoitov,
	Daniel Borkmann, Simon Horman, pieter.jansenvanvuuren,
	john.hurley

On Fri, Oct 20, 2017 at 7:04 PM, David Miller <davem@davemloft.net> wrote:
> From: Jiri Pirko <jiri@resnulli.us>
> Date: Thu, 19 Oct 2017 15:50:28 +0200
>
>> This patchset is a bit bigger, but most of the patches are doing the
>> same changes in multiple classifiers and drivers. I could do some
>> squashes, but I think it is better split.
>>
>> This is another dependency on the way to shared block implementation.
>> The goal is to remove use of tp->q in classifiers code.
>>
>> Also, this gives drivers the possibility to track binding of blocks to
>> qdiscs. Legacy drivers which do not support shared block offloading
>> register one callback per binding; that maintains the current
>> functionality we have with ndo_setup_tc. Drivers which support shared
>> block offloading register one callback per block, which saves overhead.
>>
>> Patches 1-4 introduce the binding notifications and per-block callbacks
>> Patches 5-8 add block callbacks calls to classifiers
>> Patches 9-17 do convert from ndo_setup_tc calls to block callbacks for
>>              classifier offloads in drivers
>> Patches 18-20 do cleanup
>
> Series applied.

We are getting internal reports of regressions being seen with this
patch set applied. Specifically the issues below have been pointed out
to me. My understanding is that these issues are all being reported on
ixgbe:

1.       To offload filter into HW, the hw-tc-offload feature flag has
to be turned on before creating the ingress qdisc.

Previously, this could also be turned on after the qdisc was created
and the filters could still be offloaded. This looks to be because the
offload flag was previously checked as part of filter insertion in the
classifier, whereas now it is checked as part of qdisc creation
(ingress_init). So, if no offload capability is advertised at ingress
qdisc creation time, the hardware will not be asked to offload filters
later even if the flag is enabled.
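The ordering problem described above can be sketched as a toy model in
plain Python (not kernel code; all names are illustrative): the driver
registers its block callback only if offloading is enabled at block bind
time, so flipping the flag afterwards has no effect on an already bound
block.

```python
# Toy model: the callback is registered at bind time only if the
# hw-tc-offload flag is already on; later filter additions just walk the
# registered callbacks.
class Dev:
    def __init__(self, hw_tc_offload):
        self.hw_tc_offload = hw_tc_offload
        self.block_cbs = []

def bind_block(dev):
    # mirrors ingress qdisc creation -> driver TC_BLOCK_BIND handling
    if dev.hw_tc_offload:
        dev.block_cbs.append(lambda cmd: "hw " + cmd)

def add_filter(dev):
    # mirrors tc_setup_cb_call: only already-registered callbacks run
    return [cb("CLSFLOWER_REPLACE") for cb in dev.block_cbs]

dev = Dev(hw_tc_offload=False)
bind_block(dev)            # qdisc created while the flag is off
dev.hw_tc_offload = True   # enabling the flag afterwards...
print(add_filter(dev))     # ...offloads nothing: []
```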

2.       Deleting the ingress qdisc fails to remove filters added in
HW. Filters in SW get deleted.

We haven’t exactly root-caused this, the changes being extensive, but
our guess is again something wrong with the offload check or similar
while unregistering the block callback (tcf_block_cb_unregister) and
further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
DESTROY/REMOVE command.

3.       Deleting u32 filters by handle fails to remove the filter from
HW; the filter in SW gets deleted.

Probably for similar reasons; we also see some u32-specific patches for
removing nodes.

We are still digging into this further, but wanted to put this out
there so we can address the issues before we go much further down this
path.

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-24 16:01   ` Alexander Duyck
@ 2017-10-24 16:24     ` Alexander Duyck
       [not found]       ` <CAJ3xEMgdRBEuv0hb_G43zJXXRy=PWgY2tdHuwDN0Opc2NVF35g@mail.gmail.com>
  2017-10-25  8:36       ` Jiri Pirko
  2017-10-25 12:15     ` Jiri Pirko
  1 sibling, 2 replies; 41+ messages in thread
From: Alexander Duyck @ 2017-10-24 16:24 UTC (permalink / raw)
  To: David Miller
  Cc: Jiri Pirko, Netdev, Jamal Hadi Salim, Cong Wang, mlxsw,
	Andrew Lunn, Vivien Didelot, Florian Fainelli, Michael Chan,
	Ganesh Goudar, Jeff Kirsher, Saeed Mahameed, Matan Barak,
	Leon Romanovsky, idosch, Jakub Kicinski, Alexei Starovoitov,
	Daniel Borkmann, Simon Horman, pieter.jansenvanvuuren,
	john.hurley

On Tue, Oct 24, 2017 at 9:01 AM, Alexander Duyck
<alexander.duyck@gmail.com> wrote:
> On Fri, Oct 20, 2017 at 7:04 PM, David Miller <davem@davemloft.net> wrote:
>> From: Jiri Pirko <jiri@resnulli.us>
>> Date: Thu, 19 Oct 2017 15:50:28 +0200
>>
>>> This patchset is a bit bigger, but most of the patches are doing the
>>> same changes in multiple classifiers and drivers. I could do some
>>> squashes, but I think it is better split.
>>>
>>> This is another dependency on the way to shared block implementation.
>>> The goal is to remove use of tp->q in classifiers code.
>>>
>>> Also, this gives drivers the possibility to track binding of blocks to
>>> qdiscs. Legacy drivers which do not support shared block offloading
>>> register one callback per binding; that maintains the current
>>> functionality we have with ndo_setup_tc. Drivers which support shared
>>> block offloading register one callback per block, which saves overhead.
>>>
>>> Patches 1-4 introduce the binding notifications and per-block callbacks
>>> Patches 5-8 add block callbacks calls to classifiers
>>> Patches 9-17 do convert from ndo_setup_tc calls to block callbacks for
>>>              classifier offloads in drivers
>>> Patches 18-20 do cleanup
>>
>> Series applied.
>
> We are getting internal reports of regressions being seen with this
> patch set applied. Specifically the issues below have been pointed out
> to me. My understanding is that these issues are all being reported on
> ixgbe:
>
> 1.       To offload filter into HW, the hw-tc-offload feature flag has
> to be turned on before creating the ingress qdisc.
>
> Previously, this could also be turned on after the qdisc was created
> and the filters could still be offloaded. Looks like this is because,
> previously the offload flag was checked as a part of filter
> integration in the classifier, and now it is checked as part of qdisc
> creation (ingress_init). So, if no offload capability is advertised at
> ingress qdisc creation time then hardware will not be asked to offload
> filters later if the flag is enabled.
>
> 2.       Deleting the ingress qdisc fails to remove filters added in
> HW. Filters in SW gets deleted.
>
> We haven’t exactly root-caused this, the changes being extensive, but
> our guess is again something wrong with the offload check or similar
> while unregistering the block callback (tcf_block_cb_unregister) and
> further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
> DESTROY/REMOVE command.
>
> 3.       Deleting u32 filters using handle fails to remove filter from
> HW, filter in SW gets deleted.
>
> Probably similar reasons, also we see some u32 specific patches as
> well for remove nodes.
>
> We are still digging into this further, but wanted to put this out
> there so we can address the issues before we go much further down this
> path.
>
> Thanks.
>
> - Alex

So a quick update: item 3 is no longer an issue; it was a configuration
issue on our side. Only items 1 and 2 are left to be addressed.

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
       [not found]       ` <CAJ3xEMgdRBEuv0hb_G43zJXXRy=PWgY2tdHuwDN0Opc2NVF35g@mail.gmail.com>
@ 2017-10-24 17:03         ` Or Gerlitz
  2017-10-24 17:22         ` Nambiar, Amritha
  1 sibling, 0 replies; 41+ messages in thread
From: Or Gerlitz @ 2017-10-24 17:03 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: David Miller, Jiri Pirko, Netdev, Jamal Hadi Salim, Cong Wang,
	mlxsw, Andrew Lunn, Vivien Didelot, Florian Fainelli,
	Michael Chan, Ganesh Goudar, Jeff Kirsher, Saeed Mahameed,
	Matan Barak, Leon Romanovsky, Ido Schimmel, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

On Tue, Oct 24, 2017 at 8:02 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Tue, Oct 24, 2017 at 7:24 PM, Alexander Duyck <alexander.duyck@gmail.com> wrote:
>> On Tue, Oct 24, 2017 at 9:01 AM, Alexander Duyck <alexander.duyck@gmail.com> wrote:

>>> We are getting internal reports of regressions being seen with this
>>> patch set applied. Specifically the issues below have been pointed out
>>> to me. My understanding is that these issues are all being reported on ixgbe:

>>> 1.       To offload filter into HW, the hw-tc-offload feature flag has
>>> to be turned on before creating the ingress qdisc.

>>> 2.       Deleting the ingress qdisc fails to remove filters added in
>>> HW. Filters in SW gets deleted.

> FWIW, I don't think this is what you're hitting, but please note there are
> two  fixes  to the series in net-next
>
> 9d452ce net/sched: Fix actions list corruption when adding offloaded tc flows
> 0843c09 net/sched: Set the net-device for egress device instance
>
> I had to look at other stuff today, but I will keep stress-testing to see
> if/what is broken w.r.t. mlx5 and help to get things to work later this
> and next week, so by netdev we'll be fine
>
> Or.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
       [not found]       ` <CAJ3xEMgdRBEuv0hb_G43zJXXRy=PWgY2tdHuwDN0Opc2NVF35g@mail.gmail.com>
  2017-10-24 17:03         ` Or Gerlitz
@ 2017-10-24 17:22         ` Nambiar, Amritha
  1 sibling, 0 replies; 41+ messages in thread
From: Nambiar, Amritha @ 2017-10-24 17:22 UTC (permalink / raw)
  To: Or Gerlitz, Alexander Duyck
  Cc: David Miller, Jiri Pirko, Netdev, Jamal Hadi Salim, Cong Wang,
	mlxsw, Andrew Lunn, Vivien Didelot, Florian Fainelli,
	Michael Chan, Ganesh Goudar, Jeff Kirsher, Saeed Mahameed,
	Matan Barak, Leon Romanovsky, Ido Schimmel, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

On 10/24/2017 10:02 AM, Or Gerlitz wrote:
> On Tue, Oct 24, 2017 at 7:24 PM, Alexander Duyck
> <alexander.duyck@gmail.com> wrote:
> 
>     On Tue, Oct 24, 2017 at 9:01 AM, Alexander Duyck
>     <alexander.duyck@gmail.com> wrote:
> 
>     > We are getting internal reports of regressions being seen with this
>     > patch set applied. Specifically the issues below have been pointed out
>     > to me. My understanding is that these issues are all being reported on
>     > ixgbe:
>     >
>     > 1.       To offload filter into HW, the hw-tc-offload feature flag has
>     > to be turned on before creating the ingress qdisc.
> 
>     > 2.       Deleting the ingress qdisc fails to remove filters added in
>     > HW. Filters in SW gets deleted.
> 
> 
> 
> FWIW, I don't think this is what you're hitting, but please note there
> are two fixes to the series in net-next
> 
> 9d452ce net/sched: Fix actions list corruption when adding offloaded tc
> flows

It looks like this patch just got applied; this fix wasn't included
during our testing.

> 0843c09 net/sched: Set the net-device for egress device instance

This fix was already in place during our testing. Also, the filters
added were matching on IPv4 addresses with action drop.
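For reference, a filter of the kind described (IPv4 address match with a
drop action, offloaded to HW) can be reproduced with iproute2 along these
lines; the device name and address below are placeholders, not taken from
the report:

```shell
# Hypothetical reproduction; eth0 and 192.0.2.10 are placeholders.
ethtool -K eth0 hw-tc-offload on   # must precede qdisc creation (issue 1)
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip \
    flower dst_ip 192.0.2.10 skip_sw action drop
```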

> I had to look at other stuff today, but I will keep stress-testing to see
> if/what is broken w.r.t. mlx5 and help to get things to work later this
> and next week, so by netdev we'll be fine
> 
> Or.
> 
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-24 16:24     ` Alexander Duyck
       [not found]       ` <CAJ3xEMgdRBEuv0hb_G43zJXXRy=PWgY2tdHuwDN0Opc2NVF35g@mail.gmail.com>
@ 2017-10-25  8:36       ` Jiri Pirko
  1 sibling, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-25  8:36 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: David Miller, Netdev, Jamal Hadi Salim, Cong Wang, mlxsw,
	Andrew Lunn, Vivien Didelot, Florian Fainelli, Michael Chan,
	Ganesh Goudar, Jeff Kirsher, Saeed Mahameed, Matan Barak,
	Leon Romanovsky, idosch, Jakub Kicinski, Alexei Starovoitov,
	Daniel Borkmann, Simon Horman, pieter.jansenvanvuuren, john.hurl

Tue, Oct 24, 2017 at 06:24:01PM CEST, alexander.duyck@gmail.com wrote:
>On Tue, Oct 24, 2017 at 9:01 AM, Alexander Duyck
><alexander.duyck@gmail.com> wrote:
>> On Fri, Oct 20, 2017 at 7:04 PM, David Miller <davem@davemloft.net> wrote:
>>> From: Jiri Pirko <jiri@resnulli.us>
>>> Date: Thu, 19 Oct 2017 15:50:28 +0200
>>>
>>>> This patchset is a bit bigger, but most of the patches are doing the
>>>> same changes in multiple classifiers and drivers. I could do some
>>>> squashes, but I think it is better split.
>>>>
>>>> This is another dependency on the way to shared block implementation.
>>>> The goal is to remove use of tp->q in classifiers code.
>>>>
>>>> Also, this gives drivers the possibility to track binding of blocks to
>>>> qdiscs. Legacy drivers which do not support shared block offloading
>>>> register one callback per binding; that maintains the current
>>>> functionality we have with ndo_setup_tc. Drivers which support shared
>>>> block offloading register one callback per block, which saves overhead.
>>>>
>>>> Patches 1-4 introduce the binding notifications and per-block callbacks
>>>> Patches 5-8 add block callbacks calls to classifiers
>>>> Patches 9-17 do convert from ndo_setup_tc calls to block callbacks for
>>>>              classifier offloads in drivers
>>>> Patches 18-20 do cleanup
>>>
>>> Series applied.
>>
>> We are getting internal reports of regressions being seen with this
>> patch set applied. Specifically the issues below have been pointed out
>> to me. My understanding is that these issues are all being reported on
>> ixgbe:
>>
>> 1.       To offload filter into HW, the hw-tc-offload feature flag has
>> to be turned on before creating the ingress qdisc.
>>
>> Previously, this could also be turned on after the qdisc was created
>> and the filters could still be offloaded. Looks like this is because,
>> previously the offload flag was checked as a part of filter
>> integration in the classifier, and now it is checked as part of qdisc
>> creation (ingress_init). So, if no offload capability is advertised at
>> ingress qdisc creation time then hardware will not be asked to offload
>> filters later if the flag is enabled.
>>
>> 2.       Deleting the ingress qdisc fails to remove filters added in
>> HW. Filters in SW gets deleted.
>>
>> We haven’t exactly root-caused this, the changes being extensive, but
>> our guess is again something wrong with the offload check or similar
>> while unregistering the block callback (tcf_block_cb_unregister) and
>> further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
>> DESTROY/REMOVE command.
>>
>> 3.       Deleting u32 filters using handle fails to remove filter from
>> HW, filter in SW gets deleted.
>>
>> Probably similar reasons, also we see some u32 specific patches as
>> well for remove nodes.
>>
>> We are still digging into this further, but wanted to put this out
>> there so we can address the issues before we go much further down this
>> path.
>>
>> Thanks.
>>
>> - Alex
>
>So a quick update. Item 3 is no longer an issue, it was a
>configuration issue on our side. So only items 1 and 2 are left to be
>addressed.

Will look at 1 and 2. Thanks for the report.

>
>Thanks.
>
>- Alex

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-24 16:01   ` Alexander Duyck
  2017-10-24 16:24     ` Alexander Duyck
@ 2017-10-25 12:15     ` Jiri Pirko
       [not found]       ` <aaa66a3d-766b-f758-692a-eba7f5a8702f@mellanox.com>
  2017-10-26 20:24       ` Nambiar, Amritha
  1 sibling, 2 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-25 12:15 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: David Miller, Netdev, Jamal Hadi Salim, Cong Wang, mlxsw,
	Andrew Lunn, Vivien Didelot, Florian Fainelli, Michael Chan,
	Ganesh Goudar, Jeff Kirsher, Saeed Mahameed, Matan Barak,
	Leon Romanovsky, idosch, Jakub Kicinski, Alexei Starovoitov,
	Daniel Borkmann, Simon Horman, pieter.jansenvanvuuren, john.hurl

Tue, Oct 24, 2017 at 06:01:34PM CEST, alexander.duyck@gmail.com wrote:

[...]

>1.       To offload filter into HW, the hw-tc-offload feature flag has
>to be turned on before creating the ingress qdisc.
>
>Previously, this could also be turned on after the qdisc was created
>and the filters could still be offloaded. Looks like this is because,
>previously the offload flag was checked as a part of filter
>integration in the classifier, and now it is checked as part of qdisc
>creation (ingress_init). So, if no offload capability is advertised at
>ingress qdisc creation time then hardware will not be asked to offload
>filters later if the flag is enabled.

I have patchset that fixes this in my queue now. Will do some smoke
testing and send later today.


>
>2.       Deleting the ingress qdisc fails to remove filters added in
>HW. Filters in SW gets deleted.
>
>We haven’t exactly root-caused this, the changes being extensive, but
>our guess is again something wrong with the offload check or similar
>while unregistering the block callback (tcf_block_cb_unregister) and
>further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
>DESTROY/REMOVE command.

Hmm. How did this work previously? I mean, do you see a change of
behaviour? I'm asking because I don't see how rules added only to HW
could be removed; the driver should take care of it. Or are you talking
about rules added to both SW and HW?

Thanks!

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
       [not found]       ` <aaa66a3d-766b-f758-692a-eba7f5a8702f@mellanox.com>
@ 2017-10-25 13:42         ` Jiri Pirko
  2017-10-25 13:48           ` Or Gerlitz
  0 siblings, 1 reply; 41+ messages in thread
From: Jiri Pirko @ 2017-10-25 13:42 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Alexander Duyck, David Miller, Netdev, Jamal Hadi Salim,
	Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot, Florian Fainelli,
	Michael Chan, Ganesh Goudar, Jeff Kirsher, Saeed Mahameed,
	Matan Barak, Leon Romanovsky, idosch, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

Wed, Oct 25, 2017 at 03:29:27PM CEST, ogerlitz@mellanox.com wrote:
>On 10/25/2017 3:15 PM, Jiri Pirko wrote:
>> > 2. Deleting the ingress qdisc fails to remove filters added in
>> > HW. Filters in SW gets deleted.
>> > 
>> > We haven’t exactly root-caused this, the changes being extensive, but our guess is again something wrong with the offload check or similar while unregistering the block callback (tcf_block_cb_unregister) and further to the classifier (CLS_U32/CLS_FLOWER etc.) with the DESTROY/REMOVE command.
>> Hmm. How did this work previously? I mean, do you see a change of
>> behaviour? I'm asking because I don't see how rules added only to HW
>> could be removed; the driver should take care of it. Or are you talking
>> about rules added to both SW and HW?
>
>Jiri, on a possibly related note, dealing with some other tc/flower problems

Unrelated. What you describe is a separate issue.


>on net, I came across a situation where we fail in the driver to offload some
>flow (return -EINVAL towards the stack), and we immediately get a call from
>the stack to delete this flow (f->cookie)
>
>this is the cookie and the return value
>
>mlx5e_configure_flower f->cookie c50e8c80 err -22
>
>and then we get this cookie for deletion where we fail again, b/c the flow is
>not offloaded
>
>mlx5e_delete_flower f->cookie c50e8c80

Yes, that is intentional. The thing is, there might be multiple block
callbacks registered that need to be called. If one of them fails, we
need to clean up in all of them. So in your case, with 1 cb registered,
an error during insertion means the cb will be called to remove the
flow. The driver has to take care of that. I was checking that and was
under the impression that mlx5 deals with it.
^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-25 13:42         ` Jiri Pirko
@ 2017-10-25 13:48           ` Or Gerlitz
  0 siblings, 0 replies; 41+ messages in thread
From: Or Gerlitz @ 2017-10-25 13:48 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: Alexander Duyck, David Miller, Netdev, Jamal Hadi Salim,
	Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot, Florian Fainelli,
	Michael Chan, Ganesh Goudar, Jeff Kirsher, Saeed Mahameed,
	Matan Barak, Leon Romanovsky, idosch, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

On 10/25/2017 4:42 PM, Jiri Pirko wrote:
> Yes, that is intentional. The thing is, there might be multiple block
> callbacks registered and to be called. If there is a fail with one, we
> need to cleanup all. So in your case you have 1 cb registered, that
> means that in case of an error during insertion, you will get cb called
> to remove. Driver has to take care of that. I was checking that and was
> under impression that mlx5 deals with that.

I see. Yeah, in mlx5 we identify that the flow doesn't exist in our 
tables. I sent a patch that deals with that and also another issue; 
let's discuss it there.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-25 12:15     ` Jiri Pirko
       [not found]       ` <aaa66a3d-766b-f758-692a-eba7f5a8702f@mellanox.com>
@ 2017-10-26 20:24       ` Nambiar, Amritha
  2017-10-27  7:27         ` Jiri Pirko
  1 sibling, 1 reply; 41+ messages in thread
From: Nambiar, Amritha @ 2017-10-26 20:24 UTC (permalink / raw)
  To: Jiri Pirko, Alexander Duyck
  Cc: David Miller, Netdev, Jamal Hadi Salim, Cong Wang, mlxsw,
	Andrew Lunn, Vivien Didelot, Florian Fainelli, Michael Chan,
	Ganesh Goudar, Jeff Kirsher, Saeed Mahameed, Matan Barak,
	Leon Romanovsky, idosch, Jakub Kicinski, Alexei Starovoitov,
	Daniel Borkmann, Simon Horman, pieter.jansenvanvuuren, john.hurl

On 10/25/2017 5:15 AM, Jiri Pirko wrote:
> Tue, Oct 24, 2017 at 06:01:34PM CEST, alexander.duyck@gmail.com wrote:
> 
> [...]
> 
>> 1.       To offload filter into HW, the hw-tc-offload feature flag has
>> to be turned on before creating the ingress qdisc.
>>
>> Previously, this could also be turned on after the qdisc was created
>> and the filters could still be offloaded. Looks like this is because,
>> previously the offload flag was checked as a part of filter
>> integration in the classifier, and now it is checked as part of qdisc
>> creation (ingress_init). So, if no offload capability is advertised at
>> ingress qdisc creation time then hardware will not be asked to offload
>> filters later if the flag is enabled.
> 
> I have patchset that fixes this in my queue now. Will do some smoke
> testing and send later today.
> 
> 
>>
>> 2.       Deleting the ingress qdisc fails to remove filters added in
>> HW. Filters in SW gets deleted.
>>
>> We haven’t exactly root-caused this, the changes being extensive, but
>> our guess is again something wrong with the offload check or similar
>> while unregistering the block callback (tcf_block_cb_unregister) and
>> further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
>> DESTROY/REMOVE command.
> 
> Hmm. How does this worked previously. I mean, do you see change of
> behaviour? I'm asking because I don't see how rules added only to HW
> could be removed, driver should care of it. Or are you talking about
> rules added to both SW and HW?

These are rules added to both SW and HW. Previously, all cls_* made
ndo_setup_tc calls based on the offload capability.

Commit 8d26d5636d ("net: sched: avoid ndo_setup_tc calls for
TC_SETUP_CLS*") removed this bit to work with the new block callback.
Is there something similar in the block callback flow while acting on
the tcf_proto destroy call initiated when the qdisc is cleared?
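Problem 1 above boils down to when the capability check is sampled. A
toy model (the names here are hypothetical, not the actual
ingress_init()/NETIF_F_HW_TC code) contrasts checking the feature flag
on every filter operation with sampling it once at qdisc/block bind
time:

```c
#include <stdbool.h>
#include <stddef.h>

struct toy_dev {
	bool hw_tc_offload;	/* models the hw-tc-offload feature flag */
	bool block_bound;	/* flag value sampled at bind time */
};

/* Old model: the capability is rechecked on every filter operation,
 * so a flag flipped on after qdisc creation still enables offload. */
static bool old_can_offload(const struct toy_dev *d)
{
	return d->hw_tc_offload;
}

/* New model: the capability is sampled once when the qdisc/block is
 * bound, so a flag flipped on later is never seen. */
static void new_bind(struct toy_dev *d)
{
	d->block_bound = d->hw_tc_offload;
}

static bool new_can_offload(const struct toy_dev *d)
{
	return d->block_bound;
}
```

Creating the qdisc with the flag off and enabling it afterwards then
offloads filters in the old model but not in the new one, which matches
the reported behaviour change.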

> 
> Thanks!
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-26 20:24       ` Nambiar, Amritha
@ 2017-10-27  7:27         ` Jiri Pirko
  2017-10-28  0:52           ` Jakub Kicinski
  0 siblings, 1 reply; 41+ messages in thread
From: Jiri Pirko @ 2017-10-27  7:27 UTC (permalink / raw)
  To: Nambiar, Amritha
  Cc: Alexander Duyck, David Miller, Netdev, Jamal Hadi Salim,
	Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot, Florian Fainelli,
	Michael Chan, Ganesh Goudar, Jeff Kirsher, Saeed Mahameed,
	Matan Barak, Leon Romanovsky, idosch, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

Thu, Oct 26, 2017 at 10:24:31PM CEST, amritha.nambiar@intel.com wrote:
>On 10/25/2017 5:15 AM, Jiri Pirko wrote:
>> Tue, Oct 24, 2017 at 06:01:34PM CEST, alexander.duyck@gmail.com wrote:
>> 
>> [...]
>> 
>>> 1.       To offload filter into HW, the hw-tc-offload feature flag has
>>> to be turned on before creating the ingress qdisc.
>>>
>>> Previously, this could also be turned on after the qdisc was created
>>> and the filters could still be offloaded. Looks like this is because,
>>> previously the offload flag was checked as a part of filter
>>> integration in the classifier, and now it is checked as part of qdisc
>>> creation (ingress_init). So, if no offload capability is advertised at
>>> ingress qdisc creation time then hardware will not be asked to offload
>>> filters later if the flag is enabled.
>> 
>> I have patchset that fixes this in my queue now. Will do some smoke
>> testing and send later today.
>> 
>> 
>>>
>>> 2.       Deleting the ingress qdisc fails to remove filters added in
>>> HW. Filters in SW gets deleted.
>>>
>>> We haven’t exactly root-caused this, the changes being extensive, but
>>> our guess is again something wrong with the offload check or similar
>>> while unregistering the block callback (tcf_block_cb_unregister) and
>>> further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
>>> DESTROY/REMOVE command.
>> 
>> Hmm. How does this worked previously. I mean, do you see change of
>> behaviour? I'm asking because I don't see how rules added only to HW
>> could be removed, driver should care of it. Or are you talking about
>> rules added to both SW and HW?
>
>These are rules added to both SW and HW. Previously all cls_* had
>ndo_setup_tc calls based on the offload capability.
>
>commit 8d26d5636d "net: sched: avoid ndo_setup_tc calls for
>TC_SETUP_CLS*" removed this bit to work with the new block callback. Is
>there something similar in the block callback flow while acting on the
>tcf_proto destroy call initiated when the qdisc is cleared?

Yes, it is the same.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-27  7:27         ` Jiri Pirko
@ 2017-10-28  0:52           ` Jakub Kicinski
  2017-10-28  7:20             ` Jiri Pirko
  0 siblings, 1 reply; 41+ messages in thread
From: Jakub Kicinski @ 2017-10-28  0:52 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: Nambiar, Amritha, Alexander Duyck, David Miller, Netdev,
	Jamal Hadi Salim, Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot,
	Florian Fainelli, Michael Chan, Ganesh Goudar, Jeff Kirsher,
	Saeed Mahameed, Matan Barak, Leon Romanovsky, idosch,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

On Fri, 27 Oct 2017 09:27:30 +0200, Jiri Pirko wrote:
> >>> 2.       Deleting the ingress qdisc fails to remove filters added in
> >>> HW. Filters in SW gets deleted.
> >>>
> >>> We haven’t exactly root-caused this, the changes being extensive, but
> >>> our guess is again something wrong with the offload check or similar
> >>> while unregistering the block callback (tcf_block_cb_unregister) and
> >>> further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
> >>> DESTROY/REMOVE command.  
> >> 
> >> Hmm. How does this worked previously. I mean, do you see change of
> >> behaviour? I'm asking because I don't see how rules added only to HW
> >> could be removed, driver should care of it. Or are you talking about
> >> rules added to both SW and HW?  
> >
> >These are rules added to both SW and HW. Previously all cls_* had
> >ndo_setup_tc calls based on the offload capability.
> >
> >commit 8d26d5636d "net: sched: avoid ndo_setup_tc calls for
> >TC_SETUP_CLS*" removed this bit to work with the new block callback. Is
> >there something similar in the block callback flow while acting on the
> >tcf_proto destroy call initiated when the qdisc is cleared?  
> 
> Yes, it is the same.

FWIW I also see what Amritha and Alex are describing here: for cls_bpf
there are no DESTROYs coming on rmmod or qdisc del.  There is a DESTROY
if I manually remove the filter (or if an ADD with skip_sw fails).

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-28  0:52           ` Jakub Kicinski
@ 2017-10-28  7:20             ` Jiri Pirko
  2017-10-28  7:53               ` Jakub Kicinski
  0 siblings, 1 reply; 41+ messages in thread
From: Jiri Pirko @ 2017-10-28  7:20 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Nambiar, Amritha, Alexander Duyck, David Miller, Netdev,
	Jamal Hadi Salim, Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot,
	Florian Fainelli, Michael Chan, Ganesh Goudar, Jeff Kirsher,
	Saeed Mahameed, Matan Barak, Leon Romanovsky, idosch,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

Sat, Oct 28, 2017 at 02:52:00AM CEST, kubakici@wp.pl wrote:
>On Fri, 27 Oct 2017 09:27:30 +0200, Jiri Pirko wrote:
>> >>> 2.       Deleting the ingress qdisc fails to remove filters added in
>> >>> HW. Filters in SW gets deleted.
>> >>>
>> >>> We haven’t exactly root-caused this, the changes being extensive, but
>> >>> our guess is again something wrong with the offload check or similar
>> >>> while unregistering the block callback (tcf_block_cb_unregister) and
>> >>> further to the classifier (CLS_U32/CLS_FLOWER etc.) with the
>> >>> DESTROY/REMOVE command.  
>> >> 
>> >> Hmm. How does this worked previously. I mean, do you see change of
>> >> behaviour? I'm asking because I don't see how rules added only to HW
>> >> could be removed, driver should care of it. Or are you talking about
>> >> rules added to both SW and HW?  
>> >
>> >These are rules added to both SW and HW. Previously all cls_* had
>> >ndo_setup_tc calls based on the offload capability.
>> >
>> >commit 8d26d5636d "net: sched: avoid ndo_setup_tc calls for
>> >TC_SETUP_CLS*" removed this bit to work with the new block callback. Is
>> >there something similar in the block callback flow while acting on the
>> >tcf_proto destroy call initiated when the qdisc is cleared?  
>> 
>> Yes, it is the same.
>
>FWIW I also see what Amritha and Alex are describing here, for cls_bpf
>there are no DESTROYs coming on rmmod or qdisc del.  There is a DESTROY
>if I manually remove the filter (or if an ADD with skip_sw fails).

Is this different to the original behaviour? Just for cls_bpf?

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-28  7:20             ` Jiri Pirko
@ 2017-10-28  7:53               ` Jakub Kicinski
  2017-10-28  8:43                 ` Jiri Pirko
  0 siblings, 1 reply; 41+ messages in thread
From: Jakub Kicinski @ 2017-10-28  7:53 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: Nambiar, Amritha, Alexander Duyck, David Miller, Netdev,
	Jamal Hadi Salim, Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot,
	Florian Fainelli, Michael Chan, Ganesh Goudar, Jeff Kirsher,
	Saeed Mahameed, Matan Barak, Leon Romanovsky, idosch,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

On Sat, 28 Oct 2017 09:20:31 +0200, Jiri Pirko wrote:
> Sat, Oct 28, 2017 at 02:52:00AM CEST, kubakici@wp.pl wrote:
> >On Fri, 27 Oct 2017 09:27:30 +0200, Jiri Pirko wrote:  
> >> Yes, it is the same.  
> >
> >FWIW I also see what Amritha and Alex are describing here, for cls_bpf
> >there are no DESTROYs coming on rmmod or qdisc del.  There is a DESTROY
> >if I manually remove the filter (or if an ADD with skip_sw fails).  
> 
> Is this different to the original behaviour? Just for cls_bpf?

For cls_bpf the callbacks used to be 100% symmetrical, i.e. destroy
would always be guaranteed if add succeeded (regardless of the state
of the skip_* flags).

I haven't checked cls_flower on the nfp because it implodes on add
already...  I hear the fix is simple, so I can check if that helps.
Although from quick code inspection it seems fl_hw_destroy_filter()
was always invoked before fl_destroy_filter(). So it looks like, since
flower doesn't track which filters were offloaded successfully, it may
send destroy events for filters the drivers don't hold, but destroy
should always be guaranteed there too.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-28  7:53               ` Jakub Kicinski
@ 2017-10-28  8:43                 ` Jiri Pirko
  2017-10-28 17:17                   ` Jakub Kicinski
  0 siblings, 1 reply; 41+ messages in thread
From: Jiri Pirko @ 2017-10-28  8:43 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Nambiar, Amritha, Alexander Duyck, David Miller, Netdev,
	Jamal Hadi Salim, Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot,
	Florian Fainelli, Michael Chan, Ganesh Goudar, Jeff Kirsher,
	Saeed Mahameed, Matan Barak, Leon Romanovsky, idosch,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

Sat, Oct 28, 2017 at 09:53:21AM CEST, kubakici@wp.pl wrote:
>On Sat, 28 Oct 2017 09:20:31 +0200, Jiri Pirko wrote:
>> Sat, Oct 28, 2017 at 02:52:00AM CEST, kubakici@wp.pl wrote:
>> >On Fri, 27 Oct 2017 09:27:30 +0200, Jiri Pirko wrote:  
>> >> Yes, it is the same.  
>> >
>> >FWIW I also see what Amritha and Alex are describing here, for cls_bpf
>> >there are no DESTROYs coming on rmmod or qdisc del.  There is a DESTROY
>> >if I manually remove the filter (or if an ADD with skip_sw fails).  
>> 
>> Is this different to the original behaviour? Just for cls_bpf?
>
>For cls_bpf the callbacks used to be 100% symmetrical, i.e. destroy
>would always be guaranteed if add succeeded (regardless of state of
>skip_* flags).

Hmm. It still should be symmetrical. Looking at the following path:
cls_bpf_destroy->
   __cls_bpf_delete->
      cls_bpf_stop_offload->
         cls_bpf_offload_cmd(tp, prog, TC_CLSBPF_DESTROY)

I don't see how any tp could be missed. Could you please check whether
this call path is utilized during your action (rmmod or qdisc del)?


>
>I haven't checked cls_flower on the nfp because it implodes on add
>already...  I hear the fix is simple so I can check, if that helps.
>Although from quick code inspection it seems fl_hw_destroy_filter()
>was always invoked before fl_destroy_filter(), so it looks like since
>flower doesn't track which filters were offloaded successfully it may
>send destroy events for filter drivers don't hold, but destroy should
>always be guaranteed there too.

You are right, that is the following path:
fl_delete->
   fl_hw_destroy_filter

I will check it.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-28  8:43                 ` Jiri Pirko
@ 2017-10-28 17:17                   ` Jakub Kicinski
  2017-10-29  7:26                     ` Jiri Pirko
  0 siblings, 1 reply; 41+ messages in thread
From: Jakub Kicinski @ 2017-10-28 17:17 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: Nambiar, Amritha, Alexander Duyck, David Miller, Netdev,
	Jamal Hadi Salim, Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot,
	Florian Fainelli, Michael Chan, Ganesh Goudar, Jeff Kirsher,
	Saeed Mahameed, Matan Barak, Leon Romanovsky, idosch,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

On Sat, 28 Oct 2017 10:43:51 +0200, Jiri Pirko wrote:
> Sat, Oct 28, 2017 at 09:53:21AM CEST, kubakici@wp.pl wrote:
> >On Sat, 28 Oct 2017 09:20:31 +0200, Jiri Pirko wrote:  
> >> Sat, Oct 28, 2017 at 02:52:00AM CEST, kubakici@wp.pl wrote:  
> >> >On Fri, 27 Oct 2017 09:27:30 +0200, Jiri Pirko wrote:    
> >> >> Yes, it is the same.    
> >> >
> >> >FWIW I also see what Amritha and Alex are describing here, for cls_bpf
> >> >there are no DESTROYs coming on rmmod or qdisc del.  There is a DESTROY
> >> >if I manually remove the filter (or if an ADD with skip_sw fails).    
> >> 
> >> Is this different to the original behaviour? Just for cls_bpf?  
> >
> >For cls_bpf the callbacks used to be 100% symmetrical, i.e. destroy
> >would always be guaranteed if add succeeded (regardless of state of
> >skip_* flags).  
> 
> Hmm. It still should be symmetrical. Looking at following path:
> cls_bpf_destroy->
>    __cls_bpf_delete->
>       cls_bpf_stop_offload->
>          cls_bpf_offload_cmd(tp, prog, TC_CLSBPF_DESTROY)
> 
> I don't see how any tp could be missed. Could you please check this
> callpath is utilized during your action (rmmod or qdisc del)?

The same path seems to be utilized but the unbind comes before the
filters are destroyed.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-28 17:17                   ` Jakub Kicinski
@ 2017-10-29  7:26                     ` Jiri Pirko
  2017-10-31 10:46                       ` Jiri Pirko
  0 siblings, 1 reply; 41+ messages in thread
From: Jiri Pirko @ 2017-10-29  7:26 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Nambiar, Amritha, Alexander Duyck, David Miller, Netdev,
	Jamal Hadi Salim, Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot,
	Florian Fainelli, Michael Chan, Ganesh Goudar, Jeff Kirsher,
	Saeed Mahameed, Matan Barak, Leon Romanovsky, idosch,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

Sat, Oct 28, 2017 at 07:17:24PM CEST, kubakici@wp.pl wrote:
>On Sat, 28 Oct 2017 10:43:51 +0200, Jiri Pirko wrote:
>> Sat, Oct 28, 2017 at 09:53:21AM CEST, kubakici@wp.pl wrote:
>> >On Sat, 28 Oct 2017 09:20:31 +0200, Jiri Pirko wrote:  
>> >> Sat, Oct 28, 2017 at 02:52:00AM CEST, kubakici@wp.pl wrote:  
>> >> >On Fri, 27 Oct 2017 09:27:30 +0200, Jiri Pirko wrote:    
>> >> >> Yes, it is the same.    
>> >> >
>> >> >FWIW I also see what Amritha and Alex are describing here, for cls_bpf
>> >> >there are no DESTROYs coming on rmmod or qdisc del.  There is a DESTROY
>> >> >if I manually remove the filter (or if an ADD with skip_sw fails).    
>> >> 
>> >> Is this different to the original behaviour? Just for cls_bpf?  
>> >
>> >For cls_bpf the callbacks used to be 100% symmetrical, i.e. destroy
>> >would always be guaranteed if add succeeded (regardless of state of
>> >skip_* flags).  
>> 
>> Hmm. It still should be symmetrical. Looking at following path:
>> cls_bpf_destroy->
>>    __cls_bpf_delete->
>>       cls_bpf_stop_offload->
>>          cls_bpf_offload_cmd(tp, prog, TC_CLSBPF_DESTROY)
>> 
>> I don't see how any tp could be missed. Could you please check this
>> callpath is utilized during your action (rmmod or qdisc del)?
>
>The same path seems to be utilized but the unbind comes before the
>filters are destroyed.

Ah, will fix. Thanks!

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks
  2017-10-29  7:26                     ` Jiri Pirko
@ 2017-10-31 10:46                       ` Jiri Pirko
  0 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-10-31 10:46 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Nambiar, Amritha, Alexander Duyck, David Miller, Netdev,
	Jamal Hadi Salim, Cong Wang, mlxsw, Andrew Lunn, Vivien Didelot,
	Florian Fainelli, Michael Chan, Ganesh Goudar, Jeff Kirsher,
	Saeed Mahameed, Matan Barak, Leon Romanovsky, idosch,
	Alexei Starovoitov, Daniel Borkmann, Simon Horman

Sun, Oct 29, 2017 at 08:26:25AM CET, jiri@resnulli.us wrote:
>Sat, Oct 28, 2017 at 07:17:24PM CEST, kubakici@wp.pl wrote:
>>On Sat, 28 Oct 2017 10:43:51 +0200, Jiri Pirko wrote:
>>> Sat, Oct 28, 2017 at 09:53:21AM CEST, kubakici@wp.pl wrote:
>>> >On Sat, 28 Oct 2017 09:20:31 +0200, Jiri Pirko wrote:  
>>> >> Sat, Oct 28, 2017 at 02:52:00AM CEST, kubakici@wp.pl wrote:  
>>> >> >On Fri, 27 Oct 2017 09:27:30 +0200, Jiri Pirko wrote:    
>>> >> >> Yes, it is the same.    
>>> >> >
>>> >> >FWIW I also see what Amritha and Alex are describing here, for cls_bpf
>>> >> >there are no DESTROYs coming on rmmod or qdisc del.  There is a DESTROY
>>> >> >if I manually remove the filter (or if an ADD with skip_sw fails).    
>>> >> 
>>> >> Is this different to the original behaviour? Just for cls_bpf?  
>>> >
>>> >For cls_bpf the callbacks used to be 100% symmetrical, i.e. destroy
>>> >would always be guaranteed if add succeeded (regardless of state of
>>> >skip_* flags).  
>>> 
>>> Hmm. It still should be symmetrical. Looking at following path:
>>> cls_bpf_destroy->
>>>    __cls_bpf_delete->
>>>       cls_bpf_stop_offload->
>>>          cls_bpf_offload_cmd(tp, prog, TC_CLSBPF_DESTROY)
>>> 
>>> I don't see how any tp could be missed. Could you please check this
>>> callpath is utilized during your action (rmmod or qdisc del)?
>>
>>The same path seems to be utilized but the unbind comes before the
>>filters are destroyed.
>
>Ah, will fix. Thanks!

We need to move the tcf_block_offload_unbind(block, q, ei) call after
the chains are flushed. There are big waves around this code in net and
net-next atm. Will send a patch to fix this once the storm is over.

Thanks!
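The ordering issue can be modelled in a few lines of userspace C (all
names here are illustrative stand-ins, not the real tcf_block_put()
internals): if the offload callback is unregistered before the chains
are flushed, the flush's DESTROY events can no longer reach the
hardware.

```c
#include <stddef.h>

struct toy_block {
	int (*offload_cb)(int destroy);	/* NULL once unbound */
	int nfilters;			/* filters in SW and HW */
	int hw_destroys_seen;
};

/* Tearing down each software filter tries to notify the hardware. */
static void toy_flush_chains(struct toy_block *b)
{
	while (b->nfilters-- > 0) {
		if (b->offload_cb)
			b->hw_destroys_seen += b->offload_cb(1);
	}
	b->nfilters = 0;
}

static void toy_unbind(struct toy_block *b)
{
	b->offload_cb = NULL;
}

/* Buggy order: unbind first, so HW never sees the DESTROYs. */
static int teardown_buggy(struct toy_block *b)
{
	toy_unbind(b);
	toy_flush_chains(b);
	return b->hw_destroys_seen;
}

/* Fixed order, as proposed: flush the chains, then unbind. */
static int teardown_fixed(struct toy_block *b)
{
	toy_flush_chains(b);
	toy_unbind(b);
	return b->hw_destroys_seen;
}
```

In the buggy ordering the software filters are freed but the hardware
keeps its copies, which is exactly the "qdisc del leaves HW filters
behind" symptom reported in this thread.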

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 08/20] net: sched: cls_bpf: call block callbacks for offload
  2017-10-19 13:50 ` [patch net-next v2 08/20] net: sched: cls_bpf: " Jiri Pirko
@ 2017-11-01  0:44   ` Jakub Kicinski
  2017-11-01  8:33     ` Jiri Pirko
  0 siblings, 1 reply; 41+ messages in thread
From: Jakub Kicinski @ 2017-11-01  0:44 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, jhs, xiyou.wangcong, mlxsw, andrew,
	vivien.didelot, f.fainelli, michael.chan, ganeshgr,
	jeffrey.t.kirsher, saeedm, matanb, leonro, idosch, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

On Thu, 19 Oct 2017 15:50:36 +0200, Jiri Pirko wrote:
> @@ -159,17 +162,38 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
>  	cls_bpf.exts_integrated = prog->exts_integrated;
>  	cls_bpf.gen_flags = prog->gen_flags;
>  
> -	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSBPF, &cls_bpf);
> -	if (!err && (cmd == TC_CLSBPF_ADD || cmd == TC_CLSBPF_REPLACE))
> -		prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
> +	if (tc_can_offload(dev)) {
> +		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSBPF,
> +						    &cls_bpf);
> +		if (addorrep) {
> +			if (err) {
> +				if (skip_sw)
> +					return err;
> +			} else {
> +				prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
> +			}
> +		}
> +	}
> +
> +	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSBPF, &cls_bpf, skip_sw);
> +	if (addorrep) {
> +		if (err < 0) {
> +			cls_bpf_offload_cmd(tp, prog, TC_CLSBPF_DESTROY);

It seems counterintuitive that the appropriate action for a failed
REPLACE is DESTROY.  One would expect a bad REPLACE X -> Y to be 
followed by a REPLACE Y -> X (i.e. go back to X).

At least my reading of cls_bpf is that if replace fails, the software
path will keep using the old prog.  Is this maybe something that's
different in flower?  Or am I reading the code wrong?

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [patch net-next v2 08/20] net: sched: cls_bpf: call block callbacks for offload
  2017-11-01  0:44   ` Jakub Kicinski
@ 2017-11-01  8:33     ` Jiri Pirko
  0 siblings, 0 replies; 41+ messages in thread
From: Jiri Pirko @ 2017-11-01  8:33 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, davem, jhs, xiyou.wangcong, mlxsw, andrew,
	vivien.didelot, f.fainelli, michael.chan, ganeshgr,
	jeffrey.t.kirsher, saeedm, matanb, leonro, idosch, ast, daniel,
	simon.horman, pieter.jansenvanvuuren, john.hurley,
	alexander.h.duyck

Wed, Nov 01, 2017 at 01:44:09AM CET, jakub.kicinski@netronome.com wrote:
>On Thu, 19 Oct 2017 15:50:36 +0200, Jiri Pirko wrote:
>> @@ -159,17 +162,38 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
>>  	cls_bpf.exts_integrated = prog->exts_integrated;
>>  	cls_bpf.gen_flags = prog->gen_flags;
>>  
>> -	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSBPF, &cls_bpf);
>> -	if (!err && (cmd == TC_CLSBPF_ADD || cmd == TC_CLSBPF_REPLACE))
>> -		prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
>> +	if (tc_can_offload(dev)) {
>> +		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_CLSBPF,
>> +						    &cls_bpf);
>> +		if (addorrep) {
>> +			if (err) {
>> +				if (skip_sw)
>> +					return err;
>> +			} else {
>> +				prog->gen_flags |= TCA_CLS_FLAGS_IN_HW;
>> +			}
>> +		}
>> +	}
>> +
>> +	err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSBPF, &cls_bpf, skip_sw);
>> +	if (addorrep) {
>> +		if (err < 0) {
>> +			cls_bpf_offload_cmd(tp, prog, TC_CLSBPF_DESTROY);
>
>It seems counter intuitive that the appropriate action for a failed
>REPLACE is DESTROY.  One would expect a bad REPLACE X -> Y to be 
>followed by a REPLACE Y -> X (i.e. go back to X).

That makes sense.

>
>At least my reading of cls_bpf is that if replace fails software path
>will keep using the old prog.  Is this maybe something that's different
>in flower?  Or am I reading the code wrong?

In flower, it is not possible to do a replace. First the new one is
added, and then the old one is removed.
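The difference between the two rollback strategies Jakub raises can be
shown with a toy model (the names below are hypothetical, not the
cls_bpf or flower code): destroying on a failed replace leaves the
hardware empty while software keeps running the old program, whereas
re-installing the old program keeps the two in sync.

```c
#include <stddef.h>

struct toy_hw {
	int prog;	/* 0 == no program offloaded */
};

/* Model of the HW replace op: negative programs are rejected. */
static int toy_hw_replace(struct toy_hw *hw, int new_prog)
{
	if (new_prog < 0)
		return -1;
	hw->prog = new_prog;
	return 0;
}

/* Rollback by DESTROY: on failure, HW ends up with nothing even
 * though the software path still runs old_prog. */
static int replace_rollback_destroy(struct toy_hw *hw, int old_prog,
				    int new_prog)
{
	(void)old_prog;
	if (toy_hw_replace(hw, new_prog)) {
		hw->prog = 0;			/* DESTROY */
		return -1;
	}
	return 0;
}

/* Rollback by re-REPLACE: on failure, HW goes back to old_prog,
 * matching the software state. */
static int replace_rollback_restore(struct toy_hw *hw, int old_prog,
				    int new_prog)
{
	if (toy_hw_replace(hw, new_prog)) {
		toy_hw_replace(hw, old_prog);	/* go back to X */
		return -1;
	}
	return 0;
}
```

The second variant implements the REPLACE Y -> X behaviour Jakub
expects for cls_bpf; flower's add-then-remove sequence sidesteps the
question because there is never an in-place replace to roll back.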

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2017-11-01  8:34 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-19 13:50 [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 01/20] net: sched: add block bind/unbind notif. and extended block_get/put Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 02/20] net: sched: use extended variants of block_get/put in ingress and clsact qdiscs Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 03/20] net: sched: introduce per-block callbacks Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 04/20] net: sched: use tc_setup_cb_call to call " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 05/20] net: sched: cls_matchall: call block callbacks for offload Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 06/20] net: sched: cls_u32: swap u32_remove_hw_knode and u32_remove_hw_hnode Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 07/20] net: sched: cls_u32: call block callbacks for offload Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 08/20] net: sched: cls_bpf: " Jiri Pirko
2017-11-01  0:44   ` Jakub Kicinski
2017-11-01  8:33     ` Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 09/20] mlxsw: spectrum: Convert ndo_setup_tc offloads to block callbacks Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 10/20] mlx5e: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 11/20] bnxt: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 12/20] cxgb4: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 13/20] ixgbe: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 14/20] mlx5e_rep: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 15/20] nfp: flower: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 16/20] nfp: bpf: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 17/20] dsa: " Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 18/20] net: sched: avoid ndo_setup_tc calls for TC_SETUP_CLS* Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 19/20] net: sched: remove unused classid field from tc_cls_common_offload Jiri Pirko
2017-10-19 13:50 ` [patch net-next v2 20/20] net: sched: remove unused is_classid_clsact_ingress/egress helpers Jiri Pirko
2017-10-21  2:04 ` [patch net-next v2 00/20] net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks David Miller
2017-10-24 16:01   ` Alexander Duyck
2017-10-24 16:24     ` Alexander Duyck
     [not found]       ` <CAJ3xEMgdRBEuv0hb_G43zJXXRy=PWgY2tdHuwDN0Opc2NVF35g@mail.gmail.com>
2017-10-24 17:03         ` Or Gerlitz
2017-10-24 17:22         ` Nambiar, Amritha
2017-10-25  8:36       ` Jiri Pirko
2017-10-25 12:15     ` Jiri Pirko
     [not found]       ` <aaa66a3d-766b-f758-692a-eba7f5a8702f@mellanox.com>
2017-10-25 13:42         ` Jiri Pirko
2017-10-25 13:48           ` Or Gerlitz
2017-10-26 20:24       ` Nambiar, Amritha
2017-10-27  7:27         ` Jiri Pirko
2017-10-28  0:52           ` Jakub Kicinski
2017-10-28  7:20             ` Jiri Pirko
2017-10-28  7:53               ` Jakub Kicinski
2017-10-28  8:43                 ` Jiri Pirko
2017-10-28 17:17                   ` Jakub Kicinski
2017-10-29  7:26                     ` Jiri Pirko
2017-10-31 10:46                       ` Jiri Pirko
