netfilter-devel.vger.kernel.org archive mirror
* [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
@ 2019-04-26  0:33 Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 1/9] net: sched: move tcf_block_cb before indr_block Pablo Neira Ayuso
                   ` (10 more replies)
  0 siblings, 11 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

Hi,

This patchset introduces the changes needed to reuse the existing
.ndo_setup_tc netdev operation from netfilter.

The idea is to move tcf_block_cb to net/core/flow_offload.c and rename
it to flow_block_cb. This object provides the minimal infrastructure to
set up per-block callbacks that are called to offload policies to
hardware.
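
For reference, this is the object as it exists today in
net/sched/cls_api.c; the move would essentially rename this structure
and its helpers:

	struct tcf_block_cb {
		struct list_head list;	/* entry in the block's cb_list */
		tc_setup_cb_t *cb;	/* driver callback receiving the offload requests */
		void *cb_ident;		/* identity cookie used for lookups */
		void *cb_priv;		/* driver private data passed to the callback */
		unsigned int refcnt;
	};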

The tcf_block object is specific to TC; it allows policies to be shared
between ingress devices. This object has a list of tcf_block_cb objects
that are called to offload the policies to hardware. In netfilter, the
idea is to store the list of tcf_block_cb objects in a chain that can
be bound to several devices, e.g.

  chain x {
	type filter hook ingress devices = { eth0, eth1 } priority 0;
	...
  }

Hence, this emulates the shared blocks that Jiri added to TC.

Note that the list of tcf_block_cb objects will be called to offload
policies in this chain.

To reuse this infrastructure, I need to remove the dependency on the
tcf_block object and on tc/cls_api (see .reoffload), which is called
from the driver side. Hence, this patchset reworks the per-block
callback infrastructure: the driver sets up the tcf_block_cb object,
then conveys the list of callbacks back to the core through the
tc_block_offload object.

            cls_api                         driver
	TC_SETUP_BLOCK    ---------->  setup tcf_block_cb
       tc_block_offload           add it to tc_block_offload->cb_list
                                                |
          register     <------------------------'
         tcf_block_cb
         ->reoffload

Therefore, registration does not happen from drivers anymore; instead,
it is done from the core. The driver just sets up the tcf_block_cb
object that wires up the connection between the offloaded block (chains
in the netfilter case) and the driver.
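
To give an idea of the intended driver-side pattern (the function and
variable names below are made up for illustration), binding a block
would roughly look like:

	case TC_BLOCK_BIND:
		/* f is the struct tc_block_offload passed via TC_SETUP_BLOCK */
		block_cb = tcf_block_cb_alloc(foo_setup_tc_block_cb, priv, priv,
					      NULL);
		if (!block_cb)
			return -ENOMEM;
		/* hand it over to the core, which registers it and replays
		 * the existing rules through .reoffload.
		 */
		tcf_block_cb_list_add(block_cb, &f->cb_list);
		return 0;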

This patchset is compile tested only at this stage.

Comments welcome, thanks.

Pablo Neira Ayuso (9):
  net: sched: move tcf_block_cb before indr_block
  net: sched: add tcf_block_cb_alloc()
  net: sched: add tcf_block_cb_free()
  net: sched: add tcf_block_setup()
  net: sched: add release callback to struct tcf_block_cb
  net: sched: add tcf_setup_block_offload()
  net: use tcf_block_setup() infrastructure
  net: sched: remove tcf_block_cb_{register,unregister}()
  net: cls_api: do not expose tcf_block to drivers

 drivers/net/ethernet/broadcom/bnxt/bnxt.c          |  26 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c      |  28 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c    |  26 +-
 drivers/net/ethernet/intel/i40e/i40e_main.c        |  26 +-
 drivers/net/ethernet/intel/iavf/iavf_main.c        |  35 +-
 drivers/net/ethernet/intel/igb/igb_main.c          |  23 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c      |  27 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |  27 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   |  59 +-
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c     |  64 ++-
 drivers/net/ethernet/netronome/nfp/abm/cls.c       |  17 +-
 drivers/net/ethernet/netronome/nfp/bpf/main.c      |  29 +-
 .../net/ethernet/netronome/nfp/flower/offload.c    |  62 +--
 drivers/net/ethernet/qlogic/qede/qede_main.c       |  23 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c  |  22 +-
 drivers/net/netdevsim/netdev.c                     |  26 +-
 include/net/pkt_cls.h                              |  29 +-
 net/dsa/slave.c                                    |  15 +-
 net/sched/cls_api.c                                | 598 ++++++++++++---------
 19 files changed, 528 insertions(+), 634 deletions(-)

-- 
2.11.0


* [PATCH net-next,RFC 1/9] net: sched: move tcf_block_cb before indr_block
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 2/9] net: sched: add tcf_block_cb_alloc() Pablo Neira Ayuso
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

The indr_block infrastructure will depend on tcf_block_cb, so move this
code above it to avoid forward declarations.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 net/sched/cls_api.c | 484 ++++++++++++++++++++++++++--------------------------
 1 file changed, 242 insertions(+), 242 deletions(-)

diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 9115f053883f..71ec0a34100d 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -568,6 +568,248 @@ static struct tcf_block *tc_dev_ingress_block(struct net_device *dev)
 	return cops->tcf_block(qdisc, TC_H_MIN_INGRESS, NULL);
 }
 
+static struct tcf_chain *
+__tcf_get_next_chain(struct tcf_block *block, struct tcf_chain *chain)
+{
+	mutex_lock(&block->lock);
+	if (chain)
+		chain = list_is_last(&chain->list, &block->chain_list) ?
+			NULL : list_next_entry(chain, list);
+	else
+		chain = list_first_entry_or_null(&block->chain_list,
+						 struct tcf_chain, list);
+
+	/* skip all action-only chains */
+	while (chain && tcf_chain_held_by_acts_only(chain))
+		chain = list_is_last(&chain->list, &block->chain_list) ?
+			NULL : list_next_entry(chain, list);
+
+	if (chain)
+		tcf_chain_hold(chain);
+	mutex_unlock(&block->lock);
+
+	return chain;
+}
+
+/* Function to be used by all clients that want to iterate over all chains on
+ * block. It properly obtains block->lock and takes reference to chain before
+ * returning it. Users of this function must be tolerant to concurrent chain
+ * insertion/deletion or ensure that no concurrent chain modification is
+ * possible. Note that all netlink dump callbacks cannot guarantee to provide
+ * consistent dump because rtnl lock is released each time skb is filled with
+ * data and sent to user-space.
+ */
+
+struct tcf_chain *
+tcf_get_next_chain(struct tcf_block *block, struct tcf_chain *chain)
+{
+	struct tcf_chain *chain_next = __tcf_get_next_chain(block, chain);
+
+	if (chain)
+		tcf_chain_put(chain);
+
+	return chain_next;
+}
+EXPORT_SYMBOL(tcf_get_next_chain);
+
+static struct tcf_proto *
+__tcf_get_next_proto(struct tcf_chain *chain, struct tcf_proto *tp)
+{
+	u32 prio = 0;
+
+	ASSERT_RTNL();
+	mutex_lock(&chain->filter_chain_lock);
+
+	if (!tp) {
+		tp = tcf_chain_dereference(chain->filter_chain, chain);
+	} else if (tcf_proto_is_deleting(tp)) {
+		/* 'deleting' flag is set and chain->filter_chain_lock was
+		 * unlocked, which means next pointer could be invalid. Restart
+		 * search.
+		 */
+		prio = tp->prio + 1;
+		tp = tcf_chain_dereference(chain->filter_chain, chain);
+
+		for (; tp; tp = tcf_chain_dereference(tp->next, chain))
+			if (!tp->deleting && tp->prio >= prio)
+				break;
+	} else {
+		tp = tcf_chain_dereference(tp->next, chain);
+	}
+
+	if (tp)
+		tcf_proto_get(tp);
+
+	mutex_unlock(&chain->filter_chain_lock);
+
+	return tp;
+}
+
+/* Function to be used by all clients that want to iterate over all tp's on
+ * chain. Users of this function must be tolerant to concurrent tp
+ * insertion/deletion or ensure that no concurrent chain modification is
+ * possible. Note that all netlink dump callbacks cannot guarantee to provide
+ * consistent dump because rtnl lock is released each time skb is filled with
+ * data and sent to user-space.
+ */
+
+struct tcf_proto *
+tcf_get_next_proto(struct tcf_chain *chain, struct tcf_proto *tp,
+		   bool rtnl_held)
+{
+	struct tcf_proto *tp_next = __tcf_get_next_proto(chain, tp);
+
+	if (tp)
+		tcf_proto_put(tp, rtnl_held, NULL);
+
+	return tp_next;
+}
+EXPORT_SYMBOL(tcf_get_next_proto);
+
+static int
+tcf_block_playback_offloads(struct tcf_block *block, tc_setup_cb_t *cb,
+			    void *cb_priv, bool add, bool offload_in_use,
+			    struct netlink_ext_ack *extack)
+{
+	struct tcf_chain *chain, *chain_prev;
+	struct tcf_proto *tp, *tp_prev;
+	int err;
+
+	for (chain = __tcf_get_next_chain(block, NULL);
+	     chain;
+	     chain_prev = chain,
+		     chain = __tcf_get_next_chain(block, chain),
+		     tcf_chain_put(chain_prev)) {
+		for (tp = __tcf_get_next_proto(chain, NULL); tp;
+		     tp_prev = tp,
+			     tp = __tcf_get_next_proto(chain, tp),
+			     tcf_proto_put(tp_prev, true, NULL)) {
+			if (tp->ops->reoffload) {
+				err = tp->ops->reoffload(tp, add, cb, cb_priv,
+							 extack);
+				if (err && add)
+					goto err_playback_remove;
+			} else if (add && offload_in_use) {
+				err = -EOPNOTSUPP;
+				NL_SET_ERR_MSG(extack, "Filter HW offload failed - classifier without re-offloading support");
+				goto err_playback_remove;
+			}
+		}
+	}
+
+	return 0;
+
+err_playback_remove:
+	tcf_proto_put(tp, true, NULL);
+	tcf_chain_put(chain);
+	tcf_block_playback_offloads(block, cb, cb_priv, false, offload_in_use,
+				    extack);
+	return err;
+}
+
+struct tcf_block_cb {
+	struct list_head list;
+	tc_setup_cb_t *cb;
+	void *cb_ident;
+	void *cb_priv;
+	unsigned int refcnt;
+};
+
+static bool tcf_block_offload_in_use(struct tcf_block *block)
+{
+	return block->offloadcnt;
+}
+
+void *tcf_block_cb_priv(struct tcf_block_cb *block_cb)
+{
+	return block_cb->cb_priv;
+}
+EXPORT_SYMBOL(tcf_block_cb_priv);
+
+struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
+					 tc_setup_cb_t *cb, void *cb_ident)
+{	struct tcf_block_cb *block_cb;
+
+	list_for_each_entry(block_cb, &block->cb_list, list)
+		if (block_cb->cb == cb && block_cb->cb_ident == cb_ident)
+			return block_cb;
+	return NULL;
+}
+EXPORT_SYMBOL(tcf_block_cb_lookup);
+
+void tcf_block_cb_incref(struct tcf_block_cb *block_cb)
+{
+	block_cb->refcnt++;
+}
+EXPORT_SYMBOL(tcf_block_cb_incref);
+
+unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb)
+{
+	return --block_cb->refcnt;
+}
+EXPORT_SYMBOL(tcf_block_cb_decref);
+
+struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
+					     tc_setup_cb_t *cb, void *cb_ident,
+					     void *cb_priv,
+					     struct netlink_ext_ack *extack)
+{
+	struct tcf_block_cb *block_cb;
+	int err;
+
+	/* Replay any already present rules */
+	err = tcf_block_playback_offloads(block, cb, cb_priv, true,
+					  tcf_block_offload_in_use(block),
+					  extack);
+	if (err)
+		return ERR_PTR(err);
+
+	block_cb = kzalloc(sizeof(*block_cb), GFP_KERNEL);
+	if (!block_cb)
+		return ERR_PTR(-ENOMEM);
+	block_cb->cb = cb;
+	block_cb->cb_ident = cb_ident;
+	block_cb->cb_priv = cb_priv;
+	list_add(&block_cb->list, &block->cb_list);
+	return block_cb;
+}
+EXPORT_SYMBOL(__tcf_block_cb_register);
+
+int tcf_block_cb_register(struct tcf_block *block,
+			  tc_setup_cb_t *cb, void *cb_ident,
+			  void *cb_priv, struct netlink_ext_ack *extack)
+{
+	struct tcf_block_cb *block_cb;
+
+	block_cb = __tcf_block_cb_register(block, cb, cb_ident, cb_priv,
+					   extack);
+	return PTR_ERR_OR_ZERO(block_cb);
+}
+EXPORT_SYMBOL(tcf_block_cb_register);
+
+void __tcf_block_cb_unregister(struct tcf_block *block,
+			       struct tcf_block_cb *block_cb)
+{
+	tcf_block_playback_offloads(block, block_cb->cb, block_cb->cb_priv,
+				    false, tcf_block_offload_in_use(block),
+				    NULL);
+	list_del(&block_cb->list);
+	kfree(block_cb);
+}
+EXPORT_SYMBOL(__tcf_block_cb_unregister);
+
+void tcf_block_cb_unregister(struct tcf_block *block,
+			     tc_setup_cb_t *cb, void *cb_ident)
+{
+	struct tcf_block_cb *block_cb;
+
+	block_cb = tcf_block_cb_lookup(block, cb, cb_ident);
+	if (!block_cb)
+		return;
+	__tcf_block_cb_unregister(block, block_cb);
+}
+EXPORT_SYMBOL(tcf_block_cb_unregister);
+
 static struct rhashtable indr_setup_block_ht;
 
 struct tc_indr_block_dev {
@@ -785,11 +1027,6 @@ static void tc_indr_block_call(struct tcf_block *block, struct net_device *dev,
 				  &bo);
 }
 
-static bool tcf_block_offload_in_use(struct tcf_block *block)
-{
-	return block->offloadcnt;
-}
-
 static int tcf_block_offload_cmd(struct tcf_block *block,
 				 struct net_device *dev,
 				 struct tcf_block_ext_info *ei,
@@ -1006,104 +1243,6 @@ static struct tcf_block *tcf_block_refcnt_get(struct net *net, u32 block_index)
 	return block;
 }
 
-static struct tcf_chain *
-__tcf_get_next_chain(struct tcf_block *block, struct tcf_chain *chain)
-{
-	mutex_lock(&block->lock);
-	if (chain)
-		chain = list_is_last(&chain->list, &block->chain_list) ?
-			NULL : list_next_entry(chain, list);
-	else
-		chain = list_first_entry_or_null(&block->chain_list,
-						 struct tcf_chain, list);
-
-	/* skip all action-only chains */
-	while (chain && tcf_chain_held_by_acts_only(chain))
-		chain = list_is_last(&chain->list, &block->chain_list) ?
-			NULL : list_next_entry(chain, list);
-
-	if (chain)
-		tcf_chain_hold(chain);
-	mutex_unlock(&block->lock);
-
-	return chain;
-}
-
-/* Function to be used by all clients that want to iterate over all chains on
- * block. It properly obtains block->lock and takes reference to chain before
- * returning it. Users of this function must be tolerant to concurrent chain
- * insertion/deletion or ensure that no concurrent chain modification is
- * possible. Note that all netlink dump callbacks cannot guarantee to provide
- * consistent dump because rtnl lock is released each time skb is filled with
- * data and sent to user-space.
- */
-
-struct tcf_chain *
-tcf_get_next_chain(struct tcf_block *block, struct tcf_chain *chain)
-{
-	struct tcf_chain *chain_next = __tcf_get_next_chain(block, chain);
-
-	if (chain)
-		tcf_chain_put(chain);
-
-	return chain_next;
-}
-EXPORT_SYMBOL(tcf_get_next_chain);
-
-static struct tcf_proto *
-__tcf_get_next_proto(struct tcf_chain *chain, struct tcf_proto *tp)
-{
-	u32 prio = 0;
-
-	ASSERT_RTNL();
-	mutex_lock(&chain->filter_chain_lock);
-
-	if (!tp) {
-		tp = tcf_chain_dereference(chain->filter_chain, chain);
-	} else if (tcf_proto_is_deleting(tp)) {
-		/* 'deleting' flag is set and chain->filter_chain_lock was
-		 * unlocked, which means next pointer could be invalid. Restart
-		 * search.
-		 */
-		prio = tp->prio + 1;
-		tp = tcf_chain_dereference(chain->filter_chain, chain);
-
-		for (; tp; tp = tcf_chain_dereference(tp->next, chain))
-			if (!tp->deleting && tp->prio >= prio)
-				break;
-	} else {
-		tp = tcf_chain_dereference(tp->next, chain);
-	}
-
-	if (tp)
-		tcf_proto_get(tp);
-
-	mutex_unlock(&chain->filter_chain_lock);
-
-	return tp;
-}
-
-/* Function to be used by all clients that want to iterate over all tp's on
- * chain. Users of this function must be tolerant to concurrent tp
- * insertion/deletion or ensure that no concurrent chain modification is
- * possible. Note that all netlink dump callbacks cannot guarantee to provide
- * consistent dump because rtnl lock is released each time skb is filled with
- * data and sent to user-space.
- */
-
-struct tcf_proto *
-tcf_get_next_proto(struct tcf_chain *chain, struct tcf_proto *tp,
-		   bool rtnl_held)
-{
-	struct tcf_proto *tp_next = __tcf_get_next_proto(chain, tp);
-
-	if (tp)
-		tcf_proto_put(tp, rtnl_held, NULL);
-
-	return tp_next;
-}
-EXPORT_SYMBOL(tcf_get_next_proto);
-
 static void tcf_block_flush_all_chains(struct tcf_block *block, bool rtnl_held)
 {
 	struct tcf_chain *chain;
@@ -1497,145 +1636,6 @@ void tcf_block_put(struct tcf_block *block)
 
 EXPORT_SYMBOL(tcf_block_put);
 
-struct tcf_block_cb {
-	struct list_head list;
-	tc_setup_cb_t *cb;
-	void *cb_ident;
-	void *cb_priv;
-	unsigned int refcnt;
-};
-
-void *tcf_block_cb_priv(struct tcf_block_cb *block_cb)
-{
-	return block_cb->cb_priv;
-}
-EXPORT_SYMBOL(tcf_block_cb_priv);
-
-struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
-					 tc_setup_cb_t *cb, void *cb_ident)
-{	struct tcf_block_cb *block_cb;
-
-	list_for_each_entry(block_cb, &block->cb_list, list)
-		if (block_cb->cb == cb && block_cb->cb_ident == cb_ident)
-			return block_cb;
-	return NULL;
-}
-EXPORT_SYMBOL(tcf_block_cb_lookup);
-
-void tcf_block_cb_incref(struct tcf_block_cb *block_cb)
-{
-	block_cb->refcnt++;
-}
-EXPORT_SYMBOL(tcf_block_cb_incref);
-
-unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb)
-{
-	return --block_cb->refcnt;
-}
-EXPORT_SYMBOL(tcf_block_cb_decref);
-
-static int
-tcf_block_playback_offloads(struct tcf_block *block, tc_setup_cb_t *cb,
-			    void *cb_priv, bool add, bool offload_in_use,
-			    struct netlink_ext_ack *extack)
-{
-	struct tcf_chain *chain, *chain_prev;
-	struct tcf_proto *tp, *tp_prev;
-	int err;
-
-	for (chain = __tcf_get_next_chain(block, NULL);
-	     chain;
-	     chain_prev = chain,
-		     chain = __tcf_get_next_chain(block, chain),
-		     tcf_chain_put(chain_prev)) {
-		for (tp = __tcf_get_next_proto(chain, NULL); tp;
-		     tp_prev = tp,
-			     tp = __tcf_get_next_proto(chain, tp),
-			     tcf_proto_put(tp_prev, true, NULL)) {
-			if (tp->ops->reoffload) {
-				err = tp->ops->reoffload(tp, add, cb, cb_priv,
-							 extack);
-				if (err && add)
-					goto err_playback_remove;
-			} else if (add && offload_in_use) {
-				err = -EOPNOTSUPP;
-				NL_SET_ERR_MSG(extack, "Filter HW offload failed - classifier without re-offloading support");
-				goto err_playback_remove;
-			}
-		}
-	}
-
-	return 0;
-
-err_playback_remove:
-	tcf_proto_put(tp, true, NULL);
-	tcf_chain_put(chain);
-	tcf_block_playback_offloads(block, cb, cb_priv, false, offload_in_use,
-				    extack);
-	return err;
-}
-
-struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
-					     tc_setup_cb_t *cb, void *cb_ident,
-					     void *cb_priv,
-					     struct netlink_ext_ack *extack)
-{
-	struct tcf_block_cb *block_cb;
-	int err;
-
-	/* Replay any already present rules */
-	err = tcf_block_playback_offloads(block, cb, cb_priv, true,
-					  tcf_block_offload_in_use(block),
-					  extack);
-	if (err)
-		return ERR_PTR(err);
-
-	block_cb = kzalloc(sizeof(*block_cb), GFP_KERNEL);
-	if (!block_cb)
-		return ERR_PTR(-ENOMEM);
-	block_cb->cb = cb;
-	block_cb->cb_ident = cb_ident;
-	block_cb->cb_priv = cb_priv;
-	list_add(&block_cb->list, &block->cb_list);
-	return block_cb;
-}
-EXPORT_SYMBOL(__tcf_block_cb_register);
-
-int tcf_block_cb_register(struct tcf_block *block,
-			  tc_setup_cb_t *cb, void *cb_ident,
-			  void *cb_priv, struct netlink_ext_ack *extack)
-{
-	struct tcf_block_cb *block_cb;
-
-	block_cb = __tcf_block_cb_register(block, cb, cb_ident, cb_priv,
-					   extack);
-	return PTR_ERR_OR_ZERO(block_cb);
-}
-EXPORT_SYMBOL(tcf_block_cb_register);
-
-void __tcf_block_cb_unregister(struct tcf_block *block,
-			       struct tcf_block_cb *block_cb)
-{
-	tcf_block_playback_offloads(block, block_cb->cb, block_cb->cb_priv,
-				    false, tcf_block_offload_in_use(block),
-				    NULL);
-	list_del(&block_cb->list);
-	kfree(block_cb);
-}
-EXPORT_SYMBOL(__tcf_block_cb_unregister);
-
-void tcf_block_cb_unregister(struct tcf_block *block,
-			     tc_setup_cb_t *cb, void *cb_ident)
-{
-	struct tcf_block_cb *block_cb;
-
-	block_cb = tcf_block_cb_lookup(block, cb, cb_ident);
-	if (!block_cb)
-		return;
-	__tcf_block_cb_unregister(block, block_cb);
-}
-EXPORT_SYMBOL(tcf_block_cb_unregister);
-
 /* Main classifier routine: scans classifier chain attached
  * to this qdisc, (optionally) tests for protocol and asks
  * specific classifiers.
-- 
2.11.0



* [PATCH net-next,RFC 2/9] net: sched: add tcf_block_cb_alloc()
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 1/9] net: sched: move tcf_block_cb before indr_block Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 3/9] net: sched: add tcf_block_cb_free() Pablo Neira Ayuso
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

Add a new helper function to allocate tcf_block_cb objects.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 include/net/pkt_cls.h |  2 ++
 net/sched/cls_api.c   | 23 +++++++++++++++++++----
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index d5e7a1af346f..8ddfbe94e253 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -71,6 +71,8 @@ static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 	return block->q;
 }
 
+struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
+					void *cb_ident, void *cb_priv);
 void *tcf_block_cb_priv(struct tcf_block_cb *block_cb);
 struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
 					 tc_setup_cb_t *cb, void *cb_ident);
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 71ec0a34100d..ddfccf31aac9 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -749,6 +749,23 @@ unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb)
 }
 EXPORT_SYMBOL(tcf_block_cb_decref);
 
+struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
+					void *cb_ident, void *cb_priv)
+{
+	struct tcf_block_cb *block_cb;
+
+	block_cb = kzalloc(sizeof(*block_cb), GFP_KERNEL);
+	if (!block_cb)
+		return NULL;
+
+	block_cb->cb = cb;
+	block_cb->cb_ident = cb_ident;
+	block_cb->cb_priv = cb_priv;
+
+	return block_cb;
+}
+EXPORT_SYMBOL(tcf_block_cb_alloc);
+
 struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
 					     tc_setup_cb_t *cb, void *cb_ident,
 					     void *cb_priv,
@@ -764,12 +781,10 @@ struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
 	if (err)
 		return ERR_PTR(err);
 
-	block_cb = kzalloc(sizeof(*block_cb), GFP_KERNEL);
+	block_cb = tcf_block_cb_alloc(cb, cb_ident, cb_priv);
 	if (!block_cb)
 		return ERR_PTR(-ENOMEM);
-	block_cb->cb = cb;
-	block_cb->cb_ident = cb_ident;
-	block_cb->cb_priv = cb_priv;
+
 	list_add(&block_cb->list, &block->cb_list);
 	return block_cb;
 }
-- 
2.11.0



* [PATCH net-next,RFC 3/9] net: sched: add tcf_block_cb_free()
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 1/9] net: sched: move tcf_block_cb before indr_block Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 2/9] net: sched: add tcf_block_cb_alloc() Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 4/9] net: sched: add tcf_block_setup() Pablo Neira Ayuso
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

Just a stub to release tcf_block_cb objects; a follow-up patch extends
it with a release callback for the private data.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 include/net/pkt_cls.h | 1 +
 net/sched/cls_api.c   | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 8ddfbe94e253..565775289b41 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -73,6 +73,7 @@ static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 
 struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
 					void *cb_ident, void *cb_priv);
+void tcf_block_cb_free(struct tcf_block_cb *block_cb);
 void *tcf_block_cb_priv(struct tcf_block_cb *block_cb);
 struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
 					 tc_setup_cb_t *cb, void *cb_ident);
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index ddfccf31aac9..9f61a2c3cf6f 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -766,6 +766,12 @@ struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
 }
 EXPORT_SYMBOL(tcf_block_cb_alloc);
 
+void tcf_block_cb_free(struct tcf_block_cb *block_cb)
+{
+	kfree(block_cb);
+}
+EXPORT_SYMBOL(tcf_block_cb_free);
+
 struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
 					     tc_setup_cb_t *cb, void *cb_ident,
 					     void *cb_priv,
-- 
2.11.0



* [PATCH net-next,RFC 4/9] net: sched: add tcf_block_setup()
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (2 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 3/9] net: sched: add tcf_block_cb_free() Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 5/9] net: sched: add release callback to struct tcf_block_cb Pablo Neira Ayuso
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

This new function allows us to handle tcf_block_cb registrations /
unregistrations from the core, in order to remove the dependency on the
tcf_block object and the .reoffload cls_api callback.

This patch adds a global tcf_block_cb_list to find block objects based
on the block index field, which tells which tcf_block owns this
tcf_block_cb object.

The tcf_block_cb_list_add() call places the tcf_block_cb object that
has been set up by the driver on the tc_block_offload->cb_list, which
is used to convey this tcf_block_cb object back to the core for
registration / unregistration.
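
For illustration only (hypothetical driver names; the actual driver
conversions come in a later patch), the unbind side is expected to hand
the object back to the core like this:

	case TC_BLOCK_UNBIND:
		block_cb = tcf_block_cb_lookup(f->block, foo_setup_tc_block_cb,
					       priv);
		if (!block_cb)
			return -EOPNOTSUPP;
		/* move it to f->cb_list; tcf_block_unbind() then undoes the
		 * offloads and frees the object.
		 */
		tcf_block_cb_list_move(block_cb, &f->cb_list);
		return 0;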

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 include/net/pkt_cls.h |   4 ++
 net/sched/cls_api.c   | 105 +++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 107 insertions(+), 2 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 565775289b41..d40fcd8c502b 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -74,6 +74,9 @@ static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
 					void *cb_ident, void *cb_priv);
 void tcf_block_cb_free(struct tcf_block_cb *block_cb);
+void tcf_block_cb_list_add(struct tcf_block_cb *block_cb, struct list_head *cb_list);
+void tcf_block_cb_list_move(struct tcf_block_cb *block_cb, struct list_head *cb_list);
+
 void *tcf_block_cb_priv(struct tcf_block_cb *block_cb);
 struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
 					 tc_setup_cb_t *cb, void *cb_ident);
@@ -643,6 +646,7 @@ enum tc_block_command {
 struct tc_block_offload {
 	enum tc_block_command command;
 	enum tcf_block_binder_type binder_type;
+	struct list_head cb_list;
 	struct tcf_block *block;
 	struct netlink_ext_ack *extack;
 };
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 9f61a2c3cf6f..3c16ac802dc4 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -708,6 +708,7 @@ tcf_block_playback_offloads(struct tcf_block *block, tc_setup_cb_t *cb,
 }
 
 struct tcf_block_cb {
+	struct list_head global_list;
 	struct list_head list;
 	tc_setup_cb_t *cb;
 	void *cb_ident;
@@ -726,6 +727,8 @@ void *tcf_block_cb_priv(struct tcf_block_cb *block_cb)
 }
 EXPORT_SYMBOL(tcf_block_cb_priv);
 
+static LIST_HEAD(tcf_block_cb_list);
+
 struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
 					 tc_setup_cb_t *cb, void *cb_ident)
 {	struct tcf_block_cb *block_cb;
@@ -772,6 +775,20 @@ void tcf_block_cb_free(struct tcf_block_cb *block_cb)
 }
 EXPORT_SYMBOL(tcf_block_cb_free);
 
+void tcf_block_cb_list_add(struct tcf_block_cb *block_cb,
+			   struct list_head *cb_list)
+{
+	list_add(&block_cb->global_list, cb_list);
+}
+EXPORT_SYMBOL(tcf_block_cb_list_add);
+
+void tcf_block_cb_list_move(struct tcf_block_cb *block_cb,
+			    struct list_head *cb_list)
+{
+	list_move(&block_cb->global_list, cb_list);
+}
+EXPORT_SYMBOL(tcf_block_cb_list_move);
+
 struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
 					     tc_setup_cb_t *cb, void *cb_ident,
 					     void *cb_priv,
@@ -831,6 +848,78 @@ void tcf_block_cb_unregister(struct tcf_block *block,
 }
 EXPORT_SYMBOL(tcf_block_cb_unregister);
 
+static int tcf_block_bind(struct tcf_block *block, struct tc_block_offload *bo)
+{
+	struct tcf_block_cb *block_cb, *failed_cb;
+	int err, i = 0;
+
+	list_for_each_entry(block_cb, &bo->cb_list, global_list) {
+		err = tcf_block_playback_offloads(block, block_cb->cb,
+						  block_cb->cb_priv, true,
+						  tcf_block_offload_in_use(block),
+						  bo->extack);
+		if (err) {
+			failed_cb = block_cb;
+			goto err_unroll;
+		}
+		list_add(&block_cb->list, &block->cb_list);
+		i++;
+	}
+	list_splice(&bo->cb_list, &tcf_block_cb_list);
+
+	return 0;
+
+err_unroll:
+	list_for_each_entry(block_cb, &bo->cb_list, global_list) {
+		if (i-- > 0) {
+			list_del(&block_cb->list);
+			tcf_block_playback_offloads(block, block_cb->cb,
+						    block_cb->cb_priv, false,
+						    tcf_block_offload_in_use(block),
+						    NULL);
+		}
+		kfree(block_cb);
+	}
+
+	return err;
+}
+
+static void tcf_block_unbind(struct tcf_block *block,
+			     struct tc_block_offload *bo)
+{
+	struct tcf_block_cb *block_cb, *next;
+
+	list_for_each_entry_safe(block_cb, next, &bo->cb_list, global_list) {
+		list_del(&block_cb->global_list);
+		tcf_block_playback_offloads(block, block_cb->cb,
+					    block_cb->cb_priv, false,
+					    tcf_block_offload_in_use(block),
+					    NULL);
+		list_del(&block_cb->list);
+		tcf_block_cb_free(block_cb);
+	}
+}
+
+static int tcf_block_setup(struct tcf_block *block, struct tc_block_offload *bo)
+{
+	int err;
+
+	switch (bo->command) {
+	case TC_BLOCK_BIND:
+		err = tcf_block_bind(block, bo);
+		break;
+	case TC_BLOCK_UNBIND:
+		err = 0;
+		tcf_block_unbind(block, bo);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		err = -EOPNOTSUPP;
+	}
+
+	return err;
+}
+
 static struct rhashtable indr_setup_block_ht;
 
 struct tc_indr_block_dev {
@@ -947,12 +1036,14 @@ static void tc_indr_block_ing_cmd(struct tc_indr_block_dev *indr_dev,
 		.binder_type	= TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS,
 		.block		= indr_dev->block,
 	};
+	INIT_LIST_HEAD(&bo.cb_list);
 
 	if (!indr_dev->block)
 		return;
 
 	indr_block_cb->cb(indr_dev->dev, indr_block_cb->cb_priv, TC_SETUP_BLOCK,
 			  &bo);
+	tcf_block_setup(indr_dev->block, &bo);
 }
 
 int __tc_indr_block_cb_register(struct net_device *dev, void *cb_priv,
@@ -1036,6 +1127,7 @@ static void tc_indr_block_call(struct tcf_block *block, struct net_device *dev,
 		.block		= block,
 		.extack		= extack,
 	};
+	INIT_LIST_HEAD(&bo.cb_list);
 
 	indr_dev = tc_indr_block_dev_lookup(dev);
 	if (!indr_dev)
@@ -1043,9 +1135,11 @@ static void tc_indr_block_call(struct tcf_block *block, struct net_device *dev,
 
 	indr_dev->block = command == TC_BLOCK_BIND ? block : NULL;
 
-	list_for_each_entry(indr_block_cb, &indr_dev->cb_list, list)
+	list_for_each_entry(indr_block_cb, &indr_dev->cb_list, list) {
 		indr_block_cb->cb(dev, indr_block_cb->cb_priv, TC_SETUP_BLOCK,
 				  &bo);
+		tcf_block_setup(block, &bo);
+	}
 }
 
 static int tcf_block_offload_cmd(struct tcf_block *block,
@@ -1055,12 +1149,19 @@ static int tcf_block_offload_cmd(struct tcf_block *block,
 				 struct netlink_ext_ack *extack)
 {
 	struct tc_block_offload bo = {};
+	int err;
 
 	bo.command = command;
 	bo.binder_type = ei->binder_type;
 	bo.block = block;
 	bo.extack = extack;
-	return dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo);
+	INIT_LIST_HEAD(&bo.cb_list);
+
+	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo);
+	if (err < 0)
+		return err;
+
+	return tcf_block_setup(block, &bo);
 }
 
 static int tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q,
-- 
2.11.0



* [PATCH net-next,RFC 5/9] net: sched: add release callback to struct tcf_block_cb
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (3 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 4/9] net: sched: add tcf_block_setup() Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 6/9] net: sched: add tcf_setup_block_offload() Pablo Neira Ayuso
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

Call it when releasing the tcf_block_cb object, in order to free the
private data area.
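
For illustration (hypothetical driver names, not part of this patch), a
driver with per-block private data could now do:

	static void foo_block_release(void *cb_priv)
	{
		kfree(cb_priv);
	}

	block_cb = tcf_block_cb_alloc(foo_setup_tc_block_cb, priv, priv,
				      foo_block_release);

so that the core releases the private data when it frees the
tcf_block_cb object via tcf_block_cb_free().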

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 include/net/pkt_cls.h |  3 ++-
 net/sched/cls_api.c   | 10 ++++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index d40fcd8c502b..52a6484428ce 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -72,7 +72,8 @@ static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 }
 
 struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
-					void *cb_ident, void *cb_priv);
+					void *cb_ident, void *cb_priv,
+					void (*release)(void *cb_priv));
 void tcf_block_cb_free(struct tcf_block_cb *block_cb);
 void tcf_block_cb_list_add(struct tcf_block_cb *block_cb, struct list_head *cb_list);
 void tcf_block_cb_list_move(struct tcf_block_cb *block_cb, struct list_head *cb_list);
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 3c16ac802dc4..963511ec758d 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -711,6 +711,7 @@ struct tcf_block_cb {
 	struct list_head global_list;
 	struct list_head list;
 	tc_setup_cb_t *cb;
+	void (*release)(void *cb_priv);
 	void *cb_ident;
 	void *cb_priv;
 	unsigned int refcnt;
@@ -753,7 +754,8 @@ unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb)
 EXPORT_SYMBOL(tcf_block_cb_decref);
 
 struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
-					void *cb_ident, void *cb_priv)
+					void *cb_ident, void *cb_priv,
+					void (*release)(void *cb_priv))
 {
 	struct tcf_block_cb *block_cb;
 
@@ -763,6 +765,7 @@ struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
 
 	block_cb->cb = cb;
 	block_cb->cb_ident = cb_ident;
+	block_cb->release = release;
 	block_cb->cb_priv = cb_priv;
 
 	return block_cb;
@@ -771,6 +774,9 @@ EXPORT_SYMBOL(tcf_block_cb_alloc);
 
 void tcf_block_cb_free(struct tcf_block_cb *block_cb)
 {
+	if (block_cb->release)
+		block_cb->release(block_cb->cb_priv);
+
 	kfree(block_cb);
 }
 EXPORT_SYMBOL(tcf_block_cb_free);
@@ -804,7 +810,7 @@ struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
 	if (err)
 		return ERR_PTR(err);
 
-	block_cb = tcf_block_cb_alloc(cb, cb_ident, cb_priv);
+	block_cb = tcf_block_cb_alloc(cb, cb_ident, cb_priv, NULL);
 	if (!block_cb)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.11.0



* [PATCH net-next,RFC 6/9] net: sched: add tcf_setup_block_offload()
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (4 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 5/9] net: sched: add release callback to struct tcf_block_cb Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 7/9] net: use tcf_block_setup() infrastructure Pablo Neira Ayuso
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

Most drivers do the same thing to set up the block; add a helper
function to do this.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c          | 26 ++++------------
 drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c      | 28 ++++-------------
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c    | 26 ++++------------
 drivers/net/ethernet/intel/i40e/i40e_main.c        | 26 ++++------------
 drivers/net/ethernet/intel/iavf/iavf_main.c        | 35 ++++------------------
 drivers/net/ethernet/intel/igb/igb_main.c          | 23 ++------------
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c      | 27 ++++-------------
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  | 27 ++++-------------
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   | 26 ++++------------
 drivers/net/ethernet/netronome/nfp/abm/cls.c       | 17 ++---------
 drivers/net/ethernet/netronome/nfp/bpf/main.c      | 29 ++++--------------
 .../net/ethernet/netronome/nfp/flower/offload.c    | 29 ++++--------------
 drivers/net/ethernet/qlogic/qede/qede_main.c       | 23 ++------------
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c  | 22 ++------------
 drivers/net/netdevsim/netdev.c                     | 26 ++++------------
 include/net/pkt_cls.h                              |  3 ++
 net/sched/cls_api.c                                | 20 +++++++++++++
 17 files changed, 89 insertions(+), 324 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 6528a597367b..0c2da8b87e7e 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -9726,32 +9726,16 @@ static int bnxt_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int bnxt_setup_tc_block(struct net_device *dev,
-			       struct tc_block_offload *f)
-{
-	struct bnxt *bp = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, bnxt_setup_tc_block_cb,
-					     bp, bp, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, bnxt_setup_tc_block_cb, bp);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int bnxt_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			 void *type_data)
 {
+	struct bnxt *bp = netdev_priv(dev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return bnxt_setup_tc_block(dev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       bnxt_setup_tc_block_cb, bp,
+					       true);
 	case TC_SETUP_QDISC_MQPRIO: {
 		struct tc_mqprio_qopt *mqprio = type_data;
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
index f760921389a3..172fabc62140 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
@@ -161,34 +161,16 @@ static int bnxt_vf_rep_setup_tc_block_cb(enum tc_setup_type type,
 	}
 }
 
-static int bnxt_vf_rep_setup_tc_block(struct net_device *dev,
-				      struct tc_block_offload *f)
-{
-	struct bnxt_vf_rep *vf_rep = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block,
-					     bnxt_vf_rep_setup_tc_block_cb,
-					     vf_rep, vf_rep, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block,
-					bnxt_vf_rep_setup_tc_block_cb, vf_rep);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int bnxt_vf_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
 				void *type_data)
 {
+	struct bnxt_vf_rep *vf_rep = netdev_priv(dev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return bnxt_vf_rep_setup_tc_block(dev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       bnxt_vf_rep_setup_tc_block_cb,
+					       vf_rep, true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 3339f1f4bcdd..293c4cbdfc0a 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -3098,32 +3098,16 @@ static int cxgb_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int cxgb_setup_tc_block(struct net_device *dev,
-			       struct tc_block_offload *f)
-{
-	struct port_info *pi = netdev2pinfo(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, cxgb_setup_tc_block_cb,
-					     pi, dev, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, cxgb_setup_tc_block_cb, pi);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int cxgb_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			 void *type_data)
 {
+	struct port_info *pi = netdev2pinfo(dev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return cxgb_setup_tc_block(dev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       cxgb_setup_tc_block_cb, pi,
+					       true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index b1c265012c8a..4449df0b883b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -7607,34 +7607,18 @@ static int i40e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int i40e_setup_tc_block(struct net_device *dev,
-			       struct tc_block_offload *f)
-{
-	struct i40e_netdev_priv *np = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, i40e_setup_tc_block_cb,
-					     np, np, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, i40e_setup_tc_block_cb, np);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
 			   void *type_data)
 {
+	struct i40e_netdev_priv *np = netdev_priv(netdev);
+
 	switch (type) {
 	case TC_SETUP_QDISC_MQPRIO:
 		return i40e_setup_tc(netdev, type_data);
 	case TC_SETUP_BLOCK:
-		return i40e_setup_tc_block(netdev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       i40e_setup_tc_block_cb, np,
+					       true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 4569d69a2b55..872f3470f54c 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -2847,35 +2847,6 @@ static int iavf_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 }
 
 /**
- * iavf_setup_tc_block - register callbacks for tc
- * @netdev: network interface device structure
- * @f: tc offload data
- *
- * This function registers block callbacks for tc
- * offloads
- **/
-static int iavf_setup_tc_block(struct net_device *dev,
-			       struct tc_block_offload *f)
-{
-	struct iavf_adapter *adapter = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, iavf_setup_tc_block_cb,
-					     adapter, adapter, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, iavf_setup_tc_block_cb,
-					adapter);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
-/**
  * iavf_setup_tc - configure multiple traffic classes
  * @netdev: network interface device structure
  * @type: type of offload
@@ -2889,11 +2860,15 @@ static int iavf_setup_tc_block(struct net_device *dev,
 static int iavf_setup_tc(struct net_device *netdev, enum tc_setup_type type,
 			 void *type_data)
 {
+	struct iavf_adapter *adapter = netdev_priv(netdev);
+
 	switch (type) {
 	case TC_SETUP_QDISC_MQPRIO:
 		return __iavf_setup_tc(netdev, type_data);
 	case TC_SETUP_BLOCK:
-		return iavf_setup_tc_block(netdev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       iavf_setup_tc_block_cb, adapter,
+					       true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index acbb5b4f333d..1d63287bbc64 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2782,25 +2782,6 @@ static int igb_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int igb_setup_tc_block(struct igb_adapter *adapter,
-			      struct tc_block_offload *f)
-{
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, igb_setup_tc_block_cb,
-					     adapter, adapter, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, igb_setup_tc_block_cb,
-					adapter);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int igb_offload_txtime(struct igb_adapter *adapter,
 			      struct tc_etf_qopt_offload *qopt)
 {
@@ -2833,7 +2814,9 @@ static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	case TC_SETUP_QDISC_CBS:
 		return igb_offload_cbs(adapter, type_data);
 	case TC_SETUP_BLOCK:
-		return igb_setup_tc_block(adapter, type_data);
+		return tcf_setup_block_offload(type_data, igb_setup_tc_block_cb,
+					       adapter, true);
+
 	case TC_SETUP_QDISC_ETF:
 		return igb_offload_txtime(adapter, type_data);
 
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 60cec3540dd7..125139c3796f 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -9608,27 +9608,6 @@ static int ixgbe_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int ixgbe_setup_tc_block(struct net_device *dev,
-				struct tc_block_offload *f)
-{
-	struct ixgbe_adapter *adapter = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, ixgbe_setup_tc_block_cb,
-					     adapter, adapter, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, ixgbe_setup_tc_block_cb,
-					adapter);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int ixgbe_setup_tc_mqprio(struct net_device *dev,
 				 struct tc_mqprio_qopt *mqprio)
 {
@@ -9639,9 +9618,13 @@ static int ixgbe_setup_tc_mqprio(struct net_device *dev,
 static int __ixgbe_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			    void *type_data)
 {
+	struct ixgbe_adapter *adapter = netdev_priv(dev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return ixgbe_setup_tc_block(dev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       ixgbe_setup_tc_block_cb,
+					       adapter, true);
 	case TC_SETUP_QDISC_MQPRIO:
 		return ixgbe_setup_tc_mqprio(dev, type_data);
 	default:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index ba705392b46b..3bcfe895e613 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3416,36 +3416,19 @@ static int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 		return -EOPNOTSUPP;
 	}
 }
-
-static int mlx5e_setup_tc_block(struct net_device *dev,
-				struct tc_block_offload *f)
-{
-	struct mlx5e_priv *priv = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, mlx5e_setup_tc_block_cb,
-					     priv, priv, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, mlx5e_setup_tc_block_cb,
-					priv);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
 #endif
 
 static int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			  void *type_data)
 {
+	struct mlx5e_priv *priv = netdev_priv(dev);
+
 	switch (type) {
 #ifdef CONFIG_MLX5_ESWITCH
 	case TC_SETUP_BLOCK:
-		return mlx5e_setup_tc_block(dev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       mlx5e_setup_tc_block_cb, priv,
+					       true);
 #endif
 	case TC_SETUP_QDISC_MQPRIO:
 		return mlx5e_setup_tc_mqprio(dev, type_data);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 6bfdefa8b9f4..ee63a902c261 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -1181,32 +1181,16 @@ static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int mlx5e_rep_setup_tc_block(struct net_device *dev,
-				    struct tc_block_offload *f)
-{
-	struct mlx5e_priv *priv = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, mlx5e_rep_setup_tc_cb,
-					     priv, priv, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, mlx5e_rep_setup_tc_cb, priv);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			      void *type_data)
 {
+	struct mlx5e_priv *priv = netdev_priv(dev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return mlx5e_rep_setup_tc_block(dev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       mlx5e_rep_setup_tc_cb,
+					       priv, true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/netronome/nfp/abm/cls.c b/drivers/net/ethernet/netronome/nfp/abm/cls.c
index 9852080cf454..e177cc64ef29 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/cls.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/cls.c
@@ -265,19 +265,6 @@ static int nfp_abm_setup_tc_block_cb(enum tc_setup_type type,
 int nfp_abm_setup_cls_block(struct net_device *netdev, struct nfp_repr *repr,
 			    struct tc_block_offload *f)
 {
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block,
-					     nfp_abm_setup_tc_block_cb,
-					     repr, repr, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, nfp_abm_setup_tc_block_cb,
-					repr);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
+	return tcf_setup_block_offload(f, nfp_abm_setup_tc_block_cb, repr,
+				       true);
 }
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
index 9c136da25221..5a3b9fde8e01 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
@@ -160,35 +160,16 @@ static int nfp_bpf_setup_tc_block_cb(enum tc_setup_type type,
 	return 0;
 }
 
-static int nfp_bpf_setup_tc_block(struct net_device *netdev,
-				  struct tc_block_offload *f)
-{
-	struct nfp_net *nn = netdev_priv(netdev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block,
-					     nfp_bpf_setup_tc_block_cb,
-					     nn, nn, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block,
-					nfp_bpf_setup_tc_block_cb,
-					nn);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int nfp_bpf_setup_tc(struct nfp_app *app, struct net_device *netdev,
 			    enum tc_setup_type type, void *type_data)
 {
+	struct nfp_net *nn = netdev_priv(netdev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return nfp_bpf_setup_tc_block(netdev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       nfp_bpf_setup_tc_block_cb,
+					       nn, true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index aefe211da82c..ba41252b1c14 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1190,35 +1190,16 @@ static int nfp_flower_setup_tc_block_cb(enum tc_setup_type type,
 	}
 }
 
-static int nfp_flower_setup_tc_block(struct net_device *netdev,
-				     struct tc_block_offload *f)
-{
-	struct nfp_repr *repr = netdev_priv(netdev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block,
-					     nfp_flower_setup_tc_block_cb,
-					     repr, repr, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block,
-					nfp_flower_setup_tc_block_cb,
-					repr);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 int nfp_flower_setup_tc(struct nfp_app *app, struct net_device *netdev,
 			enum tc_setup_type type, void *type_data)
 {
+	struct nfp_repr *repr = netdev_priv(netdev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return nfp_flower_setup_tc_block(netdev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       nfp_flower_setup_tc_block_cb,
+					       repr, true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
index 02a97c659e29..39efe7309f0a 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
@@ -578,25 +578,6 @@ static int qede_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	}
 }
 
-static int qede_setup_tc_block(struct qede_dev *edev,
-			       struct tc_block_offload *f)
-{
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block,
-					     qede_setup_tc_block_cb,
-					     edev, edev, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, qede_setup_tc_block_cb, edev);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int
 qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type,
 		      void *type_data)
@@ -606,7 +587,9 @@ qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type,
 
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return qede_setup_tc_block(edev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       qede_setup_tc_block_cb, edev,
+					       true);
 	case TC_SETUP_QDISC_MQPRIO:
 		mqprio = type_data;
 
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index a26e36dbb5df..fd7f37942331 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3844,24 +3844,6 @@ static int stmmac_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 	return ret;
 }
 
-static int stmmac_setup_tc_block(struct stmmac_priv *priv,
-				 struct tc_block_offload *f)
-{
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, stmmac_setup_tc_block_cb,
-				priv, priv, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, stmmac_setup_tc_block_cb, priv);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
 			   void *type_data)
 {
@@ -3869,7 +3851,9 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
 
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return stmmac_setup_tc_block(priv, type_data);
+		return tcf_setup_block_offload(type_data,
+					       stmmac_setup_tc_block_cb,
+					       priv, true);
 	case TC_SETUP_QDISC_CBS:
 		return stmmac_tc_setup_cbs(priv, priv, type_data);
 	default:
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index 7805fa840383..a3c45433b886 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -272,26 +272,6 @@ nsim_setup_tc_block_cb(enum tc_setup_type type, void *type_data, void *cb_priv)
 	return nsim_bpf_setup_tc_block_cb(type, type_data, cb_priv);
 }
 
-static int
-nsim_setup_tc_block(struct net_device *dev, struct tc_block_offload *f)
-{
-	struct netdevsim *ns = netdev_priv(dev);
-
-	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
-		return -EOPNOTSUPP;
-
-	switch (f->command) {
-	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, nsim_setup_tc_block_cb,
-					     ns, ns, f->extack);
-	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, nsim_setup_tc_block_cb, ns);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
 static int nsim_set_vf_mac(struct net_device *dev, int vf, u8 *mac)
 {
 	struct netdevsim *ns = netdev_priv(dev);
@@ -412,9 +392,13 @@ static int nsim_set_vf_link_state(struct net_device *dev, int vf, int state)
 static int
 nsim_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data)
 {
+	struct netdevsim *ns = netdev_priv(dev);
+
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return nsim_setup_tc_block(dev, type_data);
+		return tcf_setup_block_offload(type_data,
+					       nsim_setup_tc_block_cb, ns,
+					       true);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 52a6484428ce..7a50f7bb6880 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -652,6 +652,9 @@ struct tc_block_offload {
 	struct netlink_ext_ack *extack;
 };
 
+int tcf_setup_block_offload(struct tc_block_offload *f, tc_setup_cb_t *cb,
+			    void *cb_priv, bool ingress_only);
+
 struct tc_cls_common_offload {
 	u32 chain_index;
 	__be16 protocol;
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 963511ec758d..a00463c8cfa9 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -926,6 +926,26 @@ static int tcf_block_setup(struct tcf_block *block, struct tc_block_offload *bo)
 	return err;
 }
 
+int tcf_setup_block_offload(struct tc_block_offload *f, tc_setup_cb_t *cb,
+			    void *cb_priv, bool ingress_only)
+{
+	if (ingress_only &&
+	    f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block, cb, cb_priv, cb_priv,
+					     f->extack);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, cb, cb_priv);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+EXPORT_SYMBOL(tcf_setup_block_offload);
+
 static struct rhashtable indr_setup_block_ht;
 
 struct tc_indr_block_dev {
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next,RFC 7/9] net: use tcf_block_setup() infrastructure
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (5 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 6/9] net: sched: add tcf_setup_block_offload() Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26 14:27   ` Jiri Pirko
  2019-04-26  0:33 ` [PATCH net-next,RFC 8/9] net: cls_api: do not expose tcf_block to drivers Pablo Neira Ayuso
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

This allows us to register / unregister tcf_block_cb objects from the
core. The idea is to allocate the tcf_block_cb object from the driver,
attach it to the tc_block_offload->cb_list, and then the core registers
it.
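
As an illustration, the driver-side pattern this introduces looks
roughly like this (a sketch only: the foo_* names are hypothetical,
and the binder_type / error checks are trimmed; see the real
conversions in the diff below):

	static int foo_setup_tc_block(struct foo_priv *priv,
				      struct tc_block_offload *f)
	{
		struct tcf_block_cb *block_cb;

		switch (f->command) {
		case TC_BLOCK_BIND:
			/* the driver allocates the per-block callback ... */
			block_cb = tcf_block_cb_alloc(f->block->index,
						      foo_setup_tc_block_cb,
						      priv, priv, NULL);
			if (!block_cb)
				return -ENOMEM;
			/* ... and hands it back to the core via f->cb_list */
			tcf_block_cb_list_add(block_cb, &f->cb_list);
			return 0;
		case TC_BLOCK_UNBIND:
			block_cb = tcf_block_cb_lookup(f->block->index,
						       foo_setup_tc_block_cb,
						       priv);
			if (!block_cb)
				return -ENOENT;
			/* the core unregisters and releases it */
			tcf_block_cb_list_move(block_cb, &f->cb_list);
			return 0;
		default:
			return -EOPNOTSUPP;
		}
	}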

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   | 33 +++++++----
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c     | 64 +++++++++++++---------
 .../net/ethernet/netronome/nfp/flower/offload.c    | 33 +++++++----
 include/net/pkt_cls.h                              |  6 +-
 net/dsa/slave.c                                    | 15 ++++-
 net/sched/cls_api.c                                | 35 ++++++++----
 6 files changed, 124 insertions(+), 62 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index ee63a902c261..f09135b7a605 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -697,13 +697,21 @@ static int mlx5e_rep_indr_setup_block_cb(enum tc_setup_type type,
 	}
 }
 
+static void mlx5e_rep_indr_tc_block_unbind(void *cb_priv)
+{
+	struct mlx5e_rep_indr_block_priv *indr_priv = cb_priv;
+
+	list_del(&indr_priv->list);
+	kfree(indr_priv);
+}
+
 static int
 mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
 			      struct mlx5e_rep_priv *rpriv,
 			      struct tc_block_offload *f)
 {
 	struct mlx5e_rep_indr_block_priv *indr_priv;
-	int err = 0;
+	struct tcf_block_cb *block_cb;
 
 	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
 		return -EOPNOTSUPP;
@@ -723,26 +731,29 @@ mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
 		list_add(&indr_priv->list,
 			 &rpriv->uplink_priv.tc_indr_block_priv_list);
 
-		err = tcf_block_cb_register(f->block,
-					    mlx5e_rep_indr_setup_block_cb,
-					    indr_priv, indr_priv, f->extack);
-		if (err) {
+		block_cb = tcf_block_cb_alloc(f->block->index,
+					      mlx5e_rep_indr_setup_block_cb,
+					      indr_priv, indr_priv,
+					      mlx5e_rep_indr_tc_block_unbind);
+		if (!block_cb) {
 			list_del(&indr_priv->list);
 			kfree(indr_priv);
 		}
+		tcf_block_cb_list_add(block_cb, &f->cb_list);
 
-		return err;
+		return 0;
 	case TC_BLOCK_UNBIND:
 		indr_priv = mlx5e_rep_indr_block_priv_lookup(rpriv, netdev);
 		if (!indr_priv)
 			return -ENOENT;
 
-		tcf_block_cb_unregister(f->block,
-					mlx5e_rep_indr_setup_block_cb,
-					indr_priv);
-		list_del(&indr_priv->list);
-		kfree(indr_priv);
+		block_cb = tcf_block_cb_lookup(f->block->index,
+					       mlx5e_rep_indr_setup_block_cb,
+					       indr_priv);
+		if (!block_cb)
+			return -ENOENT;
 
+		tcf_block_cb_list_move(block_cb, &f->cb_list);
 		return 0;
 	default:
 		return -EOPNOTSUPP;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index fc325f1213fb..c07fa0487b39 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1514,25 +1514,34 @@ static int mlxsw_sp_setup_tc_block_cb_flower(enum tc_setup_type type,
 	}
 }
 
+static void mlxsw_sp_tc_block_flower_release(void *cb_priv)
+{
+	struct mlxsw_sp_acl_block *acl_block = cb_priv;
+
+	mlxsw_sp_acl_block_destroy(acl_block);
+}
+
 static int
 mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port,
-				    struct tcf_block *block, bool ingress,
-				    struct netlink_ext_ack *extack)
+			            struct tc_block_offload *f, bool ingress)
 {
 	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	struct net *net = dev_net(mlxsw_sp_port->dev);
 	struct mlxsw_sp_acl_block *acl_block;
 	struct tcf_block_cb *block_cb;
 	int err;
 
-	block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower,
+	block_cb = tcf_block_cb_lookup(f->block->index,
+				       mlxsw_sp_setup_tc_block_cb_flower,
 				       mlxsw_sp);
 	if (!block_cb) {
-		acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, block->net);
+		acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, net);
 		if (!acl_block)
 			return -ENOMEM;
-		block_cb = __tcf_block_cb_register(block,
-						   mlxsw_sp_setup_tc_block_cb_flower,
-						   mlxsw_sp, acl_block, extack);
+		block_cb = tcf_block_cb_alloc(f->block->index,
+					      mlxsw_sp_setup_tc_block_cb_flower,
+					      mlxsw_sp, acl_block,
+					      mlxsw_sp_tc_block_flower_release);
 		if (IS_ERR(block_cb)) {
 			err = PTR_ERR(block_cb);
 			goto err_cb_register;
@@ -1551,27 +1560,28 @@ mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port,
 	else
 		mlxsw_sp_port->eg_acl_block = acl_block;
 
+	tcf_block_cb_list_add(block_cb, &f->cb_list);
 	return 0;
 
 err_block_bind:
 	if (!tcf_block_cb_decref(block_cb)) {
-		__tcf_block_cb_unregister(block, block_cb);
 err_cb_register:
-		mlxsw_sp_acl_block_destroy(acl_block);
+		tcf_block_cb_free(block_cb);
 	}
 	return err;
 }
 
 static void
 mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
-				      struct tcf_block *block, bool ingress)
+				      struct tc_block_offload *f, bool ingress)
 {
 	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
 	struct mlxsw_sp_acl_block *acl_block;
 	struct tcf_block_cb *block_cb;
 	int err;
 
-	block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower,
+	block_cb = tcf_block_cb_lookup(f->block->index,
+				       mlxsw_sp_setup_tc_block_cb_flower,
 				       mlxsw_sp);
 	if (!block_cb)
 		return;
@@ -1584,15 +1594,14 @@ mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
 	acl_block = tcf_block_cb_priv(block_cb);
 	err = mlxsw_sp_acl_block_unbind(mlxsw_sp, acl_block,
 					mlxsw_sp_port, ingress);
-	if (!err && !tcf_block_cb_decref(block_cb)) {
-		__tcf_block_cb_unregister(block, block_cb);
-		mlxsw_sp_acl_block_destroy(acl_block);
-	}
+	if (!err && !tcf_block_cb_decref(block_cb))
+		tcf_block_cb_list_move(block_cb, &f->cb_list);
 }
 
 static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
 				   struct tc_block_offload *f)
 {
+	struct tcf_block_cb *block_cb;
 	tc_setup_cb_t *cb;
 	bool ingress;
 	int err;
@@ -1609,22 +1618,27 @@ static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		err = tcf_block_cb_register(f->block, cb, mlxsw_sp_port,
-					    mlxsw_sp_port, f->extack);
-		if (err)
-			return err;
-		err = mlxsw_sp_setup_tc_block_flower_bind(mlxsw_sp_port,
-							  f->block, ingress,
-							  f->extack);
+		block_cb = tcf_block_cb_alloc(f->block->index, cb, mlxsw_sp_port,
+					 mlxsw_sp_port, NULL);
+		if (!block_cb)
+			return -ENOMEM;
+		err = mlxsw_sp_setup_tc_block_flower_bind(mlxsw_sp_port, f,
+							  ingress);
 		if (err) {
-			tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port);
+			tcf_block_cb_free(block_cb);
 			return err;
 		}
+		tcf_block_cb_list_add(block_cb, &f->cb_list);
 		return 0;
 	case TC_BLOCK_UNBIND:
 		mlxsw_sp_setup_tc_block_flower_unbind(mlxsw_sp_port,
-						      f->block, ingress);
-		tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port);
+						      f, ingress);
+		block_cb = tcf_block_cb_lookup(f->block->index, cb,
+					       mlxsw_sp_port);
+		if (!block_cb)
+			return -ENOENT;
+
+		tcf_block_cb_list_move(block_cb, &f->cb_list);
 		return 0;
 	default:
 		return -EOPNOTSUPP;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index ba41252b1c14..d2bc36859952 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1246,13 +1246,21 @@ static int nfp_flower_setup_indr_block_cb(enum tc_setup_type type,
 	}
 }
 
+static void nfp_flower_setup_indr_tc_release(void *cb_priv)
+{
+	struct nfp_flower_indr_block_cb_priv *priv = cb_priv;
+
+	list_del(&priv->list);
+	kfree(priv);
+}
+
 static int
 nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 			       struct tc_block_offload *f)
 {
 	struct nfp_flower_indr_block_cb_priv *cb_priv;
 	struct nfp_flower_priv *priv = app->priv;
-	int err;
+	struct tcf_block_cb *block_cb;
 
 	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS &&
 	    !(f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS &&
@@ -1269,26 +1277,29 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 		cb_priv->app = app;
 		list_add(&cb_priv->list, &priv->indr_block_cb_priv);
 
-		err = tcf_block_cb_register(f->block,
-					    nfp_flower_setup_indr_block_cb,
-					    cb_priv, cb_priv, f->extack);
-		if (err) {
+		block_cb = tcf_block_cb_alloc(f->block->index,
+					      nfp_flower_setup_indr_block_cb,
+					      cb_priv, cb_priv,
+					      nfp_flower_setup_indr_tc_release);
+		if (!block_cb) {
 			list_del(&cb_priv->list);
 			kfree(cb_priv);
 		}
 
-		return err;
+		tcf_block_cb_list_add(block_cb, &f->cb_list);
+		return 0;
 	case TC_BLOCK_UNBIND:
 		cb_priv = nfp_flower_indr_block_cb_priv_lookup(app, netdev);
 		if (!cb_priv)
 			return -ENOENT;
 
-		tcf_block_cb_unregister(f->block,
-					nfp_flower_setup_indr_block_cb,
-					cb_priv);
-		list_del(&cb_priv->list);
-		kfree(cb_priv);
+		block_cb = tcf_block_cb_lookup(f->block->index,
+					       nfp_flower_setup_indr_block_cb,
+					       cb_priv);
+		if (!block_cb)
+			return -ENOENT;
 
+		tcf_block_cb_list_move(block_cb, &f->cb_list);
 		return 0;
 	default:
 		return -EOPNOTSUPP;
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 7a50f7bb6880..8f0486a00d70 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -71,7 +71,7 @@ static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 	return block->q;
 }
 
-struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
+struct tcf_block_cb *tcf_block_cb_alloc(u32 block_index, tc_setup_cb_t *cb,
 					void *cb_ident, void *cb_priv,
 					void (*release)(void *cb_priv));
 void tcf_block_cb_free(struct tcf_block_cb *block_cb);
@@ -79,8 +79,8 @@ void tcf_block_cb_list_add(struct tcf_block_cb *block_cb, struct list_head *cb_l
 void tcf_block_cb_list_move(struct tcf_block_cb *block_cb, struct list_head *cb_list);
 
 void *tcf_block_cb_priv(struct tcf_block_cb *block_cb);
-struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
-					 tc_setup_cb_t *cb, void *cb_ident);
+struct tcf_block_cb *tcf_block_cb_lookup(u32 block_index, tc_setup_cb_t *cb,
+					 void *cb_ident);
 void tcf_block_cb_incref(struct tcf_block_cb *block_cb);
 unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb);
 struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index ce26dddc8270..7ef34cc2f574 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -902,6 +902,7 @@ static int dsa_slave_setup_tc_block_cb_eg(enum tc_setup_type type,
 static int dsa_slave_setup_tc_block(struct net_device *dev,
 				    struct tc_block_offload *f)
 {
+	struct tcf_block_cb *block_cb;
 	tc_setup_cb_t *cb;
 
 	if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
@@ -913,9 +914,19 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, cb, dev, dev, f->extack);
+		block_cb = tcf_block_cb_alloc(f->block->index, cb, dev, dev,
+					      NULL);
+		if (!block_cb)
+			return -ENOMEM;
+
+		tcf_block_cb_list_add(block_cb, &f->cb_list);
+		return 0;
 	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, cb, dev);
+		block_cb = tcf_block_cb_lookup(f->block->index, cb, dev);
+		if (!block_cb)
+			return -ENOENT;
+
+		tcf_block_cb_list_move(block_cb, &f->cb_list);
 		return 0;
 	default:
 		return -EOPNOTSUPP;
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index a00463c8cfa9..f7f6f42d58d1 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -714,6 +714,7 @@ struct tcf_block_cb {
 	void (*release)(void *cb_priv);
 	void *cb_ident;
 	void *cb_priv;
+	u32 block_index;
 	unsigned int refcnt;
 };
 
@@ -730,12 +731,14 @@ EXPORT_SYMBOL(tcf_block_cb_priv);
 
 static LIST_HEAD(tcf_block_cb_list);
 
-struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
-					 tc_setup_cb_t *cb, void *cb_ident)
+struct tcf_block_cb *tcf_block_cb_lookup(u32 block_index, tc_setup_cb_t *cb,
+					 void *cb_ident)
 {	struct tcf_block_cb *block_cb;
 
-	list_for_each_entry(block_cb, &block->cb_list, list)
-		if (block_cb->cb == cb && block_cb->cb_ident == cb_ident)
+	list_for_each_entry(block_cb, &tcf_block_cb_list, list)
+		if (block_cb->block_index == block_index &&
+		    block_cb->cb == cb &&
+		    block_cb->cb_ident == cb_ident)
 			return block_cb;
 	return NULL;
 }
@@ -753,7 +756,7 @@ unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb)
 }
 EXPORT_SYMBOL(tcf_block_cb_decref);
 
-struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
+struct tcf_block_cb *tcf_block_cb_alloc(u32 block_index, tc_setup_cb_t *cb,
 					void *cb_ident, void *cb_priv,
 					void (*release)(void *cb_priv))
 {
@@ -767,6 +770,7 @@ struct tcf_block_cb *tcf_block_cb_alloc(tc_setup_cb_t *cb,
 	block_cb->cb_ident = cb_ident;
 	block_cb->release = release;
 	block_cb->cb_priv = cb_priv;
+	block_cb->block_index = block_index;
 
 	return block_cb;
 }
@@ -810,7 +814,7 @@ struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
 	if (err)
 		return ERR_PTR(err);
 
-	block_cb = tcf_block_cb_alloc(cb, cb_ident, cb_priv, NULL);
+	block_cb = tcf_block_cb_alloc(block->index, cb, cb_ident, cb_priv, NULL);
 	if (!block_cb)
 		return ERR_PTR(-ENOMEM);
 
@@ -847,7 +851,7 @@ void tcf_block_cb_unregister(struct tcf_block *block,
 {
 	struct tcf_block_cb *block_cb;
 
-	block_cb = tcf_block_cb_lookup(block, cb, cb_ident);
+	block_cb = tcf_block_cb_lookup(block->index, cb, cb_ident);
 	if (!block_cb)
 		return;
 	__tcf_block_cb_unregister(block, block_cb);
@@ -929,16 +933,27 @@ static int tcf_block_setup(struct tcf_block *block, struct tc_block_offload *bo)
 int tcf_setup_block_offload(struct tc_block_offload *f, tc_setup_cb_t *cb,
 			    void *cb_priv, bool ingress_only)
 {
+	struct tcf_block_cb *block_cb;
+
 	if (ingress_only &&
 	    f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
 		return -EOPNOTSUPP;
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		return tcf_block_cb_register(f->block, cb, cb_priv, cb_priv,
-					     f->extack);
+		block_cb = tcf_block_cb_alloc(f->block->index, cb, cb_priv,
+					      cb_priv, NULL);
+		if (!block_cb)
+			return -ENOMEM;
+
+		tcf_block_cb_list_add(block_cb, &f->cb_list);
+		return 0;
 	case TC_BLOCK_UNBIND:
-		tcf_block_cb_unregister(f->block, cb, cb_priv);
+		block_cb = tcf_block_cb_lookup(f->block->index, cb, cb_priv);
+		if (!block_cb)
+			return -ENOENT;
+
+		tcf_block_cb_list_move(block_cb, &f->cb_list);
 		return 0;
 	default:
 		return -EOPNOTSUPP;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next,RFC 8/9] net: cls_api: do not expose tcf_block to drivers
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (6 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 7/9] net: use tcf_block_setup() infrastructure Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 8/9] net: sched: remove tcf_block_cb_{register,unregister}() Pablo Neira Ayuso
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

Instead, expose the block index, which is sufficient to look up the
tcf_block_cb object.
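
For a driver call site, the only difference is which identifier is
passed around, e.g. (sketch; foo_block_cb and priv are placeholders):

	/* before: the driver dereferences the tcf_block */
	block_cb = tcf_block_cb_lookup(f->block->index, foo_block_cb, priv);

	/* after: the u32 block index is enough */
	block_cb = tcf_block_cb_lookup(f->block_index, foo_block_cb, priv);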

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c    |  4 ++--
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c      | 10 +++++-----
 drivers/net/ethernet/netronome/nfp/flower/offload.c |  4 ++--
 include/net/pkt_cls.h                               |  2 +-
 net/dsa/slave.c                                     |  4 ++--
 net/sched/cls_api.c                                 | 10 +++++-----
 6 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index f09135b7a605..f85f725fc0dc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -731,7 +731,7 @@ mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
 		list_add(&indr_priv->list,
 			 &rpriv->uplink_priv.tc_indr_block_priv_list);
 
-		block_cb = tcf_block_cb_alloc(f->block->index,
+		block_cb = tcf_block_cb_alloc(f->block_index,
 					      mlx5e_rep_indr_setup_block_cb,
 					      indr_priv, indr_priv,
 					      mlx5e_rep_indr_tc_block_unbind);
@@ -747,7 +747,7 @@ mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
 		if (!indr_priv)
 			return -ENOENT;
 
-		block_cb = tcf_block_cb_lookup(f->block->index,
+		block_cb = tcf_block_cb_lookup(f->block_index,
 					       mlx5e_rep_indr_setup_block_cb,
 					       indr_priv);
 		if (!block_cb)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index c07fa0487b39..060a0de69638 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1531,14 +1531,14 @@ mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port,
 	struct tcf_block_cb *block_cb;
 	int err;
 
-	block_cb = tcf_block_cb_lookup(f->block->index,
+	block_cb = tcf_block_cb_lookup(f->block_index,
 				       mlxsw_sp_setup_tc_block_cb_flower,
 				       mlxsw_sp);
 	if (!block_cb) {
 		acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, net);
 		if (!acl_block)
 			return -ENOMEM;
-		block_cb = tcf_block_cb_alloc(f->block->index,
+		block_cb = tcf_block_cb_alloc(f->block_index,
 					      mlxsw_sp_setup_tc_block_cb_flower,
 					      mlxsw_sp, acl_block,
 					      mlxsw_sp_tc_block_flower_release);
@@ -1580,7 +1580,7 @@ mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
 	struct tcf_block_cb *block_cb;
 	int err;
 
-	block_cb = tcf_block_cb_lookup(f->block->index,
+	block_cb = tcf_block_cb_lookup(f->block_index,
 				       mlxsw_sp_setup_tc_block_cb_flower,
 				       mlxsw_sp);
 	if (!block_cb)
@@ -1618,7 +1618,7 @@ static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		block_cb = tcf_block_cb_alloc(f->block->index, cb, mlxsw_sp_port,
+		block_cb = tcf_block_cb_alloc(f->block_index, cb, mlxsw_sp_port,
 					 mlxsw_sp_port, NULL);
 		if (!block_cb)
 			return -ENOMEM;
@@ -1633,7 +1633,7 @@ static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
 	case TC_BLOCK_UNBIND:
 		mlxsw_sp_setup_tc_block_flower_unbind(mlxsw_sp_port,
 						      f, ingress);
-		block_cb = tcf_block_cb_lookup(f->block->index, cb,
+		block_cb = tcf_block_cb_lookup(f->block_index, cb,
 					       mlxsw_sp_port);
 		if (!block_cb)
 			return -ENOENT;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index d2bc36859952..1ad46cf2413f 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1277,7 +1277,7 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 		cb_priv->app = app;
 		list_add(&cb_priv->list, &priv->indr_block_cb_priv);
 
-		block_cb = tcf_block_cb_alloc(f->block->index,
+		block_cb = tcf_block_cb_alloc(f->block_index,
 					      nfp_flower_setup_indr_block_cb,
 					      cb_priv, cb_priv,
 					      nfp_flower_setup_indr_tc_release);
@@ -1293,7 +1293,7 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 		if (!cb_priv)
 			return -ENOENT;
 
-		block_cb = tcf_block_cb_lookup(f->block->index,
+		block_cb = tcf_block_cb_lookup(f->block_index,
 					       nfp_flower_setup_indr_block_cb,
 					       cb_priv);
 		if (!block_cb)
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 8f0486a00d70..a3a3cfbf0ba0 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -648,7 +648,7 @@ struct tc_block_offload {
 	enum tc_block_command command;
 	enum tcf_block_binder_type binder_type;
 	struct list_head cb_list;
-	struct tcf_block *block;
+	u32 block_index;
 	struct netlink_ext_ack *extack;
 };
 
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 7ef34cc2f574..4fe5386f2613 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -914,7 +914,7 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		block_cb = tcf_block_cb_alloc(f->block->index, cb, dev, dev,
+		block_cb = tcf_block_cb_alloc(f->block_index, cb, dev, dev,
 					      NULL);
 		if (!block_cb)
 			return -ENOMEM;
@@ -922,7 +922,7 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
 		tcf_block_cb_list_add(block_cb, &f->cb_list);
 		return 0;
 	case TC_BLOCK_UNBIND:
-		block_cb = tcf_block_cb_lookup(f->block->index, cb, dev);
+		block_cb = tcf_block_cb_lookup(f->block_index, cb, dev);
 		if (!block_cb)
 			return -ENOENT;
 
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index f7f6f42d58d1..d6fb80f68eec 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -941,7 +941,7 @@ int tcf_setup_block_offload(struct tc_block_offload *f, tc_setup_cb_t *cb,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		block_cb = tcf_block_cb_alloc(f->block->index, cb, cb_priv,
+		block_cb = tcf_block_cb_alloc(f->block_index, cb, cb_priv,
 					      cb_priv, NULL);
 		if (!block_cb)
 			return -ENOMEM;
@@ -949,7 +949,7 @@ int tcf_setup_block_offload(struct tc_block_offload *f, tc_setup_cb_t *cb,
 		tcf_block_cb_list_add(block_cb, &f->cb_list);
 		return 0;
 	case TC_BLOCK_UNBIND:
-		block_cb = tcf_block_cb_lookup(f->block->index, cb, cb_priv);
+		block_cb = tcf_block_cb_lookup(f->block_index, cb, cb_priv);
 		if (!block_cb)
 			return -ENOENT;
 
@@ -1075,7 +1075,7 @@ static void tc_indr_block_ing_cmd(struct tc_indr_block_dev *indr_dev,
 	struct tc_block_offload bo = {
 		.command	= command,
 		.binder_type	= TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS,
-		.block		= indr_dev->block,
+		.block_index	= indr_dev->block->index,
 	};
 	INIT_LIST_HEAD(&bo.cb_list);
 
@@ -1165,7 +1165,7 @@ static void tc_indr_block_call(struct tcf_block *block, struct net_device *dev,
 	struct tc_block_offload bo = {
 		.command	= command,
 		.binder_type	= ei->binder_type,
-		.block		= block,
+		.block_index	= block->index,
 		.extack		= extack,
 	};
 	INIT_LIST_HEAD(&bo.cb_list);
@@ -1194,7 +1194,7 @@ static int tcf_block_offload_cmd(struct tcf_block *block,
 
 	bo.command = command;
 	bo.binder_type = ei->binder_type;
-	bo.block = block;
+	bo.block_index = block->index;
 	bo.extack = extack;
 	INIT_LIST_HEAD(&bo.cb_list);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next,RFC 8/9] net: sched: remove tcf_block_cb_{register,unregister}()
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (7 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 8/9] net: cls_api: do not expose tcf_block to drivers Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26  0:33 ` [PATCH net-next,RFC 9/9] net: cls_api: do not expose tcf_block to drivers Pablo Neira Ayuso
  2019-04-26 14:32 ` [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Jiri Pirko
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

These helpers are now replaced by the tcf_block_setup() core
registration, so remove them.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 include/net/pkt_cls.h | 12 +----------
 net/sched/cls_api.c   | 59 ---------------------------------------------------
 2 files changed, 1 insertion(+), 70 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 8f0486a00d70..b61d0c89ba5b 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -83,17 +83,7 @@ struct tcf_block_cb *tcf_block_cb_lookup(u32 block_index, tc_setup_cb_t *cb,
 					 void *cb_ident);
 void tcf_block_cb_incref(struct tcf_block_cb *block_cb);
 unsigned int tcf_block_cb_decref(struct tcf_block_cb *block_cb);
-struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
-					     tc_setup_cb_t *cb, void *cb_ident,
-					     void *cb_priv,
-					     struct netlink_ext_ack *extack);
-int tcf_block_cb_register(struct tcf_block *block,
-			  tc_setup_cb_t *cb, void *cb_ident,
-			  void *cb_priv, struct netlink_ext_ack *extack);
-void __tcf_block_cb_unregister(struct tcf_block *block,
-			       struct tcf_block_cb *block_cb);
-void tcf_block_cb_unregister(struct tcf_block *block,
-			     tc_setup_cb_t *cb, void *cb_ident);
+
 int __tc_indr_block_cb_register(struct net_device *dev, void *cb_priv,
 				tc_indr_block_bind_cb_t *cb, void *cb_ident);
 int tc_indr_block_cb_register(struct net_device *dev, void *cb_priv,
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index f7f6f42d58d1..bd8b08f158c0 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -799,65 +799,6 @@ void tcf_block_cb_list_move(struct tcf_block_cb *block_cb,
 }
 EXPORT_SYMBOL(tcf_block_cb_list_move);
 
-struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block,
-					     tc_setup_cb_t *cb, void *cb_ident,
-					     void *cb_priv,
-					     struct netlink_ext_ack *extack)
-{
-	struct tcf_block_cb *block_cb;
-	int err;
-
-	/* Replay any already present rules */
-	err = tcf_block_playback_offloads(block, cb, cb_priv, true,
-					  tcf_block_offload_in_use(block),
-					  extack);
-	if (err)
-		return ERR_PTR(err);
-
-	block_cb = tcf_block_cb_alloc(block->index, cb, cb_ident, cb_priv, NULL);
-	if (!block_cb)
-		return ERR_PTR(-ENOMEM);
-
-	list_add(&block_cb->list, &block->cb_list);
-	return block_cb;
-}
-EXPORT_SYMBOL(__tcf_block_cb_register);
-
-int tcf_block_cb_register(struct tcf_block *block,
-			  tc_setup_cb_t *cb, void *cb_ident,
-			  void *cb_priv, struct netlink_ext_ack *extack)
-{
-	struct tcf_block_cb *block_cb;
-
-	block_cb = __tcf_block_cb_register(block, cb, cb_ident, cb_priv,
-					   extack);
-	return PTR_ERR_OR_ZERO(block_cb);
-}
-EXPORT_SYMBOL(tcf_block_cb_register);
-
-void __tcf_block_cb_unregister(struct tcf_block *block,
-			       struct tcf_block_cb *block_cb)
-{
-	tcf_block_playback_offloads(block, block_cb->cb, block_cb->cb_priv,
-				    false, tcf_block_offload_in_use(block),
-				    NULL);
-	list_del(&block_cb->list);
-	kfree(block_cb);
-}
-EXPORT_SYMBOL(__tcf_block_cb_unregister);
-
-void tcf_block_cb_unregister(struct tcf_block *block,
-			     tc_setup_cb_t *cb, void *cb_ident)
-{
-	struct tcf_block_cb *block_cb;
-
-	block_cb = tcf_block_cb_lookup(block->index, cb, cb_ident);
-	if (!block_cb)
-		return;
-	__tcf_block_cb_unregister(block, block_cb);
-}
-EXPORT_SYMBOL(tcf_block_cb_unregister);
-
 static int tcf_block_bind(struct tcf_block *block, struct tc_block_offload *bo)
 {
 	struct tcf_block_cb *block_cb, *failed_cb;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next,RFC 9/9] net: cls_api: do not expose tcf_block to drivers
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (8 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 8/9] net: sched: remove tcf_block_cb_{register,unregister}() Pablo Neira Ayuso
@ 2019-04-26  0:33 ` Pablo Neira Ayuso
  2019-04-26 14:32 ` [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Jiri Pirko
  10 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26  0:33 UTC (permalink / raw)
  To: netfilter-devel
  Cc: davem, netdev, jiri, john.hurley, jakub.kicinski, ogerlitz

Instead, expose the block index, which is sufficient to look up the
tcf_block_cb object.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c    |  4 ++--
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c      | 10 +++++-----
 drivers/net/ethernet/netronome/nfp/flower/offload.c |  4 ++--
 include/net/pkt_cls.h                               |  2 +-
 net/dsa/slave.c                                     |  4 ++--
 net/sched/cls_api.c                                 | 10 +++++-----
 6 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index f09135b7a605..f85f725fc0dc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -731,7 +731,7 @@ mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
 		list_add(&indr_priv->list,
 			 &rpriv->uplink_priv.tc_indr_block_priv_list);
 
-		block_cb = tcf_block_cb_alloc(f->block->index,
+		block_cb = tcf_block_cb_alloc(f->block_index,
 					      mlx5e_rep_indr_setup_block_cb,
 					      indr_priv, indr_priv,
 					      mlx5e_rep_indr_tc_block_unbind);
@@ -747,7 +747,7 @@ mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
 		if (!indr_priv)
 			return -ENOENT;
 
-		block_cb = tcf_block_cb_lookup(f->block->index,
+		block_cb = tcf_block_cb_lookup(f->block_index,
 					       mlx5e_rep_indr_setup_block_cb,
 					       indr_priv);
 		if (!block_cb)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index c07fa0487b39..060a0de69638 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1531,14 +1531,14 @@ mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port,
 	struct tcf_block_cb *block_cb;
 	int err;
 
-	block_cb = tcf_block_cb_lookup(f->block->index,
+	block_cb = tcf_block_cb_lookup(f->block_index,
 				       mlxsw_sp_setup_tc_block_cb_flower,
 				       mlxsw_sp);
 	if (!block_cb) {
 		acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, net);
 		if (!acl_block)
 			return -ENOMEM;
-		block_cb = tcf_block_cb_alloc(f->block->index,
+		block_cb = tcf_block_cb_alloc(f->block_index,
 					      mlxsw_sp_setup_tc_block_cb_flower,
 					      mlxsw_sp, acl_block,
 					      mlxsw_sp_tc_block_flower_release);
@@ -1580,7 +1580,7 @@ mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
 	struct tcf_block_cb *block_cb;
 	int err;
 
-	block_cb = tcf_block_cb_lookup(f->block->index,
+	block_cb = tcf_block_cb_lookup(f->block_index,
 				       mlxsw_sp_setup_tc_block_cb_flower,
 				       mlxsw_sp);
 	if (!block_cb)
@@ -1618,7 +1618,7 @@ static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		block_cb = tcf_block_cb_alloc(f->block->index, cb, mlxsw_sp_port,
+		block_cb = tcf_block_cb_alloc(f->block_index, cb, mlxsw_sp_port,
 					 mlxsw_sp_port, NULL);
 		if (!block_cb)
 			return -ENOMEM;
@@ -1633,7 +1633,7 @@ static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
 	case TC_BLOCK_UNBIND:
 		mlxsw_sp_setup_tc_block_flower_unbind(mlxsw_sp_port,
 						      f, ingress);
-		block_cb = tcf_block_cb_lookup(f->block->index, cb,
+		block_cb = tcf_block_cb_lookup(f->block_index, cb,
 					       mlxsw_sp_port);
 		if (!block_cb)
 			return -ENOENT;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index d2bc36859952..1ad46cf2413f 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1277,7 +1277,7 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 		cb_priv->app = app;
 		list_add(&cb_priv->list, &priv->indr_block_cb_priv);
 
-		block_cb = tcf_block_cb_alloc(f->block->index,
+		block_cb = tcf_block_cb_alloc(f->block_index,
 					      nfp_flower_setup_indr_block_cb,
 					      cb_priv, cb_priv,
 					      nfp_flower_setup_indr_tc_release);
@@ -1293,7 +1293,7 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 		if (!cb_priv)
 			return -ENOENT;
 
-		block_cb = tcf_block_cb_lookup(f->block->index,
+		block_cb = tcf_block_cb_lookup(f->block_index,
 					       nfp_flower_setup_indr_block_cb,
 					       cb_priv);
 		if (!block_cb)
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index b61d0c89ba5b..f0540507a397 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -638,7 +638,7 @@ struct tc_block_offload {
 	enum tc_block_command command;
 	enum tcf_block_binder_type binder_type;
 	struct list_head cb_list;
-	struct tcf_block *block;
+	u32 block_index;
 	struct netlink_ext_ack *extack;
 };
 
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 7ef34cc2f574..4fe5386f2613 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -914,7 +914,7 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		block_cb = tcf_block_cb_alloc(f->block->index, cb, dev, dev,
+		block_cb = tcf_block_cb_alloc(f->block_index, cb, dev, dev,
 					      NULL);
 		if (!block_cb)
 			return -ENOMEM;
@@ -922,7 +922,7 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
 		tcf_block_cb_list_add(block_cb, &f->cb_list);
 		return 0;
 	case TC_BLOCK_UNBIND:
-		block_cb = tcf_block_cb_lookup(f->block->index, cb, dev);
+		block_cb = tcf_block_cb_lookup(f->block_index, cb, dev);
 		if (!block_cb)
 			return -ENOENT;
 
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index bd8b08f158c0..2b23e07e9a75 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -882,7 +882,7 @@ int tcf_setup_block_offload(struct tc_block_offload *f, tc_setup_cb_t *cb,
 
 	switch (f->command) {
 	case TC_BLOCK_BIND:
-		block_cb = tcf_block_cb_alloc(f->block->index, cb, cb_priv,
+		block_cb = tcf_block_cb_alloc(f->block_index, cb, cb_priv,
 					      cb_priv, NULL);
 		if (!block_cb)
 			return -ENOMEM;
@@ -890,7 +890,7 @@ int tcf_setup_block_offload(struct tc_block_offload *f, tc_setup_cb_t *cb,
 		tcf_block_cb_list_add(block_cb, &f->cb_list);
 		return 0;
 	case TC_BLOCK_UNBIND:
-		block_cb = tcf_block_cb_lookup(f->block->index, cb, cb_priv);
+		block_cb = tcf_block_cb_lookup(f->block_index, cb, cb_priv);
 		if (!block_cb)
 			return -ENOENT;
 
@@ -1016,7 +1016,7 @@ static void tc_indr_block_ing_cmd(struct tc_indr_block_dev *indr_dev,
 	struct tc_block_offload bo = {
 		.command	= command,
 		.binder_type	= TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS,
-		.block		= indr_dev->block,
+		.block_index	= indr_dev->block->index,
 	};
 	INIT_LIST_HEAD(&bo.cb_list);
 
@@ -1106,7 +1106,7 @@ static void tc_indr_block_call(struct tcf_block *block, struct net_device *dev,
 	struct tc_block_offload bo = {
 		.command	= command,
 		.binder_type	= ei->binder_type,
-		.block		= block,
+		.block_index	= block->index,
 		.extack		= extack,
 	};
 	INIT_LIST_HEAD(&bo.cb_list);
@@ -1135,7 +1135,7 @@ static int tcf_block_offload_cmd(struct tcf_block *block,
 
 	bo.command = command;
 	bo.binder_type = ei->binder_type;
-	bo.block = block;
+	bo.block_index = block->index;
 	bo.extack = extack;
 	INIT_LIST_HEAD(&bo.cb_list);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH net-next,RFC 7/9] net: use tcf_block_setup() infrastructure
  2019-04-26  0:33 ` [PATCH net-next,RFC 7/9] net: use tcf_block_setup() infrastructure Pablo Neira Ayuso
@ 2019-04-26 14:27   ` Jiri Pirko
  2019-04-26 16:12     ` Pablo Neira Ayuso
  0 siblings, 1 reply; 20+ messages in thread
From: Jiri Pirko @ 2019-04-26 14:27 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netfilter-devel, davem, netdev, jiri, john.hurley,
	jakub.kicinski, ogerlitz

Fri, Apr 26, 2019 at 02:33:44AM CEST, pablo@netfilter.org wrote:
>This allows us to register / unregister tcf_block_cb objects from the
>core. The idea is to allocate the tcf_block_cb object from the driver,
>attach it to the tc_block_offload->cb_list, and then the core registers
>it.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---

[...]


>--- a/net/dsa/slave.c
>+++ b/net/dsa/slave.c
>@@ -902,6 +902,7 @@ static int dsa_slave_setup_tc_block_cb_eg(enum tc_setup_type type,
> static int dsa_slave_setup_tc_block(struct net_device *dev,
> 				    struct tc_block_offload *f)
> {
>+	struct tcf_block_cb *block_cb;
> 	tc_setup_cb_t *cb;
> 
> 	if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
>@@ -913,9 +914,19 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
> 
> 	switch (f->command) {
> 	case TC_BLOCK_BIND:
>-		return tcf_block_cb_register(f->block, cb, dev, dev, f->extack);
>+		block_cb = tcf_block_cb_alloc(f->block->index, cb, dev, dev,
>+					      NULL);
>+		if (!block_cb)
>+			return -ENOMEM;
>+
>+		tcf_block_cb_list_add(block_cb, &f->cb_list);
>+		return 0;
> 	case TC_BLOCK_UNBIND:
>-		tcf_block_cb_unregister(f->block, cb, dev);
>+		block_cb = tcf_block_cb_lookup(f->block->index, cb, dev);
>+		if (!block_cb)
>+			return -ENOENT;
>+
>+		tcf_block_cb_list_move(block_cb, &f->cb_list);

What you are trying to achieve with list_move here, wrapped by
tcf_block_cb_list_move() for some reason is a mystery for me.
You are moving to &f->cb_list, but you are already there...

The whole work with lists here, including tcf_block_cb_list is very
confusing for me.


> 		return 0;
> 	default:
> 		return -EOPNOTSUPP;
>diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
>index a00463c8cfa9..f7f6f42d58d1 100644
>--- a/net/sched/cls_api.c
>+++ b/net/sched/cls_api.c
>@@ -714,6 +714,7 @@ struct tcf_block_cb {
> 	void (*release)(void *cb_priv);
> 	void *cb_ident;
> 	void *cb_priv;
>+	u32 block_index;
> 	unsigned int refcnt;
> };
> 
>@@ -730,12 +731,14 @@ EXPORT_SYMBOL(tcf_block_cb_priv);
> 
> static LIST_HEAD(tcf_block_cb_list);
> 
>-struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
>-					 tc_setup_cb_t *cb, void *cb_ident)
>+struct tcf_block_cb *tcf_block_cb_lookup(u32 block_index, tc_setup_cb_t *cb,
>+					 void *cb_ident)
> {	struct tcf_block_cb *block_cb;
> 
>-	list_for_each_entry(block_cb, &block->cb_list, list)
>-		if (block_cb->cb == cb && block_cb->cb_ident == cb_ident)
>+	list_for_each_entry(block_cb, &tcf_block_cb_list, list)
>+		if (block_cb->block_index == block_index &&
>+		    block_cb->cb == cb &&
>+		    block_cb->cb_ident == cb_ident)
> 			return block_cb;
> 	return NULL;
> }

[...]

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
  2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
                   ` (9 preceding siblings ...)
  2019-04-26  0:33 ` [PATCH net-next,RFC 9/9] net: cls_api: do not expose tcf_block to drivers Pablo Neira Ayuso
@ 2019-04-26 14:32 ` Jiri Pirko
  2019-04-26 16:28   ` Pablo Neira Ayuso
  10 siblings, 1 reply; 20+ messages in thread
From: Jiri Pirko @ 2019-04-26 14:32 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netfilter-devel, davem, netdev, jiri, john.hurley,
	jakub.kicinski, ogerlitz

Fri, Apr 26, 2019 at 02:33:37AM CEST, pablo@netfilter.org wrote:
>Hi,
>
>This patchset aims to introduce changes to reuse the existing .ndo_setup_tc
>netdev operations from netfilter.
>
>The idea is to move tcf_block_cb to net/core/flow_offload.c and rename
>it to flow_block_cb. This object provides the minimal infrastructure to
>set up per-block callbacks that are called to offload policies to
>hardware.
>
>The tcf_block object is specific for TC to share policies between
>ingress devices. This object has a list of tcf_block_cb objects that are
>called to offload the policies to hardware. In netfilter, the idea is to
>store the list of tcf_block_cb objects in a chain that would be bound to
>several devices, eg.
>
>  chain x {
>	type filter hook ingress devices = { eth0, eth1 } priority 0;
>	...
>  }
>

Do you have the follow-up patchset somewhere? I'm curious about your
goal. Without that, it is hard to understand what you are getting at.


>Hence, this emulates the shared blocks available in TC that Jiri made.
>
>Note that the list of tcf_block_cb objects will be called to offload
>policies in this chain.

So you are going to use chain_id (if there is anything like that) as
block_index during offload, right?


>
>To reuse this infrastructure, I need remove the dependency with the
>tcf_block object and tc/cls_api (see .reoffload) that is called from the
>driver side, this patchset reworks the per-block callback infrastructure
>to set up the tcf_block_cb object from the driver, then convey the list
>of callbacks using the tc_block_offload object back to the core.
>
>            cls_api                         driver
>	TC_SETUP_BLOCK    ---------->  setup tcf_block_cb
>       tc_block_offload           add it to tc_block_offload->cb_list
>                                                |
>          register     <------------------------'
>         tcf_block_cb
>         ->reoffload
>
>Therefore, registration does not happen from drivers anymore, instead
>it is done from the core. The driver just sets up the tcf_block_cb
>object that wires up the connection between the offloaded block (chains
>in case of netfilter) and the driver.
>
>This patchset is compile tested only at this stage.
>
>Comments welcome, thanks.
>
>Pablo Neira Ayuso (9):
>  net: sched: move tcf_block_cb before indr_block
>  net: sched: add tcf_block_cb_alloc()
>  net: sched: add tcf_block_cb_free()
>  net: sched: add tcf_block_setup()
>  net: sched: add release callback to struct tcf_block_cb
>  net: sched: add tcf_setup_block_offload()
>  net: use tcf_block_setup() infrastructure
>  net: sched: remove tcf_block_cb_{register,unregister}()
>  net: cls_api: do not expose tcf_block to drivers
>
> drivers/net/ethernet/broadcom/bnxt/bnxt.c          |  26 +-
> drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c      |  28 +-
> drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c    |  26 +-
> drivers/net/ethernet/intel/i40e/i40e_main.c        |  26 +-
> drivers/net/ethernet/intel/iavf/iavf_main.c        |  35 +-
> drivers/net/ethernet/intel/igb/igb_main.c          |  23 +-
> drivers/net/ethernet/intel/ixgbe/ixgbe_main.c      |  27 +-
> drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |  27 +-
> drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   |  59 +-
> drivers/net/ethernet/mellanox/mlxsw/spectrum.c     |  64 ++-
> drivers/net/ethernet/netronome/nfp/abm/cls.c       |  17 +-
> drivers/net/ethernet/netronome/nfp/bpf/main.c      |  29 +-
> .../net/ethernet/netronome/nfp/flower/offload.c    |  62 +--
> drivers/net/ethernet/qlogic/qede/qede_main.c       |  23 +-
> drivers/net/ethernet/stmicro/stmmac/stmmac_main.c  |  22 +-
> drivers/net/netdevsim/netdev.c                     |  26 +-
> include/net/pkt_cls.h                              |  29 +-
> net/dsa/slave.c                                    |  15 +-
> net/sched/cls_api.c                                | 598 ++++++++++++---------
> 19 files changed, 528 insertions(+), 634 deletions(-)
>
>-- 
>2.11.0

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH net-next,RFC 7/9] net: use tcf_block_setup() infrastructure
  2019-04-26 14:27   ` Jiri Pirko
@ 2019-04-26 16:12     ` Pablo Neira Ayuso
  0 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26 16:12 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netfilter-devel, davem, netdev, jiri, john.hurley,
	jakub.kicinski, ogerlitz

On Fri, Apr 26, 2019 at 04:27:15PM +0200, Jiri Pirko wrote:
> Fri, Apr 26, 2019 at 02:33:44AM CEST, pablo@netfilter.org wrote:
> >This allows us to register / unregister tcf_block_cb objects from the
> >core. The idea is to allocate the tcf_block_cb object from the driver,
> >attach it to the tc_block_offload->cb_list, and then the core registers
> >it.
> >
> >Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> >---
> 
> [...]
> 
> 
> >--- a/net/dsa/slave.c
> >+++ b/net/dsa/slave.c
> >@@ -902,6 +902,7 @@ static int dsa_slave_setup_tc_block_cb_eg(enum tc_setup_type type,
> > static int dsa_slave_setup_tc_block(struct net_device *dev,
> > 				    struct tc_block_offload *f)
> > {
> >+	struct tcf_block_cb *block_cb;
> > 	tc_setup_cb_t *cb;
> > 
> > 	if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
> >@@ -913,9 +914,19 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
> > 
> > 	switch (f->command) {
> > 	case TC_BLOCK_BIND:
> >-		return tcf_block_cb_register(f->block, cb, dev, dev, f->extack);
> >+		block_cb = tcf_block_cb_alloc(f->block->index, cb, dev, dev,
> >+					      NULL);
> >+		if (!block_cb)
> >+			return -ENOMEM;
> >+
> >+		tcf_block_cb_list_add(block_cb, &f->cb_list);
> >+		return 0;
> > 	case TC_BLOCK_UNBIND:
> >-		tcf_block_cb_unregister(f->block, cb, dev);
> >+		block_cb = tcf_block_cb_lookup(f->block->index, cb, dev);
> >+		if (!block_cb)
> >+			return -ENOENT;
> >+
> >+		tcf_block_cb_list_move(block_cb, &f->cb_list);
> 
> What you are trying to achieve with list_move here, wrapped by
> tcf_block_cb_list_move() for some reason is a mystery for me.
> You are moving to &f->cb_list, but you are already there...
> 
> The whole work with lists here, including tcf_block_cb_list is very
> confusing for me.

The f->cb_list is a temporary list. The block_cb's travel back to the
core through this list. Then, tcf_block_setup() takes the block_cb's
from f->cb_list and places them in the right per-block list and in the
global block_cb list.

So, instead of placing the block_cb's in the block list from the
driver, the driver just places the block_cb in this temporary list,
and the core (either tc or netfilter) is responsible for attaching it
to either the tcf_block or the nft_chain.
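
Roughly, the bind/unbind steps on the core side look like this (a
simplified sketch with example_* names, not the actual
tcf_block_setup() code from patch 4; the real code also keeps the
block_cb's on the global list used by tcf_block_cb_lookup()):

	static int example_block_bind(struct tcf_block *block,
				      struct tc_block_offload *bo)
	{
		struct tcf_block_cb *block_cb, *next;
		int err;

		list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) {
			/* replay the rules already present in the block */
			err = tcf_block_playback_offloads(block, block_cb->cb,
							  block_cb->cb_priv, true,
							  tcf_block_offload_in_use(block),
							  bo->extack);
			if (err)
				return err; /* the real code also unrolls */
			/* move it from the temporary list to the block */
			list_move(&block_cb->list, &block->cb_list);
		}
		return 0;
	}

	static void example_block_unbind(struct tcf_block *block,
					 struct tc_block_offload *bo)
	{
		struct tcf_block_cb *block_cb, *next;

		list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) {
			tcf_block_playback_offloads(block, block_cb->cb,
						    block_cb->cb_priv, false,
						    tcf_block_offload_in_use(block),
						    NULL);
			list_del(&block_cb->list);
			if (block_cb->release)
				block_cb->release(block_cb->cb_priv);
			kfree(block_cb);
		}
	}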

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
  2019-04-26 14:32 ` [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Jiri Pirko
@ 2019-04-26 16:28   ` Pablo Neira Ayuso
  2019-04-26 18:29     ` Jakub Kicinski
  0 siblings, 1 reply; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26 16:28 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netfilter-devel, davem, netdev, jiri, john.hurley,
	jakub.kicinski, ogerlitz

On Fri, Apr 26, 2019 at 04:32:58PM +0200, Jiri Pirko wrote:
> Fri, Apr 26, 2019 at 02:33:37AM CEST, pablo@netfilter.org wrote:
> >Hi,
> >
> >This patchset aims to introduce changes to reuse the existing .ndo_setup_tc
> >netdev operations from netfilter.
> >
> >The idea is to move tcf_block_cb to net/core/flow_offload.c and rename
> >it to flow_block_cb. This object provides the minimal infrastructure to
> >set up per-block callbacks that are called to offload policies to
> >hardware.
> >
> >The tcf_block object is specific for TC to share policies between
> >ingress devices. This object has a list of tcf_block_cb objects that are
> >called to offload the policies to hardware. In netfilter, the idea is to
> >store the list of tcf_block_cb objects in a chain that would be bound to
> >several devices, eg.
> >
> >  chain x {
> >	type filter hook ingress devices = { eth0, eth1 } priority 0;
> >	...
> >  }
> >
> 
> Do you have the follow-up patchset somewhere? I'm curious about your
> goal. Without that, it is hard to understand what you are getting at.

The goal is to use the TC_SETUP_BLOCK logic in the existing drivers
from netfilter. So netfilter calls TC_SETUP_BLOCK when a chain is set
up to configure the driver, hence reusing your whole logic with
minimal changes.

Currently, the tcf_block_cb_register() call assumes there's a
tcf_block object in place and it internally invokes the tc
.reoffload() callback. This tcf_block corresponds to the nft_chain
object in netfilter, and I need to add my own .reoffload() callback
for the nft_chain object. This patchset makes the driver use the
block_index instead of exposing the tcf_block to it.

This patchset updates the TC_SETUP_BLOCK path to only configure the
block_cb objects. The registration is done from the core, which
iterates over the list of block_cb's that the driver places in the
temporary tc_block_offload->cb_list and registers them.

My patchset moves the tcf_block_cb object to net/core/flow_offload.c
(it renames it to flow_block_cb) so it can be used both by tc and
netfilter.

The follow-up patchset in netfilter calls TC_SETUP_BLOCK when the
offload flag is set. Then, it has its own version of tc_setup_cb_call(),
which iterates over the block_cb's in this chain to reuse the existing
driver codebase.
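
Purely as an illustration, the netfilter-side caller could look along
these lines (every nft_* name and the chain->cb_list member here are
hypothetical; the follow-up patches may do this differently):

	/* netfilter's counterpart of tc_setup_cb_call(): run the offload
	 * request through all block_cb's bound to this chain.
	 */
	static int nft_chain_offload_cmd(struct nft_base_chain *chain,
					 enum tc_setup_type type,
					 void *type_data)
	{
		struct tcf_block_cb *block_cb;
		int err;

		list_for_each_entry(block_cb, &chain->cb_list, list) {
			err = block_cb->cb(type, type_data, block_cb->cb_priv);
			if (err)
				return err;
		}
		return 0;
	}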

> >Hence, this emulates the shared blocks available in TC that Jiri made.
> >
> >Note that the list of tcf_block_cb objects will be called to offload
> >policies in this chain.
> 
> So you are going to use chain_id (if there is anything like that) as
> block_index during offload, right?

Yes. But I don't need to expose this chain_index to userspace; I can
allocate it internally. I only need to make sure it does not overlap
with any of the existing tc block indexes, so I can just use a
different index space.

Thanks.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
  2019-04-26 16:28   ` Pablo Neira Ayuso
@ 2019-04-26 18:29     ` Jakub Kicinski
  2019-04-26 18:41       ` Pablo Neira Ayuso
  0 siblings, 1 reply; 20+ messages in thread
From: Jakub Kicinski @ 2019-04-26 18:29 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: Jiri Pirko, netfilter-devel, davem, netdev, jiri, john.hurley, ogerlitz

On Fri, 26 Apr 2019 18:28:36 +0200, Pablo Neira Ayuso wrote:
> On Fri, Apr 26, 2019 at 04:32:58PM +0200, Jiri Pirko wrote:
> > Fri, Apr 26, 2019 at 02:33:37AM CEST, pablo@netfilter.org wrote:  
> > >Hi,
> > >
> > >This patchset aims to introduce changes to reuse the existing .ndo_setup_tc
> > >netdev operations from netfilter.
> > >
> > >The idea is to move tcf_block_cb to net/core/flow_offload.c and rename
> > >it to flow_block_cb. This object provides the minimal infrastructure to
> > >set up per-block callbacks that are called to offload policies to
> > >hardware.
> > >
> > >The tcf_block object is specific for TC to share policies between
> > >ingress devices. This object has a list of tcf_block_cb objects that are
> > >called to offload the policies to hardware. In netfilter, the idea is to
> > >store the list of tcf_block_cb objects in a chain that would be bound to
> > >several devices, eg.
> > >
> > >  chain x {
> > >	type filter hook ingress devices = { eth0, eth1 } priority 0;
> > >	...
> > >  }
> > >  
> > 
> > Do you have the follow-up patchset somewhere? I'm curious about your
> > goal. Without that, it is hard to understand what you are getting at.  
> 
> The goal is to use the TC_SETUP_BLOCK logic in the existing drivers
> from netfilter. So netfilter calls TC_SETUP_BLOCK when a chain is set
> up to configure the driver, hence reusing your whole logic with
> minimal changes.
> 
> Currently, the tcf_block_cb_register() call assumes there's a
> tcf_block object in place and it internally invokes the tc
> .reoffload() callback. This tcf_block corresponds to the nft_chain
> object in netfilter, and I need to add my own .reoffload() callback
> for the nft_chain object. This patchset makes the driver use the
> block_index instead of exposing the tcf_block to it.

block_index is non-0 only for shared blocks, right?  Did you change
that?

> This patchset updates the TC_SETUP_BLOCK path to only configure the
> block_cb objects. The registration is done from the core, which
> iterates over the list of block_cb's that the driver places in the
> temporary tc_block_offload->cb_list and registers them.
> 
> My patchset moves the tcf_block_cb object to net/core/flow_offload.c
> (it renames it to flow_block_cb) so it can be used both by tc and
> netfilter.
> 
> The follow-up patchset in netfilter calls TC_SETUP_BLOCK when the
> offload flag is set. Then, it has its own version of tc_setup_cb_call(),
> which iterates over the block_cb's in this chain to reuse the existing
> driver codebase.
> 
> > >Hence, this emulates the shared blocks available in TC that Jiri made.
> > >
> > >Note that the list of tcf_block_cb objects will be called to offload
> > >policies in this chain.  
> > 
> > So you are going to use chain_id (if there is anything like that) as
> > block_index during offload, right?  
> 
> Yes. But I don't need to expose this chain_index to userspace; I can
> allocate it internally. I only need to make sure it does not overlap
> with any of the existing tc block indexes, so I can just use a
> different index space.

How will the association between a block and a device work for
netfilter?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
  2019-04-26 18:29     ` Jakub Kicinski
@ 2019-04-26 18:41       ` Pablo Neira Ayuso
  2019-04-26 18:55         ` Jakub Kicinski
  0 siblings, 1 reply; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26 18:41 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jiri Pirko, netfilter-devel, davem, netdev, jiri, john.hurley, ogerlitz

On Fri, Apr 26, 2019 at 11:29:56AM -0700, Jakub Kicinski wrote:
> On Fri, 26 Apr 2019 18:28:36 +0200, Pablo Neira Ayuso wrote:
> > On Fri, Apr 26, 2019 at 04:32:58PM +0200, Jiri Pirko wrote:
> > > Fri, Apr 26, 2019 at 02:33:37AM CEST, pablo@netfilter.org wrote:  
> > > >Hi,
> > > >
> > > >This patchset aims to introduce changes to reuse the existing .ndo_setup_tc
> > > >netdev operations from netfilter.
> > > >
> > > >The idea is to move tcf_block_cb to net/core/flow_offload.c and rename
> > > >it to flow_block_cb. This object provides the minimal infrastructure to
> > > >set up per-block callbacks that are called to offload policies to
> > > >hardware.
> > > >
> > > >The tcf_block object is specific for TC to share policies between
> > > >ingress devices. This object has a list of tcf_block_cb objects that are
> > > >called to offload the policies to hardware. In netfilter, the idea is to
> > > >store the list of tcf_block_cb objects in a chain that would be bound to
> > > >several devices, eg.
> > > >
> > > >  chain x {
> > > >	type filter hook ingress devices = { eth0, eth1 } priority 0;
> > > >	...
> > > >  }
> > > >  
> > > 
> > > Do you have the follow-up patchset somewhere? I'm curious about your
> > > goal. Without that, it is hard to understand what you are getting at.  
> > 
> > The goal is to use the TC_SETUP_BLOCK logic in the existing drivers from
> > netfilter. So Netfilter calls TC_SETUP_BLOCK when a chain is set up
> > to configure the driver, hence reusing your whole logic with minimal
> > changes.
> > 
> > Currently, the tcf_block_cb_register() call assumes there's a
> > tcf_block object in place and it internally invokes the tc
> > .reoffload() callback. This tcf_block corresponds to the nft_chain
> > object in netfilter, and I need to add my own .reoffload() callback
> > for the nft_chain object. This patchset uses the block_index on the
> > driver side instead of exposing tcf_block.
> 
> block_index is non-0 only for shared blocks, right?  Did you change
> that?

I didn't change that, that's still the same.

The tcf_block_cb is looked up via the tuple [ block index, cb, cb_ident ],
so even when the block index is zero, things will still work fine.
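
That is, roughly the existing lookup (a sketch; the exact struct layout
is assumed here):

#include <net/pkt_cls.h>        /* tc_setup_cb_t */

/* Sketch: a block_cb is found by the [ block index, cb, cb_ident ] tuple;
 * the block argument carries the index, so index 0 (a non-shared block)
 * resolves just as well.
 */
struct tcf_block_cb *tcf_block_cb_lookup(struct tcf_block *block,
                                         tc_setup_cb_t *cb, void *cb_ident)
{
        struct tcf_block_cb *block_cb;

        list_for_each_entry(block_cb, &block->cb_list, list)
                if (block_cb->cb == cb && block_cb->cb_ident == cb_ident)
                        return block_cb;
        return NULL;
}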

> > This patchset updates the TC_SETUP_BLOCK path to only configure the
> > block_cb objects. The registration is done from the core, by iterating
> > over the list of block_cb's that the driver offers in the temporary
> > tc_block_offload->cb_list and registering them from there.
> > 
> > My patchset moves the tcf_block_cb object to net/core/flow_offload.c
> > (it renames it to flow_block_cb) so it can be used both by tc and
> > netfilter.
> > 
> > The follow-up patchset in netfilter calls TC_SETUP_BLOCK when the
> > offload flag is set. Then, it has its own version of
> > tc_setup_cb_call(), which iterates over the block_cb objects in this
> > chain to reuse the existing driver codebase.
> > 
> > > >Hence, this emulates the shared blocks available in TC that Jiri made.
> > > >
> > > >Note that the list of tcf_block_cb objects will be called to offload
> > > >policies in this chain.  
> > > 
> > > So you are going to use chain_id (if there is anything like that) as
> > > block_index during offload, right?  
> > 
> > Yes. But I don't need to expose this chain_index to userspace; I can
> > allocate it internally. I only need to make sure it does not overlap
> > with any of the existing tc block indexes, i.e. I can just use a
> > different index space which does not overlap with the tc block index
> > space.
> 
> How will the association between a block and a device work for
> netfilter?

My proposal is that Netfilter doesn't need to expose anything similar
to the TC block concept. I mean, not to the user, neither through the
command line nor through netlink itself.

If netfilter supports chain definitions like this:

        table x {
                chain y {
                        type filter hook ingress devices = { eth0, eth1 } priority 0;
                }
        }

Then the chain 'y' implicitly becomes the block for the 'eth0' and
'eth1' devices.

Note that the above is not yet supported; I need to extend the netlink
API for this. But having chains that are attached to multiple devices
is feasible, and it makes sense for plain software configurations where
no offload is involved (just as useful as the TC block is in pure
software, to avoid policy duplication).
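
As a rough sketch of how that association could look towards the driver
(the nft-side names, hook_list and the cb_list field are assumptions;
.ndo_setup_tc, TC_SETUP_BLOCK and TC_BLOCK_BIND are the existing
driver-facing interface being reused):

#include <linux/netdevice.h>
#include <net/pkt_cls.h>

/* Sketch: the chain plays the role of the block and is bound to every
 * device listed in its hook definition.
 */
static int nft_chain_offload_bind(struct nft_chain *chain)
{
        struct tc_block_offload bo = {
                .command        = TC_BLOCK_BIND,
                .binder_type    = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS,
        };
        struct nft_hook *hook;  /* assumed per-device hook entry */
        int err;

        INIT_LIST_HEAD(&bo.cb_list);
        list_for_each_entry(hook, &chain->hook_list, list) {
                struct net_device *dev = hook->dev;

                if (!dev->netdev_ops->ndo_setup_tc)
                        return -EOPNOTSUPP;
                err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo);
                if (err)
                        return err;
        }
        /* the core then registers every block_cb queued on bo.cb_list */
        return 0;
}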


* Re: [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
  2019-04-26 18:41       ` Pablo Neira Ayuso
@ 2019-04-26 18:55         ` Jakub Kicinski
  2019-04-26 19:14           ` Pablo Neira Ayuso
  0 siblings, 1 reply; 20+ messages in thread
From: Jakub Kicinski @ 2019-04-26 18:55 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: Jiri Pirko, netfilter-devel, davem, netdev, jiri, john.hurley, ogerlitz

On Fri, 26 Apr 2019 20:41:45 +0200, Pablo Neira Ayuso wrote:
> > > > >Hence, this emulates the shared blocks available in TC that Jiri made.
> > > > >
> > > > >Note that the list of tcf_block_cb objects will be called to offload
> > > > >policies in this chain.    
> > > > 
> > > > So you are going to use chain_id (if there is anything like that) as
> > > > block_index during offload, right?    
> > > 
> > > Yes. But I don't need to expose this chain_index to userspace; I can
> > > allocate it internally. I only need to make sure it does not overlap
> > > with any of the existing tc block indexes, i.e. I can just use a
> > > different index space which does not overlap with the tc block index
> > > space.
> > 
> > How will the association between a block and a device work for
> > netfilter?  
> 
> My proposal is that Netfilter doesn't need to expose anything similar
> to the TC block concept. I mean, not to the user, neither through the
> command line nor through netlink itself.

Yes, yes.

> If netfilter supports chain definitions like this:
> 
>         table x {
>                 chain y {
>                         type filter hook ingress devices = { eth0, eth1 } priority 0;
>                 }
>         }
> 
> Then the chain 'y' implicitly becomes the block for the 'eth0' and
> 'eth1' devices.

Can there be more chains for those devices?  Or will those devices only
run y from the netfilter perspective?

> Note that the above is not yet supported; I need to extend the netlink
> API for this. But having chains that are attached to multiple devices
> is feasible, and it makes sense for plain software configurations where
> no offload is involved (just as useful as the TC block is in pure
> software, to avoid policy duplication).


* Re: [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
  2019-04-26 18:55         ` Jakub Kicinski
@ 2019-04-26 19:14           ` Pablo Neira Ayuso
  2019-04-26 19:22             ` Pablo Neira Ayuso
  0 siblings, 1 reply; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26 19:14 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jiri Pirko, netfilter-devel, davem, netdev, jiri, john.hurley, ogerlitz

On Fri, Apr 26, 2019 at 11:55:12AM -0700, Jakub Kicinski wrote:
> On Fri, 26 Apr 2019 20:41:45 +0200, Pablo Neira Ayuso wrote:
> > If netfilter supports chain definitions like this:
> > 
> >         table x {
> >                 chain y {
> >                         type filter hook ingress devices = { eth0, eth1 } priority 0;
> >                 }
> >         }
> > 
> > Then the chain 'y' implicitly becomes the block for the 'eth0' and
> > 'eth1' devices.
> 
> Can there be more chains for those devices?  Or will those devices only
> run y from the netfilter perspective?

In software, the existing control plane allows you to register as
many chains as you want, which would allow including 'eth0' and 'eth1'.
However, I don't have a use case for this: one single chain should
be enough, given that the ingress hook is only used for filtering. We
are inheriting these semantics from iptables, where multiple tables at
different priorities (which specify the order) make sense, such as
'raw', 'mangle' and so on. At ingress, these don't make sense and a
single chain with priority 0 should be enough.

In the case of hardware offload, I think we should just allow one
single chain at ingress with 'eth0' and 'eth1', just like tc does, and
just return EOPNOTSUPP for anything else.
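
Roughly this check, that is (a sketch; the helper below is made up and
not part of this patchset):

#include <linux/netdevice.h>

/* Made-up helper: would check whether the device already has an
 * offloaded ingress chain bound to it.
 */
static bool nft_dev_has_offloaded_ingress_chain(const struct net_device *dev)
{
        return false;   /* placeholder for the sketch */
}

/* Sketch: permit a single offloaded ingress chain per device, as tc does
 * for its offloaded blocks, and reject anything beyond that.
 */
static int nft_chain_offload_check(const struct net_device *dev)
{
        if (nft_dev_has_offloaded_ingress_chain(dev))
                return -EOPNOTSUPP;
        return 0;
}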


* Re: [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter
  2019-04-26 19:14           ` Pablo Neira Ayuso
@ 2019-04-26 19:22             ` Pablo Neira Ayuso
  0 siblings, 0 replies; 20+ messages in thread
From: Pablo Neira Ayuso @ 2019-04-26 19:22 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jiri Pirko, netfilter-devel, davem, netdev, jiri, john.hurley, ogerlitz

On Fri, Apr 26, 2019 at 09:14:40PM +0200, Pablo Neira Ayuso wrote:
> On Fri, Apr 26, 2019 at 11:55:12AM -0700, Jakub Kicinski wrote:
> > On Fri, 26 Apr 2019 20:41:45 +0200, Pablo Neira Ayuso wrote:
> > > If netfilter supports chain definitions like this:
> > > 
> > >         table x {
> > >                 chain y {
> > >                         type filter hook ingress devices = { eth0, eth1 } priority 0;
> > >                 }
> > >         }
> > > 
> > > Then the chain 'y' implicitly becomes the block for the 'eth0' and
> > > 'eth1' devices.
> > 
> > Can there be more chains for those devices?  Or will those devices only
> > run y from the netfilter perspective?
> 
> In software, the existing control plane allows you to register as
> many chains as you want, which would allow including 'eth0' and 'eth1'.

I mean, you could add 'eth0' and 'eth1' to several chains, yes.

> However, I don't have a use case for this: one single chain should
> be enough, given that the ingress hook is only used for filtering. We
> are inheriting these semantics from iptables, where multiple tables at
> different priorities (which specify the order) make sense, such as
> 'raw', 'mangle' and so on. At ingress, these don't make sense and a
> single chain with priority 0 should be enough.

... but it doesn't make much sense, as I explained.

> In the case of hardware offload, I think we should just allow one
> single chain at ingress with 'eth0' and 'eth1', just like tc does, and
> just return EOPNOTSUPP for anything else.


end of thread

Thread overview: 20+ messages
2019-04-26  0:33 [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 1/9] net: sched: move tcf_block_cb before indr_block Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 2/9] net: sched: add tcf_block_cb_alloc() Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 3/9] net: sched: add tcf_block_cb_free() Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 4/9] net: sched: add tcf_block_setup() Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 5/9] net: sched: add release callback to struct tcf_block_cb Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 6/9] net: sched: add tcf_setup_block_offload() Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 7/9] net: use tcf_block_setup() infrastructure Pablo Neira Ayuso
2019-04-26 14:27   ` Jiri Pirko
2019-04-26 16:12     ` Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 8/9] net: cls_api: do not expose tcf_block to drivers Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 8/9] net: sched: remove tcf_block_cb_{register,unregister}() Pablo Neira Ayuso
2019-04-26  0:33 ` [PATCH net-next,RFC 9/9] net: cls_api: do not expose tcf_block to drivers Pablo Neira Ayuso
2019-04-26 14:32 ` [PATCH net-next,RFC 0/9] net: sched: prepare to reuse per-block callbacks from netfilter Jiri Pirko
2019-04-26 16:28   ` Pablo Neira Ayuso
2019-04-26 18:29     ` Jakub Kicinski
2019-04-26 18:41       ` Pablo Neira Ayuso
2019-04-26 18:55         ` Jakub Kicinski
2019-04-26 19:14           ` Pablo Neira Ayuso
2019-04-26 19:22             ` Pablo Neira Ayuso
