* [PATCH net-next 0/4] nfp: flower tc block support and nfp PCI updates
From: Jakub Kicinski @ 2018-04-25  4:17 UTC
  To: davem; +Cc: netdev, oss-drivers, Jakub Kicinski

This series improves the nfp PCIe code by making use of the new
pcie_print_link_status() helper and by resetting NFP locks when the
driver loads.  The latter helps us avoid lock-ups after the host crashes
and is rebooted with a PCIe reset, or when a kdump kernel is loaded.

The flower changes come from John, who says:

This patchset fixes offload issues seen when multiple repr netdevs are
bound to a tc block and filter rules are added. Previously a rule would
be passed to all of the reprs and rejected by all but the first, because
the cookie value would indicate a duplicate. The first patch extends the
flow lookup function to consider both the host context and the ingress
netdev along with the cookie value. This means that a rule with a given
cookie can exist multiple times, provided the ingress netdev is
different. The host context ensures that stats from the firmware are
associated with the correct instance of the rule.

The second patch prevents add/del/stats messages from being rejected
when a rule has a repr as both its ingress port and its egress dev. In
such cases a callback can be triggered twice (once for ingress and once
for egress), which could lead to duplicate rule detection or incorrect
double processing.
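
A minimal sketch of the extended lookup key may help when reading
patch 3.  It is purely illustrative -- fl_lookup_key and
fl_entry_matches() are hypothetical names, though the nfp_fl_payload
fields they touch are the ones the patch adds:

/* Illustrative only; not the driver's actual lookup code. */
struct fl_lookup_key {
        unsigned long cookie;           /* cookie assigned by the TC core */
        struct net_device *ingress_dev; /* ingress netdev, or NULL for "any" */
        __be32 host_ctx;                /* FW stats context, or "don't care" */
};

static bool fl_entry_matches(const struct nfp_fl_payload *entry,
                             const struct fl_lookup_key *key)
{
        if (entry->tc_flower_cookie != key->cookie)
                return false;
        /* NULL means the caller doesn't know the ingress dev (egress cb) */
        if (key->ingress_dev && entry->ingress_dev != key->ingress_dev)
                return false;
        /* The wildcard context lets the add/del paths skip this check */
        if (key->host_ctx != NFP_FL_STATS_CTX_DONT_CARE &&
            entry->meta.host_ctx_id != key->host_ctx)
                return false;
        return true;
}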


Jakub Kicinski (2):
  nfp: reset local locks on init
  nfp: print PCIe link bandwidth on probe

John Hurley (2):
  nfp: flower: support offloading multiple rules with same cookie
  nfp: flower: ignore duplicate cb requests for same rule

 drivers/net/ethernet/netronome/nfp/flower/main.h   |  9 +++-
 .../net/ethernet/netronome/nfp/flower/metadata.c   | 20 +++++---
 .../net/ethernet/netronome/nfp/flower/offload.c    | 50 ++++++++++++++----
 drivers/net/ethernet/netronome/nfp/nfp_main.c      |  5 ++
 drivers/net/ethernet/netronome/nfp/nfpcore/nfp.h   |  2 +
 .../ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c  |  1 +
 .../net/ethernet/netronome/nfp/nfpcore/nfp_cpp.h   |  2 +
 .../net/ethernet/netronome/nfp/nfpcore/nfp_mutex.c | 45 +++++++++++++++++
 .../ethernet/netronome/nfp/nfpcore/nfp_resource.c  | 59 ++++++++++++++++++++++
 9 files changed, 173 insertions(+), 20 deletions(-)

-- 
2.16.2


* [PATCH net-next 1/4] nfp: reset local locks on init
From: Jakub Kicinski @ 2018-04-25  4:17 UTC
  To: davem; +Cc: netdev, oss-drivers, Jakub Kicinski

NFP locks record their owner while held; for PCIe devices the owner
ID is the PCIe link number.  When the driver loads it should scan the
known locks, and if they indicate that they are held by the local
endpoint while the driver does not in fact hold them, release them.

Locks can be left taken, for instance, when the kernel gets kexec-ed
or after a crash.  The management firmware tries to clean up stale
locks too, but it currently depends on the PCIe link going down,
which doesn't always happen.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/nfp_main.c      |  5 ++
 drivers/net/ethernet/netronome/nfp/nfpcore/nfp.h   |  2 +
 .../net/ethernet/netronome/nfp/nfpcore/nfp_cpp.h   |  2 +
 .../net/ethernet/netronome/nfp/nfpcore/nfp_mutex.c | 45 +++++++++++++++++
 .../ethernet/netronome/nfp/nfpcore/nfp_resource.c  | 59 ++++++++++++++++++++++
 5 files changed, 113 insertions(+)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
index 6a3e3231e111..c8aef9508109 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
@@ -489,6 +489,10 @@ static int nfp_pci_probe(struct pci_dev *pdev,
 		goto err_disable_msix;
 	}
 
+	err = nfp_resource_table_init(pf->cpp);
+	if (err)
+		goto err_cpp_free;
+
 	pf->hwinfo = nfp_hwinfo_read(pf->cpp);
 
 	dev_info(&pdev->dev, "Assembly: %s%s%s-%s CPLD: %s\n",
@@ -551,6 +555,7 @@ static int nfp_pci_probe(struct pci_dev *pdev,
 	vfree(pf->dumpspec);
 err_hwinfo_free:
 	kfree(pf->hwinfo);
+err_cpp_free:
 	nfp_cpp_free(pf->cpp);
 err_disable_msix:
 	destroy_workqueue(pf->wq);
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp.h b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp.h
index ced62d112aa2..f44d0a857314 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp.h
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp.h
@@ -94,6 +94,8 @@ int nfp_nsp_read_sensors(struct nfp_nsp *state, unsigned int sensor_mask,
 /* MAC Statistics Accumulator */
 #define NFP_RESOURCE_MAC_STATISTICS	"mac.stat"
 
+int nfp_resource_table_init(struct nfp_cpp *cpp);
+
 struct nfp_resource *
 nfp_resource_acquire(struct nfp_cpp *cpp, const char *name);
 
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpp.h b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpp.h
index c8f2c064cce3..4e19add1c539 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpp.h
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpp.h
@@ -295,6 +295,8 @@ void nfp_cpp_mutex_free(struct nfp_cpp_mutex *mutex);
 int nfp_cpp_mutex_lock(struct nfp_cpp_mutex *mutex);
 int nfp_cpp_mutex_unlock(struct nfp_cpp_mutex *mutex);
 int nfp_cpp_mutex_trylock(struct nfp_cpp_mutex *mutex);
+int nfp_cpp_mutex_reclaim(struct nfp_cpp *cpp, int target,
+			  unsigned long long address);
 
 /**
  * nfp_cppcore_pcie_unit() - Get PCI Unit of a CPP handle
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_mutex.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_mutex.c
index cb28ac03e4ca..c88bf673cb76 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_mutex.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_mutex.c
@@ -59,6 +59,11 @@ static u32 nfp_mutex_unlocked(u16 interface)
 	return (u32)interface << 16 | 0x0000;
 }
 
+static u32 nfp_mutex_owner(u32 val)
+{
+	return val >> 16;
+}
+
 static bool nfp_mutex_is_locked(u32 val)
 {
 	return (val & 0xffff) == 0x000f;
@@ -351,3 +356,43 @@ int nfp_cpp_mutex_trylock(struct nfp_cpp_mutex *mutex)
 
 	return nfp_mutex_is_locked(tmp) ? -EBUSY : -EINVAL;
 }
+
+/**
+ * nfp_cpp_mutex_reclaim() - Unlock mutex if held by local endpoint
+ * @cpp:	NFP CPP handle
+ * @target:	NFP CPP target ID (ie NFP_CPP_TARGET_CLS or NFP_CPP_TARGET_MU)
+ * @address:	Offset into the address space of the NFP CPP target ID
+ *
+ * Release lock if held by local system.  Extreme care is advised, call only
+ * when no local lock users can exist.
+ *
+ * Return:      0 if the lock was OK, 1 if locked by us, -errno on invalid mutex
+ */
+int nfp_cpp_mutex_reclaim(struct nfp_cpp *cpp, int target,
+			  unsigned long long address)
+{
+	const u32 mur = NFP_CPP_ID(target, 3, 0);	/* atomic_read */
+	const u32 muw = NFP_CPP_ID(target, 4, 0);	/* atomic_write */
+	u16 interface = nfp_cpp_interface(cpp);
+	int err;
+	u32 tmp;
+
+	err = nfp_cpp_mutex_validate(interface, &target, address);
+	if (err)
+		return err;
+
+	/* Check lock */
+	err = nfp_cpp_readl(cpp, mur, address, &tmp);
+	if (err < 0)
+		return err;
+
+	if (nfp_mutex_is_unlocked(tmp) || nfp_mutex_owner(tmp) != interface)
+		return 0;
+
+	/* Bust the lock */
+	err = nfp_cpp_writel(cpp, muw, address, nfp_mutex_unlocked(interface));
+	if (err < 0)
+		return err;
+
+	return 1;
+}
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c
index 7e14725055c7..2dd89dba9311 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c
@@ -338,3 +338,62 @@ u64 nfp_resource_size(struct nfp_resource *res)
 {
 	return res->size;
 }
+
+/**
+ * nfp_resource_table_init() - Run initial checks on the resource table
+ * @cpp:	NFP CPP handle
+ *
+ * Start-of-day init procedure for resource table.  Must be called before
+ * any local resource table users may exist.
+ *
+ * Return: 0 on success, -errno on failure
+ */
+int nfp_resource_table_init(struct nfp_cpp *cpp)
+{
+	struct nfp_cpp_mutex *dev_mutex;
+	int i, err;
+
+	err = nfp_cpp_mutex_reclaim(cpp, NFP_RESOURCE_TBL_TARGET,
+				    NFP_RESOURCE_TBL_BASE);
+	if (err < 0) {
+		nfp_err(cpp, "Error: failed to reclaim resource table mutex\n");
+		return err;
+	}
+	if (err)
+		nfp_warn(cpp, "Warning: busted main resource table mutex\n");
+
+	dev_mutex = nfp_cpp_mutex_alloc(cpp, NFP_RESOURCE_TBL_TARGET,
+					NFP_RESOURCE_TBL_BASE,
+					NFP_RESOURCE_TBL_KEY);
+	if (!dev_mutex)
+		return -ENOMEM;
+
+	if (nfp_cpp_mutex_lock(dev_mutex)) {
+		nfp_err(cpp, "Error: failed to claim resource table mutex\n");
+		nfp_cpp_mutex_free(dev_mutex);
+		return -EINVAL;
+	}
+
+	/* Resource 0 is the dev_mutex, start from 1 */
+	for (i = 1; i < NFP_RESOURCE_TBL_ENTRIES; i++) {
+		u64 addr = NFP_RESOURCE_TBL_BASE +
+			sizeof(struct nfp_resource_entry) * i;
+
+		err = nfp_cpp_mutex_reclaim(cpp, NFP_RESOURCE_TBL_TARGET, addr);
+		if (err < 0) {
+			nfp_err(cpp,
+				"Error: failed to reclaim resource %d mutex\n",
+				i);
+			goto err_unlock;
+		}
+		if (err)
+			nfp_warn(cpp, "Warning: busted resource %d mutex\n", i);
+	}
+
+	err = 0;
+err_unlock:
+	nfp_cpp_mutex_unlock(dev_mutex);
+	nfp_cpp_mutex_free(dev_mutex);
+
+	return err;
+}
-- 
2.16.2


* [PATCH net-next 2/4] nfp: print PCIe link bandwidth on probe
From: Jakub Kicinski @ 2018-04-25  4:17 UTC
  To: davem; +Cc: netdev, oss-drivers, Jakub Kicinski

To aid debugging of performance issues caused by limited PCIe
bandwidth, print the PCIe link information at probe time.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
index cd678323bacb..a0e336bd1d85 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
@@ -1330,6 +1330,7 @@ struct nfp_cpp *nfp_cpp_from_nfp6000_pcie(struct pci_dev *pdev)
 	/*  Finished with card initialization. */
 	dev_info(&pdev->dev,
 		 "Netronome Flow Processor NFP4000/NFP6000 PCIe Card Probe\n");
+	pcie_print_link_status(pdev);
 
 	nfp = kzalloc(sizeof(*nfp), GFP_KERNEL);
 	if (!nfp) {
-- 
2.16.2


* [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie
From: Jakub Kicinski @ 2018-04-25  4:17 UTC
  To: davem; +Cc: netdev, oss-drivers, John Hurley

From: John Hurley <john.hurley@netronome.com>

When multiple netdevs are attached to a tc offload block and register
for callbacks, a rule added to the block will be propagated to all
netdevs. Previously these were detected as duplicates (based on cookie)
and rejected. Modify the nfp rule lookup function to optionally include
an ingress netdev and a host context along with the cookie value when
searching for a rule. When a new rule is passed to the driver, the
netdev the rule is to be attached to is considered when searching for
duplicates. When a stats update is received from HW, the host context is
used alongside the cookie to map to the correct host rule.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/flower/main.h   |  8 +++++--
 .../net/ethernet/netronome/nfp/flower/metadata.c   | 20 +++++++++-------
 .../net/ethernet/netronome/nfp/flower/offload.c    | 27 ++++++++++++++++------
 3 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index c67e1b54c614..9e6804bc9b40 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -47,6 +47,7 @@
 struct net_device;
 struct nfp_app;
 
+#define NFP_FL_STATS_CTX_DONT_CARE	cpu_to_be32(0xffffffff)
 #define NFP_FL_STATS_ENTRY_RS		BIT(20)
 #define NFP_FL_STATS_ELEM_RS		4
 #define NFP_FL_REPEATED_HASH_MAX	BIT(17)
@@ -189,6 +190,7 @@ struct nfp_fl_payload {
 	spinlock_t lock; /* lock stats */
 	struct nfp_fl_stats stats;
 	__be32 nfp_tun_ipv4_addr;
+	struct net_device *ingress_dev;
 	char *unmasked_data;
 	char *mask_data;
 	char *action_data;
@@ -216,12 +218,14 @@ int nfp_flower_compile_action(struct tc_cls_flower_offload *flow,
 			      struct nfp_fl_payload *nfp_flow);
 int nfp_compile_flow_metadata(struct nfp_app *app,
 			      struct tc_cls_flower_offload *flow,
-			      struct nfp_fl_payload *nfp_flow);
+			      struct nfp_fl_payload *nfp_flow,
+			      struct net_device *netdev);
 int nfp_modify_flow_metadata(struct nfp_app *app,
 			     struct nfp_fl_payload *nfp_flow);
 
 struct nfp_fl_payload *
-nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie);
+nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie,
+			   struct net_device *netdev, __be32 host_ctx);
 struct nfp_fl_payload *
 nfp_flower_remove_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie);
 
diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
index db977cf8e933..21668aa435e8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
@@ -99,14 +99,18 @@ static int nfp_get_stats_entry(struct nfp_app *app, u32 *stats_context_id)
 
 /* Must be called with either RTNL or rcu_read_lock */
 struct nfp_fl_payload *
-nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie)
+nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie,
+			   struct net_device *netdev, __be32 host_ctx)
 {
 	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_fl_payload *flower_entry;
 
 	hash_for_each_possible_rcu(priv->flow_table, flower_entry, link,
 				   tc_flower_cookie)
-		if (flower_entry->tc_flower_cookie == tc_flower_cookie)
+		if (flower_entry->tc_flower_cookie == tc_flower_cookie &&
+		    (!netdev || flower_entry->ingress_dev == netdev) &&
+		    (host_ctx == NFP_FL_STATS_CTX_DONT_CARE ||
+		     flower_entry->meta.host_ctx_id == host_ctx))
 			return flower_entry;
 
 	return NULL;
@@ -121,13 +125,11 @@ nfp_flower_update_stats(struct nfp_app *app, struct nfp_fl_stats_frame *stats)
 	flower_cookie = be64_to_cpu(stats->stats_cookie);
 
 	rcu_read_lock();
-	nfp_flow = nfp_flower_search_fl_table(app, flower_cookie);
+	nfp_flow = nfp_flower_search_fl_table(app, flower_cookie, NULL,
+					      stats->stats_con_id);
 	if (!nfp_flow)
 		goto exit_rcu_unlock;
 
-	if (nfp_flow->meta.host_ctx_id != stats->stats_con_id)
-		goto exit_rcu_unlock;
-
 	spin_lock(&nfp_flow->lock);
 	nfp_flow->stats.pkts += be32_to_cpu(stats->pkt_count);
 	nfp_flow->stats.bytes += be64_to_cpu(stats->byte_count);
@@ -317,7 +319,8 @@ nfp_check_mask_remove(struct nfp_app *app, char *mask_data, u32 mask_len,
 
 int nfp_compile_flow_metadata(struct nfp_app *app,
 			      struct tc_cls_flower_offload *flow,
-			      struct nfp_fl_payload *nfp_flow)
+			      struct nfp_fl_payload *nfp_flow,
+			      struct net_device *netdev)
 {
 	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_fl_payload *check_entry;
@@ -348,7 +351,8 @@ int nfp_compile_flow_metadata(struct nfp_app *app,
 	nfp_flow->stats.bytes = 0;
 	nfp_flow->stats.used = jiffies;
 
-	check_entry = nfp_flower_search_fl_table(app, flow->cookie);
+	check_entry = nfp_flower_search_fl_table(app, flow->cookie, netdev,
+						 NFP_FL_STATS_CTX_DONT_CARE);
 	if (check_entry) {
 		if (nfp_release_stats_entry(app, stats_cxt))
 			return -EINVAL;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 114d2ab02a38..bdc82e11a31e 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -419,6 +419,8 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
 		goto err_free_key_ls;
 	}
 
+	flow_pay->ingress_dev = egress ? NULL : netdev;
+
 	err = nfp_flower_compile_flow_match(flow, key_layer, netdev, flow_pay,
 					    tun_type);
 	if (err)
@@ -428,7 +430,8 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
 	if (err)
 		goto err_destroy_flow;
 
-	err = nfp_compile_flow_metadata(app, flow, flow_pay);
+	err = nfp_compile_flow_metadata(app, flow, flow_pay,
+					flow_pay->ingress_dev);
 	if (err)
 		goto err_destroy_flow;
 
@@ -462,6 +465,7 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
  * @app:	Pointer to the APP handle
  * @netdev:	netdev structure.
  * @flow:	TC flower classifier offload structure
+ * @egress:	Netdev is the egress dev.
  *
  * Removes a flow from the repeated hash structure and clears the
  * action payload.
@@ -470,13 +474,16 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
  */
 static int
 nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
-		       struct tc_cls_flower_offload *flow)
+		       struct tc_cls_flower_offload *flow, bool egress)
 {
 	struct nfp_port *port = nfp_port_from_netdev(netdev);
 	struct nfp_fl_payload *nfp_flow;
+	struct net_device *ingr_dev;
 	int err;
 
-	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie);
+	ingr_dev = egress ? NULL : netdev;
+	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev,
+					      NFP_FL_STATS_CTX_DONT_CARE);
 	if (!nfp_flow)
 		return -ENOENT;
 
@@ -505,7 +512,9 @@ nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
 /**
  * nfp_flower_get_stats() - Populates flow stats obtained from hardware.
  * @app:	Pointer to the APP handle
+ * @netdev:	Netdev structure.
  * @flow:	TC flower classifier offload structure
+ * @egress:	Netdev is the egress dev.
  *
  * Populates a flow statistics structure which which corresponds to a
  * specific flow.
@@ -513,11 +522,15 @@ nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
  * Return: negative value on error, 0 if stats populated successfully.
  */
 static int
-nfp_flower_get_stats(struct nfp_app *app, struct tc_cls_flower_offload *flow)
+nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
+		     struct tc_cls_flower_offload *flow, bool egress)
 {
 	struct nfp_fl_payload *nfp_flow;
+	struct net_device *ingr_dev;
 
-	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie);
+	ingr_dev = egress ? NULL : netdev;
+	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev,
+					      NFP_FL_STATS_CTX_DONT_CARE);
 	if (!nfp_flow)
 		return -EINVAL;
 
@@ -543,9 +556,9 @@ nfp_flower_repr_offload(struct nfp_app *app, struct net_device *netdev,
 	case TC_CLSFLOWER_REPLACE:
 		return nfp_flower_add_offload(app, netdev, flower, egress);
 	case TC_CLSFLOWER_DESTROY:
-		return nfp_flower_del_offload(app, netdev, flower);
+		return nfp_flower_del_offload(app, netdev, flower, egress);
 	case TC_CLSFLOWER_STATS:
-		return nfp_flower_get_stats(app, flower);
+		return nfp_flower_get_stats(app, netdev, flower, egress);
 	}
 
 	return -EOPNOTSUPP;
-- 
2.16.2


* [PATCH net-next 4/4] nfp: flower: ignore duplicate cb requests for same rule
From: Jakub Kicinski @ 2018-04-25  4:17 UTC
  To: davem; +Cc: netdev, oss-drivers, John Hurley

From: John Hurley <john.hurley@netronome.com>

If a flower rule has a repr as both its ingress and its egress port
then two callbacks may be generated for the same rule request.

Add an indicator to each flow recording whether or not it was added via
an ingress-registered callback. If so, ignore add/del/stats requests for
it that arrive via an egress callback.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/flower/main.h   |  1 +
 .../net/ethernet/netronome/nfp/flower/offload.c    | 23 +++++++++++++++++++---
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index 9e6804bc9b40..733ff53cc601 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -194,6 +194,7 @@ struct nfp_fl_payload {
 	char *unmasked_data;
 	char *mask_data;
 	char *action_data;
+	bool ingress_offload;
 };
 
 struct nfp_fl_stats_frame {
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index bdc82e11a31e..70ec9d821b91 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -345,7 +345,7 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 }
 
 static struct nfp_fl_payload *
-nfp_flower_allocate_new(struct nfp_fl_key_ls *key_layer)
+nfp_flower_allocate_new(struct nfp_fl_key_ls *key_layer, bool egress)
 {
 	struct nfp_fl_payload *flow_pay;
 
@@ -371,6 +371,8 @@ nfp_flower_allocate_new(struct nfp_fl_key_ls *key_layer)
 	flow_pay->meta.flags = 0;
 	spin_lock_init(&flow_pay->lock);
 
+	flow_pay->ingress_offload = !egress;
+
 	return flow_pay;
 
 err_free_mask:
@@ -402,8 +404,20 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
 	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_fl_payload *flow_pay;
 	struct nfp_fl_key_ls *key_layer;
+	struct net_device *ingr_dev;
 	int err;
 
+	ingr_dev = egress ? NULL : netdev;
+	flow_pay = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev,
+					      NFP_FL_STATS_CTX_DONT_CARE);
+	if (flow_pay) {
+		/* Ignore as duplicate if it has been added by different cb. */
+		if (flow_pay->ingress_offload && egress)
+			return 0;
+		else
+			return -EOPNOTSUPP;
+	}
+
 	key_layer = kmalloc(sizeof(*key_layer), GFP_KERNEL);
 	if (!key_layer)
 		return -ENOMEM;
@@ -413,7 +427,7 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
 	if (err)
 		goto err_free_key_ls;
 
-	flow_pay = nfp_flower_allocate_new(key_layer);
+	flow_pay = nfp_flower_allocate_new(key_layer, egress);
 	if (!flow_pay) {
 		err = -ENOMEM;
 		goto err_free_key_ls;
@@ -485,7 +499,7 @@ nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
 	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev,
 					      NFP_FL_STATS_CTX_DONT_CARE);
 	if (!nfp_flow)
-		return -ENOENT;
+		return egress ? 0 : -ENOENT;
 
 	err = nfp_modify_flow_metadata(app, nfp_flow);
 	if (err)
@@ -534,6 +548,9 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
 	if (!nfp_flow)
 		return -EINVAL;
 
+	if (nfp_flow->ingress_offload && egress)
+		return 0;
+
 	spin_lock_bh(&nfp_flow->lock);
 	tcf_exts_stats_update(flow->exts, nfp_flow->stats.bytes,
 			      nfp_flow->stats.pkts, nfp_flow->stats.used);
-- 
2.16.2


* Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie
From: Or Gerlitz @ 2018-04-25  6:31 UTC
  To: Jakub Kicinski, John Hurley
  Cc: David Miller, Linux Netdev List, oss-drivers, ASAP_Direct_Dev

On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
<jakub.kicinski@netronome.com> wrote:
> From: John Hurley <john.hurley@netronome.com>
>
> When multiple netdevs are attached to a tc offload block and register for
> callbacks, a rule added to the block will be propogated to all netdevs.
> Previously these were detected as duplicates (based on cookie) and
> rejected. Modify the rule nfp lookup function to optionally include an
> ingress netdev and a host context along with the cookie value when
> searching for a rule. When a new rule is passed to the driver, the netdev
> the rule is to be attached to is considered when searching for dublicates.

so if the same rule (cookie) is provided to the driver through multiple ingress
devices you will not reject it -- what is the use case for that, is it
block sharing?


* Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie
From: John Hurley @ 2018-04-25  8:51 UTC
  To: Or Gerlitz
  Cc: Jakub Kicinski, David Miller, Linux Netdev List, oss-drivers,
	ASAP_Direct_Dev

On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
> <jakub.kicinski@netronome.com> wrote:
>> From: John Hurley <john.hurley@netronome.com>
>>
>> When multiple netdevs are attached to a tc offload block and register for
>> callbacks, a rule added to the block will be propogated to all netdevs.
>> Previously these were detected as duplicates (based on cookie) and
>> rejected. Modify the rule nfp lookup function to optionally include an
>> ingress netdev and a host context along with the cookie value when
>> searching for a rule. When a new rule is passed to the driver, the netdev
>> the rule is to be attached to is considered when searching for dublicates.
>
> so if the same rule (cookie) is provided to the driver through multiple ingress
> devices you will not reject it -- what is the use case for that, is it
> block sharing?

Hi Or,
Yes, block sharing is the current use-case.
A simple example for clarity, where we want to offload the filter to
both ingress devs nfp_p0 and nfp_p1:

tc qdisc add dev nfp_p0 ingress_block 22 ingress
tc qdisc add dev nfp_p1 ingress_block 22 ingress
tc filter add block 22 protocol ip parent ffff: flower skip_sw \
    ip_proto tcp action drop


* Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie
From: Or Gerlitz @ 2018-04-25  8:56 UTC
  To: John Hurley
  Cc: Jakub Kicinski, David Miller, Linux Netdev List, oss-drivers,
	ASAP_Direct_Dev

On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>> <jakub.kicinski@netronome.com> wrote:
>>> From: John Hurley <john.hurley@netronome.com>
>>>
>>> When multiple netdevs are attached to a tc offload block and register for
>>> callbacks, a rule added to the block will be propogated to all netdevs.
>>> Previously these were detected as duplicates (based on cookie) and
>>> rejected. Modify the rule nfp lookup function to optionally include an
>>> ingress netdev and a host context along with the cookie value when
>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>> the rule is to be attached to is considered when searching for dublicates.
>>
>> so if the same rule (cookie) is provided to the driver through multiple ingress
>> devices you will not reject it -- what is the use case for that, is it
>> block sharing?
>
> Hi Or,
> Yes, block sharing is the current use-case.
> Simple example for clarity....
> Here we want to offload the filter to both ingress devs nfp_0 and nfp_1:
>
> tc qdisc add dev nfp_p0 ingress_block 22 ingress
> tc qdisc add dev nfp_p1 ingress_block 22 ingress
> tc filter add block 22 protocol ip parent ffff: flower skip_sw
> ip_proto tcp action drop

cool!

Just out of curiosity, do you actually share this HW rule or you duplicate it?


* Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie
From: John Hurley @ 2018-04-25  9:02 UTC
  To: Or Gerlitz
  Cc: Jakub Kicinski, David Miller, Linux Netdev List, oss-drivers,
	ASAP_Direct_Dev

On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>> <jakub.kicinski@netronome.com> wrote:
>>>> From: John Hurley <john.hurley@netronome.com>
>>>>
>>>> When multiple netdevs are attached to a tc offload block and register for
>>>> callbacks, a rule added to the block will be propogated to all netdevs.
>>>> Previously these were detected as duplicates (based on cookie) and
>>>> rejected. Modify the rule nfp lookup function to optionally include an
>>>> ingress netdev and a host context along with the cookie value when
>>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>>> the rule is to be attached to is considered when searching for dublicates.
>>>
>>> so if the same rule (cookie) is provided to the driver through multiple ingress
>>> devices you will not reject it -- what is the use case for that, is it
>>> block sharing?
>>
>> Hi Or,
>> Yes, block sharing is the current use-case.
>> Simple example for clarity....
>> Here we want to offload the filter to both ingress devs nfp_0 and nfp_1:
>>
>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>> tc filter add block 22 protocol ip parent ffff: flower skip_sw
>> ip_proto tcp action drop
>
> cool!
>
> Just out of curiosity, do you actually share this HW rule or you duplicate it?

It's duplicated.
At the HW level the ingress port is part of the match, so technically
it's a different rule.
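
(For reference, the fw match carries the port explicitly, roughly as:

struct nfp_flower_in_port {
        __be32 in_port;
};

so the same cookie offloaded via two reprs ends up as two distinct hw
entries. Sketch only; see flower/cmsg.h for the exact layout.)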


* Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie
From: Or Gerlitz @ 2018-04-25  9:13 UTC
  To: John Hurley
  Cc: Jakub Kicinski, David Miller, Linux Netdev List, oss-drivers,
	ASAP_Direct_Dev

On Wed, Apr 25, 2018 at 12:02 PM, John Hurley <john.hurley@netronome.com> wrote:
> On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
>>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>>> <jakub.kicinski@netronome.com> wrote:
>>>>> From: John Hurley <john.hurley@netronome.com>
>>>>>
>>>>> When multiple netdevs are attached to a tc offload block and register for
>>>>> callbacks, a rule added to the block will be propogated to all netdevs.
>>>>> Previously these were detected as duplicates (based on cookie) and
>>>>> rejected. Modify the rule nfp lookup function to optionally include an
>>>>> ingress netdev and a host context along with the cookie value when
>>>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>>>> the rule is to be attached to is considered when searching for dublicates.
>>>>
>>>> so if the same rule (cookie) is provided to the driver through multiple ingress
>>>> devices you will not reject it -- what is the use case for that, is it
>>>> block sharing?
>>>
>>> Hi Or,
>>> Yes, block sharing is the current use-case.
>>> Simple example for clarity....
>>> Here we want to offload the filter to both ingress devs nfp_0 and nfp_1:
>>>
>>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>>> tc filter add block 22 protocol ip parent ffff: flower skip_sw
>>> ip_proto tcp action drop
>>
>> cool!
>>
>> Just out of curiosity, do you actually share this HW rule or you duplicate it?
>
> It's duplicated. At HW level the ingress port is part of the match so technically it's
> a different rule.

I see, we also have a match on the ingress port as part of the HW API,
which means we will have to apply a similar practice if we want to
support block sharing quickly.

Just to make sure: under tc block sharing, the tc stack calls for hw
offloading of the same rule (same cookie) multiple times, each time with
a different ingress device, right?


Or.


* Re: [PATCH net-next 4/4] nfp: flower: ignore duplicate cb requests for same rule
From: Or Gerlitz @ 2018-04-25  9:17 UTC
  To: Jakub Kicinski; +Cc: David Miller, Linux Netdev List, oss-drivers, John Hurley

On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
<jakub.kicinski@netronome.com> wrote:
> From: John Hurley <john.hurley@netronome.com>
>
> If a flower rule has a repr both as ingress and egress port then 2
> callbacks may be generated for the same rule request.
>
> Add an indicator to each flow as to whether or not it was added from an
> ingress registered cb. If so then ignore add/del/stat requests to it from
> an egress cb.

So on add() you ignore (return success) - I wasn't sure from the patch
what you do for stat()/del() -- success? Why not err? As you know I am
working on the same patch for mlx5, so let's align here please.

Or.


* Re: [PATCH net-next 3/4] nfp: flower: support offloading multiple rules with same cookie
From: John Hurley @ 2018-04-25  9:27 UTC
  To: Or Gerlitz
  Cc: Jakub Kicinski, David Miller, Linux Netdev List, oss-drivers,
	ASAP_Direct_Dev

On Wed, Apr 25, 2018 at 10:13 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Wed, Apr 25, 2018 at 12:02 PM, John Hurley <john.hurley@netronome.com> wrote:
>> On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
>>>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>>>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>>>> <jakub.kicinski@netronome.com> wrote:
>>>>>> From: John Hurley <john.hurley@netronome.com>
>>>>>>
>>>>>> When multiple netdevs are attached to a tc offload block and register for
>>>>>> callbacks, a rule added to the block will be propogated to all netdevs.
>>>>>> Previously these were detected as duplicates (based on cookie) and
>>>>>> rejected. Modify the rule nfp lookup function to optionally include an
>>>>>> ingress netdev and a host context along with the cookie value when
>>>>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>>>>> the rule is to be attached to is considered when searching for dublicates.
>>>>>
>>>>> so if the same rule (cookie) is provided to the driver through multiple ingress
>>>>> devices you will not reject it -- what is the use case for that, is it
>>>>> block sharing?
>>>>
>>>> Hi Or,
>>>> Yes, block sharing is the current use-case.
>>>> Simple example for clarity....
>>>> Here we want to offload the filter to both ingress devs nfp_0 and nfp_1:
>>>>
>>>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>>>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>>>> tc filter add block 22 protocol ip parent ffff: flower skip_sw
>>>> ip_proto tcp action drop
>>>
>>> cool!
>>>
>>> Just out of curiosity, do you actually share this HW rule or you duplicate it?
>>
>> It's duplicated. At HW level the ingress port is part of the match so technically it's
>> a different rule.
>
> I see, we have also a match on the ingress port as part of the HW API, which
> means we will have to apply a similar practice if we want to support
> block sharing quickly.
>
> Just to make sure, under tc block sharing the tc stack calls for hw
> offloading of the
> same rule (same cookie) multiple times, each with different ingress
> device, right?
>
>
> Or.

So in the example above, when each qdisc add is called, a callback will
be registered with the block.
For each callback, the dev used is passed as priv data (presumably you
do the same).
When the filter is added, the block code triggers all callbacks with the
same rule data [1].
We differentiate the callbacks by the priv data (ingress dev).

[1] https://elixir.bootlin.com/linux/v4.17-rc2/source/net/sched/cls_api.c#L741
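
For reference, the bind path looks roughly like this (a sketch based on
the in-tree nfp flower block setup; exact names and signatures may
differ slightly in current net-next):

static int nfp_flower_setup_tc_block(struct net_device *netdev,
                                     struct tc_block_offload *f)
{
        struct nfp_repr *repr = netdev_priv(netdev);

        if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
                return -EOPNOTSUPP;

        switch (f->command) {
        case TC_BLOCK_BIND:
                /* the repr netdev is the cb_priv, so each callback knows
                 * which ingress device the block was bound through
                 */
                return tcf_block_cb_register(f->block,
                                             nfp_flower_setup_tc_block_cb,
                                             repr, repr);
        case TC_BLOCK_UNBIND:
                tcf_block_cb_unregister(f->block,
                                        nfp_flower_setup_tc_block_cb, repr);
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}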


* Re: [PATCH net-next 4/4] nfp: flower: ignore duplicate cb requests for same rule
From: John Hurley @ 2018-04-25  9:45 UTC
  To: Or Gerlitz; +Cc: Jakub Kicinski, David Miller, Linux Netdev List, oss-drivers

On Wed, Apr 25, 2018 at 10:17 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
> <jakub.kicinski@netronome.com> wrote:
>> From: John Hurley <john.hurley@netronome.com>
>>
>> If a flower rule has a repr both as ingress and egress port then 2
>> callbacks may be generated for the same rule request.
>>
>> Add an indicator to each flow as to whether or not it was added from an
>> ingress registered cb. If so then ignore add/del/stat requests to it from
>> an egress cb.
>
> So on add() you ignore (return success) - I wasn't sure from the patch
> what do you do for stat()/del() -- success? why not err? as you know I am
> working on the same patch for mlx5, lets align here please.
>
> Or.

OK, this is the way I've implemented the calls (a condensed sketch
follows below)...

add:
- if the egress cb sees a duplicate but the flow has already been added
via the ingress cb, then ignore (return success)
- if the egress cb sees a duplicate and the flow was not added via
ingress, then err (this is a 'true duplicate')

stats:
- if the egress cb fires but the flow has an ingress cb, then ignore -
the stats request will have been covered by the ingress cb, so don't
error, just don't repeat

del:
- if the egress cb fires and the flow no longer exists, then assume it
was deleted via ingress and ignore (return success)
- if the ingress cb fires and the flow no longer exists, then (as the
ingress cb is hit first) this is a bad request trying to delete a flow
that was never added - return err
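
Condensed into pseudo-C (purely illustrative, not the literal driver
code; the enum and helper names are made up for this summary):

enum flow_cb_op { FLOW_CB_ADD, FLOW_CB_STATS, FLOW_CB_DEL };
enum cb_decision { CB_PROCEED, CB_IGNORE, CB_ERR_DUPLICATE, CB_ERR_NOENT };

/* flow_exists:   a flow with this cookie was found in the table
 * ingress_owned: that flow was added via an ingress-registered cb
 * egress_cb:     the current callback was registered for egress matching
 */
static enum cb_decision decide(bool flow_exists, bool ingress_owned,
                               bool egress_cb, enum flow_cb_op op)
{
        switch (op) {
        case FLOW_CB_ADD:
                if (!flow_exists)
                        return CB_PROCEED;
                return ingress_owned && egress_cb ? CB_IGNORE : CB_ERR_DUPLICATE;
        case FLOW_CB_STATS:
                return ingress_owned && egress_cb ? CB_IGNORE : CB_PROCEED;
        case FLOW_CB_DEL:
                if (flow_exists)
                        return CB_PROCEED;
                return egress_cb ? CB_IGNORE : CB_ERR_NOENT;
        }
        return CB_ERR_DUPLICATE;
}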


* Re: [PATCH net-next 0/4] nfp: flower tc block support and nfp PCI updates
From: David Miller @ 2018-04-25 18:07 UTC
  To: jakub.kicinski; +Cc: netdev, oss-drivers

From: Jakub Kicinski <jakub.kicinski@netronome.com>
Date: Tue, 24 Apr 2018 21:17:00 -0700

> This series improves the nfp PCIe code by making use of the new
> pcie_print_link_status() helper and resetting NFP locks when
> driver loads.  This can help us avoid lock ups after host crashes
> and is rebooted with PCIe reset or when kdump kernel is loaded.
> 
> The flower changes come from John, he says:
> 
> This patchset fixes offload issues when multiple repr netdevs are bound to
> a tc block and filter rules added. Previously the rule would be passed to
> the reprs and would be rejected in all but the first as the cookie value
> will indicate a duplicate. The first patch extends the flow lookup
> function to consider both host context and ingress netdev along with the
> cookie value. This means that a rule with a given cookie can exist
> multiple times assuming the ingress netdev is different. The host context
> ensures that stats from fw are associated with the correct instance of the
> rule.
> 
> The second patch protects against rejecting add/del/stat messages when a
> rule has a repr as both an ingress port and an egress dev. In such cases a
> callback can be triggered twice (once for ingress and once for egress)
> and can lead to duplicate rule detection or incorrect double calls.

Series applied, thanks Jakub.
