* [PATCH net-next 00/14] gred: add offload support
@ 2018-11-19 23:21 Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 01/14] nfp: abm: map per-band symbols Jakub Kicinski
                   ` (14 more replies)
  0 siblings, 15 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Hi!

This series adds support for GRED offload in the nfp driver.  So
far we have only supported the RED Qdisc offload, but we need a
way to differentiate traffic types e.g. based on DSCP marking.

It may seem like PRIO+RED is a good match for this job; however,
(a) we don't need the strict priority behaviour of PRIO, and (b) PRIO
uses the legacy way of mapping ToS fields to bands, which is quite
awkward and limiting.

The less commonly used GRED Qdisc is a better match for the scenario:
it allows multiple sets of RED parameters and queue lengths to be
maintained with a single FIFO queue.  This is exactly how the nfp
offload behaves.  We use a trivial u32 classifier to assign packets
to virtual queues.

There is also the minor advantage that GRED can't have its child
changed, which limits the ways in which the configuration of the SW
path can diverge from the HW offload.

The last patch of the series adds support for (G)RED in non-ECN mode,
where packets are dropped instead of marked.
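
On the driver side the new offload is dispatched through
->ndo_setup_tc() just like the existing RED offload.  A minimal
sketch, with the hypothetical foo_* names standing in for a real
driver (only TC_SETUP_QDISC_GRED below comes from this series):

  static int foo_setup_tc(struct net_device *netdev,
                          enum tc_setup_type type, void *type_data)
  {
          switch (type) {
          case TC_SETUP_QDISC_GRED:
                  /* type_data is struct tc_gred_qopt_offload */
                  return foo_setup_tc_gred(netdev, type_data);
          default:
                  return -EOPNOTSUPP;
          }
  }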


Jakub Kicinski (14):
  nfp: abm: map per-band symbols
  nfp: abm: pass band parameter to functions
  nfp: abm: size threshold table to account for bands
  nfp: abm: switch to extended stats for reading packet/byte counts
  nfp: abm: add up bands for sto/non-sto stats
  net: sched: gred: add basic Qdisc offload
  net: sched: gred: support reporting stats from offloads
  nfp: abm: wrap RED parameters in bands
  nfp: abm: add GRED offload
  net: sched: cls_u32: add res to offload information
  nfp: abm: calculate PRIO map len and check mailbox size
  nfp: abm: add functions to update DSCP -> virtual queue map
  nfp: abm: add cls_u32 offload for simple band classification
  nfp: abm: add support for more threshold actions

 drivers/net/ethernet/netronome/nfp/Makefile   |   1 +
 drivers/net/ethernet/netronome/nfp/abm/cls.c  | 283 +++++++++++++++++
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 287 +++++++++++++++---
 drivers/net/ethernet/netronome/nfp/abm/main.c |  48 ++-
 drivers/net/ethernet/netronome/nfp/abm/main.h | 113 ++++++-
 .../net/ethernet/netronome/nfp/abm/qdisc.c    | 279 ++++++++++++++---
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |   1 +
 .../ethernet/netronome/nfp/nfp_net_common.c   |   2 +-
 .../net/ethernet/netronome/nfp/nfp_net_ctrl.h |   2 +
 include/linux/netdevice.h                     |   1 +
 include/net/pkt_cls.h                         |  45 +++
 net/sched/cls_u32.c                           |   2 +
 net/sched/sch_gred.c                          |  94 ++++++
 13 files changed, 1042 insertions(+), 116 deletions(-)
 create mode 100644 drivers/net/ethernet/netronome/nfp/abm/cls.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH net-next 01/14] nfp: abm: map per-band symbols
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 02/14] nfp: abm: pass band parameter to functions Jakub Kicinski
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

In preparation for multi-band RED offload, map the extended symbols
if the FW is capable.  These will allow us to set per-band parameters
and read per-band stats.
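With the per-band layout the queue level symbol for PF 0, for
example, becomes "_abi_nfd_out_q_lvls_0_per_band" rather than
"_abi_nfd_out_q_lvls_0".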

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 56 ++++++++++++++-----
 drivers/net/ethernet/netronome/nfp/abm/main.h | 11 ++++
 2 files changed, 54 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index 1629b07f727b..d9c5fff97547 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -2,6 +2,7 @@
 /* Copyright (C) 2018 Netronome Systems, Inc. */
 
 #include <linux/kernel.h>
+#include <linux/log2.h>
 
 #include "../nfpcore/nfp_cpp.h"
 #include "../nfpcore/nfp_nffw.h"
@@ -11,13 +12,16 @@
 #include "../nfp_net.h"
 #include "main.h"
 
-#define NFP_QLVL_SYM_NAME	"_abi_nfd_out_q_lvls_%u"
+#define NFP_NUM_PRIOS_SYM_NAME	"_abi_pci_dscp_num_prio_%u"
+#define NFP_NUM_BANDS_SYM_NAME	"_abi_pci_dscp_num_band_%u"
+
+#define NFP_QLVL_SYM_NAME	"_abi_nfd_out_q_lvls_%u%s"
 #define NFP_QLVL_STRIDE		16
 #define NFP_QLVL_BLOG_BYTES	0
 #define NFP_QLVL_BLOG_PKTS	4
 #define NFP_QLVL_THRS		8
 
-#define NFP_QMSTAT_SYM_NAME	"_abi_nfdqm%u_stats"
+#define NFP_QMSTAT_SYM_NAME	"_abi_nfdqm%u_stats%s"
 #define NFP_QMSTAT_STRIDE	32
 #define NFP_QMSTAT_NON_STO	0
 #define NFP_QMSTAT_STO		8
@@ -189,30 +193,56 @@ nfp_abm_ctrl_find_rtsym(struct nfp_pf *pf, const char *name, unsigned int size)
 }
 
 static const struct nfp_rtsym *
-nfp_abm_ctrl_find_q_rtsym(struct nfp_pf *pf, const char *name,
-			  unsigned int size)
+nfp_abm_ctrl_find_q_rtsym(struct nfp_abm *abm, const char *name_fmt,
+			  size_t size)
 {
-	return nfp_abm_ctrl_find_rtsym(pf, name, size * NFP_NET_MAX_RX_RINGS);
+	char pf_symbol[64];
+
+	size = array3_size(size, abm->num_bands, NFP_NET_MAX_RX_RINGS);
+	snprintf(pf_symbol, sizeof(pf_symbol), name_fmt,
+		 abm->pf_id, nfp_abm_has_prio(abm) ? "_per_band" : "");
+
+	return nfp_abm_ctrl_find_rtsym(abm->app->pf, pf_symbol, size);
 }
 
 int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm)
 {
 	struct nfp_pf *pf = abm->app->pf;
 	const struct nfp_rtsym *sym;
-	unsigned int pf_id;
-	char pf_symbol[64];
+	int res;
 
-	pf_id =	nfp_cppcore_pcie_unit(pf->cpp);
-	abm->pf_id = pf_id;
+	abm->pf_id = nfp_cppcore_pcie_unit(pf->cpp);
+
+	/* Read count of prios and prio bands */
+	res = nfp_pf_rtsym_read_optional(pf, NFP_NUM_BANDS_SYM_NAME, 1);
+	if (res < 0)
+		return res;
+	abm->num_bands = res;
+
+	res = nfp_pf_rtsym_read_optional(pf, NFP_NUM_PRIOS_SYM_NAME, 1);
+	if (res < 0)
+		return res;
+	abm->num_prios = res;
+
+	/* Check values are sane, U16_MAX is arbitrarily chosen as max */
+	if (!is_power_of_2(abm->num_bands) || !is_power_of_2(abm->num_prios) ||
+	    abm->num_bands > U16_MAX || abm->num_prios > U16_MAX ||
+	    (abm->num_bands == 1) != (abm->num_prios == 1)) {
+		nfp_err(pf->cpp,
+			"invalid priomap description num bands: %u and num prios: %u\n",
+			abm->num_bands, abm->num_prios);
+		return -EINVAL;
+	}
 
-	snprintf(pf_symbol, sizeof(pf_symbol), NFP_QLVL_SYM_NAME, pf_id);
-	sym = nfp_abm_ctrl_find_q_rtsym(pf, pf_symbol, NFP_QLVL_STRIDE);
+	/* Find level and stat symbols */
+	sym = nfp_abm_ctrl_find_q_rtsym(abm, NFP_QLVL_SYM_NAME,
+					NFP_QLVL_STRIDE);
 	if (IS_ERR(sym))
 		return PTR_ERR(sym);
 	abm->q_lvls = sym;
 
-	snprintf(pf_symbol, sizeof(pf_symbol), NFP_QMSTAT_SYM_NAME, pf_id);
-	sym = nfp_abm_ctrl_find_q_rtsym(pf, pf_symbol, NFP_QMSTAT_STRIDE);
+	sym = nfp_abm_ctrl_find_q_rtsym(abm, NFP_QMSTAT_SYM_NAME,
+					NFP_QMSTAT_STRIDE);
 	if (IS_ERR(sym))
 		return PTR_ERR(sym);
 	abm->qm_stats = sym;
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index 240e2c8683fe..b10c067b15c8 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -27,6 +27,9 @@ struct nfp_net;
  * @app:	back pointer to nfp_app
  * @pf_id:	ID of our PF link
  *
+ * @num_prios:	number of supported DSCP priorities
+ * @num_bands:	number of supported DSCP priority bands
+ *
  * @thresholds:		current threshold configuration
  * @threshold_undef:	bitmap of thresholds which have not been set
  * @num_thresholds:	number of @thresholds and bits in @threshold_undef
@@ -40,6 +43,9 @@ struct nfp_abm {
 	struct nfp_app *app;
 	unsigned int pf_id;
 
+	unsigned int num_prios;
+	unsigned int num_bands;
+
 	u32 *thresholds;
 	unsigned long *threshold_undef;
 	size_t num_thresholds;
@@ -166,6 +172,11 @@ struct nfp_abm_link {
 	struct radix_tree_root qdiscs;
 };
 
+static inline bool nfp_abm_has_prio(struct nfp_abm *abm)
+{
+	return abm->num_bands > 1;
+}
+
 void nfp_abm_qdisc_offload_update(struct nfp_abm_link *alink);
 int nfp_abm_setup_root(struct net_device *netdev, struct nfp_abm_link *alink,
 		       struct tc_root_qopt_offload *opt);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 02/14] nfp: abm: pass band parameter to functions
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 01/14] nfp: abm: map per-band symbols Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 03/14] nfp: abm: size threshold table to account for bands Jakub Kicinski
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

In preparation for per-band RED offload, pass a band parameter to
the relevant functions.  For now it will always be 0.
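All bands share one table, so the entry for a given band and queue
lives at band * NFP_NET_MAX_RX_RINGS + queue_base + queue; e.g.,
assuming 64 max RX rings, band 1, queue 3 of a vNIC with queue base 8
maps to entry 64 + 8 + 3 = 75.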

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 52 ++++++++++---------
 drivers/net/ethernet/netronome/nfp/abm/main.h | 10 ++--
 .../net/ethernet/netronome/nfp/abm/qdisc.c    | 30 +++++------
 3 files changed, 49 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index d9c5fff97547..8b2598a223de 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -30,23 +30,25 @@
 
 static int
 nfp_abm_ctrl_stat(struct nfp_abm_link *alink, const struct nfp_rtsym *sym,
-		  unsigned int stride, unsigned int offset, unsigned int i,
-		  bool is_u64, u64 *res)
+		  unsigned int stride, unsigned int offset, unsigned int band,
+		  unsigned int queue, bool is_u64, u64 *res)
 {
 	struct nfp_cpp *cpp = alink->abm->app->cpp;
 	u64 val, sym_offset;
+	unsigned int qid;
 	u32 val32;
 	int err;
 
-	sym_offset = (alink->queue_base + i) * stride + offset;
+	qid = band * NFP_NET_MAX_RX_RINGS + alink->queue_base + queue;
+
+	sym_offset = qid * stride + offset;
 	if (is_u64)
 		err = __nfp_rtsym_readq(cpp, sym, 3, 0, sym_offset, &val);
 	else
 		err = __nfp_rtsym_readl(cpp, sym, 3, 0, sym_offset, &val32);
 	if (err) {
-		nfp_err(cpp,
-			"RED offload reading stat failed on vNIC %d queue %d\n",
-			alink->id, i);
+		nfp_err(cpp, "RED offload reading stat failed on vNIC %d band %d queue %d (+ %d)\n",
+			alink->id, band, queue, alink->queue_base);
 		return err;
 	}
 
@@ -77,12 +79,12 @@ int __nfp_abm_ctrl_set_q_lvl(struct nfp_abm *abm, unsigned int id, u32 val)
 	return 0;
 }
 
-int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int queue,
-			   u32 val)
+int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band,
+			   unsigned int queue, u32 val)
 {
 	unsigned int threshold;
 
-	threshold = alink->queue_base + queue;
+	threshold = band * NFP_NET_MAX_RX_RINGS + alink->queue_base + queue;
 
 	return __nfp_abm_ctrl_set_q_lvl(alink->abm, threshold, val);
 }
@@ -92,7 +94,7 @@ u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int i)
 	u64 val;
 
 	if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE,
-			      NFP_QMSTAT_NON_STO, i, true, &val))
+			      NFP_QMSTAT_NON_STO, 0, i, true, &val))
 		return 0;
 	return val;
 }
@@ -102,56 +104,58 @@ u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i)
 	u64 val;
 
 	if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE,
-			      NFP_QMSTAT_STO, i, true, &val))
+			      NFP_QMSTAT_STO, 0, i, true, &val))
 		return 0;
 	return val;
 }
 
-int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, unsigned int i,
-			      struct nfp_alink_stats *stats)
+int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, unsigned int band,
+			      unsigned int queue, struct nfp_alink_stats *stats)
 {
 	int err;
 
-	stats->tx_pkts = nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(i));
-	stats->tx_bytes = nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(i) + 8);
+	stats->tx_pkts += nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(queue));
+	stats->tx_bytes += nn_readq(alink->vnic,
+				    NFP_NET_CFG_RXR_STATS(queue) + 8);
 
-	err = nfp_abm_ctrl_stat(alink, alink->abm->q_lvls,
-				NFP_QLVL_STRIDE, NFP_QLVL_BLOG_BYTES,
-				i, false, &stats->backlog_bytes);
+	err = nfp_abm_ctrl_stat(alink, alink->abm->q_lvls, NFP_QLVL_STRIDE,
+				NFP_QLVL_BLOG_BYTES, band, queue, false,
+				&stats->backlog_bytes);
 	if (err)
 		return err;
 
 	err = nfp_abm_ctrl_stat(alink, alink->abm->q_lvls,
 				NFP_QLVL_STRIDE, NFP_QLVL_BLOG_PKTS,
-				i, false, &stats->backlog_pkts);
+				band, queue, false, &stats->backlog_pkts);
 	if (err)
 		return err;
 
 	err = nfp_abm_ctrl_stat(alink, alink->abm->qm_stats,
 				NFP_QMSTAT_STRIDE, NFP_QMSTAT_DROP,
-				i, true, &stats->drops);
+				band, queue, true, &stats->drops);
 	if (err)
 		return err;
 
 	return nfp_abm_ctrl_stat(alink, alink->abm->qm_stats,
 				 NFP_QMSTAT_STRIDE, NFP_QMSTAT_ECN,
-				 i, true, &stats->overlimits);
+				 band, queue, true, &stats->overlimits);
 }
 
-int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink, unsigned int i,
+int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink,
+			       unsigned int band, unsigned int queue,
 			       struct nfp_alink_xstats *xstats)
 {
 	int err;
 
 	err = nfp_abm_ctrl_stat(alink, alink->abm->qm_stats,
 				NFP_QMSTAT_STRIDE, NFP_QMSTAT_DROP,
-				i, true, &xstats->pdrop);
+				band, queue, true, &xstats->pdrop);
 	if (err)
 		return err;
 
 	return nfp_abm_ctrl_stat(alink, alink->abm->qm_stats,
 				 NFP_QMSTAT_STRIDE, NFP_QMSTAT_ECN,
-				 i, true, &xstats->ecn_marked);
+				 band, queue, true, &xstats->ecn_marked);
 }
 
 int nfp_abm_ctrl_qm_enable(struct nfp_abm *abm)
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index b10c067b15c8..b18a699dac50 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -188,11 +188,13 @@ int nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink,
 void nfp_abm_ctrl_read_params(struct nfp_abm_link *alink);
 int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm);
 int __nfp_abm_ctrl_set_q_lvl(struct nfp_abm *abm, unsigned int id, u32 val);
-int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int queue,
-			   u32 val);
-int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, unsigned int i,
+int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band,
+			   unsigned int queue, u32 val);
+int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink,
+			      unsigned int band, unsigned int queue,
 			      struct nfp_alink_stats *stats);
-int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink, unsigned int i,
+int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink,
+			       unsigned int band, unsigned int queue,
 			       struct nfp_alink_xstats *xstats);
 u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int i);
 u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i);
diff --git a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
index 16c4afe3a37f..251ce3070564 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
@@ -51,15 +51,15 @@ nfp_abm_stats_update_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 	if (!qdisc->offloaded)
 		return;
 
-	err = nfp_abm_ctrl_read_q_stats(alink, queue, &qdisc->red.stats);
+	err = nfp_abm_ctrl_read_q_stats(alink, 0, queue, &qdisc->red.stats);
 	if (err)
-		nfp_err(cpp, "RED stats (%d) read failed with error %d\n",
-			queue, err);
+		nfp_err(cpp, "RED stats (%d, %d) read failed with error %d\n",
+			0, queue, err);
 
-	err = nfp_abm_ctrl_read_q_xstats(alink, queue, &qdisc->red.xstats);
+	err = nfp_abm_ctrl_read_q_xstats(alink, 0, queue, &qdisc->red.xstats);
 	if (err)
-		nfp_err(cpp, "RED xstats (%d) read failed with error %d\n",
-			queue, err);
+		nfp_err(cpp, "RED xstats (%d, %d) read failed with error %d\n",
+			0, queue, err);
 }
 
 static void
@@ -126,7 +126,7 @@ nfp_abm_qdisc_offload_stop(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc)
 }
 
 static int
-__nfp_abm_stats_init(struct nfp_abm_link *alink,
+__nfp_abm_stats_init(struct nfp_abm_link *alink, unsigned int band,
 		     unsigned int queue, struct nfp_alink_stats *prev_stats,
 		     struct nfp_alink_xstats *prev_xstats)
 {
@@ -139,19 +139,19 @@ __nfp_abm_stats_init(struct nfp_abm_link *alink,
 	backlog_pkts = prev_stats->backlog_pkts;
 	backlog_bytes = prev_stats->backlog_bytes;
 
-	err = nfp_abm_ctrl_read_q_stats(alink, queue, prev_stats);
+	err = nfp_abm_ctrl_read_q_stats(alink, band, queue, prev_stats);
 	if (err) {
 		nfp_err(alink->abm->app->cpp,
-			"RED stats init (%d) failed with error %d\n",
-			queue, err);
+			"RED stats init (%d, %d) failed with error %d\n",
+			band, queue, err);
 		return err;
 	}
 
-	err = nfp_abm_ctrl_read_q_xstats(alink, queue, prev_xstats);
+	err = nfp_abm_ctrl_read_q_xstats(alink, band, queue, prev_xstats);
 	if (err) {
 		nfp_err(alink->abm->app->cpp,
-			"RED xstats init (%d) failed with error %d\n",
-			queue, err);
+			"RED xstats init (%d, %d) failed with error %d\n",
+			band, queue, err);
 		return err;
 	}
 
@@ -164,7 +164,7 @@ static int
 nfp_abm_stats_init(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 		   unsigned int queue)
 {
-	return __nfp_abm_stats_init(alink, queue,
+	return __nfp_abm_stats_init(alink, 0, queue,
 				    &qdisc->red.prev_stats,
 				    &qdisc->red.prev_xstats);
 }
@@ -186,7 +186,7 @@ nfp_abm_offload_compile_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 	if (!qdisc->offload_mark)
 		return;
 
-	nfp_abm_ctrl_set_q_lvl(alink, queue, qdisc->red.threshold);
+	nfp_abm_ctrl_set_q_lvl(alink, 0, queue, qdisc->red.threshold);
 }
 
 static void
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 03/14] nfp: abm: size threshold table to account for bands
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 01/14] nfp: abm: map per-band symbols Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 02/14] nfp: abm: pass band parameter to functions Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 04/14] nfp: abm: switch to extended stats for reading packet/byte counts Jakub Kicinski
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Make sure the threshold table is large enough to hold information
for all bands.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c
index a5732d3bd1b7..b21250b95475 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.c
@@ -422,7 +422,7 @@ static int nfp_abm_init(struct nfp_app *app)
 		goto err_free_abm;
 
 	err = -ENOMEM;
-	abm->num_thresholds = NFP_NET_MAX_RX_RINGS;
+	abm->num_thresholds = array_size(abm->num_bands, NFP_NET_MAX_RX_RINGS);
 	abm->threshold_undef = bitmap_zalloc(abm->num_thresholds, GFP_KERNEL);
 	if (!abm->threshold_undef)
 		goto err_free_abm;
@@ -431,7 +431,7 @@ static int nfp_abm_init(struct nfp_app *app)
 				   sizeof(*abm->thresholds), GFP_KERNEL);
 	if (!abm->thresholds)
 		goto err_free_thresh_umap;
-	for (i = 0; i < NFP_NET_MAX_RX_RINGS; i++)
+	for (i = 0; i < abm->num_bands * NFP_NET_MAX_RX_RINGS; i++)
 		__nfp_abm_ctrl_set_q_lvl(abm, i, NFP_ABM_LVL_INFINITY);
 
 	/* We start in legacy mode, make sure advanced queuing is disabled */
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 04/14] nfp: abm: switch to extended stats for reading packet/byte counts
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (2 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 03/14] nfp: abm: size threshold table to account for bands Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 05/14] nfp: abm: add up bands for sto/non-sto stats Jakub Kicinski
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

In PRIO-enabled FW, read the statistics from the per-band symbol
rather than from the standard per-PCIe-queue counters.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 47 +++++++++++++++++--
 drivers/net/ethernet/netronome/nfp/abm/main.h |  2 +
 2 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index 8b2598a223de..013ba6c85d2b 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -28,6 +28,11 @@
 #define NFP_QMSTAT_DROP		16
 #define NFP_QMSTAT_ECN		24
 
+#define NFP_Q_STAT_SYM_NAME	"_abi_nfd_rxq_stats%u%s"
+#define NFP_Q_STAT_STRIDE	16
+#define NFP_Q_STAT_PKTS		0
+#define NFP_Q_STAT_BYTES	8
+
 static int
 nfp_abm_ctrl_stat(struct nfp_abm_link *alink, const struct nfp_rtsym *sym,
 		  unsigned int stride, unsigned int offset, unsigned int band,
@@ -109,14 +114,42 @@ u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i)
 	return val;
 }
 
+static int
+nfp_abm_ctrl_stat_basic(struct nfp_abm_link *alink, unsigned int band,
+			unsigned int queue, unsigned int off, u64 *val)
+{
+	if (!nfp_abm_has_prio(alink->abm)) {
+		if (!band) {
+			unsigned int id = alink->queue_base + queue;
+
+			*val = nn_readq(alink->vnic,
+					NFP_NET_CFG_RXR_STATS(id) + off);
+		} else {
+			*val = 0;
+		}
+
+		return 0;
+	} else {
+		return nfp_abm_ctrl_stat(alink, alink->abm->q_stats,
+					 NFP_Q_STAT_STRIDE, off, band, queue,
+					 true, val);
+	}
+}
+
 int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, unsigned int band,
 			      unsigned int queue, struct nfp_alink_stats *stats)
 {
 	int err;
 
-	stats->tx_pkts += nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(queue));
-	stats->tx_bytes += nn_readq(alink->vnic,
-				    NFP_NET_CFG_RXR_STATS(queue) + 8);
+	err = nfp_abm_ctrl_stat_basic(alink, band, queue, NFP_Q_STAT_PKTS,
+				      &stats->tx_pkts);
+	if (err)
+		return err;
+
+	err = nfp_abm_ctrl_stat_basic(alink, band, queue, NFP_Q_STAT_BYTES,
+				      &stats->tx_bytes);
+	if (err)
+		return err;
 
 	err = nfp_abm_ctrl_stat(alink, alink->abm->q_lvls, NFP_QLVL_STRIDE,
 				NFP_QLVL_BLOG_BYTES, band, queue, false,
@@ -251,5 +284,13 @@ int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm)
 		return PTR_ERR(sym);
 	abm->qm_stats = sym;
 
+	if (nfp_abm_has_prio(abm)) {
+		sym = nfp_abm_ctrl_find_q_rtsym(abm, NFP_Q_STAT_SYM_NAME,
+						NFP_Q_STAT_STRIDE);
+		if (IS_ERR(sym))
+			return PTR_ERR(sym);
+		abm->q_stats = sym;
+	}
+
 	return 0;
 }
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index b18a699dac50..054228c29184 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -38,6 +38,7 @@ struct nfp_net;
  *			in switchdev mode
  * @q_lvls:	queue level control area
  * @qm_stats:	queue statistics symbol
+ * @q_stats:	basic queue statistics (only in per-band case)
  */
 struct nfp_abm {
 	struct nfp_app *app;
@@ -53,6 +54,7 @@ struct nfp_abm {
 	enum devlink_eswitch_mode eswitch_mode;
 	const struct nfp_rtsym *q_lvls;
 	const struct nfp_rtsym *qm_stats;
+	const struct nfp_rtsym *q_stats;
 };
 
 /**
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 05/14] nfp: abm: add up bands for sto/non-sto stats
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (3 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 04/14] nfp: abm: switch to extended stats for reading packet/byte counts Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 06/14] net: sched: gred: add basic Qdisc offload Jakub Kicinski
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Add up the stats across all bands for the extra ethtool statistics.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 36 ++++++++++++-------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index 013ba6c85d2b..10a571b5b565 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -94,24 +94,36 @@ int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band,
 	return __nfp_abm_ctrl_set_q_lvl(alink->abm, threshold, val);
 }
 
-u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int i)
+u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int queue)
 {
-	u64 val;
+	unsigned int band;
+	u64 val, sum = 0;
+
+	for (band = 0; band < alink->abm->num_bands; band++) {
+		if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats,
+				      NFP_QMSTAT_STRIDE, NFP_QMSTAT_NON_STO,
+				      band, queue, true, &val))
+			return 0;
+		sum += val;
+	}
 
-	if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE,
-			      NFP_QMSTAT_NON_STO, 0, i, true, &val))
-		return 0;
-	return val;
+	return sum;
 }
 
-u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i)
+u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int queue)
 {
-	u64 val;
+	unsigned int band;
+	u64 val, sum = 0;
+
+	for (band = 0; band < alink->abm->num_bands; band++) {
+		if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats,
+				      NFP_QMSTAT_STRIDE, NFP_QMSTAT_STO,
+				      band, queue, true, &val))
+			return 0;
+		sum += val;
+	}
 
-	if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE,
-			      NFP_QMSTAT_STO, 0, i, true, &val))
-		return 0;
-	return val;
+	return sum;
 }
 
 static int
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 06/14] net: sched: gred: add basic Qdisc offload
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (4 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 05/14] nfp: abm: add up bands for sto/non-sto stats Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 07/14] net: sched: gred: support reporting stats from offloads Jakub Kicinski
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Add basic offload for the GRED Qdisc.  Inform the drivers any time
the Qdisc or virtual queue configuration changes.
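
A driver consuming the new offload mostly needs to look at the set
parameters on replace.  A rough, purely illustrative sketch (the
foo_* helpers are made up):

  static int foo_setup_tc_gred(struct net_device *netdev,
                               struct tc_gred_qopt_offload *opt)
  {
          unsigned int i;

          switch (opt->command) {
          case TC_GRED_REPLACE:
                  for (i = 0; i < opt->set.dp_cnt; i++) {
                          if (!opt->set.tab[i].present)
                                  continue;
                          /* e.g. program the per-band marking threshold */
                          foo_set_band_threshold(netdev, i,
                                                 opt->set.tab[i].min);
                  }
                  return 0;
          case TC_GRED_DESTROY:
                  foo_clear_offload(netdev);
                  return 0;
          default:
                  return -EOPNOTSUPP;
          }
  }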

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 include/linux/netdevice.h |  1 +
 include/net/pkt_cls.h     | 36 ++++++++++++++++++++++++++++++
 net/sched/sch_gred.c      | 47 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 84 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 086e64d88597..4b4207ebd5c0 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -846,6 +846,7 @@ enum tc_setup_type {
 	TC_SETUP_QDISC_MQ,
 	TC_SETUP_QDISC_ETF,
 	TC_SETUP_ROOT_QDISC,
+	TC_SETUP_QDISC_GRED,
 };
 
 /* These structures hold the attributes of bpf state that are being passed
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index c497ada7f591..c9198797aaed 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -868,6 +868,42 @@ struct tc_red_qopt_offload {
 	};
 };
 
+enum tc_gred_command {
+	TC_GRED_REPLACE,
+	TC_GRED_DESTROY,
+};
+
+struct tc_gred_vq_qopt_offload_params {
+	bool present;
+	u32 limit;
+	u32 prio;
+	u32 min;
+	u32 max;
+	bool is_ecn;
+	bool is_harddrop;
+	u32 probability;
+	/* Only need backlog, see struct tc_prio_qopt_offload_params */
+	u32 *backlog;
+};
+
+struct tc_gred_qopt_offload_params {
+	bool grio_on;
+	bool wred_on;
+	unsigned int dp_cnt;
+	unsigned int dp_def;
+	struct gnet_stats_queue *qstats;
+	struct tc_gred_vq_qopt_offload_params tab[MAX_DPs];
+};
+
+struct tc_gred_qopt_offload {
+	enum tc_gred_command command;
+	u32 handle;
+	u32 parent;
+	union {
+		struct tc_gred_qopt_offload_params set;
+	};
+};
+
 enum tc_prio_command {
 	TC_PRIO_REPLACE,
 	TC_PRIO_DESTROY,
diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
index 8b8c325f48bc..908c9d1dfdf8 100644
--- a/net/sched/sch_gred.c
+++ b/net/sched/sch_gred.c
@@ -23,6 +23,7 @@
 #include <linux/types.h>
 #include <linux/kernel.h>
 #include <linux/skbuff.h>
+#include <net/pkt_cls.h>
 #include <net/pkt_sched.h>
 #include <net/red.h>
 
@@ -311,6 +312,48 @@ static void gred_reset(struct Qdisc *sch)
 	}
 }
 
+static void gred_offload(struct Qdisc *sch, enum tc_gred_command command)
+{
+	struct gred_sched *table = qdisc_priv(sch);
+	struct net_device *dev = qdisc_dev(sch);
+	struct tc_gred_qopt_offload opt = {
+		.command	= command,
+		.handle		= sch->handle,
+		.parent		= sch->parent,
+	};
+
+	if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc)
+		return;
+
+	if (command == TC_GRED_REPLACE) {
+		unsigned int i;
+
+		opt.set.grio_on = gred_rio_mode(table);
+		opt.set.wred_on = gred_wred_mode(table);
+		opt.set.dp_cnt = table->DPs;
+		opt.set.dp_def = table->def;
+
+		for (i = 0; i < table->DPs; i++) {
+			struct gred_sched_data *q = table->tab[i];
+
+			if (!q)
+				continue;
+			opt.set.tab[i].present = true;
+			opt.set.tab[i].limit = q->limit;
+			opt.set.tab[i].prio = q->prio;
+			opt.set.tab[i].min = q->parms.qth_min >> q->parms.Wlog;
+			opt.set.tab[i].max = q->parms.qth_max >> q->parms.Wlog;
+			opt.set.tab[i].is_ecn = gred_use_ecn(q);
+			opt.set.tab[i].is_harddrop = gred_use_harddrop(q);
+			opt.set.tab[i].probability = q->parms.max_P;
+			opt.set.tab[i].backlog = &q->backlog;
+		}
+		opt.set.qstats = &sch->qstats;
+	}
+
+	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_GRED, &opt);
+}
+
 static inline void gred_destroy_vq(struct gred_sched_data *q)
 {
 	kfree(q);
@@ -385,6 +428,7 @@ static int gred_change_table_def(struct Qdisc *sch, struct nlattr *dps,
 		}
 	}
 
+	gred_offload(sch, TC_GRED_REPLACE);
 	return 0;
 }
 
@@ -630,6 +674,8 @@ static int gred_change(struct Qdisc *sch, struct nlattr *opt,
 
 	sch_tree_unlock(sch);
 	kfree(prealloc);
+
+	gred_offload(sch, TC_GRED_REPLACE);
 	return 0;
 
 err_unlock_free:
@@ -815,6 +861,7 @@ static void gred_destroy(struct Qdisc *sch)
 		if (table->tab[i])
 			gred_destroy_vq(table->tab[i]);
 	}
+	gred_offload(sch, TC_GRED_DESTROY);
 }
 
 static struct Qdisc_ops gred_qdisc_ops __read_mostly = {
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 07/14] net: sched: gred: support reporting stats from offloads
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (5 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 06/14] net: sched: gred: add basic Qdisc offload Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 08/14] nfp: abm: wrap RED parameters in bands Jakub Kicinski
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Allow drivers which offload GRED to report back statistics.  Since
a lot of the GRED stats are fairly ad hoc in nature, pass to drivers
the standard struct gnet_stats_basic/gnet_stats_queue pairs, and
untangle the values in the core.
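
On the driver side this amounts to filling in the per-virtual-queue
bstats/qstats arrays (and the xstats pointers, where set) on the new
stats command.  A hedged sketch, with made-up foo_* helpers standing
in for hardware reads:

  static int foo_gred_stats(struct net_device *netdev,
                            struct tc_gred_qopt_offload_stats *stats)
  {
          unsigned int i;

          for (i = 0; i < MAX_DPs; i++) {
                  if (!stats->xstats[i])
                          continue;       /* virtual queue not in use */
                  stats->bstats[i].packets += foo_read_pkts(netdev, i);
                  stats->bstats[i].bytes += foo_read_bytes(netdev, i);
                  stats->qstats[i].backlog += foo_read_backlog(netdev, i);
                  stats->xstats[i]->prob_mark += foo_read_marks(netdev, i);
          }

          return 0;
  }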

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 include/net/pkt_cls.h |  8 ++++++++
 net/sched/sch_gred.c  | 47 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index c9198797aaed..d0e9a8091426 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -871,6 +871,7 @@ struct tc_red_qopt_offload {
 enum tc_gred_command {
 	TC_GRED_REPLACE,
 	TC_GRED_DESTROY,
+	TC_GRED_STATS,
 };
 
 struct tc_gred_vq_qopt_offload_params {
@@ -895,12 +896,19 @@ struct tc_gred_qopt_offload_params {
 	struct tc_gred_vq_qopt_offload_params tab[MAX_DPs];
 };
 
+struct tc_gred_qopt_offload_stats {
+	struct gnet_stats_basic_packed bstats[MAX_DPs];
+	struct gnet_stats_queue qstats[MAX_DPs];
+	struct red_stats *xstats[MAX_DPs];
+};
+
 struct tc_gred_qopt_offload {
 	enum tc_gred_command command;
 	u32 handle;
 	u32 parent;
 	union {
 		struct tc_gred_qopt_offload_params set;
+		struct tc_gred_qopt_offload_stats stats;
 	};
 };
 
diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
index 908c9d1dfdf8..234afbf9115b 100644
--- a/net/sched/sch_gred.c
+++ b/net/sched/sch_gred.c
@@ -354,6 +354,50 @@ static void gred_offload(struct Qdisc *sch, enum tc_gred_command command)
 	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_GRED, &opt);
 }
 
+static int gred_offload_dump_stats(struct Qdisc *sch)
+{
+	struct gred_sched *table = qdisc_priv(sch);
+	struct tc_gred_qopt_offload *hw_stats;
+	unsigned int i;
+	int ret;
+
+	hw_stats = kzalloc(sizeof(*hw_stats), GFP_KERNEL);
+	if (!hw_stats)
+		return -ENOMEM;
+
+	hw_stats->command = TC_GRED_STATS;
+	hw_stats->handle = sch->handle;
+	hw_stats->parent = sch->parent;
+
+	for (i = 0; i < MAX_DPs; i++)
+		if (table->tab[i])
+			hw_stats->stats.xstats[i] = &table->tab[i]->stats;
+
+	ret = qdisc_offload_dump_helper(sch, TC_SETUP_QDISC_GRED, hw_stats);
+	/* Even if driver returns failure adjust the stats - in case offload
+	 * ended but driver still wants to adjust the values.
+	 */
+	for (i = 0; i < MAX_DPs; i++) {
+		if (!table->tab[i])
+			continue;
+		table->tab[i]->packetsin += hw_stats->stats.bstats[i].packets;
+		table->tab[i]->bytesin += hw_stats->stats.bstats[i].bytes;
+		table->tab[i]->backlog += hw_stats->stats.qstats[i].backlog;
+
+		_bstats_update(&sch->bstats,
+			       hw_stats->stats.bstats[i].bytes,
+			       hw_stats->stats.bstats[i].packets);
+		sch->qstats.qlen += hw_stats->stats.qstats[i].qlen;
+		sch->qstats.backlog += hw_stats->stats.qstats[i].backlog;
+		sch->qstats.drops += hw_stats->stats.qstats[i].drops;
+		sch->qstats.requeues += hw_stats->stats.qstats[i].requeues;
+		sch->qstats.overlimits += hw_stats->stats.qstats[i].overlimits;
+	}
+
+	kfree(hw_stats);
+	return ret;
+}
+
 static inline void gred_destroy_vq(struct gred_sched_data *q)
 {
 	kfree(q);
@@ -725,6 +769,9 @@ static int gred_dump(struct Qdisc *sch, struct sk_buff *skb)
 		.flags	= table->red_flags,
 	};
 
+	if (gred_offload_dump_stats(sch))
+		goto nla_put_failure;
+
 	opts = nla_nest_start(skb, TCA_OPTIONS);
 	if (opts == NULL)
 		goto nla_put_failure;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 08/14] nfp: abm: wrap RED parameters in bands
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (6 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 07/14] net: sched: gred: support reporting stats from offloads Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 09/14] nfp: abm: add GRED offload Jakub Kicinski
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Wrap the RED parameters and stats into a structure and a 1-element
array.  The upcoming GRED offload will add support for more bands.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/main.h | 26 +++---
 .../net/ethernet/netronome/nfp/abm/qdisc.c    | 88 ++++++++++++-------
 2 files changed, 74 insertions(+), 40 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index 054228c29184..47888288a706 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -112,11 +112,13 @@ enum nfp_qdisc_type {
  * @mq.prev_stats:	previously reported @mq.stats
  *
  * @red:		RED Qdisc specific parameters and state
- * @red.threshold:	ECN marking threshold
- * @red.stats:		current stats of the RED Qdisc
- * @red.prev_stats:	previously reported @red.stats
- * @red.xstats:		extended stats for RED - current
- * @red.prev_xstats:	extended stats for RED - previously reported
+ * @red.num_bands:	Number of valid entries in the @red.band table
+ * @red.band:		Per-band array of RED instances
+ * @red.band.threshold:		ECN marking threshold
+ * @red.band.stats:		current stats of the RED Qdisc
+ * @red.band.prev_stats:	previously reported @red.stats
+ * @red.band.xstats:		extended stats for RED - current
+ * @red.band.prev_xstats:	extended stats for RED - previously reported
  */
 struct nfp_qdisc {
 	struct net_device *netdev;
@@ -139,11 +141,15 @@ struct nfp_qdisc {
 		} mq;
 		/* TC_SETUP_QDISC_RED */
 		struct {
-			u32 threshold;
-			struct nfp_alink_stats stats;
-			struct nfp_alink_stats prev_stats;
-			struct nfp_alink_xstats xstats;
-			struct nfp_alink_xstats prev_xstats;
+			unsigned int num_bands;
+
+			struct {
+				u32 threshold;
+				struct nfp_alink_stats stats;
+				struct nfp_alink_stats prev_stats;
+				struct nfp_alink_xstats xstats;
+				struct nfp_alink_xstats prev_xstats;
+			} band[1];
 		} red;
 	};
 };
diff --git a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
index 251ce3070564..b65b3177c94a 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
@@ -46,20 +46,25 @@ nfp_abm_stats_update_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 			 unsigned int queue)
 {
 	struct nfp_cpp *cpp = alink->abm->app->cpp;
+	unsigned int i;
 	int err;
 
 	if (!qdisc->offloaded)
 		return;
 
-	err = nfp_abm_ctrl_read_q_stats(alink, 0, queue, &qdisc->red.stats);
-	if (err)
-		nfp_err(cpp, "RED stats (%d, %d) read failed with error %d\n",
-			0, queue, err);
-
-	err = nfp_abm_ctrl_read_q_xstats(alink, 0, queue, &qdisc->red.xstats);
-	if (err)
-		nfp_err(cpp, "RED xstats (%d, %d) read failed with error %d\n",
-			0, queue, err);
+	for (i = 0; i < qdisc->red.num_bands; i++) {
+		err = nfp_abm_ctrl_read_q_stats(alink, i, queue,
+						&qdisc->red.band[i].stats);
+		if (err)
+			nfp_err(cpp, "RED stats (%d, %d) read failed with error %d\n",
+				i, queue, err);
+
+		err = nfp_abm_ctrl_read_q_xstats(alink, i, queue,
+						 &qdisc->red.band[i].xstats);
+		if (err)
+			nfp_err(cpp, "RED xstats (%d, %d) read failed with error %d\n",
+				i, queue, err);
+	}
 }
 
 static void
@@ -113,6 +118,8 @@ nfp_abm_qdisc_unlink_children(struct nfp_qdisc *qdisc,
 static void
 nfp_abm_qdisc_offload_stop(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc)
 {
+	unsigned int i;
+
 	/* Don't complain when qdisc is getting unlinked */
 	if (qdisc->use_cnt)
 		nfp_warn(alink->abm->app->cpp, "Offload of '%08x' stopped\n",
@@ -121,8 +128,10 @@ nfp_abm_qdisc_offload_stop(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc)
 	if (!nfp_abm_qdisc_is_red(qdisc))
 		return;
 
-	qdisc->red.stats.backlog_pkts = 0;
-	qdisc->red.stats.backlog_bytes = 0;
+	for (i = 0; i < qdisc->red.num_bands; i++) {
+		qdisc->red.band[i].stats.backlog_pkts = 0;
+		qdisc->red.band[i].stats.backlog_bytes = 0;
+	}
 }
 
 static int
@@ -164,15 +173,26 @@ static int
 nfp_abm_stats_init(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 		   unsigned int queue)
 {
-	return __nfp_abm_stats_init(alink, 0, queue,
-				    &qdisc->red.prev_stats,
-				    &qdisc->red.prev_xstats);
+	unsigned int i;
+	int err;
+
+	for (i = 0; i < qdisc->red.num_bands; i++) {
+		err = __nfp_abm_stats_init(alink, i, queue,
+					   &qdisc->red.band[i].prev_stats,
+					   &qdisc->red.band[i].prev_xstats);
+		if (err)
+			return err;
+	}
+
+	return 0;
 }
 
 static void
 nfp_abm_offload_compile_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 			    unsigned int queue)
 {
+	unsigned int i;
+
 	qdisc->offload_mark = qdisc->type == NFP_QDISC_RED &&
 			      qdisc->params_ok &&
 			      qdisc->use_cnt == 1 &&
@@ -186,7 +206,9 @@ nfp_abm_offload_compile_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 	if (!qdisc->offload_mark)
 		return;
 
-	nfp_abm_ctrl_set_q_lvl(alink, 0, queue, qdisc->red.threshold);
+	for (i = 0; i < alink->abm->num_bands; i++)
+		nfp_abm_ctrl_set_q_lvl(alink, i, queue,
+				       qdisc->red.band[i].threshold);
 }
 
 static void
@@ -217,8 +239,10 @@ void nfp_abm_qdisc_offload_update(struct nfp_abm_link *alink)
 	size_t i;
 
 	/* Mark all thresholds as unconfigured */
-	__bitmap_set(abm->threshold_undef,
-		     alink->queue_base, alink->total_queues);
+	for (i = 0; i < abm->num_bands; i++)
+		__bitmap_set(abm->threshold_undef,
+			     i * NFP_NET_MAX_RX_RINGS + alink->queue_base,
+			     alink->total_queues);
 
 	/* Clear offload marks */
 	radix_tree_for_each_slot(slot, &alink->qdiscs, &iter, 0) {
@@ -451,10 +475,10 @@ nfp_abm_red_xstats(struct nfp_abm_link *alink, struct tc_red_qopt_offload *opt)
 	if (!qdisc || !qdisc->offloaded)
 		return -EOPNOTSUPP;
 
-	nfp_abm_stats_red_calculate(&qdisc->red.xstats,
-				    &qdisc->red.prev_xstats,
+	nfp_abm_stats_red_calculate(&qdisc->red.band[0].xstats,
+				    &qdisc->red.band[0].prev_xstats,
 				    opt->xstats);
-	qdisc->red.prev_xstats = qdisc->red.xstats;
+	qdisc->red.band[0].prev_xstats = qdisc->red.band[0].xstats;
 	return 0;
 }
 
@@ -473,10 +497,10 @@ nfp_abm_red_stats(struct nfp_abm_link *alink, u32 handle,
 	 * counters back so carry on even if qdisc is not currently offloaded.
 	 */
 
-	nfp_abm_stats_calculate(&qdisc->red.stats,
-				&qdisc->red.prev_stats,
+	nfp_abm_stats_calculate(&qdisc->red.band[0].stats,
+				&qdisc->red.band[0].prev_stats,
 				stats->bstats, stats->qstats);
-	qdisc->red.prev_stats = qdisc->red.stats;
+	qdisc->red.band[0].prev_stats = qdisc->red.band[0].stats;
 
 	return qdisc->offloaded ? 0 : -EOPNOTSUPP;
 }
@@ -538,8 +562,10 @@ nfp_abm_red_replace(struct net_device *netdev, struct nfp_abm_link *alink,
 	}
 
 	qdisc->params_ok = nfp_abm_red_check_params(alink, opt);
-	if (qdisc->params_ok)
-		qdisc->red.threshold = opt->set.min;
+	if (qdisc->params_ok) {
+		qdisc->red.num_bands = 1;
+		qdisc->red.band[0].threshold = opt->set.min;
+	}
 
 	if (qdisc->use_cnt == 1)
 		nfp_abm_qdisc_offload_update(alink);
@@ -592,7 +618,7 @@ nfp_abm_mq_stats(struct nfp_abm_link *alink, u32 handle,
 		 struct tc_qopt_offload_stats *stats)
 {
 	struct nfp_qdisc *qdisc, *red;
-	unsigned int i;
+	unsigned int i, j;
 
 	qdisc = nfp_abm_qdisc_find(alink, handle);
 	if (!qdisc)
@@ -614,10 +640,12 @@ nfp_abm_mq_stats(struct nfp_abm_link *alink, u32 handle,
 			continue;
 		red = qdisc->children[i];
 
-		nfp_abm_stats_propagate(&qdisc->mq.stats,
-					&red->red.stats);
-		nfp_abm_stats_propagate(&qdisc->mq.prev_stats,
-					&red->red.prev_stats);
+		for (j = 0; j < red->red.num_bands; j++) {
+			nfp_abm_stats_propagate(&qdisc->mq.stats,
+						&red->red.band[j].stats);
+			nfp_abm_stats_propagate(&qdisc->mq.prev_stats,
+						&red->red.band[j].prev_stats);
+		}
 	}
 
 	nfp_abm_stats_calculate(&qdisc->mq.stats, &qdisc->mq.prev_stats,
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 09/14] nfp: abm: add GRED offload
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (7 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 08/14] nfp: abm: wrap RED parameters in bands Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 10/14] net: sched: cls_u32: add res to offload information Jakub Kicinski
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Add support for GRED offload.  It behaves much like RED, but
can apply different parameters to different bands.  GRED operates
pretty much exactly like our HW/FW with a single FIFO and different
RED state instances.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/main.c |   2 +
 drivers/net/ethernet/netronome/nfp/abm/main.h |  12 +-
 .../net/ethernet/netronome/nfp/abm/qdisc.c    | 154 +++++++++++++++++-
 3 files changed, 158 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c
index b21250b95475..4f95f7b4430b 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.c
@@ -44,6 +44,8 @@ nfp_abm_setup_tc(struct nfp_app *app, struct net_device *netdev,
 		return nfp_abm_setup_tc_mq(netdev, repr->app_priv, type_data);
 	case TC_SETUP_QDISC_RED:
 		return nfp_abm_setup_tc_red(netdev, repr->app_priv, type_data);
+	case TC_SETUP_QDISC_GRED:
+		return nfp_abm_setup_tc_gred(netdev, repr->app_priv, type_data);
 	default:
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index 47888288a706..6bb4e60c1ad8 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -8,6 +8,7 @@
 #include <linux/radix-tree.h>
 #include <net/devlink.h>
 #include <net/pkt_cls.h>
+#include <net/pkt_sched.h>
 
 /* Dump of 64 PRIOs and 256 REDs seems to take 850us on Xeon v4 @ 2.20GHz;
  * 2.5ms / 400Hz seems more than sufficient for stats resolution.
@@ -89,6 +90,7 @@ enum nfp_qdisc_type {
 	NFP_QDISC_NONE = 0,
 	NFP_QDISC_MQ,
 	NFP_QDISC_RED,
+	NFP_QDISC_GRED,
 };
 
 #define NFP_QDISC_UNTRACKED	((struct nfp_qdisc *)1UL)
@@ -139,7 +141,7 @@ struct nfp_qdisc {
 			struct nfp_alink_stats stats;
 			struct nfp_alink_stats prev_stats;
 		} mq;
-		/* TC_SETUP_QDISC_RED */
+		/* TC_SETUP_QDISC_RED, TC_SETUP_QDISC_GRED */
 		struct {
 			unsigned int num_bands;
 
@@ -149,7 +151,7 @@ struct nfp_qdisc {
 				struct nfp_alink_stats prev_stats;
 				struct nfp_alink_xstats xstats;
 				struct nfp_alink_xstats prev_xstats;
-			} band[1];
+			} band[MAX_DPs];
 		} red;
 	};
 };
@@ -164,6 +166,8 @@ struct nfp_qdisc {
  *
  * @last_stats_update:	ktime of last stats update
  *
+ * @def_band:		default band to use
+ *
  * @root_qdisc:	pointer to the current root of the Qdisc hierarchy
  * @qdiscs:	all qdiscs recorded by major part of the handle
  */
@@ -176,6 +180,8 @@ struct nfp_abm_link {
 
 	u64 last_stats_update;
 
+	u8 def_band;
+
 	struct nfp_qdisc *root_qdisc;
 	struct radix_tree_root qdiscs;
 };
@@ -192,6 +198,8 @@ int nfp_abm_setup_tc_red(struct net_device *netdev, struct nfp_abm_link *alink,
 			 struct tc_red_qopt_offload *opt);
 int nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink,
 			struct tc_mq_qopt_offload *opt);
+int nfp_abm_setup_tc_gred(struct net_device *netdev, struct nfp_abm_link *alink,
+			  struct tc_gred_qopt_offload *opt);
 
 void nfp_abm_ctrl_read_params(struct nfp_abm_link *alink);
 int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm);
diff --git a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
index b65b3177c94a..e80a3d40a48b 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
@@ -15,7 +15,7 @@
 
 static bool nfp_abm_qdisc_is_red(struct nfp_qdisc *qdisc)
 {
-	return qdisc->type == NFP_QDISC_RED;
+	return qdisc->type == NFP_QDISC_RED || qdisc->type == NFP_QDISC_GRED;
 }
 
 static bool nfp_abm_qdisc_child_valid(struct nfp_qdisc *qdisc, unsigned int id)
@@ -191,12 +191,17 @@ static void
 nfp_abm_offload_compile_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 			    unsigned int queue)
 {
+	bool good_red, good_gred;
 	unsigned int i;
 
-	qdisc->offload_mark = qdisc->type == NFP_QDISC_RED &&
-			      qdisc->params_ok &&
-			      qdisc->use_cnt == 1 &&
-			      !qdisc->children[0];
+	good_red = qdisc->type == NFP_QDISC_RED &&
+		   qdisc->params_ok &&
+		   qdisc->use_cnt == 1 &&
+		   !qdisc->children[0];
+	good_gred = qdisc->type == NFP_QDISC_GRED &&
+		    qdisc->params_ok &&
+		    qdisc->use_cnt == 1;
+	qdisc->offload_mark = good_red || good_gred;
 
 	/* If we are starting offload init prev_stats */
 	if (qdisc->offload_mark && !qdisc->offloaded)
@@ -336,9 +341,11 @@ nfp_abm_qdisc_alloc(struct net_device *netdev, struct nfp_abm_link *alink,
 	if (!qdisc)
 		return NULL;
 
-	qdisc->children = kcalloc(children, sizeof(void *), GFP_KERNEL);
-	if (!qdisc->children)
-		goto err_free_qdisc;
+	if (children) {
+		qdisc->children = kcalloc(children, sizeof(void *), GFP_KERNEL);
+		if (!qdisc->children)
+			goto err_free_qdisc;
+	}
 
 	qdisc->netdev = netdev;
 	qdisc->type = type;
@@ -464,6 +471,137 @@ nfp_abm_stats_red_calculate(struct nfp_alink_xstats *new,
 	stats->pdrop += new->pdrop - old->pdrop;
 }
 
+static int
+nfp_abm_gred_stats(struct nfp_abm_link *alink, u32 handle,
+		   struct tc_gred_qopt_offload_stats *stats)
+{
+	struct nfp_qdisc *qdisc;
+	unsigned int i;
+
+	nfp_abm_stats_update(alink);
+
+	qdisc = nfp_abm_qdisc_find(alink, handle);
+	if (!qdisc)
+		return -EOPNOTSUPP;
+	/* If the qdisc offload has stopped we may need to adjust the backlog
+	 * counters back so carry on even if qdisc is not currently offloaded.
+	 */
+
+	for (i = 0; i < qdisc->red.num_bands; i++) {
+		if (!stats->xstats[i])
+			continue;
+
+		nfp_abm_stats_calculate(&qdisc->red.band[i].stats,
+					&qdisc->red.band[i].prev_stats,
+					&stats->bstats[i], &stats->qstats[i]);
+		qdisc->red.band[i].prev_stats = qdisc->red.band[i].stats;
+
+		nfp_abm_stats_red_calculate(&qdisc->red.band[i].xstats,
+					    &qdisc->red.band[i].prev_xstats,
+					    stats->xstats[i]);
+		qdisc->red.band[i].prev_xstats = qdisc->red.band[i].xstats;
+	}
+
+	return qdisc->offloaded ? 0 : -EOPNOTSUPP;
+}
+
+static bool
+nfp_abm_gred_check_params(struct nfp_abm_link *alink,
+			  struct tc_gred_qopt_offload *opt)
+{
+	struct nfp_cpp *cpp = alink->abm->app->cpp;
+	struct nfp_abm *abm = alink->abm;
+	unsigned int i;
+
+	if (opt->set.grio_on || opt->set.wred_on) {
+		nfp_warn(cpp, "GRED offload failed - GRIO and WRED not supported (p:%08x h:%08x)\n",
+			 opt->parent, opt->handle);
+		return false;
+	}
+	if (opt->set.dp_def != alink->def_band) {
+		nfp_warn(cpp, "GRED offload failed - default band must be %d (p:%08x h:%08x)\n",
+			 alink->def_band, opt->parent, opt->handle);
+		return false;
+	}
+	if (opt->set.dp_cnt != abm->num_bands) {
+		nfp_warn(cpp, "GRED offload failed - band count must be %d (p:%08x h:%08x)\n",
+			 abm->num_bands, opt->parent, opt->handle);
+		return false;
+	}
+
+	for (i = 0; i < abm->num_bands; i++) {
+		struct tc_gred_vq_qopt_offload_params *band = &opt->set.tab[i];
+
+		if (!band->present)
+			return false;
+		if (!band->is_ecn) {
+			nfp_warn(cpp, "GRED offload failed - drop is not supported (ECN option required) (p:%08x h:%08x vq:%d)\n",
+				 opt->parent, opt->handle, i);
+			return false;
+		}
+		if (band->is_harddrop) {
+			nfp_warn(cpp, "GRED offload failed - harddrop is not supported (p:%08x h:%08x vq:%d)\n",
+				 opt->parent, opt->handle, i);
+			return false;
+		}
+		if (band->min != band->max) {
+			nfp_warn(cpp, "GRED offload failed - threshold mismatch (p:%08x h:%08x vq:%d)\n",
+				 opt->parent, opt->handle, i);
+			return false;
+		}
+		if (band->min > S32_MAX) {
+			nfp_warn(cpp, "GRED offload failed - threshold too large %d > %d (p:%08x h:%08x vq:%d)\n",
+				 band->min, S32_MAX, opt->parent, opt->handle,
+				 i);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+static int
+nfp_abm_gred_replace(struct net_device *netdev, struct nfp_abm_link *alink,
+		     struct tc_gred_qopt_offload *opt)
+{
+	struct nfp_qdisc *qdisc;
+	unsigned int i;
+	int ret;
+
+	ret = nfp_abm_qdisc_replace(netdev, alink, NFP_QDISC_GRED, opt->parent,
+				    opt->handle, 0, &qdisc);
+	if (ret < 0)
+		return ret;
+
+	qdisc->params_ok = nfp_abm_gred_check_params(alink, opt);
+	if (qdisc->params_ok) {
+		qdisc->red.num_bands = opt->set.dp_cnt;
+		for (i = 0; i < qdisc->red.num_bands; i++)
+			qdisc->red.band[i].threshold = opt->set.tab[i].min;
+	}
+
+	if (qdisc->use_cnt)
+		nfp_abm_qdisc_offload_update(alink);
+
+	return 0;
+}
+
+int nfp_abm_setup_tc_gred(struct net_device *netdev, struct nfp_abm_link *alink,
+			  struct tc_gred_qopt_offload *opt)
+{
+	switch (opt->command) {
+	case TC_GRED_REPLACE:
+		return nfp_abm_gred_replace(netdev, alink, opt);
+	case TC_GRED_DESTROY:
+		nfp_abm_qdisc_destroy(netdev, alink, opt->handle);
+		return 0;
+	case TC_GRED_STATS:
+		return nfp_abm_gred_stats(alink, opt->handle, &opt->stats);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static int
 nfp_abm_red_xstats(struct nfp_abm_link *alink, struct tc_red_qopt_offload *opt)
 {
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 10/14] net: sched: cls_u32: add res to offload information
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (8 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 09/14] nfp: abm: add GRED offload Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 11/14] nfp: abm: calculate PRIO map len and check mailbox size Jakub Kicinski
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

In the case of egress offloads the class/flowid assigned by the
filter may be very important for offloaded Qdisc selection.  Provide
this info to drivers.
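
For example, a driver could derive the target band/virtual queue
from the minor number of the assigned class.  A hedged sketch (the
helper name is made up):

  static unsigned int foo_band_from_knode(struct tc_cls_u32_knode *knode)
  {
          if (!knode->res)
                  return 0;       /* no class assigned, use the default */
          return TC_H_MIN(knode->res->classid);
  }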

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 include/net/pkt_cls.h | 1 +
 net/sched/cls_u32.c   | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index d0e9a8091426..ea191d8cfcc9 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -643,6 +643,7 @@ struct tc_cls_common_offload {
 
 struct tc_cls_u32_knode {
 	struct tcf_exts *exts;
+	struct tcf_result *res;
 	struct tc_u32_sel *sel;
 	u32 handle;
 	u32 val;
diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
index 4b28fd44576d..4c54bc440798 100644
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -558,6 +558,7 @@ static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 	cls_u32.knode.mask = 0;
 #endif
 	cls_u32.knode.sel = &n->sel;
+	cls_u32.knode.res = &n->res;
 	cls_u32.knode.exts = &n->exts;
 	if (n->ht_down)
 		cls_u32.knode.link_handle = ht->handle;
@@ -1206,6 +1207,7 @@ static int u32_reoffload_knode(struct tcf_proto *tp, struct tc_u_knode *n,
 		cls_u32.knode.mask = 0;
 #endif
 		cls_u32.knode.sel = &n->sel;
+		cls_u32.knode.res = &n->res;
 		cls_u32.knode.exts = &n->exts;
 		if (n->ht_down)
 			cls_u32.knode.link_handle = ht->handle;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 11/14] nfp: abm: calculate PRIO map len and check mailbox size
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (9 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 10/14] net: sched: cls_u32: add res to offload information Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 12/14] nfp: abm: add functions to update DSCP -> virtual queue map Jakub Kicinski
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

In preparation for PRIO offload, calculate how long the prio map
for the FW will be and make sure the configuration can be performed
via the vNIC mailbox.
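
As a worked example of the new sizing logic (the band and priority
counts are purely illustrative): with 4 bands each priority needs
roundup_pow_of_two(order_base_2(4)) = 2 bits, so 64 priorities pack
into DIV_ROUND_UP(64 * 2, 8) = 16 bytes, which is already a multiple
of sizeof(u32); the vNIC mailbox must then be at least
NFP_NET_ABM_MBOX_DATA + 16 bytes long for the prio offload to be
usable.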

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 42 ++++++++++++++++++-
 drivers/net/ethernet/netronome/nfp/abm/main.c |  5 ++-
 drivers/net/ethernet/netronome/nfp/abm/main.h |  6 ++-
 3 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index 10a571b5b565..ef10a2e730bc 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -33,6 +33,12 @@
 #define NFP_Q_STAT_PKTS		0
 #define NFP_Q_STAT_BYTES	8
 
+#define NFP_NET_ABM_MBOX_CMD		NFP_NET_CFG_MBOX_SIMPLE_CMD
+#define NFP_NET_ABM_MBOX_RET		NFP_NET_CFG_MBOX_SIMPLE_RET
+#define NFP_NET_ABM_MBOX_DATALEN	NFP_NET_CFG_MBOX_SIMPLE_VAL
+#define NFP_NET_ABM_MBOX_RESERVED	(NFP_NET_CFG_MBOX_SIMPLE_VAL + 4)
+#define NFP_NET_ABM_MBOX_DATA		(NFP_NET_CFG_MBOX_SIMPLE_VAL + 8)
+
 static int
 nfp_abm_ctrl_stat(struct nfp_abm_link *alink, const struct nfp_rtsym *sym,
 		  unsigned int stride, unsigned int offset, unsigned int band,
@@ -215,10 +221,42 @@ int nfp_abm_ctrl_qm_disable(struct nfp_abm *abm)
 			    NULL, 0, NULL, 0);
 }
 
-void nfp_abm_ctrl_read_params(struct nfp_abm_link *alink)
+static int nfp_abm_ctrl_prio_check_params(struct nfp_abm_link *alink)
+{
+	struct nfp_abm *abm = alink->abm;
+	struct nfp_net *nn = alink->vnic;
+	unsigned int min_mbox_sz;
+
+	if (!nfp_abm_has_prio(alink->abm))
+		return 0;
+
+	min_mbox_sz = NFP_NET_ABM_MBOX_DATA + alink->abm->prio_map_len;
+	if (nn->tlv_caps.mbox_len < min_mbox_sz) {
+		nfp_err(abm->app->pf->cpp, "vNIC mailbox too small for prio offload: %u, need: %u\n",
+			nn->tlv_caps.mbox_len,  min_mbox_sz);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int nfp_abm_ctrl_read_params(struct nfp_abm_link *alink)
 {
 	alink->queue_base = nn_readl(alink->vnic, NFP_NET_CFG_START_RXQ);
 	alink->queue_base /= alink->vnic->stride_rx;
+
+	return nfp_abm_ctrl_prio_check_params(alink);
+}
+
+static unsigned int nfp_abm_ctrl_prio_map_size(struct nfp_abm *abm)
+{
+	unsigned int size;
+
+	size = roundup_pow_of_two(order_base_2(abm->num_bands));
+	size = DIV_ROUND_UP(size * abm->num_prios, BITS_PER_BYTE);
+	size = round_up(size, sizeof(u32));
+
+	return size;
 }
 
 static const struct nfp_rtsym *
@@ -273,6 +311,8 @@ int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm)
 		return res;
 	abm->num_prios = res;
 
+	abm->prio_map_len = nfp_abm_ctrl_prio_map_size(abm);
+
 	/* Check values are sane, U16_MAX is arbitrarily chosen as max */
 	if (!is_power_of_2(abm->num_bands) || !is_power_of_2(abm->num_prios) ||
 	    abm->num_bands > U16_MAX || abm->num_prios > U16_MAX ||
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c
index 4f95f7b4430b..aeb0c9a1f260 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.c
@@ -315,6 +315,10 @@ nfp_abm_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, unsigned int id)
 	alink->id = id;
 	alink->total_queues = alink->vnic->max_rx_rings;
 
+	err = nfp_abm_ctrl_read_params(alink);
+	if (err)
+		goto err_free_alink;
+
 	/* This is a multi-host app, make sure MAC/PHY is up, but don't
 	 * make the MAC/PHY state follow the state of any of the ports.
 	 */
@@ -325,7 +329,6 @@ nfp_abm_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, unsigned int id)
 	netif_keep_dst(nn->dp.netdev);
 
 	nfp_abm_vnic_set_mac(app->pf, abm, nn, id);
-	nfp_abm_ctrl_read_params(alink);
 	INIT_RADIX_TREE(&alink->qdiscs, GFP_KERNEL);
 
 	return 0;
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index 6bb4e60c1ad8..1ca2768cd5a2 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -34,9 +34,11 @@ struct nfp_net;
  * @thresholds:		current threshold configuration
  * @threshold_undef:	bitmap of thresholds which have not been set
  * @num_thresholds:	number of @thresholds and bits in @threshold_undef
+ * @prio_map_len:	computed length of FW priority map (in bytes)
  *
  * @eswitch_mode:	devlink eswitch mode, advanced functions only visible
  *			in switchdev mode
+ *
  * @q_lvls:	queue level control area
  * @qm_stats:	queue statistics symbol
  * @q_stats:	basic queue statistics (only in per-band case)
@@ -51,8 +53,10 @@ struct nfp_abm {
 	u32 *thresholds;
 	unsigned long *threshold_undef;
 	size_t num_thresholds;
+	unsigned int prio_map_len;
 
 	enum devlink_eswitch_mode eswitch_mode;
+
 	const struct nfp_rtsym *q_lvls;
 	const struct nfp_rtsym *qm_stats;
 	const struct nfp_rtsym *q_stats;
@@ -201,7 +205,7 @@ int nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink,
 int nfp_abm_setup_tc_gred(struct net_device *netdev, struct nfp_abm_link *alink,
 			  struct tc_gred_qopt_offload *opt);
 
-void nfp_abm_ctrl_read_params(struct nfp_abm_link *alink);
+int nfp_abm_ctrl_read_params(struct nfp_abm_link *alink);
 int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm);
 int __nfp_abm_ctrl_set_q_lvl(struct nfp_abm *abm, unsigned int id, u32 val);
 int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band,
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 12/14] nfp: abm: add functions to update DSCP -> virtual queue map
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (10 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 11/14] nfp: abm: calculate PRIO map len and check mailbox size Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 13/14] nfp: abm: add cls_u32 offload for simple band classification Jakub Kicinski
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Learn how to set the DSCP map.  FW uses a packed array whose
geometry depends on the number of supported priorities and
virtual queues.  Write code to assemble this map and to communicate
the setting to the FW via the mailbox.
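
For reference, the simple mailbox region used here looks roughly as
follows (offsets relative to nn->tlv_caps.mbox_off, per the defines
added in the previous patch; treat this as a sketch, not ABI
documentation):

	NFP_NET_ABM_MBOX_CMD      command word (PCI_DSCP_PRIOMAP_SET)
	NFP_NET_ABM_MBOX_RET      return code reported by the FW
	NFP_NET_ABM_MBOX_DATALEN  map length in bytes, written as a 64-bit
	                          store that also clears the reserved word
	NFP_NET_ABM_MBOX_DATA     packed priority -> band map of
	                          prio_map_len bytes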

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 23 +++++++++++++++++++
 drivers/net/ethernet/netronome/nfp/abm/main.h |  1 +
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |  1 +
 .../ethernet/netronome/nfp/nfp_net_common.c   |  2 +-
 .../net/ethernet/netronome/nfp/nfp_net_ctrl.h |  2 ++
 5 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index ef10a2e730bc..77dbc509a637 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
 /* Copyright (C) 2018 Netronome Systems, Inc. */
 
+#include <linux/bitops.h>
 #include <linux/kernel.h>
 #include <linux/log2.h>
 
@@ -221,6 +222,28 @@ int nfp_abm_ctrl_qm_disable(struct nfp_abm *abm)
 			    NULL, 0, NULL, 0);
 }
 
+int nfp_abm_ctrl_prio_map_update(struct nfp_abm_link *alink, u32 *packed)
+{
+	struct nfp_net *nn = alink->vnic;
+	unsigned int i;
+	int err;
+
+	/* Write data_len and wipe reserved */
+	nn_writeq(nn, nn->tlv_caps.mbox_off + NFP_NET_ABM_MBOX_DATALEN,
+		  alink->abm->prio_map_len);
+
+	for (i = 0; i < alink->abm->prio_map_len; i += sizeof(u32))
+		nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_ABM_MBOX_DATA + i,
+			  packed[i / sizeof(u32)]);
+
+	err = nfp_net_reconfig_mbox(nn,
+				    NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET);
+	if (err)
+		nfp_err(alink->abm->app->cpp,
+			"setting DSCP -> VQ map failed with error %d\n", err);
+	return err;
+}
+
 static int nfp_abm_ctrl_prio_check_params(struct nfp_abm_link *alink)
 {
 	struct nfp_abm *abm = alink->abm;
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index 1ca2768cd5a2..bc378b464f2c 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -220,4 +220,5 @@ u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int i);
 u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i);
 int nfp_abm_ctrl_qm_enable(struct nfp_abm *abm);
 int nfp_abm_ctrl_qm_disable(struct nfp_abm *abm);
+int nfp_abm_ctrl_prio_map_update(struct nfp_abm_link *alink, u32 *packed);
 #endif
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index dda02fefc806..bb3dbd74583b 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -868,6 +868,7 @@ unsigned int nfp_net_rss_key_sz(struct nfp_net *nn);
 void nfp_net_rss_write_itbl(struct nfp_net *nn);
 void nfp_net_rss_write_key(struct nfp_net *nn);
 void nfp_net_coalesce_write_cfg(struct nfp_net *nn);
+int nfp_net_reconfig_mbox(struct nfp_net *nn, u32 mbox_cmd);
 
 unsigned int
 nfp_net_irqs_alloc(struct pci_dev *pdev, struct msix_entry *irq_entries,
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index a0343f25068a..9aa6265bf4de 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -279,7 +279,7 @@ int nfp_net_reconfig(struct nfp_net *nn, u32 update)
  *
  * Return: Negative errno on error, 0 on success
  */
-static int nfp_net_reconfig_mbox(struct nfp_net *nn, u32 mbox_cmd)
+int nfp_net_reconfig_mbox(struct nfp_net *nn, u32 mbox_cmd)
 {
 	u32 mbox = nn->tlv_caps.mbox_off;
 	int ret;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
index d7c8518ac952..ccec69944de5 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
@@ -397,6 +397,8 @@
 #define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_ADD 1
 #define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_KILL 2
 
+#define NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET	5
+
 /**
  * VLAN filtering using general use mailbox
  * %NFP_NET_CFG_VLAN_FILTER:		Base address of VLAN filter mailbox
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 13/14] nfp: abm: add cls_u32 offload for simple band classification
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (11 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 12/14] nfp: abm: add functions to update DSCP -> virtual queue map Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-19 23:21 ` [PATCH net-next 14/14] nfp: abm: add support for more threshold actions Jakub Kicinski
  2018-11-20  2:54 ` [PATCH net-next 00/14] gred: add offload support David Miller
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

Use offload of very simple u32 filters to direct packets to GRED
bands based on the DSCP marking.  No u32 hashing is supported,
just plain simple filters matching on the ToS or Priority field
with a mask the device can support.
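
A rough usage sketch (interface name, DSCP value and band number are
made up, the GRED setup from the earlier patches is assumed to already
be in place, and exact tc syntax may differ between iproute2
versions):

	tc qdisc add dev eth0 clsact
	# steer DSCP class selector CS2 (ToS 0x40, CS mask 0xe0) to
	# band 1, everything else stays on the default band
	tc filter add dev eth0 egress protocol ip prio 1 \
		u32 match ip tos 0x40 0xe0 classid :1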

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/Makefile   |   1 +
 drivers/net/ethernet/netronome/nfp/abm/cls.c  | 283 ++++++++++++++++++
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c |   1 +
 drivers/net/ethernet/netronome/nfp/abm/main.c |  23 +-
 drivers/net/ethernet/netronome/nfp/abm/main.h |  16 +
 .../net/ethernet/netronome/nfp/abm/qdisc.c    |   1 +
 6 files changed, 324 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/netronome/nfp/abm/cls.c

diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile
index 190e8b56a41f..47c708f08ade 100644
--- a/drivers/net/ethernet/netronome/nfp/Makefile
+++ b/drivers/net/ethernet/netronome/nfp/Makefile
@@ -56,6 +56,7 @@ endif
 
 ifeq ($(CONFIG_NFP_APP_ABM_NIC),y)
 nfp-objs += \
+	    abm/cls.o \
 	    abm/ctrl.o \
 	    abm/qdisc.o \
 	    abm/main.o
diff --git a/drivers/net/ethernet/netronome/nfp/abm/cls.c b/drivers/net/ethernet/netronome/nfp/abm/cls.c
new file mode 100644
index 000000000000..9852080cf454
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/abm/cls.c
@@ -0,0 +1,283 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2018 Netronome Systems, Inc. */
+
+#include <linux/bitfield.h>
+#include <net/pkt_cls.h>
+
+#include "../nfpcore/nfp_cpp.h"
+#include "../nfp_app.h"
+#include "../nfp_net_repr.h"
+#include "main.h"
+
+struct nfp_abm_u32_match {
+	u32 handle;
+	u32 band;
+	u8 mask;
+	u8 val;
+	struct list_head list;
+};
+
+static bool
+nfp_abm_u32_check_knode(struct nfp_abm *abm, struct tc_cls_u32_knode *knode,
+			__be16 proto, struct netlink_ext_ack *extack)
+{
+	struct tc_u32_key *k;
+	unsigned int tos_off;
+
+	if (knode->exts && tcf_exts_has_actions(knode->exts)) {
+		NL_SET_ERR_MSG_MOD(extack, "action offload not supported");
+		return false;
+	}
+	if (knode->link_handle) {
+		NL_SET_ERR_MSG_MOD(extack, "linking not supported");
+		return false;
+	}
+	if (knode->sel->flags != TC_U32_TERMINAL) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "flags must be equal to TC_U32_TERMINAL");
+		return false;
+	}
+	if (knode->sel->off || knode->sel->offshift || knode->sel->offmask ||
+	    knode->sel->offoff || knode->fshift) {
+		NL_SET_ERR_MSG_MOD(extack, "variable offseting not supported");
+		return false;
+	}
+	if (knode->sel->hoff || knode->sel->hmask) {
+		NL_SET_ERR_MSG_MOD(extack, "hashing not supported");
+		return false;
+	}
+	if (knode->val || knode->mask) {
+		NL_SET_ERR_MSG_MOD(extack, "matching on mark not supported");
+		return false;
+	}
+	if (knode->res && knode->res->class) {
+		NL_SET_ERR_MSG_MOD(extack, "setting non-0 class not supported");
+		return false;
+	}
+	if (knode->res && knode->res->classid >= abm->num_bands) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "classid higher than number of bands");
+		return false;
+	}
+	if (knode->sel->nkeys != 1) {
+		NL_SET_ERR_MSG_MOD(extack, "exactly one key required");
+		return false;
+	}
+
+	switch (proto) {
+	case htons(ETH_P_IP):
+		tos_off = 16;
+		break;
+	case htons(ETH_P_IPV6):
+		tos_off = 20;
+		break;
+	default:
+		NL_SET_ERR_MSG_MOD(extack, "only IP and IPv6 supported as filter protocol");
+		return false;
+	}
+
+	k = &knode->sel->keys[0];
+	if (k->offmask) {
+		NL_SET_ERR_MSG_MOD(extack, "offset mask - variable offseting not supported");
+		return false;
+	}
+	if (k->off) {
+		NL_SET_ERR_MSG_MOD(extack, "only DSCP fields can be matched");
+		return false;
+	}
+	if (k->val & ~k->mask) {
+		NL_SET_ERR_MSG_MOD(extack, "mask does not cover the key");
+		return false;
+	}
+	if (be32_to_cpu(k->mask) >> tos_off & ~abm->dscp_mask) {
+		NL_SET_ERR_MSG_MOD(extack, "only high DSCP class selector bits can be used");
+		nfp_err(abm->app->cpp,
+			"u32 offload: requested mask %x FW can support only %x\n",
+			be32_to_cpu(k->mask) >> tos_off, abm->dscp_mask);
+		return false;
+	}
+
+	return true;
+}
+
+/* This filter list -> map conversion is O(n * m), we expect single digit or
+ * low double digit number of prios and likewise for the filters.  Also u32
+ * doesn't report stats, so it's really only setup time cost.
+ */
+static unsigned int
+nfp_abm_find_band_for_prio(struct nfp_abm_link *alink, unsigned int prio)
+{
+	struct nfp_abm_u32_match *iter;
+
+	list_for_each_entry(iter, &alink->dscp_map, list)
+		if ((prio & iter->mask) == iter->val)
+			return iter->band;
+
+	return alink->def_band;
+}
+
+static int nfp_abm_update_band_map(struct nfp_abm_link *alink)
+{
+	unsigned int i, bits_per_prio, prios_per_word, base_shift;
+	struct nfp_abm *abm = alink->abm;
+	u32 field_mask;
+
+	alink->has_prio = !list_empty(&alink->dscp_map);
+
+	bits_per_prio = roundup_pow_of_two(order_base_2(abm->num_bands));
+	field_mask = (1 << bits_per_prio) - 1;
+	prios_per_word = sizeof(u32) * BITS_PER_BYTE / bits_per_prio;
+
+	/* FW mask applies from top bits */
+	base_shift = 8 - order_base_2(abm->num_prios);
+
+	for (i = 0; i < abm->num_prios; i++) {
+		unsigned int offset;
+		u32 *word;
+		u8 band;
+
+		word = &alink->prio_map[i / prios_per_word];
+		offset = (i % prios_per_word) * bits_per_prio;
+
+		band = nfp_abm_find_band_for_prio(alink, i << base_shift);
+
+		*word &= ~(field_mask << offset);
+		*word |= band << offset;
+	}
+
+	/* Qdisc offload status may change if has_prio changed */
+	nfp_abm_qdisc_offload_update(alink);
+
+	return nfp_abm_ctrl_prio_map_update(alink, alink->prio_map);
+}
+
+static void
+nfp_abm_u32_knode_delete(struct nfp_abm_link *alink,
+			 struct tc_cls_u32_knode *knode)
+{
+	struct nfp_abm_u32_match *iter;
+
+	list_for_each_entry(iter, &alink->dscp_map, list)
+		if (iter->handle == knode->handle) {
+			list_del(&iter->list);
+			kfree(iter);
+			nfp_abm_update_band_map(alink);
+			return;
+		}
+}
+
+static int
+nfp_abm_u32_knode_replace(struct nfp_abm_link *alink,
+			  struct tc_cls_u32_knode *knode,
+			  __be16 proto, struct netlink_ext_ack *extack)
+{
+	struct nfp_abm_u32_match *match = NULL, *iter;
+	unsigned int tos_off;
+	u8 mask, val;
+	int err;
+
+	if (!nfp_abm_u32_check_knode(alink->abm, knode, proto, extack))
+		goto err_delete;
+
+	tos_off = proto == htons(ETH_P_IP) ? 16 : 20;
+
+	/* Extract the DSCP Class Selector bits */
+	val = be32_to_cpu(knode->sel->keys[0].val) >> tos_off & 0xff;
+	mask = be32_to_cpu(knode->sel->keys[0].mask) >> tos_off & 0xff;
+
+	/* Check if there is no conflicting mapping and find match by handle */
+	list_for_each_entry(iter, &alink->dscp_map, list) {
+		u32 cmask;
+
+		if (iter->handle == knode->handle) {
+			match = iter;
+			continue;
+		}
+
+		cmask = iter->mask & mask;
+		if ((iter->val & cmask) == (val & cmask) &&
+		    iter->band != knode->res->classid) {
+			NL_SET_ERR_MSG_MOD(extack, "conflict with already offloaded filter");
+			goto err_delete;
+		}
+	}
+
+	if (!match) {
+		match = kzalloc(sizeof(*match), GFP_KERNEL);
+		if (!match)
+			return -ENOMEM;
+		list_add(&match->list, &alink->dscp_map);
+	}
+	match->handle = knode->handle;
+	match->band = knode->res->classid;
+	match->mask = mask;
+	match->val = val;
+
+	err = nfp_abm_update_band_map(alink);
+	if (err)
+		goto err_delete;
+
+	return 0;
+
+err_delete:
+	nfp_abm_u32_knode_delete(alink, knode);
+	return -EOPNOTSUPP;
+}
+
+static int nfp_abm_setup_tc_block_cb(enum tc_setup_type type,
+				     void *type_data, void *cb_priv)
+{
+	struct tc_cls_u32_offload *cls_u32 = type_data;
+	struct nfp_repr *repr = cb_priv;
+	struct nfp_abm_link *alink;
+
+	alink = repr->app_priv;
+
+	if (type != TC_SETUP_CLSU32) {
+		NL_SET_ERR_MSG_MOD(cls_u32->common.extack,
+				   "only offload of u32 classifier supported");
+		return -EOPNOTSUPP;
+	}
+	if (!tc_cls_can_offload_and_chain0(repr->netdev, &cls_u32->common))
+		return -EOPNOTSUPP;
+
+	if (cls_u32->common.protocol != htons(ETH_P_IP) &&
+	    cls_u32->common.protocol != htons(ETH_P_IPV6)) {
+		NL_SET_ERR_MSG_MOD(cls_u32->common.extack,
+				   "only IP and IPv6 supported as filter protocol");
+		return -EOPNOTSUPP;
+	}
+
+	switch (cls_u32->command) {
+	case TC_CLSU32_NEW_KNODE:
+	case TC_CLSU32_REPLACE_KNODE:
+		return nfp_abm_u32_knode_replace(alink, &cls_u32->knode,
+						 cls_u32->common.protocol,
+						 cls_u32->common.extack);
+	case TC_CLSU32_DELETE_KNODE:
+		nfp_abm_u32_knode_delete(alink, &cls_u32->knode);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+int nfp_abm_setup_cls_block(struct net_device *netdev, struct nfp_repr *repr,
+			    struct tc_block_offload *f)
+{
+	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
+		return -EOPNOTSUPP;
+
+	switch (f->command) {
+	case TC_BLOCK_BIND:
+		return tcf_block_cb_register(f->block,
+					     nfp_abm_setup_tc_block_cb,
+					     repr, repr, f->extack);
+	case TC_BLOCK_UNBIND:
+		tcf_block_cb_unregister(f->block, nfp_abm_setup_tc_block_cb,
+					repr);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index 77dbc509a637..2447e935e2d9 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -335,6 +335,7 @@ int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm)
 	abm->num_prios = res;
 
 	abm->prio_map_len = nfp_abm_ctrl_prio_map_size(abm);
+	abm->dscp_mask = GENMASK(7, 8 - order_base_2(abm->num_prios));
 
 	/* Check values are sane, U16_MAX is arbitrarily chosen as max */
 	if (!is_power_of_2(abm->num_bands) || !is_power_of_2(abm->num_prios) ||
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c
index aeb0c9a1f260..ecdef63a20f3 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.c
@@ -46,6 +46,8 @@ nfp_abm_setup_tc(struct nfp_app *app, struct net_device *netdev,
 		return nfp_abm_setup_tc_red(netdev, repr->app_priv, type_data);
 	case TC_SETUP_QDISC_GRED:
 		return nfp_abm_setup_tc_gred(netdev, repr->app_priv, type_data);
+	case TC_SETUP_BLOCK:
+		return nfp_abm_setup_cls_block(netdev, repr, type_data);
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -315,16 +317,22 @@ nfp_abm_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, unsigned int id)
 	alink->id = id;
 	alink->total_queues = alink->vnic->max_rx_rings;
 
+	INIT_LIST_HEAD(&alink->dscp_map);
+
 	err = nfp_abm_ctrl_read_params(alink);
 	if (err)
 		goto err_free_alink;
 
+	alink->prio_map = kzalloc(abm->prio_map_len, GFP_KERNEL);
+	if (!alink->prio_map)
+		goto err_free_alink;
+
 	/* This is a multi-host app, make sure MAC/PHY is up, but don't
 	 * make the MAC/PHY state follow the state of any of the ports.
 	 */
 	err = nfp_eth_set_configured(app->cpp, eth_port->index, true);
 	if (err < 0)
-		goto err_free_alink;
+		goto err_free_priomap;
 
 	netif_keep_dst(nn->dp.netdev);
 
@@ -333,6 +341,8 @@ nfp_abm_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, unsigned int id)
 
 	return 0;
 
+err_free_priomap:
+	kfree(alink->prio_map);
 err_free_alink:
 	kfree(alink);
 	return err;
@@ -344,9 +354,19 @@ static void nfp_abm_vnic_free(struct nfp_app *app, struct nfp_net *nn)
 
 	nfp_abm_kill_reprs(alink->abm, alink);
 	WARN(!radix_tree_empty(&alink->qdiscs), "left over qdiscs\n");
+	kfree(alink->prio_map);
 	kfree(alink);
 }
 
+static int nfp_abm_vnic_init(struct nfp_app *app, struct nfp_net *nn)
+{
+	struct nfp_abm_link *alink = nn->app_priv;
+
+	if (nfp_abm_has_prio(alink->abm))
+		return nfp_abm_ctrl_prio_map_update(alink, alink->prio_map);
+	return 0;
+}
+
 static u64 *
 nfp_abm_port_get_stats(struct nfp_app *app, struct nfp_port *port, u64 *data)
 {
@@ -491,6 +511,7 @@ const struct nfp_app_type app_abm = {
 
 	.vnic_alloc	= nfp_abm_vnic_alloc,
 	.vnic_free	= nfp_abm_vnic_free,
+	.vnic_init	= nfp_abm_vnic_init,
 
 	.port_get_stats		= nfp_abm_port_get_stats,
 	.port_get_stats_count	= nfp_abm_port_get_stats_count,
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index bc378b464f2c..9352992ab386 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -5,6 +5,7 @@
 #define __NFP_ABM_H__ 1
 
 #include <linux/bits.h>
+#include <linux/list.h>
 #include <linux/radix-tree.h>
 #include <net/devlink.h>
 #include <net/pkt_cls.h>
@@ -34,7 +35,9 @@ struct nfp_net;
  * @thresholds:		current threshold configuration
  * @threshold_undef:	bitmap of thresholds which have not been set
  * @num_thresholds:	number of @thresholds and bits in @threshold_undef
+ *
  * @prio_map_len:	computed length of FW priority map (in bytes)
+ * @dscp_mask:		mask FW will apply on DSCP field
  *
  * @eswitch_mode:	devlink eswitch mode, advanced functions only visible
  *			in switchdev mode
@@ -53,7 +56,9 @@ struct nfp_abm {
 	u32 *thresholds;
 	unsigned long *threshold_undef;
 	size_t num_thresholds;
+
 	unsigned int prio_map_len;
+	u8 dscp_mask;
 
 	enum devlink_eswitch_mode eswitch_mode;
 
@@ -170,7 +175,11 @@ struct nfp_qdisc {
  *
  * @last_stats_update:	ktime of last stats update
  *
+ * @prio_map:		current map of priorities
+ * @has_prio:		@prio_map is valid
+ *
  * @def_band:		default band to use
+ * @dscp_map:		list of DSCP to band mappings
  *
  * @root_qdisc:	pointer to the current root of the Qdisc hierarchy
  * @qdiscs:	all qdiscs recorded by major part of the handle
@@ -184,7 +193,11 @@ struct nfp_abm_link {
 
 	u64 last_stats_update;
 
+	u32 *prio_map;
+	bool has_prio;
+
 	u8 def_band;
+	struct list_head dscp_map;
 
 	struct nfp_qdisc *root_qdisc;
 	struct radix_tree_root qdiscs;
@@ -204,6 +217,8 @@ int nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink,
 			struct tc_mq_qopt_offload *opt);
 int nfp_abm_setup_tc_gred(struct net_device *netdev, struct nfp_abm_link *alink,
 			  struct tc_gred_qopt_offload *opt);
+int nfp_abm_setup_cls_block(struct net_device *netdev, struct nfp_repr *repr,
+			    struct tc_block_offload *opt);
 
 int nfp_abm_ctrl_read_params(struct nfp_abm_link *alink);
 int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm);
@@ -220,5 +235,6 @@ u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int i);
 u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i);
 int nfp_abm_ctrl_qm_enable(struct nfp_abm *abm);
 int nfp_abm_ctrl_qm_disable(struct nfp_abm *abm);
+void nfp_abm_prio_map_update(struct nfp_abm *abm);
 int nfp_abm_ctrl_prio_map_update(struct nfp_abm_link *alink, u32 *packed);
 #endif
diff --git a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
index e80a3d40a48b..8f6e43667757 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
@@ -197,6 +197,7 @@ nfp_abm_offload_compile_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 	good_red = qdisc->type == NFP_QDISC_RED &&
 		   qdisc->params_ok &&
 		   qdisc->use_cnt == 1 &&
+		   !alink->has_prio &&
 		   !qdisc->children[0];
 	good_gred = qdisc->type == NFP_QDISC_GRED &&
 		    qdisc->params_ok &&
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 14/14] nfp: abm: add support for more threshold actions
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (12 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 13/14] nfp: abm: add cls_u32 offload for simple band classification Jakub Kicinski
@ 2018-11-19 23:21 ` Jakub Kicinski
  2018-11-20  2:54 ` [PATCH net-next 00/14] gred: add offload support David Miller
  14 siblings, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2018-11-19 23:21 UTC (permalink / raw)
  To: davem; +Cc: oss-drivers, netdev, Jakub Kicinski

The original FW only allowed us to perform ECN marking.  Newer
releases also support plain old drop.  Add the ability to configure
the drop policy.  This is particularly useful in combination with
GRED, because different bands can have different ECN marking
settings.
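
A hedged example of what this enables (device name and threshold
values are arbitrary):

	# ECN marking mode, as before
	tc qdisc replace dev eth0 root handle 1: red limit 400000 \
		min 50000 max 50000 avpkt 1000 burst 55 ecn
	# on FW advertising the drop action, leaving out "ecn" is now
	# accepted for offload too; over-threshold packets are dropped
	tc qdisc replace dev eth0 root handle 1: red limit 400000 \
		min 50000 max 50000 avpkt 1000 burst 55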

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/abm/ctrl.c | 42 +++++++++++++++++++
 drivers/net/ethernet/netronome/nfp/abm/main.c | 14 ++++++-
 drivers/net/ethernet/netronome/nfp/abm/main.h | 31 ++++++++++++++
 .../net/ethernet/netronome/nfp/abm/qdisc.c    | 28 +++++++++++--
 4 files changed, 109 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
index 2447e935e2d9..ad6c2a621c7a 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c
@@ -15,12 +15,14 @@
 
 #define NFP_NUM_PRIOS_SYM_NAME	"_abi_pci_dscp_num_prio_%u"
 #define NFP_NUM_BANDS_SYM_NAME	"_abi_pci_dscp_num_band_%u"
+#define NFP_ACT_MASK_SYM_NAME	"_abi_nfd_out_q_actions_%u"
 
 #define NFP_QLVL_SYM_NAME	"_abi_nfd_out_q_lvls_%u%s"
 #define NFP_QLVL_STRIDE		16
 #define NFP_QLVL_BLOG_BYTES	0
 #define NFP_QLVL_BLOG_PKTS	4
 #define NFP_QLVL_THRS		8
+#define NFP_QLVL_ACT		12
 
 #define NFP_QMSTAT_SYM_NAME	"_abi_nfdqm%u_stats%s"
 #define NFP_QMSTAT_STRIDE	32
@@ -101,6 +103,39 @@ int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band,
 	return __nfp_abm_ctrl_set_q_lvl(alink->abm, threshold, val);
 }
 
+int __nfp_abm_ctrl_set_q_act(struct nfp_abm *abm, unsigned int id,
+			     enum nfp_abm_q_action act)
+{
+	struct nfp_cpp *cpp = abm->app->cpp;
+	u64 sym_offset;
+	int err;
+
+	if (abm->actions[id] == act)
+		return 0;
+
+	sym_offset = id * NFP_QLVL_STRIDE + NFP_QLVL_ACT;
+	err = __nfp_rtsym_writel(cpp, abm->q_lvls, 4, 0, sym_offset, act);
+	if (err) {
+		nfp_err(cpp,
+			"RED offload setting action failed on subqueue %d\n",
+			id);
+		return err;
+	}
+
+	abm->actions[id] = act;
+	return 0;
+}
+
+int nfp_abm_ctrl_set_q_act(struct nfp_abm_link *alink, unsigned int band,
+			   unsigned int queue, enum nfp_abm_q_action act)
+{
+	unsigned int qid;
+
+	qid = band * NFP_NET_MAX_RX_RINGS + alink->queue_base + queue;
+
+	return __nfp_abm_ctrl_set_q_act(alink->abm, qid, act);
+}
+
 u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int queue)
 {
 	unsigned int band;
@@ -334,6 +369,13 @@ int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm)
 		return res;
 	abm->num_prios = res;
 
+	/* Read available actions */
+	res = nfp_pf_rtsym_read_optional(pf, NFP_ACT_MASK_SYM_NAME,
+					 BIT(NFP_ABM_ACT_MARK_DROP));
+	if (res < 0)
+		return res;
+	abm->action_mask = res;
+
 	abm->prio_map_len = nfp_abm_ctrl_prio_map_size(abm);
 	abm->dscp_mask = GENMASK(7, 8 - order_base_2(abm->num_prios));
 
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c
index ecdef63a20f3..7a4d55f794c2 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.c
@@ -459,15 +459,22 @@ static int nfp_abm_init(struct nfp_app *app)
 	for (i = 0; i < abm->num_bands * NFP_NET_MAX_RX_RINGS; i++)
 		__nfp_abm_ctrl_set_q_lvl(abm, i, NFP_ABM_LVL_INFINITY);
 
+	abm->actions = kvcalloc(abm->num_thresholds, sizeof(*abm->actions),
+				GFP_KERNEL);
+	if (!abm->actions)
+		goto err_free_thresh;
+	for (i = 0; i < abm->num_bands * NFP_NET_MAX_RX_RINGS; i++)
+		__nfp_abm_ctrl_set_q_act(abm, i, NFP_ABM_ACT_DROP);
+
 	/* We start in legacy mode, make sure advanced queuing is disabled */
 	err = nfp_abm_ctrl_qm_disable(abm);
 	if (err)
-		goto err_free_thresh;
+		goto err_free_act;
 
 	err = -ENOMEM;
 	reprs = nfp_reprs_alloc(pf->max_data_vnics);
 	if (!reprs)
-		goto err_free_thresh;
+		goto err_free_act;
 	RCU_INIT_POINTER(app->reprs[NFP_REPR_TYPE_PHYS_PORT], reprs);
 
 	reprs = nfp_reprs_alloc(pf->max_data_vnics);
@@ -479,6 +486,8 @@ static int nfp_abm_init(struct nfp_app *app)
 
 err_free_phys:
 	nfp_reprs_clean_and_free_by_type(app, NFP_REPR_TYPE_PHYS_PORT);
+err_free_act:
+	kvfree(abm->actions);
 err_free_thresh:
 	kvfree(abm->thresholds);
 err_free_thresh_umap:
@@ -497,6 +506,7 @@ static void nfp_abm_clean(struct nfp_app *app)
 	nfp_reprs_clean_and_free_by_type(app, NFP_REPR_TYPE_PF);
 	nfp_reprs_clean_and_free_by_type(app, NFP_REPR_TYPE_PHYS_PORT);
 	bitmap_free(abm->threshold_undef);
+	kvfree(abm->actions);
 	kvfree(abm->thresholds);
 	kfree(abm);
 	app->priv = NULL;
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index 9352992ab386..4dcf5881fb4b 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -24,6 +24,17 @@ struct nfp_net;
 #define NFP_ABM_PORTID_TYPE	GENMASK(23, 16)
 #define NFP_ABM_PORTID_ID	GENMASK(7, 0)
 
+/* The possible actions if thresholds are exceeded */
+enum nfp_abm_q_action {
+	/* mark if ECN capable, otherwise drop */
+	NFP_ABM_ACT_MARK_DROP		= 0,
+	/* mark if ECN capable, otherwise goto QM */
+	NFP_ABM_ACT_MARK_QUEUE		= 1,
+	NFP_ABM_ACT_DROP		= 2,
+	NFP_ABM_ACT_QUEUE		= 3,
+	NFP_ABM_ACT_NOQUEUE		= 4,
+};
+
 /**
  * struct nfp_abm - ABM NIC app structure
  * @app:	back pointer to nfp_app
@@ -31,9 +42,11 @@ struct nfp_net;
  *
  * @num_prios:	number of supported DSCP priorities
  * @num_bands:	number of supported DSCP priority bands
+ * @action_mask:	bitmask of supported actions
  *
  * @thresholds:		current threshold configuration
  * @threshold_undef:	bitmap of thresholds which have not been set
+ * @actions:		current FW action configuration
  * @num_thresholds:	number of @thresholds and bits in @threshold_undef
  *
  * @prio_map_len:	computed length of FW priority map (in bytes)
@@ -52,9 +65,11 @@ struct nfp_abm {
 
 	unsigned int num_prios;
 	unsigned int num_bands;
+	unsigned int action_mask;
 
 	u32 *thresholds;
 	unsigned long *threshold_undef;
+	u8 *actions;
 	size_t num_thresholds;
 
 	unsigned int prio_map_len;
@@ -125,6 +140,7 @@ enum nfp_qdisc_type {
  * @red:		RED Qdisc specific parameters and state
  * @red.num_bands:	Number of valid entries in the @red.band table
  * @red.band:		Per-band array of RED instances
+ * @red.band.ecn:		ECN marking is enabled (rather than drop)
  * @red.band.threshold:		ECN marking threshold
  * @red.band.stats:		current stats of the RED Qdisc
  * @red.band.prev_stats:	previously reported @red.stats
@@ -155,6 +171,7 @@ struct nfp_qdisc {
 			unsigned int num_bands;
 
 			struct {
+				bool ecn;
 				u32 threshold;
 				struct nfp_alink_stats stats;
 				struct nfp_alink_stats prev_stats;
@@ -208,6 +225,16 @@ static inline bool nfp_abm_has_prio(struct nfp_abm *abm)
 	return abm->num_bands > 1;
 }
 
+static inline bool nfp_abm_has_drop(struct nfp_abm *abm)
+{
+	return abm->action_mask & BIT(NFP_ABM_ACT_DROP);
+}
+
+static inline bool nfp_abm_has_mark(struct nfp_abm *abm)
+{
+	return abm->action_mask & BIT(NFP_ABM_ACT_MARK_DROP);
+}
+
 void nfp_abm_qdisc_offload_update(struct nfp_abm_link *alink);
 int nfp_abm_setup_root(struct net_device *netdev, struct nfp_abm_link *alink,
 		       struct tc_root_qopt_offload *opt);
@@ -225,6 +252,10 @@ int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm);
 int __nfp_abm_ctrl_set_q_lvl(struct nfp_abm *abm, unsigned int id, u32 val);
 int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band,
 			   unsigned int queue, u32 val);
+int __nfp_abm_ctrl_set_q_act(struct nfp_abm *abm, unsigned int id,
+			     enum nfp_abm_q_action act);
+int nfp_abm_ctrl_set_q_act(struct nfp_abm_link *alink, unsigned int band,
+			   unsigned int queue, enum nfp_abm_q_action act);
 int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink,
 			      unsigned int band, unsigned int queue,
 			      struct nfp_alink_stats *stats);
diff --git a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
index 8f6e43667757..2473fb5f75e5 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c
@@ -212,9 +212,15 @@ nfp_abm_offload_compile_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc,
 	if (!qdisc->offload_mark)
 		return;
 
-	for (i = 0; i < alink->abm->num_bands; i++)
+	for (i = 0; i < alink->abm->num_bands; i++) {
+		enum nfp_abm_q_action act;
+
 		nfp_abm_ctrl_set_q_lvl(alink, i, queue,
 				       qdisc->red.band[i].threshold);
+		act = qdisc->red.band[i].ecn ?
+			NFP_ABM_ACT_MARK_DROP : NFP_ABM_ACT_DROP;
+		nfp_abm_ctrl_set_q_act(alink, i, queue, act);
+	}
 }
 
 static void
@@ -535,11 +541,16 @@ nfp_abm_gred_check_params(struct nfp_abm_link *alink,
 
 		if (!band->present)
 			return false;
-		if (!band->is_ecn) {
+		if (!band->is_ecn && !nfp_abm_has_drop(abm)) {
 			nfp_warn(cpp, "GRED offload failed - drop is not supported (ECN option required) (p:%08x h:%08x vq:%d)\n",
 				 opt->parent, opt->handle, i);
 			return false;
 		}
+		if (band->is_ecn && !nfp_abm_has_mark(abm)) {
+			nfp_warn(cpp, "GRED offload failed - ECN marking not supported (p:%08x h:%08x vq:%d)\n",
+				 opt->parent, opt->handle, i);
+			return false;
+		}
 		if (band->is_harddrop) {
 			nfp_warn(cpp, "GRED offload failed - harddrop is not supported (p:%08x h:%08x vq:%d)\n",
 				 opt->parent, opt->handle, i);
@@ -577,8 +588,10 @@ nfp_abm_gred_replace(struct net_device *netdev, struct nfp_abm_link *alink,
 	qdisc->params_ok = nfp_abm_gred_check_params(alink, opt);
 	if (qdisc->params_ok) {
 		qdisc->red.num_bands = opt->set.dp_cnt;
-		for (i = 0; i < qdisc->red.num_bands; i++)
+		for (i = 0; i < qdisc->red.num_bands; i++) {
+			qdisc->red.band[i].ecn = opt->set.tab[i].is_ecn;
 			qdisc->red.band[i].threshold = opt->set.tab[i].min;
+		}
 	}
 
 	if (qdisc->use_cnt)
@@ -649,12 +662,18 @@ nfp_abm_red_check_params(struct nfp_abm_link *alink,
 			 struct tc_red_qopt_offload *opt)
 {
 	struct nfp_cpp *cpp = alink->abm->app->cpp;
+	struct nfp_abm *abm = alink->abm;
 
-	if (!opt->set.is_ecn) {
+	if (!opt->set.is_ecn && !nfp_abm_has_drop(abm)) {
 		nfp_warn(cpp, "RED offload failed - drop is not supported (ECN option required) (p:%08x h:%08x)\n",
 			 opt->parent, opt->handle);
 		return false;
 	}
+	if (opt->set.is_ecn && !nfp_abm_has_mark(abm)) {
+		nfp_warn(cpp, "RED offload failed - ECN marking not supported (p:%08x h:%08x)\n",
+			 opt->parent, opt->handle);
+		return false;
+	}
 	if (opt->set.is_harddrop) {
 		nfp_warn(cpp, "RED offload failed - harddrop is not supported (p:%08x h:%08x)\n",
 			 opt->parent, opt->handle);
@@ -703,6 +722,7 @@ nfp_abm_red_replace(struct net_device *netdev, struct nfp_abm_link *alink,
 	qdisc->params_ok = nfp_abm_red_check_params(alink, opt);
 	if (qdisc->params_ok) {
 		qdisc->red.num_bands = 1;
+		qdisc->red.band[0].ecn = opt->set.is_ecn;
 		qdisc->red.band[0].threshold = opt->set.min;
 	}
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH net-next 00/14] gred: add offload support
  2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
                   ` (13 preceding siblings ...)
  2018-11-19 23:21 ` [PATCH net-next 14/14] nfp: abm: add support for more threshold actions Jakub Kicinski
@ 2018-11-20  2:54 ` David Miller
  14 siblings, 0 replies; 16+ messages in thread
From: David Miller @ 2018-11-20  2:54 UTC (permalink / raw)
  To: jakub.kicinski; +Cc: oss-drivers, netdev

From: Jakub Kicinski <jakub.kicinski@netronome.com>
Date: Mon, 19 Nov 2018 15:21:36 -0800

> This series adds support for GRED offload in the nfp driver.  So
> far we have only supported the RED Qdisc offload, but we need a
> way to differentiate traffic types e.g. based on DSCP marking.
> 
> It may seem like PRIO+RED is a good match for this job, however,
> (a) we don't need strict priority behaviour of PRIO, and (b) PRIO
> uses the legacy way of mapping ToS fields to bands, which is quite
> awkward and limitting.
> 
> The less commonly used GRED Qdisc is a better much for the scenario,
> it allows multiple sets of RED parameters and queue lengths to be
> maintained with a single FIFO queue.  This is exactly how nfp offload
> behaves.  We use a trivial u32 classifier to assign packets to virtual
> queues.
> 
> There is also the minor advantage that GRED can't have its child
> changed, therefore limitting ways in which the configuration of SW
> path can diverge from HW offload.
> 
> Last patch of the series adds support for (G)RED in non-ECN mode,
> where packets are dropped instead of marked.

Series applied, thanks Jakub.

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2018-11-20 13:20 UTC | newest]

Thread overview: 16+ messages
2018-11-19 23:21 [PATCH net-next 00/14] gred: add offload support Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 01/14] nfp: abm: map per-band symbols Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 02/14] nfp: abm: pass band parameter to functions Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 03/14] nfp: abm: size threshold table to account for bands Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 04/14] nfp: abm: switch to extended stats for reading packet/byte counts Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 05/14] nfp: abm: add up bands for sto/non-sto stats Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 06/14] net: sched: gred: add basic Qdisc offload Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 07/14] net: sched: gred: support reporting stats from offloads Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 08/14] nfp: abm: wrap RED parameters in bands Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 09/14] nfp: abm: add GRED offload Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 10/14] net: sched: cls_u32: add res to offload information Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 11/14] nfp: abm: calculate PRIO map len and check mailbox size Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 12/14] nfp: abm: add functions to update DSCP -> virtual queue map Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 13/14] nfp: abm: add cls_u32 offload for simple band classification Jakub Kicinski
2018-11-19 23:21 ` [PATCH net-next 14/14] nfp: abm: add support for more threshold actions Jakub Kicinski
2018-11-20  2:54 ` [PATCH net-next 00/14] gred: add offload support David Miller
