* [PATCH net-next 00/18] net: mvpp2: add RSS support
@ 2018-07-12 11:54 Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 01/18] net: mvpp2: fix include guards in mvpp2_prs.h Maxime Chevallier
                   ` (18 more replies)
  0 siblings, 19 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

This series adds support for RSS on PPv2. There was already some code to
handle the RSS tables, but the driver was missing all the classification
steps required to actually use these tables.

RSS is used through the classifier, using at least 2 lookups:
 - One using the C2 engine, a TCAM engine that matches the packet based on
   some fields extracted from the header, assigns the default rx queue for
   that packet and tags it for RSS
 - One using the C3Hx engine, which computes the hash that's used to perform
   the lookup in the RSS table.

Since RSS spreads the load across CPUs, we need to make sure that packets
from the same flow are always assigned the same rx queue, to prevent
re-ordering.
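
As a rough conceptual sketch of that sequence (pseudo-code, not the actual
hardware programming model or driver code; default_rxq, tagged_for_rss and
flow_fields are made-up helpers for illustration):

	/* Per-packet classification, conceptually: */
	rxq = default_rxq(port);	/* C2: TCAM match sets the default
					 * rxq and tags the packet for RSS
					 */
	if (tagged_for_rss(packet))	/* C3Hx: hash-based table lookup */
		rxq = rss_table[hash(flow_fields(packet)) %
				MVPP22_RSS_TABLE_ENTRIES];

Packets of the same flow hash to the same entry, hence end up in the same
rx queue.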

This series therefore adds a classification step based on the Header Parser,
which separates ingress traffic into 52 flows, based on some L2, L3 and L4
parameters.

Patches 1 and 2 fix some header issues left over from the driver split

Patches 3 to 7 make sure the correct receive queue setup is used for RSS

Patches 8 to 14 deal with the way we handle the RSS tables

Patch 15 implements basic classifier configuration, by using it to assign the
default receive queue

Patch 16 implements the splitting of ingress traffic into multiple flows

Patch 17 adds RSS support, using the needed classification steps

Patch 18 adds the required ethtool ops to configure the flow hash parameters

This was tested on MacchiatoBin, giving some nice performance improvements
when using IP forwarding (going from 5Gbps to 9.6Gbps total throughput).

RSS is disabled by default.

Antoine Tenart (3):
  net: mvpp2: fix a typo in the RSS code
  net: mvpp2: RSS indirection table support
  net: mvpp2: improve the distribution of packets on CPUs when using RSS

Maxime Chevallier (14):
  net: mvpp2: fix include guards in mvpp2_prs.h
  net: mvpp2: define the number of RSS entries per table in mvpp2.h
  net: mvpp2: make sure we use single queue mode on PPv2.1
  net: mvpp2: make multi queue mode the default mode
  net: mvpp2: fix hardcoded number of rx queues
  net: mvpp2: use only one rx queue per port per CPU
  net: mvpp2: fix RSS register definitions
  net: mvpp2: use one RSS table per port
  net: mvpp2: make sure we don't spread load on disabled CPUs
  net: mvpp2: rename per-port RSS init function
  net: mvpp2: use classifier to assign default rx queue
  net: mvpp2: split ingress traffic into multiple flows
  net: mvpp2: add an RSS classification step for each flow
  net: mvpp2: allow setting RSS flow hash parameters with ethtool

Yan Markman (1):
  net: mvpp2: use RSS only when using multi-queue mode

 drivers/net/ethernet/marvell/mvpp2/mvpp2.h      |  52 +-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c  | 929 +++++++++++++++++++++++-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h  | 177 ++++-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 127 +++-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c  |  35 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h  |  18 +-
 6 files changed, 1307 insertions(+), 31 deletions(-)

-- 
2.11.0


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH net-next 01/18] net: mvpp2: fix include guards in mvpp2_prs.h
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 02/18] net: mvpp2: define the number of RSS entries per table in mvpp2.h Maxime Chevallier
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

Include guards should be put before #includes. This doesn't fix any bug,
but prevents future compilation issues when adding new files to the mvpp2
driver.

The Header Parser init function needs the platform_device definition,
and with the fixed include guards we need to add the missing include.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
index a7c8d0818432..64a64666257c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
@@ -9,14 +9,15 @@
  * License version 2. This program is licensed "as is" without any
  * warranty of any kind, whether express or implied.
  */
+#ifndef _MVPP2_PRS_H_
+#define _MVPP2_PRS_H_
+
 #include <linux/kernel.h>
 #include <linux/netdevice.h>
+#include <linux/platform_device.h>
 
 #include "mvpp2.h"
 
-#ifndef _MVPP2_PRS_H_
-#define _MVPP2_PRS_H_
-
 /* Parser constants */
 #define MVPP2_PRS_TCAM_SRAM_SIZE	256
 #define MVPP2_PRS_TCAM_WORDS		6
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 02/18] net: mvpp2: define the number of RSS entries per table in mvpp2.h
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 01/18] net: mvpp2: fix include guards in mvpp2_prs.h Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 03/18] net: mvpp2: make sure we use single queue mode on PPv2.1 Maxime Chevallier
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The size of the RSS indirection tables should be defined in mvpp2.h,
so that we can use it in all files of the PPv2 driver.

This commit moves the define in mvpp2.h, and adds the missing #include
in mvpp2_cls.h.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h     | 3 +++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 18834619bb3a..5ef5b99a1935 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -557,6 +557,9 @@
 #define MVPP2_BIT_TO_WORD(bit)		((bit) / 32)
 #define MVPP2_BIT_IN_WORD(bit)		((bit) % 32)
 
+/* RSS constants */
+#define MVPP22_RSS_TABLE_ENTRIES	32
+
 /* IPv6 max L3 address size */
 #define MVPP2_MAX_L3_ADDR_SIZE		16
 
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index 8e1d7f9ffa0b..e571238a83cc 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -13,15 +13,14 @@
 #ifndef _MVPP2_CLS_H_
 #define _MVPP2_CLS_H_
 
+#include "mvpp2.h"
+
 /* Classifier constants */
 #define MVPP2_CLS_FLOWS_TBL_SIZE	512
 #define MVPP2_CLS_FLOWS_TBL_DATA_WORDS	3
 #define MVPP2_CLS_LKP_TBL_SIZE		64
 #define MVPP2_CLS_RX_QUEUES		256
 
-/* RSS constants */
-#define MVPP22_RSS_TABLE_ENTRIES	32
-
 struct mvpp2_cls_flow_entry {
 	u32 index;
 	u32 data[MVPP2_CLS_FLOWS_TBL_DATA_WORDS];
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 03/18] net: mvpp2: make sure we use single queue mode on PPv2.1
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 01/18] net: mvpp2: fix include guards in mvpp2_prs.h Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 02/18] net: mvpp2: define the number of RSS entries per table in mvpp2.h Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 14:10   ` Sergei Shtylyov
  2018-07-12 11:54 ` [PATCH net-next 04/18] net: mvpp2: make multi queue mode the default mode Maxime Chevallier
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The PPv2 driver defines 2 "queue_modes":
 - QDIST_SINGLE_MODE, where each port shares one rx queue vector
   between all CPUs
 - QDIST_MULTI_MODE, where each port has one rx queue vector per CPU.

Multi queue mode isn't available on PPv2.1, so make sure we fall back to
single queue mode when running on this revision.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 88f3da184d76..0d2dde336ea2 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -5014,6 +5014,12 @@ static int mvpp2_probe(struct platform_device *pdev)
 			(unsigned long)of_device_get_match_data(&pdev->dev);
 	}
 
+	/* multi queue mode isn't supported on PPV2.1, fallback to single
+	 * mode
+	 */
+	if (priv->hw_version == MVPP21)
+		queue_mode = MVPP2_QDIST_SINGLE_MODE;
+
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	base = devm_ioremap_resource(&pdev->dev, res);
 	if (IS_ERR(base))
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 04/18] net: mvpp2: make multi queue mode the default mode
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (2 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 03/18] net: mvpp2: make sure we use single queue mode on PPv2.1 Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 05/18] net: mvpp2: use RSS only when using multi-queue mode Maxime Chevallier
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The multi queue mode is needed to have RSS available, and offers some
nice advantages, such as having one rx queue vector per CPU.

This mode was previously selectable through a module parameter; this
commit makes it the default value.
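
For reference, the previous behaviour stays reachable through the same
parameter (hypothetical usage, assuming the driver is built as a module):

	modprobe mvpp2 queue_mode=0	# force single queue mode

or, for a built-in driver, by passing mvpp2.queue_mode=0 on the kernel
command line.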

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 0d2dde336ea2..b1d812b12421 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -66,7 +66,7 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
 #define MVPP2_QDIST_SINGLE_MODE	0
 #define MVPP2_QDIST_MULTI_MODE	1
 
-static int queue_mode = MVPP2_QDIST_SINGLE_MODE;
+static int queue_mode = MVPP2_QDIST_MULTI_MODE;
 
 module_param(queue_mode, int, 0444);
 MODULE_PARM_DESC(queue_mode, "Set queue_mode (single=0, multi=1)");
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 05/18] net: mvpp2: use RSS only when using multi-queue mode
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (3 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 04/18] net: mvpp2: make multi queue mode the default mode Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 06/18] net: mvpp2: fix hardcoded number of rx queues Maxime Chevallier
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Yan Markman, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, mw, Maxime Chevallier

From: Yan Markman <ymarkman@marvell.com>

Since RSS only applies when we have per-cpu rx queues, it should only
be enabled when the driver is configured to make use of multi-queue
mode.

Signed-off-by: Yan Markman <ymarkman@marvell.com>
[Maxime: Commit message]
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index b1d812b12421..75aa0dece8ae 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3276,6 +3276,11 @@ static void mvpp2_irqs_deinit(struct mvpp2_port *port)
 	}
 }
 
+static bool mvpp22_rss_is_supported(void)
+{
+	return queue_mode == MVPP2_QDIST_MULTI_MODE;
+}
+
 static int mvpp2_open(struct net_device *dev)
 {
 	struct mvpp2_port *port = netdev_priv(dev);
@@ -3368,7 +3373,7 @@ static int mvpp2_open(struct net_device *dev)
 
 	mvpp2_start_dev(port);
 
-	if (priv->hw_version == MVPP22)
+	if (mvpp22_rss_is_supported())
 		mvpp22_init_rss(port);
 
 	/* Start hardware statistics gathering */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 06/18] net: mvpp2: fix hardcoded number of rx queues
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (4 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 05/18] net: mvpp2: use RSS only when using multi-queue mode Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 07/18] net: mvpp2: use only one rx queue per port per CPU Maxime Chevallier
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

There's a dedicated #define that indicates the number of rx queues per
port per CPU; this commit removes a hardcoded use of that value.

This doesn't fix any runtime bug, since the hardcoded value matches the
expected one.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 75aa0dece8ae..4bc0b893b026 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3993,8 +3993,8 @@ static int mvpp2_port_init(struct mvpp2_port *port)
 	    MVPP2_MAX_PORTS * priv->max_port_rxqs)
 		return -EINVAL;
 
-	if (port->nrxqs % 4 || (port->nrxqs > priv->max_port_rxqs) ||
-	    (port->ntxqs > MVPP2_MAX_TXQ))
+	if (port->nrxqs % MVPP2_DEFAULT_RXQ ||
+	    port->nrxqs > priv->max_port_rxqs || port->ntxqs > MVPP2_MAX_TXQ)
 		return -EINVAL;
 
 	/* Disable port */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 07/18] net: mvpp2: use only one rx queue per port per CPU
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (5 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 06/18] net: mvpp2: fix hardcoded number of rx queues Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 08/18] net: mvpp2: fix a typo in the RSS code Maxime Chevallier
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The number of receive queues per port is:
 - MVPP2_DEFAULT_RXQ if in single queue mode
 - MVPP2_DEFAULT_RXQ * num_possible_cpus if in multi queue mode

with MVPP2_DEFAULT_RXQ = 4.

However, we don't use the extra rx queues at the moment: we really only
need one per port per CPU, until some more advanced classification rules
are implemented.
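
As a sketch of the resulting queue count (mvpp2_get_nrxqs() is a
hypothetical helper, the driver computes this inline):

	/* Hypothetical helper: number of rx queues used by one port */
	static unsigned int mvpp2_get_nrxqs(void)
	{
		if (queue_mode == MVPP2_QDIST_SINGLE_MODE)
			return MVPP2_DEFAULT_RXQ;
		return MVPP2_DEFAULT_RXQ * num_possible_cpus();
	}

With MVPP2_DEFAULT_RXQ set to 1 by this patch, multi queue mode ends up
with exactly one rx queue per port per CPU.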

Suggested-by: Stefan Chulski <stefanc@marvell.com>
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 5ef5b99a1935..dee5231b0073 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -500,7 +500,7 @@
 #define MVPP2_MAX_SKB_DESCS		(MVPP2_MAX_TSO_SEGS * 2 + MAX_SKB_FRAGS)
 
 /* Dfault number of RXQs in use */
-#define MVPP2_DEFAULT_RXQ		4
+#define MVPP2_DEFAULT_RXQ		1
 
 /* Max number of Rx descriptors */
 #define MVPP2_MAX_RXD_MAX		1024
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 08/18] net: mvpp2: fix a typo in the RSS code
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (6 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 07/18] net: mvpp2: use only one rx queue per port per CPU Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 09/18] net: mvpp2: fix RSS register definitions Maxime Chevallier
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Antoine Tenart, netdev, linux-kernel, thomas.petazzoni,
	gregory.clement, miquel.raynal, nadavh, stefanc, ymarkman, mw,
	Maxime Chevallier

From: Antoine Tenart <antoine.tenart@bootlin.com>

Cosmetic patch fixing a typo in one of the RSS comments.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 8581d5b17dd5..4d187f30725c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -128,7 +128,7 @@ void mvpp22_init_rss(struct mvpp2_port *port)
 	}
 
 	/* Configure the first table to evenly distribute the packets across
-	 * real Rx Queues. The table entries map a hash to an port Rx Queue.
+	 * real Rx Queues. The table entries map a hash to a port Rx Queue.
 	 */
 	for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++) {
 		u32 sel = MVPP22_RSS_INDEX_TABLE(0) |
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 09/18] net: mvpp2: fix RSS register definitions
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (7 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 08/18] net: mvpp2: fix a typo in the RSS code Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 10/18] net: mvpp2: use one RSS table per port Maxime Chevallier
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

There is no RSS_TABLE register in the PPv2 controller. The register 0x1510
that was specified is actually named "RSS_HASH_SEL", but isn't used by
this driver at all.

Based on how this register was used, it should have been the
RXQ2RSS_TABLE register, which allows selecting the RSS table that will
be used for the incoming packet.

The RSS_TABLE_POINTER is actually a field of this RXQ2RSS_TABLE
register.

Since RSS tables are actually not used by the driver for now, this
commit does not fix a runtime bug.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h     | 4 ++--
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index dee5231b0073..e6b182d8be5b 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -73,9 +73,9 @@
 #define     MVPP22_RSS_INDEX_TABLE_ENTRY(idx)	(idx)
 #define     MVPP22_RSS_INDEX_TABLE(idx)		((idx) << 8)
 #define     MVPP22_RSS_INDEX_QUEUE(idx)		((idx) << 16)
-#define MVPP22_RSS_TABLE_ENTRY			0x1508
-#define MVPP22_RSS_TABLE			0x1510
+#define MVPP22_RXQ2RSS_TABLE			0x1504
 #define     MVPP22_RSS_TABLE_POINTER(p)		(p)
+#define MVPP22_RSS_TABLE_ENTRY			0x1508
 #define MVPP22_RSS_WIDTH			0x150c
 
 /* Classifier Registers */
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 4d187f30725c..fe4bcaf983b2 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -123,7 +123,7 @@ void mvpp22_init_rss(struct mvpp2_port *port)
 	 */
 	for (i = 0; i < MVPP2_CLS_RX_QUEUES; i++) {
 		mvpp2_write(priv, MVPP22_RSS_INDEX, MVPP22_RSS_INDEX_QUEUE(i));
-		mvpp2_write(priv, MVPP22_RSS_TABLE,
+		mvpp2_write(priv, MVPP22_RXQ2RSS_TABLE,
 			    MVPP22_RSS_TABLE_POINTER(0));
 	}
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 10/18] net: mvpp2: use one RSS table per port
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (8 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 09/18] net: mvpp2: fix RSS register definitions Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 11/18] net: mvpp2: RSS indirection table support Maxime Chevallier
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The PPv2 controller has 8 RSS tables, of 32 entries each. A lookup in the
RXQ2RSS_TABLE is performed for each incoming packet, and the RSS table
to be used is chosen according to the default rx queue that would be
used for the packet.

This default rx queue is set in the Lookup_id table (also called
Decoding table), and is equal to port->first_rxq.
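
Conceptually, the per-packet selection can be sketched as (pseudo-code,
not driver code):

	/* What the hardware does once this patch is applied */
	rss_table_id = RXQ2RSS_TABLE[default_rxq];	/* == port->id */
	rxq = rss_table[rss_table_id][hash % MVPP22_RSS_TABLE_ENTRIES];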

Since the classifier itself isn't active at any time for the moment,
this doesn't have a direct effect; the default rx queue is currently
the one into which all packets end up.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index fe4bcaf983b2..c8cf3db85ffe 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -115,23 +115,22 @@ void mvpp22_init_rss(struct mvpp2_port *port)
 	/* Set the table width: replace the whole classifier Rx queue number
 	 * with the ones configured in RSS table entries.
 	 */
-	mvpp2_write(priv, MVPP22_RSS_INDEX, MVPP22_RSS_INDEX_TABLE(0));
+	mvpp2_write(priv, MVPP22_RSS_INDEX, MVPP22_RSS_INDEX_TABLE(port->id));
 	mvpp2_write(priv, MVPP22_RSS_WIDTH, 8);
 
-	/* Loop through the classifier Rx Queues and map them to a RSS table.
-	 * Map them all to the first table (0) by default.
+	/* The default RxQ is used as a key to select the RSS table to use.
+	 * We use one RSS table per port.
 	 */
-	for (i = 0; i < MVPP2_CLS_RX_QUEUES; i++) {
-		mvpp2_write(priv, MVPP22_RSS_INDEX, MVPP22_RSS_INDEX_QUEUE(i));
-		mvpp2_write(priv, MVPP22_RXQ2RSS_TABLE,
-			    MVPP22_RSS_TABLE_POINTER(0));
-	}
+	mvpp2_write(priv, MVPP22_RSS_INDEX,
+		    MVPP22_RSS_INDEX_QUEUE(port->first_rxq));
+	mvpp2_write(priv, MVPP22_RXQ2RSS_TABLE,
+		    MVPP22_RSS_TABLE_POINTER(port->id));
 
 	/* Configure the first table to evenly distribute the packets across
 	 * real Rx Queues. The table entries map a hash to a port Rx Queue.
 	 */
 	for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++) {
-		u32 sel = MVPP22_RSS_INDEX_TABLE(0) |
+		u32 sel = MVPP22_RSS_INDEX_TABLE(port->id) |
 			  MVPP22_RSS_INDEX_TABLE_ENTRY(i);
 		mvpp2_write(priv, MVPP22_RSS_INDEX, sel);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 11/18] net: mvpp2: RSS indirection table support
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (9 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 10/18] net: mvpp2: use one RSS table per port Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 12/18] net: mvpp2: improve the distribution of packets on CPUs when using RSS Maxime Chevallier
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Antoine Tenart, netdev, linux-kernel, thomas.petazzoni,
	gregory.clement, miquel.raynal, nadavh, stefanc, ymarkman, mw,
	Maxime Chevallier

From: Antoine Tenart <antoine.tenart@bootlin.com>

This patch adds the RSS indirection table support, allowing the use of
the ethtool -x and -X options to dump and set this table.
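
Example usage (assuming eth2 is a PPv2 port):

	ethtool -x eth2			# dump the RSS indirection table
	ethtool -X eth2 equal 4		# spread hash results over 4 queues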

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
[Maxime: Small warning fixes, use one table per port]
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h      |  3 ++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c  | 24 ++++++---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h  |  2 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 70 +++++++++++++++++++++++++
 4 files changed, 92 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index e6b182d8be5b..2afbbf5e71e2 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -801,6 +801,9 @@ struct mvpp2_port {
 	bool has_tx_irqs;
 
 	u32 tx_time_coal;
+
+	/* RSS indirection table */
+	u32 indir[MVPP22_RSS_TABLE_ENTRIES];
 };
 
 /* The mvpp2_tx_desc and mvpp2_rx_desc structures describe the
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index c8cf3db85ffe..c80a1a549224 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -107,6 +107,20 @@ void mvpp2_cls_oversize_rxq_set(struct mvpp2_port *port)
 	mvpp2_write(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG, val);
 }
 
+void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table)
+{
+	struct mvpp2 *priv = port->priv;
+	int i;
+
+	for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++) {
+		u32 sel = MVPP22_RSS_INDEX_TABLE(table) |
+			  MVPP22_RSS_INDEX_TABLE_ENTRY(i);
+		mvpp2_write(priv, MVPP22_RSS_INDEX, sel);
+
+		mvpp2_write(priv, MVPP22_RSS_TABLE_ENTRY, port->indir[i]);
+	}
+}
+
 void mvpp22_init_rss(struct mvpp2_port *port)
 {
 	struct mvpp2 *priv = port->priv;
@@ -129,12 +143,8 @@ void mvpp22_init_rss(struct mvpp2_port *port)
 	/* Configure the first table to evenly distribute the packets across
 	 * real Rx Queues. The table entries map a hash to a port Rx Queue.
 	 */
-	for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++) {
-		u32 sel = MVPP22_RSS_INDEX_TABLE(port->id) |
-			  MVPP22_RSS_INDEX_TABLE_ENTRY(i);
-		mvpp2_write(priv, MVPP22_RSS_INDEX, sel);
-
-		mvpp2_write(priv, MVPP22_RSS_TABLE_ENTRY, i % port->nrxqs);
-	}
+	for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++)
+		port->indir[i] = ethtool_rxfh_indir_default(i, port->nrxqs);
 
+	mvpp22_rss_fill_table(port, port->id);
 }
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index e571238a83cc..4c7be7469a92 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -32,6 +32,8 @@ struct mvpp2_cls_lookup_entry {
 	u32 data;
 };
 
+void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table);
+
 void mvpp22_init_rss(struct mvpp2_port *port);
 
 void mvpp2_cls_init(struct mvpp2 *priv);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 4bc0b893b026..f0b5fb78cd67 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3821,6 +3821,71 @@ static int mvpp2_ethtool_set_link_ksettings(struct net_device *dev,
 	return phylink_ethtool_ksettings_set(port->phylink, cmd);
 }
 
+static int mvpp2_ethtool_get_rxnfc(struct net_device *dev,
+				   struct ethtool_rxnfc *info, u32 *rules)
+{
+	struct mvpp2_port *port = netdev_priv(dev);
+
+	if (!mvpp22_rss_is_supported())
+		return -EOPNOTSUPP;
+
+	switch (info->cmd) {
+	case ETHTOOL_GRXRINGS:
+		info->data = port->nrxqs;
+		break;
+	default:
+		return -ENOTSUPP;
+	}
+
+	return 0;
+}
+
+static u32 mvpp2_ethtool_get_rxfh_indir_size(struct net_device *dev)
+{
+	return mvpp22_rss_is_supported() ? MVPP22_RSS_TABLE_ENTRIES : 0;
+}
+
+static int mvpp2_ethtool_get_rxfh(struct net_device *dev, u32 *indir, u8 *key,
+				  u8 *hfunc)
+{
+	struct mvpp2_port *port = netdev_priv(dev);
+
+	if (!mvpp22_rss_is_supported())
+		return -EOPNOTSUPP;
+
+	if (indir)
+		memcpy(indir, port->indir,
+		       ARRAY_SIZE(port->indir) * sizeof(port->indir[0]));
+
+	if (hfunc)
+		*hfunc = ETH_RSS_HASH_CRC32;
+
+	return 0;
+}
+
+static int mvpp2_ethtool_set_rxfh(struct net_device *dev, const u32 *indir,
+				  const u8 *key, const u8 hfunc)
+{
+	struct mvpp2_port *port = netdev_priv(dev);
+
+	if (!mvpp22_rss_is_supported())
+		return -EOPNOTSUPP;
+
+	if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_CRC32)
+		return -EOPNOTSUPP;
+
+	if (key)
+		return -EOPNOTSUPP;
+
+	if (indir) {
+		memcpy(port->indir, indir,
+		       ARRAY_SIZE(port->indir) * sizeof(port->indir[0]));
+		mvpp22_rss_fill_table(port, port->id);
+	}
+
+	return 0;
+}
+
 /* Device ops */
 
 static const struct net_device_ops mvpp2_netdev_ops = {
@@ -3852,6 +3917,11 @@ static const struct ethtool_ops mvpp2_eth_tool_ops = {
 	.set_pauseparam		= mvpp2_ethtool_set_pause_param,
 	.get_link_ksettings	= mvpp2_ethtool_get_link_ksettings,
 	.set_link_ksettings	= mvpp2_ethtool_set_link_ksettings,
+	.get_rxnfc		= mvpp2_ethtool_get_rxnfc,
+	.get_rxfh_indir_size	= mvpp2_ethtool_get_rxfh_indir_size,
+	.get_rxfh		= mvpp2_ethtool_get_rxfh,
+	.set_rxfh		= mvpp2_ethtool_set_rxfh,
+
 };
 
 /* Used for PPv2.1, or PPv2.2 with the old Device Tree binding that
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 12/18] net: mvpp2: improve the distribution of packets on CPUs when using RSS
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (10 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 11/18] net: mvpp2: RSS indirection table support Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 13/18] net: mvpp2: make sure we don't spread load on disabled CPUs Maxime Chevallier
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Antoine Tenart, netdev, linux-kernel, thomas.petazzoni,
	gregory.clement, miquel.raynal, nadavh, stefanc, ymarkman, mw,
	Maxime Chevallier

From: Antoine Tenart <antoine.tenart@bootlin.com>

This patch adds an extra indirection when setting the indirection table
into the RSS hardware table, to improve the distribution of packets
across CPUs. For example, if 2 queues are used on a multi-core system,
this new indirection will choose two queues on two different CPUs
instead of the first two queues, which are on the same first CPU.
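
As a worked example with hypothetical numbers, on a 4-CPU system with 8 rx
queues per port (queues 0-1 on CPU0, 2-3 on CPU1, and so on),
mvpp22_rxfh_indir() below remaps the default indirection entries as:

	/*
	 * indir entry : 0  1  2  3  4  5  6  7
	 * rx queue    : 0  2  4  6  1  3  5  7
	 */

so the first entries land on four different CPUs instead of filling
CPU0's queues first.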

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index c80a1a549224..7dafc8c425b8 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -107,6 +107,19 @@ void mvpp2_cls_oversize_rxq_set(struct mvpp2_port *port)
 	mvpp2_write(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG, val);
 }
 
+static inline u32 mvpp22_rxfh_indir(struct mvpp2_port *port, u32 rxq)
+{
+	int nrxqs, cpus = num_possible_cpus();
+
+	/* Number of RXQs per CPU */
+	nrxqs = port->nrxqs / cpus;
+
+	/* Indirection to better distribute the paquets on the CPUs when
+	 * configuring the RSS queues.
+	 */
+	return port->first_rxq + ((rxq * nrxqs + rxq / cpus) % port->nrxqs);
+}
+
 void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table)
 {
 	struct mvpp2 *priv = port->priv;
@@ -117,7 +130,8 @@ void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table)
 			  MVPP22_RSS_INDEX_TABLE_ENTRY(i);
 		mvpp2_write(priv, MVPP22_RSS_INDEX, sel);
 
-		mvpp2_write(priv, MVPP22_RSS_TABLE_ENTRY, port->indir[i]);
+		mvpp2_write(priv, MVPP22_RSS_TABLE_ENTRY,
+			    mvpp22_rxfh_indir(port, port->indir[i]));
 	}
 }
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 13/18] net: mvpp2: make sure we don't spread load on disabled CPUs
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (11 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 12/18] net: mvpp2: improve the distribution of packets on CPUs when using RSS Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 14/18] net: mvpp2: rename per-port RSS init function Maxime Chevallier
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

When filling the RSS table, we have to make sure that the rx queue is
attached to an online CPU.

This patch does not implement full cpu_hotplug support, but rather makes
sure that we don't break networking on systems booted with the maxcpus
parameter.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 7dafc8c425b8..f2bcfa8f1dfd 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -109,11 +109,17 @@ void mvpp2_cls_oversize_rxq_set(struct mvpp2_port *port)
 
 static inline u32 mvpp22_rxfh_indir(struct mvpp2_port *port, u32 rxq)
 {
-	int nrxqs, cpus = num_possible_cpus();
+	int nrxqs, cpu, cpus = num_possible_cpus();
 
 	/* Number of RXQs per CPU */
 	nrxqs = port->nrxqs / cpus;
 
+	/* CPU that will handle this rx queue */
+	cpu = rxq / nrxqs;
+
+	if (!cpu_online(cpu))
+		return port->first_rxq;
+
 	/* Indirection to better distribute the paquets on the CPUs when
 	 * configuring the RSS queues.
 	 */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 14/18] net: mvpp2: rename per-port RSS init function
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (12 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 13/18] net: mvpp2: make sure we don't spread load on disabled CPUs Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 15/18] net: mvpp2: use classifier to assign default rx queue Maxime Chevallier
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The mvpp22_init_rss function configures the RSS parameters for each port,
so rename it accordingly. Since this function relies on the classifier
configuration, move its call right after the classifier config.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c  | 2 +-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h  | 2 +-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index f2bcfa8f1dfd..66160b9b8a9a 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -141,7 +141,7 @@ void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table)
 	}
 }
 
-void mvpp22_init_rss(struct mvpp2_port *port)
+void mvpp22_rss_port_init(struct mvpp2_port *port)
 {
 	struct mvpp2 *priv = port->priv;
 	int i;
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index 4c7be7469a92..38a8cf1172df 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -34,7 +34,7 @@ struct mvpp2_cls_lookup_entry {
 
 void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table);
 
-void mvpp22_init_rss(struct mvpp2_port *port);
+void mvpp22_rss_port_init(struct mvpp2_port *port);
 
 void mvpp2_cls_init(struct mvpp2 *priv);
 
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index f0b5fb78cd67..93e37d20899d 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3373,9 +3373,6 @@ static int mvpp2_open(struct net_device *dev)
 
 	mvpp2_start_dev(port);
 
-	if (mvpp22_rss_is_supported())
-		mvpp22_init_rss(port);
-
 	/* Start hardware statistics gathering */
 	queue_delayed_work(priv->stats_queue, &port->stats_work,
 			   MVPP2_MIB_COUNTERS_STATS_DELAY);
@@ -4153,6 +4150,9 @@ static int mvpp2_port_init(struct mvpp2_port *port)
 	mvpp2_cls_oversize_rxq_set(port);
 	mvpp2_cls_port_config(port);
 
+	if (mvpp22_rss_is_supported())
+		mvpp22_rss_port_init(port);
+
 	/* Provide an initial Rx packet size */
 	port->pkt_size = MVPP2_RX_PKT_SIZE(port->dev->mtu);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 15/18] net: mvpp2: use classifier to assign default rx queue
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (13 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 14/18] net: mvpp2: rename per-port RSS init function Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 16/18] net: mvpp2: split ingress traffic into multiple flows Maxime Chevallier
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The PPv2 controller has a classifier that can perform multiple lookup
operations for each packet, using different engines.

One of these engines is the C2 engine, which performs TCAM based lookups
on data extracted from the packet header. When a packet matches an
entry, the engine sets various attributes, used to perform
classification operations.

One of these attributes is the rx queue to which the packet should be sent.
The current code uses the lookup_id table (also called decoding table)
to assign the rx queue. However, this only works if we use one entry per
port in the decoding table, which won't be the case once we add RSS
lookups.

This patch uses the C2 engine to assign the rx queue to each packet.

The C2 engine is used through the flow table, which dictates what
classification operations are done for a given flow.

Right now, we have one flow per port, which contains every ingress
packet for this port.
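
As a small illustration of how the default queue is encoded in the C2
attributes (mirroring the qh/ql computation in the patch below):

	/* An 8-bit rx queue number is split into Queue High / Queue Low */
	u8 qh = (rxq >> 3) & 0x1f;	/* bits 7..3 */
	u8 ql = rxq & 0x7;		/* bits 2..0 */

so e.g. first_rxq = 12 gives qh = 1 and ql = 4.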

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h     |  40 +++++++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 150 +++++++++++++++++++++++++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h |  56 +++++++++
 3 files changed, 246 insertions(+)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 2afbbf5e71e2..749d9720bf5e 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -87,11 +87,28 @@
 #define     MVPP2_CLS_LKP_INDEX_WAY_OFFS	6
 #define MVPP2_CLS_LKP_TBL_REG			0x1818
 #define     MVPP2_CLS_LKP_TBL_RXQ_MASK		0xff
+#define     MVPP2_CLS_LKP_FLOW_PTR(flow)	((flow) << 16)
 #define     MVPP2_CLS_LKP_TBL_LOOKUP_EN_MASK	BIT(25)
 #define MVPP2_CLS_FLOW_INDEX_REG		0x1820
 #define MVPP2_CLS_FLOW_TBL0_REG			0x1824
+#define     MVPP2_CLS_FLOW_TBL0_LAST		BIT(0)
+#define     MVPP2_CLS_FLOW_TBL0_ENG_MASK	0x7
+#define     MVPP2_CLS_FLOW_TBL0_OFFS		1
+#define     MVPP2_CLS_FLOW_TBL0_ENG(x)		((x) << 1)
+#define     MVPP2_CLS_FLOW_TBL0_PORT_ID_MASK	0xff
+#define     MVPP2_CLS_FLOW_TBL0_PORT_ID(port)	((port) << 4)
+#define     MVPP2_CLS_FLOW_TBL0_PORT_ID_SEL	BIT(23)
 #define MVPP2_CLS_FLOW_TBL1_REG			0x1828
+#define     MVPP2_CLS_FLOW_TBL1_N_FIELDS_MASK	0x7
+#define     MVPP2_CLS_FLOW_TBL1_N_FIELDS(x)	(x)
+#define     MVPP2_CLS_FLOW_TBL1_PRIO_MASK	0x3f
+#define     MVPP2_CLS_FLOW_TBL1_PRIO(x)		((x) << 9)
+#define     MVPP2_CLS_FLOW_TBL1_SEQ_MASK	0x7
+#define     MVPP2_CLS_FLOW_TBL1_SEQ(x)		((x) << 15)
 #define MVPP2_CLS_FLOW_TBL2_REG			0x182c
+#define     MVPP2_CLS_FLOW_TBL2_FLD_MASK	0x3f
+#define     MVPP2_CLS_FLOW_TBL2_FLD_OFFS(n)	((n) * 6)
+#define     MVPP2_CLS_FLOW_TBL2_FLD(n, x)	((x) << ((n) * 6))
 #define MVPP2_CLS_OVERSIZE_RXQ_LOW_REG(port)	(0x1980 + ((port) * 4))
 #define     MVPP2_CLS_OVERSIZE_RXQ_LOW_BITS	3
 #define     MVPP2_CLS_OVERSIZE_RXQ_LOW_MASK	0x7
@@ -99,6 +116,29 @@
 #define MVPP2_CLS_SWFWD_PCTRL_REG		0x19d0
 #define     MVPP2_CLS_SWFWD_PCTRL_MASK(port)	(1 << (port))
 
+/* Classifier C2 engine Registers */
+#define MVPP22_CLS_C2_TCAM_IDX			0x1b00
+#define MVPP22_CLS_C2_TCAM_DATA0		0x1b10
+#define MVPP22_CLS_C2_TCAM_DATA1		0x1b14
+#define MVPP22_CLS_C2_TCAM_DATA2		0x1b18
+#define MVPP22_CLS_C2_TCAM_DATA3		0x1b1c
+#define MVPP22_CLS_C2_TCAM_DATA4		0x1b20
+#define     MVPP22_CLS_C2_PORT_ID(port)		((port) << 8)
+#define MVPP22_CLS_C2_ACT			0x1b60
+#define     MVPP22_CLS_C2_ACT_RSS_EN(act)	(((act) & 0x3) << 19)
+#define     MVPP22_CLS_C2_ACT_FWD(act)		(((act) & 0x7) << 13)
+#define     MVPP22_CLS_C2_ACT_QHIGH(act)	(((act) & 0x3) << 11)
+#define     MVPP22_CLS_C2_ACT_QLOW(act)		(((act) & 0x3) << 9)
+#define MVPP22_CLS_C2_ATTR0			0x1b64
+#define     MVPP22_CLS_C2_ATTR0_QHIGH(qh)	(((qh) & 0x1f) << 24)
+#define     MVPP22_CLS_C2_ATTR0_QHIGH_MASK	0x1f
+#define     MVPP22_CLS_C2_ATTR0_QLOW(ql)	(((ql) & 0x7) << 21)
+#define     MVPP22_CLS_C2_ATTR0_QLOW_MASK	0x7
+#define MVPP22_CLS_C2_ATTR1			0x1b68
+#define MVPP22_CLS_C2_ATTR2			0x1b6c
+#define     MVPP22_CLS_C2_ATTR2_RSS_EN		BIT(30)
+#define MVPP22_CLS_C2_ATTR3			0x1b70
+
 /* Descriptor Manager Top Registers */
 #define MVPP2_RXQ_NUM_REG			0x2040
 #define MVPP2_RXQ_DESC_ADDR_REG			0x2044
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 66160b9b8a9a..7cee117efb4f 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -12,6 +12,7 @@
 
 #include "mvpp2.h"
 #include "mvpp2_cls.h"
+#include "mvpp2_prs.h"
 
 /* Update classification flow table registers */
 static void mvpp2_cls_flow_write(struct mvpp2 *priv,
@@ -34,6 +35,151 @@ static void mvpp2_cls_lookup_write(struct mvpp2 *priv,
 	mvpp2_write(priv, MVPP2_CLS_LKP_TBL_REG, le->data);
 }
 
+static void mvpp2_cls_flow_eng_set(struct mvpp2_cls_flow_entry *fe,
+				   int engine)
+{
+	fe->data[0] &= ~MVPP2_CLS_FLOW_TBL0_ENG(MVPP2_CLS_FLOW_TBL0_ENG_MASK);
+	fe->data[0] |= MVPP2_CLS_FLOW_TBL0_ENG(engine);
+}
+
+static void mvpp2_cls_flow_port_id_sel(struct mvpp2_cls_flow_entry *fe,
+				       bool from_packet)
+{
+	if (from_packet)
+		fe->data[0] |= MVPP2_CLS_FLOW_TBL0_PORT_ID_SEL;
+	else
+		fe->data[0] &= ~MVPP2_CLS_FLOW_TBL0_PORT_ID_SEL;
+}
+
+static void mvpp2_cls_flow_seq_set(struct mvpp2_cls_flow_entry *fe, u32 seq)
+{
+	fe->data[1] &= ~MVPP2_CLS_FLOW_TBL1_SEQ(MVPP2_CLS_FLOW_TBL1_SEQ_MASK);
+	fe->data[1] |= MVPP2_CLS_FLOW_TBL1_SEQ(seq);
+}
+
+static void mvpp2_cls_flow_last_set(struct mvpp2_cls_flow_entry *fe,
+				    bool is_last)
+{
+	fe->data[0] &= ~MVPP2_CLS_FLOW_TBL0_LAST;
+	fe->data[0] |= !!is_last;
+}
+
+static void mvpp2_cls_flow_pri_set(struct mvpp2_cls_flow_entry *fe, int prio)
+{
+	fe->data[1] &= ~MVPP2_CLS_FLOW_TBL1_PRIO(MVPP2_CLS_FLOW_TBL1_PRIO_MASK);
+	fe->data[1] |= MVPP2_CLS_FLOW_TBL1_PRIO(prio);
+}
+
+static void mvpp2_cls_flow_port_add(struct mvpp2_cls_flow_entry *fe,
+				    u32 port)
+{
+	fe->data[0] |= MVPP2_CLS_FLOW_TBL0_PORT_ID(port);
+}
+
+/* Initialize the Lookup Id table entry for the given flow */
+static void mvpp2_cls_flow_lkp_init(struct mvpp2 *priv, int port_id)
+{
+	struct mvpp2_cls_lookup_entry le;
+
+	le.way = 0;
+	le.lkpid = port_id;
+
+	/* The default RxQ for this port is set in the C2 lookup */
+	le.data = 0;
+
+	le.data |= MVPP2_CLS_LKP_FLOW_PTR(port_id);
+	le.data |= MVPP2_CLS_LKP_TBL_LOOKUP_EN_MASK;
+
+	mvpp2_cls_lookup_write(priv, &le);
+}
+
+/* Initialize the flow table entries for the given flow */
+static void mvpp2_cls_flow_init(struct mvpp2 *priv, int port_id)
+{
+	struct mvpp2_cls_flow_entry fe;
+	int i;
+
+	/* C2 lookup */
+	memset(&fe, 0, sizeof(fe));
+	fe.index = port_id;
+
+	mvpp2_cls_flow_eng_set(&fe, MVPP22_CLS_ENGINE_C2);
+	mvpp2_cls_flow_port_id_sel(&fe, true);
+	mvpp2_cls_flow_last_set(&fe, 1);
+	mvpp2_cls_flow_pri_set(&fe, 0);
+	mvpp2_cls_flow_seq_set(&fe, MVPP2_CLS_FLOW_SEQ_LAST);
+
+	/* Add all ports */
+	for (i = 0; i < MVPP2_MAX_PORTS; i++)
+		mvpp2_cls_flow_port_add(&fe, BIT(i));
+
+	mvpp2_cls_flow_write(priv, &fe);
+}
+
+static void mvpp2_cls_port_init_flows(struct mvpp2 *priv)
+{
+	int i;
+
+	for (i = 0; i < MVPP2_MAX_PORTS; i++) {
+		mvpp2_cls_flow_lkp_init(priv, i);
+		mvpp2_cls_flow_init(priv, i);
+	}
+}
+
+static void mvpp2_cls_c2_write(struct mvpp2 *priv,
+			       struct mvpp2_cls_c2_entry *c2)
+{
+	mvpp2_write(priv, MVPP22_CLS_C2_TCAM_IDX, c2->index);
+
+	/* Write TCAM */
+	mvpp2_write(priv, MVPP22_CLS_C2_TCAM_DATA0, c2->tcam[0]);
+	mvpp2_write(priv, MVPP22_CLS_C2_TCAM_DATA1, c2->tcam[1]);
+	mvpp2_write(priv, MVPP22_CLS_C2_TCAM_DATA2, c2->tcam[2]);
+	mvpp2_write(priv, MVPP22_CLS_C2_TCAM_DATA3, c2->tcam[3]);
+	mvpp2_write(priv, MVPP22_CLS_C2_TCAM_DATA4, c2->tcam[4]);
+
+	mvpp2_write(priv, MVPP22_CLS_C2_ACT, c2->act);
+
+	mvpp2_write(priv, MVPP22_CLS_C2_ATTR0, c2->attr[0]);
+	mvpp2_write(priv, MVPP22_CLS_C2_ATTR1, c2->attr[1]);
+	mvpp2_write(priv, MVPP22_CLS_C2_ATTR2, c2->attr[2]);
+	mvpp2_write(priv, MVPP22_CLS_C2_ATTR3, c2->attr[3]);
+}
+
+static void mvpp2_port_c2_cls_init(struct mvpp2_port *port)
+{
+	struct mvpp2_cls_c2_entry c2;
+	u8 qh, ql, pmap;
+
+	memset(&c2, 0, sizeof(c2));
+
+	c2.index = MVPP22_CLS_C2_RSS_ENTRY(port->id);
+
+	pmap = BIT(port->id);
+	c2.tcam[4] = MVPP22_CLS_C2_PORT_ID(pmap);
+	c2.tcam[4] |= MVPP22_CLS_C2_TCAM_EN(MVPP22_CLS_C2_PORT_ID(pmap));
+
+	/* Update RSS status after matching this entry */
+	c2.act = MVPP22_CLS_C2_ACT_RSS_EN(MVPP22_C2_UPD_LOCK);
+
+	/* Mark packet as "forwarded to software", needed for RSS */
+	c2.act |= MVPP22_CLS_C2_ACT_FWD(MVPP22_C2_FWD_SW_LOCK);
+
+	/* Configure the default rx queue : Update Queue Low and Queue High, but
+	 * don't lock, since the rx queue selection might be overridden by RSS
+	 */
+	c2.act |= MVPP22_CLS_C2_ACT_QHIGH(MVPP22_C2_UPD) |
+		   MVPP22_CLS_C2_ACT_QLOW(MVPP22_C2_UPD);
+
+	qh = (port->first_rxq >> 3) & MVPP22_CLS_C2_ATTR0_QHIGH_MASK;
+	ql = port->first_rxq & MVPP22_CLS_C2_ATTR0_QLOW_MASK;
+
+	c2.attr[0] = MVPP22_CLS_C2_ATTR0_QHIGH(qh) |
+		      MVPP22_CLS_C2_ATTR0_QLOW(ql);
+
+	mvpp2_cls_c2_write(port->priv, &c2);
+}
+
 /* Classifier default initialization */
 void mvpp2_cls_init(struct mvpp2 *priv)
 {
@@ -61,6 +207,8 @@ void mvpp2_cls_init(struct mvpp2 *priv)
 		le.way = 1;
 		mvpp2_cls_lookup_write(priv, &le);
 	}
+
+	mvpp2_cls_port_init_flows(priv);
 }
 
 void mvpp2_cls_port_config(struct mvpp2_port *port)
@@ -89,6 +237,8 @@ void mvpp2_cls_port_config(struct mvpp2_port *port)
 
 	/* Update lookup ID table entry */
 	mvpp2_cls_lookup_write(port->priv, &le);
+
+	mvpp2_port_c2_cls_init(port);
 }
 
 /* Set CPU queue number for oversize packets */
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index 38a8cf1172df..ee4933ca7ed8 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -14,6 +14,7 @@
 #define _MVPP2_CLS_H_
 
 #include "mvpp2.h"
+#include "mvpp2_prs.h"
 
 /* Classifier constants */
 #define MVPP2_CLS_FLOWS_TBL_SIZE	512
@@ -21,6 +22,61 @@
 #define MVPP2_CLS_LKP_TBL_SIZE		64
 #define MVPP2_CLS_RX_QUEUES		256
 
+/* Classifier flow constants */
+enum mvpp2_cls_engine {
+	MVPP22_CLS_ENGINE_C2 = 1,
+	MVPP22_CLS_ENGINE_C3A,
+	MVPP22_CLS_ENGINE_C3B,
+	MVPP22_CLS_ENGINE_C4,
+	MVPP22_CLS_ENGINE_C3HA = 6,
+	MVPP22_CLS_ENGINE_C3HB = 7,
+};
+
+enum mvpp2_cls_flow_seq {
+	MVPP2_CLS_FLOW_SEQ_NORMAL = 0,
+	MVPP2_CLS_FLOW_SEQ_FIRST1,
+	MVPP2_CLS_FLOW_SEQ_FIRST2,
+	MVPP2_CLS_FLOW_SEQ_LAST,
+	MVPP2_CLS_FLOW_SEQ_MIDDLE
+};
+
+/* Classifier C2 engine constants */
+#define MVPP22_CLS_C2_TCAM_EN(data)		((data) << 16)
+
+enum mvpp22_cls_c2_action {
+	MVPP22_C2_NO_UPD = 0,
+	MVPP22_C2_NO_UPD_LOCK,
+	MVPP22_C2_UPD,
+	MVPP22_C2_UPD_LOCK,
+};
+
+enum mvpp22_cls_c2_fwd_action {
+	MVPP22_C2_FWD_NO_UPD = 0,
+	MVPP22_C2_FWD_NO_UPD_LOCK,
+	MVPP22_C2_FWD_SW,
+	MVPP22_C2_FWD_SW_LOCK,
+	MVPP22_C2_FWD_HW,
+	MVPP22_C2_FWD_HW_LOCK,
+	MVPP22_C2_FWD_HW_LOW_LAT,
+	MVPP22_C2_FWD_HW_LOW_LAT_LOCK,
+};
+
+#define MVPP2_CLS_C2_TCAM_WORDS			5
+#define MVPP2_CLS_C2_ATTR_WORDS			5
+
+struct mvpp2_cls_c2_entry {
+	u32 index;
+	u32 tcam[MVPP2_CLS_C2_TCAM_WORDS];
+	u32 act;
+	u32 attr[MVPP2_CLS_C2_ATTR_WORDS];
+};
+
+/* Classifier C2 engine entries */
+#define MVPP22_CLS_C2_RSS_ENTRY(port)	(port)
+#define MVPP22_CLS_C2_N_ENTRIES		MVPP2_MAX_PORTS
+
+#define MVPP22_RSS_FLOW_C2_OFFS		0
+
 struct mvpp2_cls_flow_entry {
 	u32 index;
 	u32 data[MVPP2_CLS_FLOWS_TBL_DATA_WORDS];
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 16/18] net: mvpp2: split ingress traffic into multiple flows
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (14 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 15/18] net: mvpp2: use classifier to assign default rx queue Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 17/18] net: mvpp2: add an RSS classification step for each flow Maxime Chevallier
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

The PPv2 classifier allows performing classification operations on each
ingress packet, based on the flow the packet is assigned to.

The current code uses only 1 flow per port, and the only classification
action consists of assigning the rx queue to the packet, depending on the
port.

In preparation for adding RSS support, we have to split all incoming
traffic into different flows. Since RSS assigns an rx queue depending on
the hash of some header fields, we have to make sure that the hash is
generated in a consistent way for all packets in the same flow.

What we call a "flow" is actually a set of attributes attached to a
packet that depends on various L2/L3/L4 info.

This patch introduces 52 flows, which are a combination of various L2, L3
and L4 attributes:
 - Whether or not the packet has a VLAN tag
 - Whether the packet is IPv4, IPv6 or something else
 - Whether the packet is TCP, UDP or something else
 - Whether or not the packet is fragmented at L3 level.

The flow is associated with a packet by the Header Parser. Each flow
corresponds to an entry in the decoding table. This entry then points to
the sequence of classification lookups to be performed by the
classifier, represented in the flow table.

For now, the only lookup we perform is a C2 lookup to set the default
rx queue.

               Header parser          Dec table
 Ingress pkt  +-------------+ flow id +----------------------------+
------------->| TCAM + SRAM |-------->|TCP IPv4 w/ VLAN, not frag  |
              +-------------+         |TCP IPv4 w/o VLAN, not frag |
                                      |TCP IPv4 w/ VLAN, frag      |--+
                                      |etc.                        |  |
                                      +----------------------------+  |
                                                                      |
                                           Flow table                 |
                +------------+        +---------------------+         |
     To RxQ <---| Classifier |<-------| flow 0: C2 lookup   |<--------+
                +------------+        | flow 1: C2 lookup   |
                       |              | ...                 |
                +------------+        | flow 51 : C2 lookup |
                | C2 engine  |        +---------------------+
                +------------+

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 355 ++++++++++++++++++++++++-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h |  96 +++++++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c |  35 +++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h |  11 +
 4 files changed, 489 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 7cee117efb4f..1fc8a446d94e 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -14,6 +14,317 @@
 #include "mvpp2_cls.h"
 #include "mvpp2_prs.h"
 
+#define MVPP2_DEF_FLOW(_type, _id, _opts, _ri, _ri_mask)	\
+{								\
+	.flow_type = _type,					\
+	.flow_id = _id,						\
+	.supported_hash_opts = _opts,				\
+	.prs_ri = {						\
+		.ri = _ri,					\
+		.ri_mask = _ri_mask				\
+	}							\
+}
+
+static struct mvpp2_cls_flow cls_flows[MVPP2_N_FLOWS] = {
+	/* TCP over IPv4 flows, Not fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP4_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP4_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP4_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* TCP over IPv4 flows, Not fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_NF_TAG,
+		       MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_NF_TAG,
+		       MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_NF_TAG,
+		       MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* TCP over IPv4 flows, fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* TCP over IPv4 flows, fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V4_FLOW, MVPP2_FL_IP4_TCP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* UDP over IPv4 flows, Not fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP4_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP4_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP4_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* UDP over IPv4 flows, Not fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_NF_TAG,
+		       MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_NF_TAG,
+		       MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_NF_TAG,
+		       MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* UDP over IPv4 flows, fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* UDP over IPv4 flows, fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V4_FLOW, MVPP2_FL_IP4_UDP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* TCP over IPv6 flows, not fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP6_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6 |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP6_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6_EXT |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* TCP over IPv6 flows, not fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_NF_TAG,
+		       MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_NF_TAG,
+		       MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* TCP over IPv6 flows, fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP6_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6 |
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP6_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6_EXT |
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* TCP over IPv6 flows, fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(TCP_V6_FLOW, MVPP2_FL_IP6_TCP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_TCP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* UDP over IPv6 flows, not fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP6_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6 |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_NF_UNTAG,
+		       MVPP22_CLS_HEK_IP6_5T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6_EXT |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* UDP over IPv6 flows, not fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_NF_TAG,
+		       MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_NF_TAG,
+		       MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* UDP over IPv6 flows, fragmented, no vlan tag */
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP6_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6 |
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_FRAG_UNTAG,
+		       MVPP22_CLS_HEK_IP6_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6_EXT |
+		       MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
+
+	/* UDP over IPv6 flows, fragmented, with vlan tag */
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	MVPP2_DEF_FLOW(UDP_V6_FLOW, MVPP2_FL_IP6_UDP_FRAG_TAG,
+		       MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_IP_FRAG_TRUE |
+		       MVPP2_PRS_RI_L4_UDP,
+		       MVPP2_PRS_IP_MASK),
+
+	/* IPv4 flows, no vlan tag */
+	MVPP2_DEF_FLOW(IPV4_FLOW, MVPP2_FL_IP4_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4,
+		       MVPP2_PRS_RI_VLAN_MASK | MVPP2_PRS_RI_L3_PROTO_MASK),
+	MVPP2_DEF_FLOW(IPV4_FLOW, MVPP2_FL_IP4_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT,
+		       MVPP2_PRS_RI_VLAN_MASK | MVPP2_PRS_RI_L3_PROTO_MASK),
+	MVPP2_DEF_FLOW(IPV4_FLOW, MVPP2_FL_IP4_UNTAG,
+		       MVPP22_CLS_HEK_IP4_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER,
+		       MVPP2_PRS_RI_VLAN_MASK | MVPP2_PRS_RI_L3_PROTO_MASK),
+
+	/* IPv4 flows, with vlan tag */
+	MVPP2_DEF_FLOW(IPV4_FLOW, MVPP2_FL_IP4_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4,
+		       MVPP2_PRS_RI_L3_PROTO_MASK),
+	MVPP2_DEF_FLOW(IPV4_FLOW, MVPP2_FL_IP4_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OPT,
+		       MVPP2_PRS_RI_L3_PROTO_MASK),
+	MVPP2_DEF_FLOW(IPV4_FLOW, MVPP2_FL_IP4_TAG,
+		       MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP4_OTHER,
+		       MVPP2_PRS_RI_L3_PROTO_MASK),
+
+	/* IPv6 flows, no vlan tag */
+	MVPP2_DEF_FLOW(IPV6_FLOW, MVPP2_FL_IP6_UNTAG,
+		       MVPP22_CLS_HEK_IP6_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6,
+		       MVPP2_PRS_RI_VLAN_MASK | MVPP2_PRS_RI_L3_PROTO_MASK),
+	MVPP2_DEF_FLOW(IPV6_FLOW, MVPP2_FL_IP6_UNTAG,
+		       MVPP22_CLS_HEK_IP6_2T,
+		       MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP6_EXT,
+		       MVPP2_PRS_RI_VLAN_MASK | MVPP2_PRS_RI_L3_PROTO_MASK),
+
+	/* IPv6 flows, with vlan tag */
+	MVPP2_DEF_FLOW(IPV6_FLOW, MVPP2_FL_IP6_TAG,
+		       MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6,
+		       MVPP2_PRS_RI_L3_PROTO_MASK),
+	MVPP2_DEF_FLOW(IPV6_FLOW, MVPP2_FL_IP6_TAG,
+		       MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP2_PRS_RI_L3_IP6_EXT,
+		       MVPP2_PRS_RI_L3_PROTO_MASK),
+
+	/* Non IP flow, no vlan tag */
+	MVPP2_DEF_FLOW(ETHER_FLOW, MVPP2_FL_NON_IP_UNTAG,
+		       0,
+		       MVPP2_PRS_RI_VLAN_NONE,
+		       MVPP2_PRS_RI_VLAN_MASK),
+	/* Non IP flow, with vlan tag */
+	MVPP2_DEF_FLOW(ETHER_FLOW, MVPP2_FL_NON_IP_TAG,
+		       MVPP22_CLS_HEK_OPT_VLAN,
+		       0, 0),
+};
+
 /* Update classification flow table registers */
 static void mvpp2_cls_flow_write(struct mvpp2 *priv,
 				 struct mvpp2_cls_flow_entry *fe)
@@ -76,32 +387,46 @@ static void mvpp2_cls_flow_port_add(struct mvpp2_cls_flow_entry *fe,
 	fe->data[0] |= MVPP2_CLS_FLOW_TBL0_PORT_ID(port);
 }
 
+/* Initialize the parser entry for the given flow */
+static void mvpp2_cls_flow_prs_init(struct mvpp2 *priv,
+				    struct mvpp2_cls_flow *flow)
+{
+	mvpp2_prs_add_flow(priv, flow->flow_id, flow->prs_ri.ri,
+			   flow->prs_ri.ri_mask);
+}
+
 /* Initialize the Lookup Id table entry for the given flow */
-static void mvpp2_cls_flow_lkp_init(struct mvpp2 *priv, int port_id)
+static void mvpp2_cls_flow_lkp_init(struct mvpp2 *priv,
+				    struct mvpp2_cls_flow *flow)
 {
 	struct mvpp2_cls_lookup_entry le;
 
 	le.way = 0;
-	le.lkpid = port_id;
+	le.lkpid = flow->flow_id;
 
 	/* The default RxQ for this port is set in the C2 lookup */
 	le.data = 0;
 
-	le.data |= MVPP2_CLS_LKP_FLOW_PTR(port_id);
+	/* We point to the first lookup in the sequence for the flow, that is
+	 * the C2 lookup.
+	 */
+	le.data |= MVPP2_CLS_LKP_FLOW_PTR(MVPP2_FLOW_C2_ENTRY(flow->flow_id));
+
+	/* CLS is always enabled, RSS is enabled/disabled in C2 lookup */
 	le.data |= MVPP2_CLS_LKP_TBL_LOOKUP_EN_MASK;
 
 	mvpp2_cls_lookup_write(priv, &le);
 }
 
 /* Initialize the flow table entries for the given flow */
-static void mvpp2_cls_flow_init(struct mvpp2 *priv, int port_id)
+static void mvpp2_cls_flow_init(struct mvpp2 *priv, struct mvpp2_cls_flow *flow)
 {
 	struct mvpp2_cls_flow_entry fe;
 	int i;
 
 	/* C2 lookup */
 	memset(&fe, 0, sizeof(fe));
-	fe.index = port_id;
+	fe.index = MVPP2_FLOW_C2_ENTRY(flow->flow_id);
 
 	mvpp2_cls_flow_eng_set(&fe, MVPP22_CLS_ENGINE_C2);
 	mvpp2_cls_flow_port_id_sel(&fe, true);
@@ -116,13 +441,27 @@ static void mvpp2_cls_flow_init(struct mvpp2 *priv, int port_id)
 	mvpp2_cls_flow_write(priv, &fe);
 }
 
+struct mvpp2_cls_flow *mvpp2_cls_flow_get(int flow)
+{
+	if (flow >= MVPP2_N_FLOWS)
+		return NULL;
+
+	return &cls_flows[flow];
+}
+
 static void mvpp2_cls_port_init_flows(struct mvpp2 *priv)
 {
+	struct mvpp2_cls_flow *flow;
 	int i;
 
-	for (i = 0; i < MVPP2_MAX_PORTS; i++) {
-		mvpp2_cls_flow_lkp_init(priv, i);
-		mvpp2_cls_flow_init(priv, i);
+	for (i = 0; i < MVPP2_N_FLOWS; i++) {
+		flow = mvpp2_cls_flow_get(i);
+		if (!flow)
+			break;
+
+		mvpp2_cls_flow_prs_init(priv, flow);
+		mvpp2_cls_flow_lkp_init(priv, flow);
+		mvpp2_cls_flow_init(priv, flow);
 	}
 }
 
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index ee4933ca7ed8..747c5f055b82 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -23,6 +23,9 @@
 #define MVPP2_CLS_RX_QUEUES		256
 
 /* Classifier flow constants */
+
+#define MVPP2_FLOW_N_FIELDS		4
+
 enum mvpp2_cls_engine {
 	MVPP22_CLS_ENGINE_C2 = 1,
 	MVPP22_CLS_ENGINE_C3A,
@@ -32,6 +35,33 @@ enum mvpp2_cls_engine {
 	MVPP22_CLS_ENGINE_C3HB = 7,
 };
 
+#define MVPP22_CLS_HEK_OPT_MAC_DA	BIT(0)
+#define MVPP22_CLS_HEK_OPT_VLAN		BIT(1)
+#define MVPP22_CLS_HEK_OPT_L3_PROTO	BIT(2)
+#define MVPP22_CLS_HEK_OPT_IP4SA	BIT(3)
+#define MVPP22_CLS_HEK_OPT_IP4DA	BIT(4)
+#define MVPP22_CLS_HEK_OPT_IP6SA	BIT(5)
+#define MVPP22_CLS_HEK_OPT_IP6DA	BIT(6)
+#define MVPP22_CLS_HEK_OPT_L4SIP	BIT(7)
+#define MVPP22_CLS_HEK_OPT_L4DIP	BIT(8)
+#define MVPP22_CLS_HEK_N_FIELDS		9
+
+#define MVPP22_CLS_HEK_L4_OPTS	(MVPP22_CLS_HEK_OPT_L4SIP | \
+				 MVPP22_CLS_HEK_OPT_L4DIP)
+
+#define MVPP22_CLS_HEK_IP4_2T	(MVPP22_CLS_HEK_OPT_IP4SA | \
+				 MVPP22_CLS_HEK_OPT_IP4DA)
+
+#define MVPP22_CLS_HEK_IP6_2T	(MVPP22_CLS_HEK_OPT_IP6SA | \
+				 MVPP22_CLS_HEK_OPT_IP6DA)
+
+/* The fifth tuple in "5T" is the L4_Info field */
+#define MVPP22_CLS_HEK_IP4_5T	(MVPP22_CLS_HEK_IP4_2T | \
+				 MVPP22_CLS_HEK_L4_OPTS)
+
+#define MVPP22_CLS_HEK_IP6_5T	(MVPP22_CLS_HEK_IP6_2T | \
+				 MVPP22_CLS_HEK_L4_OPTS)
+
 enum mvpp2_cls_flow_seq {
 	MVPP2_CLS_FLOW_SEQ_NORMAL = 0,
 	MVPP2_CLS_FLOW_SEQ_FIRST1,
@@ -75,8 +105,74 @@ struct mvpp2_cls_c2_entry {
 #define MVPP22_CLS_C2_RSS_ENTRY(port)	(port)
 #define MVPP22_CLS_C2_N_ENTRIES		MVPP2_MAX_PORTS
 
+/* RSS flow entries in the flow table. We have 2 entries per port for RSS.
+ *
+ * The first performs a lookup using the C2 TCAM engine, to tag the
+ * packet for software forwarding (needed for RSS), enable or disable RSS, and
+ * assign the default rx queue.
+ *
+ * The second configures the hash generation, by specifying which fields of the
+ * packet header are used to generate the hash, and specifies the relevant hash
+ * engine to use.
+ */
 #define MVPP22_RSS_FLOW_C2_OFFS		0
+#define MVPP22_RSS_FLOW_HASH_OFFS	1
+#define MVPP22_RSS_FLOW_SIZE		(MVPP22_RSS_FLOW_HASH_OFFS + 1)
+
+#define MVPP22_RSS_FLOW_C2(port)	((port) * MVPP22_RSS_FLOW_SIZE + \
+					 MVPP22_RSS_FLOW_C2_OFFS)
+#define MVPP22_RSS_FLOW_HASH(port)	((port) * MVPP22_RSS_FLOW_SIZE + \
+					 MVPP22_RSS_FLOW_HASH_OFFS)
+#define MVPP22_RSS_FLOW_FIRST(port)	MVPP22_RSS_FLOW_C2(port)
+
+/* Packet flow ID */
+enum mvpp2_prs_flow {
+	MVPP2_FL_START = 8,
+	MVPP2_FL_IP4_TCP_NF_UNTAG = MVPP2_FL_START,
+	MVPP2_FL_IP4_UDP_NF_UNTAG,
+	MVPP2_FL_IP4_TCP_NF_TAG,
+	MVPP2_FL_IP4_UDP_NF_TAG,
+	MVPP2_FL_IP6_TCP_NF_UNTAG,
+	MVPP2_FL_IP6_UDP_NF_UNTAG,
+	MVPP2_FL_IP6_TCP_NF_TAG,
+	MVPP2_FL_IP6_UDP_NF_TAG,
+	MVPP2_FL_IP4_TCP_FRAG_UNTAG,
+	MVPP2_FL_IP4_UDP_FRAG_UNTAG,
+	MVPP2_FL_IP4_TCP_FRAG_TAG,
+	MVPP2_FL_IP4_UDP_FRAG_TAG,
+	MVPP2_FL_IP6_TCP_FRAG_UNTAG,
+	MVPP2_FL_IP6_UDP_FRAG_UNTAG,
+	MVPP2_FL_IP6_TCP_FRAG_TAG,
+	MVPP2_FL_IP6_UDP_FRAG_TAG,
+	MVPP2_FL_IP4_UNTAG, /* non-TCP, non-UDP, same for below */
+	MVPP2_FL_IP4_TAG,
+	MVPP2_FL_IP6_UNTAG,
+	MVPP2_FL_IP6_TAG,
+	MVPP2_FL_NON_IP_UNTAG,
+	MVPP2_FL_NON_IP_TAG,
+	MVPP2_FL_LAST,
+};
+
+struct mvpp2_cls_flow {
+	/* The L2-L4 traffic flow type */
+	int flow_type;
+
+	/* The first id in the flow table for this flow */
+	u16 flow_id;
+
+	/* The supported HEK fields for this flow */
+	u16 supported_hash_opts;
+
+	/* The Header Parser result_info that matches this flow */
+	struct mvpp2_prs_result_info prs_ri;
+};
+
+#define MVPP2_N_FLOWS	52
 
+#define MVPP2_ENTRIES_PER_FLOW			(MVPP2_MAX_PORTS + 1)
+#define MVPP2_FLOW_C2_ENTRY(id)			((id) * MVPP2_ENTRIES_PER_FLOW)
+#define MVPP2_PORT_FLOW_HASH_ENTRY(port, id)	((id) * MVPP2_ENTRIES_PER_FLOW + \
+						(port) + 1)
 struct mvpp2_cls_flow_entry {
 	u32 index;
 	u32 data[MVPP2_CLS_FLOWS_TBL_DATA_WORDS];
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
index a882c14d7d77..acf9f78d5f80 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
@@ -2409,6 +2409,41 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
 	return 0;
 }
 
+int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask)
+{
+	struct mvpp2_prs_entry pe;
+	u8 *ri_byte, *ri_byte_mask;
+	int tid, i;
+
+	memset(&pe, 0, sizeof(pe));
+
+	tid = mvpp2_prs_tcam_first_free(priv,
+					MVPP2_PE_LAST_FREE_TID,
+					MVPP2_PE_FIRST_FREE_TID);
+	if (tid < 0)
+		return tid;
+
+	pe.index = tid;
+
+	ri_byte = (u8 *)&ri;
+	ri_byte_mask = (u8 *)&ri_mask;
+
+	mvpp2_prs_sram_ai_update(&pe, flow, MVPP2_PRS_FLOW_ID_MASK);
+	mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_DONE_BIT, 1);
+
+	for (i = 0; i < 4; i++) {
+		mvpp2_prs_tcam_data_byte_set(&pe, i, ri_byte[i],
+					     ri_byte_mask[i]);
+	}
+
+	mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_FLOWS);
+	mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
+	mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);
+	mvpp2_prs_hw_write(priv, &pe);
+
+	return 0;
+}
+
 /* Set prs flow for the port */
 int mvpp2_prs_def_flow(struct mvpp2_port *port)
 {
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
index 64a64666257c..368e90b54477 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
@@ -224,6 +224,10 @@
 #define MVPP2_PRS_RI_UDF7_IP6_LITE		BIT(29)
 #define MVPP2_PRS_RI_DROP_MASK			0x80000000
 
+#define MVPP2_PRS_IP_MASK			(MVPP2_PRS_RI_L3_PROTO_MASK | \
+						MVPP2_PRS_RI_IP_FRAG_MASK | \
+						MVPP2_PRS_RI_L4_PROTO_MASK)
+
 /* Sram additional info bits assignment */
 #define MVPP2_PRS_IPV4_DIP_AI_BIT		BIT(0)
 #define MVPP2_PRS_IPV6_NO_EXT_AI_BIT		BIT(0)
@@ -271,6 +275,11 @@ struct mvpp2_prs_entry {
 	u32 sram[MVPP2_PRS_SRAM_WORDS];
 };
 
+struct mvpp2_prs_result_info {
+	u32 ri;
+	u32 ri_mask;
+};
+
 struct mvpp2_prs_shadow {
 	bool valid;
 	bool finish;
@@ -292,6 +301,8 @@ int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add);
 
 int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type);
 
+int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask);
+
 int mvpp2_prs_def_flow(struct mvpp2_port *port);
 
 void mvpp2_prs_vid_enable_filtering(struct mvpp2_port *port);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 17/18] net: mvpp2: add an RSS classification step for each flow
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (15 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 16/18] net: mvpp2: split ingress traffic into multiple flows Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-12 11:54 ` [PATCH net-next 18/18] net: mvpp2: allow setting RSS flow hash parameters with ethtool Maxime Chevallier
  2018-07-13  0:31 ` [PATCH net-next 00/18] net: mvpp2: add RSS support David Miller
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

One of the classification actions that can be performed is to compute a
hash of the packet header based on some header fields, and to look up an
RSS table based on this hash to determine the final RxQ.

This is done by adding one lookup entry per flow per port, so that we
can configure the hash generation parameters for each flow and each
port.

There are 2 possible engines that can be used for RSS hash generation:

 - C3HA, which generates a hash based on up to 4 header-extracted fields
 - C3HB, which does the same as C3HA, but also includes L4 info in the hash

There are many fields that can be extracted from the header. For now, we
only use the ones that can be configured using ethtool:
 - DST MAC address
 - L3 info
 - Source IP
 - Destination IP
 - Source port
 - Destination port

The C3HB engine is selected when we use L4 fields (src/dst port).

               Header parser          Dec table
 Ingress pkt  +-------------+ flow id +----------------------------+
------------->| TCAM + SRAM |-------->|TCP IPv4 w/ VLAN, not frag  |
              +-------------+         |TCP IPv4 w/o VLAN, not frag |
                                      |TCP IPv4 w/ VLAN, frag      |--+
                                      |etc.                        |  |
                                      +----------------------------+  |
                                                                      |
                                            Flow table                |
  +---------+   +------------+         +--------------------------+   |
  | RSS tbl |<--| Classifier |<--------| flow 0: C2 lookup        |   |
  +---------+   +------------+         |         C3 lookup port 0 |   |
                 |         |           |         C3 lookup port 1 |   |
         +-----------+ +-------------+ |         ...              |   |
         | C2 engine | | C3H engines | | flow 1: C2 lookup        |<--+
         +-----------+ +-------------+ |         C3 lookup port 0 |
                                       |         ...              |
                                       | ...                      |
                                       | flow 51 : C2 lookup      |
                                       |           ...            |
                                       +--------------------------+

The C2 engine also gains the role of enabling and disabling the RSS
table lookup for this packet.
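
To make the sequence concrete, the hypothetical helper below (not code
from this series, and assuming the driver headers are included) uses the
indexing macros and engine ids introduced in mvpp2_cls.h to show where
each lookup for a given (port, flow) pair lives, and which engine serves
it:

/* Illustration only: the two-step lookup sequence for one packet.
 * The hardware walks these entries itself; we just compute the
 * flow table indices and the engine selection rule.
 */
static void example_rss_sequence(int port_id, int flow_id, u16 hash_opts)
{
	/* Step 1: C2 TCAM entry, shared by all ports for this flow.
	 * It assigns the default rxq and carries the RSS enable bit.
	 */
	int c2 = MVPP2_FLOW_C2_ENTRY(flow_id);

	/* Step 2: per-port C3Hx entry, hashing the configured HEK
	 * fields and indexing the RSS table with the result.
	 */
	int c3h = MVPP2_PORT_FLOW_HASH_ENTRY(port_id, flow_id);

	/* L4 fields (src/dst port) require C3HB, C3HA otherwise */
	int engine = (hash_opts & MVPP22_CLS_HEK_L4_OPTS) ?
		     MVPP22_CLS_ENGINE_C3HB : MVPP22_CLS_ENGINE_C3HA;

	pr_info("flow %d, port %d: C2 at %d, C3Hx at %d, engine %d\n",
		flow_id, port_id, c2, c3h, engine);
}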

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c  | 279 +++++++++++++++++++++++-
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h  |  15 ++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c |  10 +
 3 files changed, 302 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 1fc8a446d94e..98e0e25f7dca 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -325,6 +325,16 @@ static struct mvpp2_cls_flow cls_flows[MVPP2_N_FLOWS] = {
 		       0, 0),
 };
 
+static void mvpp2_cls_flow_read(struct mvpp2 *priv, int index,
+				struct mvpp2_cls_flow_entry *fe)
+{
+	fe->index = index;
+	mvpp2_write(priv, MVPP2_CLS_FLOW_INDEX_REG, index);
+	fe->data[0] = mvpp2_read(priv, MVPP2_CLS_FLOW_TBL0_REG);
+	fe->data[1] = mvpp2_read(priv, MVPP2_CLS_FLOW_TBL1_REG);
+	fe->data[2] = mvpp2_read(priv, MVPP2_CLS_FLOW_TBL2_REG);
+}
+
 /* Update classification flow table registers */
 static void mvpp2_cls_flow_write(struct mvpp2 *priv,
 				 struct mvpp2_cls_flow_entry *fe)
@@ -346,6 +356,34 @@ static void mvpp2_cls_lookup_write(struct mvpp2 *priv,
 	mvpp2_write(priv, MVPP2_CLS_LKP_TBL_REG, le->data);
 }
 
+/* Operations on flow entry */
+static int mvpp2_cls_flow_hek_num_get(struct mvpp2_cls_flow_entry *fe)
+{
+	return fe->data[1] & MVPP2_CLS_FLOW_TBL1_N_FIELDS_MASK;
+}
+
+static void mvpp2_cls_flow_hek_num_set(struct mvpp2_cls_flow_entry *fe,
+				       int num_of_fields)
+{
+	fe->data[1] &= ~MVPP2_CLS_FLOW_TBL1_N_FIELDS_MASK;
+	fe->data[1] |= MVPP2_CLS_FLOW_TBL1_N_FIELDS(num_of_fields);
+}
+
+static int mvpp2_cls_flow_hek_get(struct mvpp2_cls_flow_entry *fe,
+				  int field_index)
+{
+	return (fe->data[2] >> MVPP2_CLS_FLOW_TBL2_FLD_OFFS(field_index)) &
+		MVPP2_CLS_FLOW_TBL2_FLD_MASK;
+}
+
+static void mvpp2_cls_flow_hek_set(struct mvpp2_cls_flow_entry *fe,
+				   int field_index, int field_id)
+{
+	fe->data[2] &= ~MVPP2_CLS_FLOW_TBL2_FLD(field_index,
+						MVPP2_CLS_FLOW_TBL2_FLD_MASK);
+	fe->data[2] |= MVPP2_CLS_FLOW_TBL2_FLD(field_index, field_id);
+}
+
 static void mvpp2_cls_flow_eng_set(struct mvpp2_cls_flow_entry *fe,
 				   int engine)
 {
@@ -430,15 +468,93 @@ static void mvpp2_cls_flow_init(struct mvpp2 *priv, struct mvpp2_cls_flow *flow)
 
 	mvpp2_cls_flow_eng_set(&fe, MVPP22_CLS_ENGINE_C2);
 	mvpp2_cls_flow_port_id_sel(&fe, true);
-	mvpp2_cls_flow_last_set(&fe, 1);
+	mvpp2_cls_flow_last_set(&fe, 0);
 	mvpp2_cls_flow_pri_set(&fe, 0);
-	mvpp2_cls_flow_seq_set(&fe, MVPP2_CLS_FLOW_SEQ_LAST);
+	mvpp2_cls_flow_seq_set(&fe, MVPP2_CLS_FLOW_SEQ_FIRST1);
 
 	/* Add all ports */
 	for (i = 0; i < MVPP2_MAX_PORTS; i++)
 		mvpp2_cls_flow_port_add(&fe, BIT(i));
 
 	mvpp2_cls_flow_write(priv, &fe);
+
+	/* C3Hx lookups */
+	for (i = 0; i < MVPP2_MAX_PORTS; i++) {
+		memset(&fe, 0, sizeof(fe));
+		fe.index = MVPP2_PORT_FLOW_HASH_ENTRY(i, flow->flow_id);
+
+		mvpp2_cls_flow_port_id_sel(&fe, true);
+		mvpp2_cls_flow_pri_set(&fe, i + 1);
+		mvpp2_cls_flow_seq_set(&fe, MVPP2_CLS_FLOW_SEQ_MIDDLE);
+		mvpp2_cls_flow_port_add(&fe, BIT(i));
+
+		mvpp2_cls_flow_write(priv, &fe);
+	}
+
+	/* Update the last entry */
+	mvpp2_cls_flow_last_set(&fe, 1);
+	mvpp2_cls_flow_seq_set(&fe, MVPP2_CLS_FLOW_SEQ_LAST);
+
+	mvpp2_cls_flow_write(priv, &fe);
+}
+
+/* Adds a field to the Header Extracted Key generation parameters */
+static int mvpp2_flow_add_hek_field(struct mvpp2_cls_flow_entry *fe,
+				    u32 field_id)
+{
+	int nb_fields = mvpp2_cls_flow_hek_num_get(fe);
+
+	if (nb_fields == MVPP2_FLOW_N_FIELDS)
+		return -EINVAL;
+
+	mvpp2_cls_flow_hek_set(fe, nb_fields, field_id);
+
+	mvpp2_cls_flow_hek_num_set(fe, nb_fields + 1);
+
+	return 0;
+}
+
+static int mvpp2_flow_set_hek_fields(struct mvpp2_cls_flow_entry *fe,
+				     unsigned long hash_opts)
+{
+	u32 field_id;
+	int i;
+
+	/* Clear old fields */
+	mvpp2_cls_flow_hek_num_set(fe, 0);
+	fe->data[2] = 0;
+
+	for_each_set_bit(i, &hash_opts, MVPP22_CLS_HEK_N_FIELDS) {
+		switch (BIT(i)) {
+		case MVPP22_CLS_HEK_OPT_VLAN:
+			field_id = MVPP22_CLS_FIELD_VLAN;
+			break;
+		case MVPP22_CLS_HEK_OPT_IP4SA:
+			field_id = MVPP22_CLS_FIELD_IP4SA;
+			break;
+		case MVPP22_CLS_HEK_OPT_IP4DA:
+			field_id = MVPP22_CLS_FIELD_IP4DA;
+			break;
+		case MVPP22_CLS_HEK_OPT_IP6SA:
+			field_id = MVPP22_CLS_FIELD_IP6SA;
+			break;
+		case MVPP22_CLS_HEK_OPT_IP6DA:
+			field_id = MVPP22_CLS_FIELD_IP6DA;
+			break;
+		case MVPP22_CLS_HEK_OPT_L4SIP:
+			field_id = MVPP22_CLS_FIELD_L4SIP;
+			break;
+		case MVPP22_CLS_HEK_OPT_L4DIP:
+			field_id = MVPP22_CLS_FIELD_L4DIP;
+			break;
+		default:
+			return -EINVAL;
+		}
+		if (mvpp2_flow_add_hek_field(fe, field_id))
+			return -EINVAL;
+	}
+
+	return 0;
 }
 
 struct mvpp2_cls_flow *mvpp2_cls_flow_get(int flow)
@@ -449,6 +565,104 @@ struct mvpp2_cls_flow *mvpp2_cls_flow_get(int flow)
 	return &cls_flows[flow];
 }
 
+/* Set the hash generation options for the given traffic flow.
+ * One traffic flow (in the ethtool sense) has multiple classification flows,
+ * to handle specific cases such as fragmentation, or the presence of a
+ * VLAN / DSA Tag.
+ *
+ * Each of these individual flows has different constraints, for example we
+ * can't hash fragmented packets on L4 data (else we would risk having packet
+ * re-ordering), so each classification flow masks the options with its
+ * supported ones.
+ *
+ */
+static int mvpp2_port_rss_hash_opts_set(struct mvpp2_port *port, int flow_type,
+					u16 requested_opts)
+{
+	struct mvpp2_cls_flow_entry fe;
+	struct mvpp2_cls_flow *flow;
+	int i, engine, flow_index;
+	u16 hash_opts;
+
+	for (i = 0; i < MVPP2_N_FLOWS; i++) {
+		flow = mvpp2_cls_flow_get(i);
+		if (!flow)
+			return -EINVAL;
+
+		if (flow->flow_type != flow_type)
+			continue;
+
+		flow_index = MVPP2_PORT_FLOW_HASH_ENTRY(port->id,
+							flow->flow_id);
+
+		mvpp2_cls_flow_read(port->priv, flow_index, &fe);
+
+		hash_opts = flow->supported_hash_opts & requested_opts;
+
+		/* Use the C3HB engine to access L4 info. This adds L4 info to the
+		 * hash parameters
+		 */
+		if (hash_opts & MVPP22_CLS_HEK_L4_OPTS)
+			engine = MVPP22_CLS_ENGINE_C3HB;
+		else
+			engine = MVPP22_CLS_ENGINE_C3HA;
+
+		if (mvpp2_flow_set_hek_fields(&fe, hash_opts))
+			return -EINVAL;
+
+		mvpp2_cls_flow_eng_set(&fe, engine);
+
+		mvpp2_cls_flow_write(port->priv, &fe);
+	}
+
+	return 0;
+}
+
+u16 mvpp2_flow_get_hek_fields(struct mvpp2_cls_flow_entry *fe)
+{
+	u16 hash_opts = 0;
+	int n_fields, i, field;
+
+	n_fields = mvpp2_cls_flow_hek_num_get(fe);
+
+	for (i = 0; i < n_fields; i++) {
+		field = mvpp2_cls_flow_hek_get(fe, i);
+
+		switch (field) {
+		case MVPP22_CLS_FIELD_MAC_DA:
+			hash_opts |= MVPP22_CLS_HEK_OPT_MAC_DA;
+			break;
+		case MVPP22_CLS_FIELD_VLAN:
+			hash_opts |= MVPP22_CLS_HEK_OPT_VLAN;
+			break;
+		case MVPP22_CLS_FIELD_L3_PROTO:
+			hash_opts |= MVPP22_CLS_HEK_OPT_L3_PROTO;
+			break;
+		case MVPP22_CLS_FIELD_IP4SA:
+			hash_opts |= MVPP22_CLS_HEK_OPT_IP4SA;
+			break;
+		case MVPP22_CLS_FIELD_IP4DA:
+			hash_opts |= MVPP22_CLS_HEK_OPT_IP4DA;
+			break;
+		case MVPP22_CLS_FIELD_IP6SA:
+			hash_opts |= MVPP22_CLS_HEK_OPT_IP6SA;
+			break;
+		case MVPP22_CLS_FIELD_IP6DA:
+			hash_opts |= MVPP22_CLS_HEK_OPT_IP6DA;
+			break;
+		case MVPP22_CLS_FIELD_L4SIP:
+			hash_opts |= MVPP22_CLS_HEK_OPT_L4SIP;
+			break;
+		case MVPP22_CLS_FIELD_L4DIP:
+			hash_opts |= MVPP22_CLS_HEK_OPT_L4DIP;
+			break;
+		default:
+			break;
+		}
+	}
+	return hash_opts;
+}
+
 static void mvpp2_cls_port_init_flows(struct mvpp2 *priv)
 {
 	struct mvpp2_cls_flow *flow;
@@ -485,6 +699,27 @@ static void mvpp2_cls_c2_write(struct mvpp2 *priv,
 	mvpp2_write(priv, MVPP22_CLS_C2_ATTR3, c2->attr[3]);
 }
 
+static void mvpp2_cls_c2_read(struct mvpp2 *priv, int index,
+			      struct mvpp2_cls_c2_entry *c2)
+{
+	mvpp2_write(priv, MVPP22_CLS_C2_TCAM_IDX, index);
+
+	c2->index = index;
+
+	c2->tcam[0] = mvpp2_read(priv, MVPP22_CLS_C2_TCAM_DATA0);
+	c2->tcam[1] = mvpp2_read(priv, MVPP22_CLS_C2_TCAM_DATA1);
+	c2->tcam[2] = mvpp2_read(priv, MVPP22_CLS_C2_TCAM_DATA2);
+	c2->tcam[3] = mvpp2_read(priv, MVPP22_CLS_C2_TCAM_DATA3);
+	c2->tcam[4] = mvpp2_read(priv, MVPP22_CLS_C2_TCAM_DATA4);
+
+	c2->act = mvpp2_read(priv, MVPP22_CLS_C2_ACT);
+
+	c2->attr[0] = mvpp2_read(priv, MVPP22_CLS_C2_ATTR0);
+	c2->attr[1] = mvpp2_read(priv, MVPP22_CLS_C2_ATTR1);
+	c2->attr[2] = mvpp2_read(priv, MVPP22_CLS_C2_ATTR2);
+	c2->attr[3] = mvpp2_read(priv, MVPP22_CLS_C2_ATTR3);
+}
+
 static void mvpp2_port_c2_cls_init(struct mvpp2_port *port)
 {
 	struct mvpp2_cls_c2_entry c2;
@@ -580,6 +815,38 @@ void mvpp2_cls_port_config(struct mvpp2_port *port)
 	mvpp2_port_c2_cls_init(port);
 }
 
+static void mvpp2_rss_port_c2_enable(struct mvpp2_port *port)
+{
+	struct mvpp2_cls_c2_entry c2;
+
+	mvpp2_cls_c2_read(port->priv, MVPP22_CLS_C2_RSS_ENTRY(port->id), &c2);
+
+	c2.attr[2] |= MVPP22_CLS_C2_ATTR2_RSS_EN;
+
+	mvpp2_cls_c2_write(port->priv, &c2);
+}
+
+static void mvpp2_rss_port_c2_disable(struct mvpp2_port *port)
+{
+	struct mvpp2_cls_c2_entry c2;
+
+	mvpp2_cls_c2_read(port->priv, MVPP22_CLS_C2_RSS_ENTRY(port->id), &c2);
+
+	c2.attr[2] &= ~MVPP22_CLS_C2_ATTR2_RSS_EN;
+
+	mvpp2_cls_c2_write(port->priv, &c2);
+}
+
+void mvpp22_rss_enable(struct mvpp2_port *port)
+{
+	mvpp2_rss_port_c2_enable(port);
+}
+
+void mvpp22_rss_disable(struct mvpp2_port *port)
+{
+	mvpp2_rss_port_c2_disable(port);
+}
+
 /* Set CPU queue number for oversize packets */
 void mvpp2_cls_oversize_rxq_set(struct mvpp2_port *port)
 {
@@ -656,4 +923,12 @@ void mvpp22_rss_port_init(struct mvpp2_port *port)
 		port->indir[i] = ethtool_rxfh_indir_default(i, port->nrxqs);
 
 	mvpp22_rss_fill_table(port, port->id);
+
+	/* Configure default flows */
+	mvpp2_port_rss_hash_opts_set(port, IPV4_FLOW, MVPP22_CLS_HEK_IP4_2T);
+	mvpp2_port_rss_hash_opts_set(port, IPV6_FLOW, MVPP22_CLS_HEK_IP6_2T);
+	mvpp2_port_rss_hash_opts_set(port, TCP_V4_FLOW, MVPP22_CLS_HEK_IP4_5T);
+	mvpp2_port_rss_hash_opts_set(port, TCP_V6_FLOW, MVPP22_CLS_HEK_IP6_5T);
+	mvpp2_port_rss_hash_opts_set(port, UDP_V4_FLOW, MVPP22_CLS_HEK_IP4_5T);
+	mvpp2_port_rss_hash_opts_set(port, UDP_V6_FLOW, MVPP22_CLS_HEK_IP6_5T);
 }
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index 747c5f055b82..21bd0249be0d 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -62,6 +62,18 @@ enum mvpp2_cls_engine {
 #define MVPP22_CLS_HEK_IP6_5T	(MVPP22_CLS_HEK_IP6_2T | \
 				 MVPP22_CLS_HEK_L4_OPTS)
 
+enum mvpp2_cls_field_id {
+	MVPP22_CLS_FIELD_MAC_DA = 0x03,
+	MVPP22_CLS_FIELD_VLAN = 0x06,
+	MVPP22_CLS_FIELD_L3_PROTO = 0x0f,
+	MVPP22_CLS_FIELD_IP4SA = 0x10,
+	MVPP22_CLS_FIELD_IP4DA = 0x11,
+	MVPP22_CLS_FIELD_IP6SA = 0x17,
+	MVPP22_CLS_FIELD_IP6DA = 0x1a,
+	MVPP22_CLS_FIELD_L4SIP = 0x1d,
+	MVPP22_CLS_FIELD_L4DIP = 0x1e,
+};
+
 enum mvpp2_cls_flow_seq {
 	MVPP2_CLS_FLOW_SEQ_NORMAL = 0,
 	MVPP2_CLS_FLOW_SEQ_FIRST1,
@@ -188,6 +200,9 @@ void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table);
 
 void mvpp22_rss_port_init(struct mvpp2_port *port);
 
+void mvpp22_rss_enable(struct mvpp2_port *port);
+void mvpp22_rss_disable(struct mvpp2_port *port);
+
 void mvpp2_cls_init(struct mvpp2 *priv);
 
 void mvpp2_cls_port_config(struct mvpp2_port *port);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 93e37d20899d..61e2d6886a4b 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3631,6 +3631,13 @@ static int mvpp2_set_features(struct net_device *dev,
 		}
 	}
 
+	if (changed & NETIF_F_RXHASH) {
+		if (features & NETIF_F_RXHASH)
+			mvpp22_rss_enable(port);
+		else
+			mvpp22_rss_disable(port);
+	}
+
 	return 0;
 }
 
@@ -4759,6 +4766,9 @@ static int mvpp2_port_probe(struct platform_device *pdev,
 	dev->hw_features |= features | NETIF_F_RXCSUM | NETIF_F_GRO |
 			    NETIF_F_HW_VLAN_CTAG_FILTER;
 
+	if (mvpp22_rss_is_supported())
+		dev->hw_features |= NETIF_F_RXHASH;
+
 	if (port->pool_long->id == MVPP2_BM_JUMBO && port->id != 0) {
 		dev->features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
 		dev->hw_features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH net-next 18/18] net: mvpp2: allow setting RSS flow hash parameters with ethtool
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (16 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 17/18] net: mvpp2: add an RSS classification step for each flow Maxime Chevallier
@ 2018-07-12 11:54 ` Maxime Chevallier
  2018-07-13  0:31 ` [PATCH net-next 00/18] net: mvpp2: add RSS support David Miller
  18 siblings, 0 replies; 21+ messages in thread
From: Maxime Chevallier @ 2018-07-12 11:54 UTC (permalink / raw)
  To: davem
  Cc: Maxime Chevallier, netdev, linux-kernel, Antoine Tenart,
	thomas.petazzoni, gregory.clement, miquel.raynal, nadavh,
	stefanc, ymarkman, mw

This commit allows setting the RSS hash generation parameters from
ethtool. When setting parameters for a given flow type from ethtool
(e.g. tcp4), all the corresponding flows in the flow table are updated,
according to the supported hash parameters.

For example, when configuring TCP over IPv4 hash parameters to be
src/dst IP + src/dst port ("ethtool -N eth0 rx-flow-hash tcp4 sdfn"),
we only set the "src/dst port" hash parameters on the non-fragmented TCP
over IPv4 flows.
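
As a sketch of the translation (the helper below is hypothetical, but the
RXH_* bits come from the ethtool uapi and the HEK options from
mvpp2_cls.h), a "tcp4 sdfn" request decomposes as follows:

/* Illustration only: the RXH_* bits that "sdfn" sets for tcp4, and
 * the HEK fields they resolve to for the IPv4 case, mirroring what
 * mvpp2_ethtool_rxfh_set() does below.
 */
static u16 example_tcp4_sdfn_hek(void)
{
	u64 data = RXH_IP_SRC | RXH_IP_DST |	/* 's', 'd' */
		   RXH_L4_B_0_1 | RXH_L4_B_2_3;	/* 'f', 'n' */
	u16 hek = 0;

	if (data & RXH_IP_SRC)
		hek |= MVPP22_CLS_HEK_OPT_IP4SA;
	if (data & RXH_IP_DST)
		hek |= MVPP22_CLS_HEK_OPT_IP4DA;
	if (data & RXH_L4_B_0_1)
		hek |= MVPP22_CLS_HEK_OPT_L4SIP;
	if (data & RXH_L4_B_2_3)
		hek |= MVPP22_CLS_HEK_OPT_L4DIP;

	return hek;	/* equals MVPP22_CLS_HEK_IP4_5T */
}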

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c  | 104 ++++++++++++++++++++++++
 drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h  |   3 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c |  26 +++++-
 3 files changed, 132 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 98e0e25f7dca..dc7dfa9a6606 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -663,6 +663,35 @@ u16 mvpp2_flow_get_hek_fields(struct mvpp2_cls_flow_entry *fe)
 	return hash_opts;
 }
 
+/* Returns the hash opts for this flow. There are several classifier flows
+ * for one traffic flow; this returns an aggregation of all configurations.
+ */
+static u16 mvpp2_port_rss_hash_opts_get(struct mvpp2_port *port, int flow_type)
+{
+	struct mvpp2_cls_flow_entry fe;
+	struct mvpp2_cls_flow *flow;
+	int i, flow_index;
+	u16 hash_opts = 0;
+
+	for (i = 0; i < MVPP2_N_FLOWS; i++) {
+		flow = mvpp2_cls_flow_get(i);
+		if (!flow)
+			return 0;
+
+		if (flow->flow_type != flow_type)
+			continue;
+
+		flow_index = MVPP2_PORT_FLOW_HASH_ENTRY(port->id,
+							flow->flow_id);
+
+		mvpp2_cls_flow_read(port->priv, flow_index, &fe);
+
+		hash_opts |= mvpp2_flow_get_hek_fields(&fe);
+	}
+
+	return hash_opts;
+}
+
 static void mvpp2_cls_port_init_flows(struct mvpp2 *priv)
 {
 	struct mvpp2_cls_flow *flow;
@@ -897,6 +926,81 @@ void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table)
 	}
 }
 
+int mvpp2_ethtool_rxfh_set(struct mvpp2_port *port, struct ethtool_rxnfc *info)
+{
+	u16 hash_opts = 0;
+
+	switch (info->flow_type) {
+	case TCP_V4_FLOW:
+	case UDP_V4_FLOW:
+	case TCP_V6_FLOW:
+	case UDP_V6_FLOW:
+		if (info->data & RXH_L4_B_0_1)
+			hash_opts |= MVPP22_CLS_HEK_OPT_L4SIP;
+		if (info->data & RXH_L4_B_2_3)
+			hash_opts |= MVPP22_CLS_HEK_OPT_L4DIP;
+		/* Fallthrough */
+	case IPV4_FLOW:
+	case IPV6_FLOW:
+		if (info->data & RXH_L2DA)
+			hash_opts |= MVPP22_CLS_HEK_OPT_MAC_DA;
+		if (info->data & RXH_VLAN)
+			hash_opts |= MVPP22_CLS_HEK_OPT_VLAN;
+		if (info->data & RXH_L3_PROTO)
+			hash_opts |= MVPP22_CLS_HEK_OPT_L3_PROTO;
+		if (info->data & RXH_IP_SRC)
+			hash_opts |= (MVPP22_CLS_HEK_OPT_IP4SA |
+				     MVPP22_CLS_HEK_OPT_IP6SA);
+		if (info->data & RXH_IP_DST)
+			hash_opts |= (MVPP22_CLS_HEK_OPT_IP4DA |
+				     MVPP22_CLS_HEK_OPT_IP6DA);
+		break;
+	default: return -EOPNOTSUPP;
+	}
+
+	return mvpp2_port_rss_hash_opts_set(port, info->flow_type, hash_opts);
+}
+
+int mvpp2_ethtool_rxfh_get(struct mvpp2_port *port, struct ethtool_rxnfc *info)
+{
+	unsigned long hash_opts;
+	int i;
+
+	hash_opts = mvpp2_port_rss_hash_opts_get(port, info->flow_type);
+	info->data = 0;
+
+	for_each_set_bit(i, &hash_opts, MVPP22_CLS_HEK_N_FIELDS) {
+		switch (BIT(i)) {
+		case MVPP22_CLS_HEK_OPT_MAC_DA:
+			info->data |= RXH_L2DA;
+			break;
+		case MVPP22_CLS_HEK_OPT_VLAN:
+			info->data |= RXH_VLAN;
+			break;
+		case MVPP22_CLS_HEK_OPT_L3_PROTO:
+			info->data |= RXH_L3_PROTO;
+			break;
+		case MVPP22_CLS_HEK_OPT_IP4SA:
+		case MVPP22_CLS_HEK_OPT_IP6SA:
+			info->data |= RXH_IP_SRC;
+			break;
+		case MVPP22_CLS_HEK_OPT_IP4DA:
+		case MVPP22_CLS_HEK_OPT_IP6DA:
+			info->data |= RXH_IP_DST;
+			break;
+		case MVPP22_CLS_HEK_OPT_L4SIP:
+			info->data |= RXH_L4_B_0_1;
+			break;
+		case MVPP22_CLS_HEK_OPT_L4DIP:
+			info->data |= RXH_L4_B_2_3;
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
 void mvpp22_rss_port_init(struct mvpp2_port *port)
 {
 	struct mvpp2 *priv = port->priv;
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index 21bd0249be0d..151d791a91b6 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -203,6 +203,9 @@ void mvpp22_rss_port_init(struct mvpp2_port *port);
 void mvpp22_rss_enable(struct mvpp2_port *port);
 void mvpp22_rss_disable(struct mvpp2_port *port);
 
+int mvpp2_ethtool_rxfh_get(struct mvpp2_port *port, struct ethtool_rxnfc *info);
+int mvpp2_ethtool_rxfh_set(struct mvpp2_port *port, struct ethtool_rxnfc *info);
+
 void mvpp2_cls_init(struct mvpp2 *priv);
 
 void mvpp2_cls_port_config(struct mvpp2_port *port);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 61e2d6886a4b..2283be12d700 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3829,11 +3829,15 @@ static int mvpp2_ethtool_get_rxnfc(struct net_device *dev,
 				   struct ethtool_rxnfc *info, u32 *rules)
 {
 	struct mvpp2_port *port = netdev_priv(dev);
+	int ret = 0;
 
 	if (!mvpp22_rss_is_supported())
 		return -EOPNOTSUPP;
 
 	switch (info->cmd) {
+	case ETHTOOL_GRXFH:
+		ret = mvpp2_ethtool_rxfh_get(port, info);
+		break;
 	case ETHTOOL_GRXRINGS:
 		info->data = port->nrxqs;
 		break;
@@ -3841,7 +3845,26 @@ static int mvpp2_ethtool_get_rxnfc(struct net_device *dev,
 		return -ENOTSUPP;
 	}
 
-	return 0;
+	return ret;
+}
+
+static int mvpp2_ethtool_set_rxnfc(struct net_device *dev,
+				   struct ethtool_rxnfc *info)
+{
+	struct mvpp2_port *port = netdev_priv(dev);
+	int ret = 0;
+
+	if (!mvpp22_rss_is_supported())
+		return -EOPNOTSUPP;
+
+	switch (info->cmd) {
+	case ETHTOOL_SRXFH:
+		ret = mvpp2_ethtool_rxfh_set(port, info);
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+	return ret;
 }
 
 static u32 mvpp2_ethtool_get_rxfh_indir_size(struct net_device *dev)
@@ -3922,6 +3945,7 @@ static const struct ethtool_ops mvpp2_eth_tool_ops = {
 	.get_link_ksettings	= mvpp2_ethtool_get_link_ksettings,
 	.set_link_ksettings	= mvpp2_ethtool_set_link_ksettings,
 	.get_rxnfc		= mvpp2_ethtool_get_rxnfc,
+	.set_rxnfc		= mvpp2_ethtool_set_rxnfc,
 	.get_rxfh_indir_size	= mvpp2_ethtool_get_rxfh_indir_size,
 	.get_rxfh		= mvpp2_ethtool_get_rxfh,
 	.set_rxfh		= mvpp2_ethtool_set_rxfh,
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH net-next 03/18] net: mvpp2: make sure we use single queue mode on PPv2.1
  2018-07-12 11:54 ` [PATCH net-next 03/18] net: mvpp2: make sure we use single queue mode on PPv2.1 Maxime Chevallier
@ 2018-07-12 14:10   ` Sergei Shtylyov
  0 siblings, 0 replies; 21+ messages in thread
From: Sergei Shtylyov @ 2018-07-12 14:10 UTC (permalink / raw)
  To: Maxime Chevallier, davem
  Cc: netdev, linux-kernel, Antoine Tenart, thomas.petazzoni,
	gregory.clement, miquel.raynal, nadavh, stefanc, ymarkman, mw

Hello!

On 07/12/2018 02:54 PM, Maxime Chevallier wrote:

> The PPv2 driver defines 2 "queue_modes" :
>  - QDIST_SINGLE_MODE, where we each port share one rx queue vector

   Doesn't look like "we" is needed here.

>    between all CPUs
>  - QDIST_MULTI_MODE, where each port has one rx queue vector per CPU.
> 
> Multi queue mode isn't available on PPv2.1, make sure we fallback to
> single mode when running on this revision.
> 
> Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
[...]

MBR, Sergei


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH net-next 00/18] net: mvpp2: add RSS support
  2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
                   ` (17 preceding siblings ...)
  2018-07-12 11:54 ` [PATCH net-next 18/18] net: mvpp2: allow setting RSS flow hash parameters with ethtool Maxime Chevallier
@ 2018-07-13  0:31 ` David Miller
  18 siblings, 0 replies; 21+ messages in thread
From: David Miller @ 2018-07-13  0:31 UTC (permalink / raw)
  To: maxime.chevallier
  Cc: netdev, linux-kernel, antoine.tenart, thomas.petazzoni,
	gregory.clement, miquel.raynal, nadavh, stefanc, ymarkman, mw

From: Maxime Chevallier <maxime.chevallier@bootlin.com>
Date: Thu, 12 Jul 2018 13:54:09 +0200

> This series adds support for RSS on PPv2. There already was some code to
> handle the RSS tables, but the driver was missing all the classification
> steps required to actually use these tables.
> 
> RSS is used through the classifier, using at least 2 lookups :
>  - One using the C2 engine, a TCAM engine that match the packet based on
>    some header extracted fields, assigns the default rx queue for that
>    packet and tag it for RSS
>  - One using the C3Hx engine, which computes the hash that's used to perform
>    the lookup in the RSS table.
> 
> Since RSS spreads the load across CPUs, we need to make sure that packets
> from the same flow are always assigned the same rx queue, to prevent
> re-ordering.
> 
> This series therefore adds a classification step based on the Header Parser,
> that separate ingress traffic into 52 flows, based on some L2, L3 and L4
> parameters.
> 
> Patches 1 and 2 fix some header issues, from the driver splitting
> 
> Patches 3 to 7 make sure the correct receive queue setup is used for RSS
> 
> Patches 8 to 14 deal with the way we handle the RSS tables
> 
> Patch 15 implement basic classifier configuration, by using it to assign the
> default receive queue
> 
> Patch 16 implement the ingress traffic splitting into multiple flows
> 
> Patch 17 adds RSS support, by using the needed classification steps
> 
> Patch 18 adds the required ethtool ops to configure the flow hash parameters
> 
> This was tested on MacchiatoBin, giving some nice performance improvements
> using ip forwarding (going from 5Gbps to 9.6Gbps total throughput).
> 
> RSS is disabled by default.

Series applied, thank you.

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2018-07-13  0:31 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-12 11:54 [PATCH net-next 00/18] net: mvpp2: add RSS support Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 01/18] net: mvpp2: fix include guards in mvpp2_prs.h Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 02/18] net: mvpp2: define the number of RSS entries per table in mvpp2.h Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 03/18] net: mvpp2: make sure we use single queue mode on PPv2.1 Maxime Chevallier
2018-07-12 14:10   ` Sergei Shtylyov
2018-07-12 11:54 ` [PATCH net-next 04/18] net: mvpp2: make multi queue mode the default mode Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 05/18] net: mvpp2: use RSS only when using multi-queue mode Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 06/18] net: mvpp2: fix hardcoded number of rx queues Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 07/18] net: mvpp2: use only one rx queue per port per CPU Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 08/18] net: mvpp2: fix a typo in the RSS code Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 09/18] net: mvpp2: fix RSS register definitions Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 10/18] net: mvpp2: use one RSS table per port Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 11/18] net: mvpp2: RSS indirection table support Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 12/18] net: mvpp2: improve the distribution of packets on CPUs when using RSS Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 13/18] net: mvpp2: make sure we don't spread load on disabled CPUs Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 14/18] net: mvpp2: rename per-port RSS init function Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 15/18] net: mvpp2: use classifier to assign default rx queue Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 16/18] net: mvpp2: split ingress traffic into multiple flows Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 17/18] net: mvpp2: add an RSS classification step for each flow Maxime Chevallier
2018-07-12 11:54 ` [PATCH net-next 18/18] net: mvpp2: allow setting RSS flow hash parameters with ethtool Maxime Chevallier
2018-07-13  0:31 ` [PATCH net-next 00/18] net: mvpp2: add RSS support David Miller
