* [PATCH net-next 00/11] mlxsw: L3 HW stats improvements
@ 2022-06-16 10:42 Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 01/11] mlxsw: Trap ARP packets at layer 3 instead of layer 2 Ido Schimmel
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

While testing L3 HW stats [1] on top of mlxsw, two issues were found:

1. Stats cannot be enabled for more than 205 netdevs. This was fixed in
commit 4b7a632ac4e7 ("mlxsw: spectrum_cnt: Reorder counter pools").

2. ARP packets are counted as errors. Patch #1 takes care of that. See
the commit message for details.

Most of the remaining patches add selftests that would have discovered
that only about 205 netdevs can have L3 HW stats enabled, despite the HW
supporting many more. The obvious place to plug this in is the scale
test framework.

The scale tests are currently testing two things: that some number of
instances of a given resource can actually be created; and that when an
attempt is made to create more than the supported amount, the failures
are noted and handled gracefully.

However, the ability to allocate the resource does not mean that the
resource actually works when passing traffic. For that, make it possible
for a given scale test to also test traffic.

To that end, this patchset adds traffic tests. Their goal is to run
traffic and observe whether a sample of the allocated resource instances
actually performs its task. Traffic tests are only run on the positive
leg of the scale test (there is no point trying to pass traffic when the
expected outcome is that the resource will not be allocated). They are
opt-in: if a given test does not expose a traffic test, none is run.
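The opt-in convention can be sketched in shell. The function body below
is a stand-in for a real traffic test, and the helper name is invented
for illustration; the actual driver logic lives in the resource_scale.sh
scripts touched by this series:

```shell
#!/bin/bash
# Sketch of the opt-in convention: a scale test opts in to traffic
# testing by defining a <name>_traffic_test function. The driver probes
# for it with `type -t` and invokes it only if it exists.

tc_flower_traffic_test()	# stand-in body; a real test would pass traffic
{
	local target=$1

	echo "traffic test over $target instances"
}

run_traffic_test_if_exposed()	# hypothetical helper, not from the patchset
{
	local current_test=$1
	local target=$2
	local tt=${current_test}_traffic_test

	if [[ $(type -t $tt) == "function" ]]; then
		$tt "$target"
	fi
}

run_traffic_test_if_exposed tc_flower 8		# function exists, runs
run_traffic_test_if_exposed router_scale 8	# silently skipped
```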

The patchset proceeds as follows:

- Patches #2 and #3 add "devlink resource" support for the number of
  allocated RIFs and their capacity. This is necessary because, when
  evaluating how many L3 HW stats instances it should be possible to
  allocate, the limiting resource on Spectrum-2 and above is currently
  not the counters themselves, but the RIFs.

- Patch #6 adds support for invoking a traffic test, if a given scale
  test exposes one.

- Patch #7 adds support for skipping a given scale test. Because on
  Spectrum-2 and above the limiting factor for L3 HW stats instances is
  actually the number of RIFs, there is no point in running the failing
  leg of the scale test: it would test exhaustion of RIFs, not of RIF
  counters.

- With patch #8, the scale test drivers pass the target number to the
  cleanup function of a scale test.

- In patch #9, add a traffic test to the tc_flower selftest. This makes
  sure that the flow counters installed with the ACLs actually count as
  they are supposed to.

- In patch #10, add a new scale selftest for RIF counter scale, including a
  traffic test.

- In patch #11, the scale target for the tc_flower selftest is
  dynamically set instead of being hard coded.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ca0a53dcec9495d1dc5bbc369c810c520d728373

Amit Cohen (2):
  mlxsw: Trap ARP packets at layer 3 instead of layer 2
  selftests: mirror_gre_bridge_1q_lag: Enslave port to bridge before
    other configurations

Ido Schimmel (2):
  selftests: mlxsw: resource_scale: Update scale target after test setup
  selftests: spectrum-2: tc_flower_scale: Dynamically set scale target

Petr Machata (7):
  mlxsw: Keep track of number of allocated RIFs
  mlxsw: Add a resource describing number of RIFs
  selftests: mlxsw: resource_scale: Introduce traffic tests
  selftests: mlxsw: resource_scale: Allow skipping a test
  selftests: mlxsw: resource_scale: Pass target count to cleanup
  selftests: mlxsw: tc_flower_scale: Add a traffic test
  selftests: mlxsw: Add a RIF counter scale test

 .../net/ethernet/mellanox/mlxsw/spectrum.c    |  29 +++++
 .../net/ethernet/mellanox/mlxsw/spectrum.h    |   1 +
 .../ethernet/mellanox/mlxsw/spectrum_router.c |  18 +++
 .../ethernet/mellanox/mlxsw/spectrum_router.h |   1 +
 .../ethernet/mellanox/mlxsw/spectrum_trap.c   |   8 +-
 drivers/net/ethernet/mellanox/mlxsw/trap.h    |   4 +-
 .../drivers/net/mlxsw/rif_counter_scale.sh    | 107 ++++++++++++++++++
 .../net/mlxsw/spectrum-2/resource_scale.sh    |  31 ++++-
 .../net/mlxsw/spectrum-2/rif_counter_scale.sh |   1 +
 .../net/mlxsw/spectrum-2/tc_flower_scale.sh   |  15 ++-
 .../net/mlxsw/spectrum/resource_scale.sh      |  29 ++++-
 .../net/mlxsw/spectrum/rif_counter_scale.sh   |  34 ++++++
 .../drivers/net/mlxsw/tc_flower_scale.sh      |  17 +++
 .../forwarding/mirror_gre_bridge_1q_lag.sh    |   7 +-
 14 files changed, 283 insertions(+), 19 deletions(-)
 create mode 100644 tools/testing/selftests/drivers/net/mlxsw/rif_counter_scale.sh
 create mode 120000 tools/testing/selftests/drivers/net/mlxsw/spectrum-2/rif_counter_scale.sh
 create mode 100644 tools/testing/selftests/drivers/net/mlxsw/spectrum/rif_counter_scale.sh

-- 
2.36.1



* [PATCH net-next 01/11] mlxsw: Trap ARP packets at layer 3 instead of layer 2
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 02/11] mlxsw: Keep track of number of allocated RIFs Ido Schimmel
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Amit Cohen <amcohen@nvidia.com>

Currently, the traps 'ARP_REQUEST' and 'ARP_RESPONSE' occur at layer 2.
To allow the packets to be flooded, they are configured with the action
'MIRROR_TO_CPU' which means that the CPU receives a replica of the packet.

Today, Spectrum ASICs also support trapping ARP packets at layer 3. This
behavior is preferable, as the packets can simply be trapped and there
is no need to mirror them. An additional motivation is that with the
layer 2 traps, ARP packets are dropped in the router because they do not
have an IP header, and are then counted as error packets, which might
confuse users.

Add the relevant traps for layer 3 and use them instead of the existing
traps. There is no visible change to user space.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c | 8 ++++----
 drivers/net/ethernet/mellanox/mlxsw/trap.h          | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
index ed4d0d3448f3..d0baba38d2a3 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
@@ -953,16 +953,16 @@ static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = {
 		.trap = MLXSW_SP_TRAP_CONTROL(ARP_REQUEST, NEIGH_DISCOVERY,
 					      MIRROR),
 		.listeners_arr = {
-			MLXSW_SP_RXL_MARK(ARPBC, NEIGH_DISCOVERY, MIRROR_TO_CPU,
-					  false),
+			MLXSW_SP_RXL_MARK(ROUTER_ARPBC, NEIGH_DISCOVERY,
+					  TRAP_TO_CPU, false),
 		},
 	},
 	{
 		.trap = MLXSW_SP_TRAP_CONTROL(ARP_RESPONSE, NEIGH_DISCOVERY,
 					      MIRROR),
 		.listeners_arr = {
-			MLXSW_SP_RXL_MARK(ARPUC, NEIGH_DISCOVERY, MIRROR_TO_CPU,
-					  false),
+			MLXSW_SP_RXL_MARK(ROUTER_ARPUC, NEIGH_DISCOVERY,
+					  TRAP_TO_CPU, false),
 		},
 	},
 	{
diff --git a/drivers/net/ethernet/mellanox/mlxsw/trap.h b/drivers/net/ethernet/mellanox/mlxsw/trap.h
index d888498aed33..8da169663bda 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/trap.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/trap.h
@@ -27,8 +27,6 @@ enum {
 	MLXSW_TRAP_ID_PKT_SAMPLE = 0x38,
 	MLXSW_TRAP_ID_FID_MISS = 0x3D,
 	MLXSW_TRAP_ID_DECAP_ECN0 = 0x40,
-	MLXSW_TRAP_ID_ARPBC = 0x50,
-	MLXSW_TRAP_ID_ARPUC = 0x51,
 	MLXSW_TRAP_ID_MTUERROR = 0x52,
 	MLXSW_TRAP_ID_TTLERROR = 0x53,
 	MLXSW_TRAP_ID_LBERROR = 0x54,
@@ -71,6 +69,8 @@ enum {
 	MLXSW_TRAP_ID_IPV6_BFD = 0xD1,
 	MLXSW_TRAP_ID_ROUTER_ALERT_IPV4 = 0xD6,
 	MLXSW_TRAP_ID_ROUTER_ALERT_IPV6 = 0xD7,
+	MLXSW_TRAP_ID_ROUTER_ARPBC = 0xE0,
+	MLXSW_TRAP_ID_ROUTER_ARPUC = 0xE1,
 	MLXSW_TRAP_ID_DISCARD_NON_ROUTABLE = 0x11A,
 	MLXSW_TRAP_ID_DISCARD_ROUTER2 = 0x130,
 	MLXSW_TRAP_ID_DISCARD_ROUTER3 = 0x131,
-- 
2.36.1



* [PATCH net-next 02/11] mlxsw: Keep track of number of allocated RIFs
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 01/11] mlxsw: Trap ARP packets at layer 3 instead of layer 2 Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 03/11] mlxsw: Add a resource describing number of RIFs Ido Schimmel
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Petr Machata <petrm@nvidia.com>

In order to expose the number of RIFs as a resource, it is going to be
handy to have the number of currently-allocated RIFs available as a
single number. Introduce such a counter.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 6 ++++++
 drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index e3f52019cbcb..07d7e244dfbd 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -8134,6 +8134,7 @@ mlxsw_sp_rif_create(struct mlxsw_sp *mlxsw_sp,
 		mlxsw_sp_rif_counters_alloc(rif);
 	}
 
+	atomic_inc(&mlxsw_sp->router->rifs_count);
 	return rif;
 
 err_stats_enable:
@@ -8163,6 +8164,7 @@ static void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif)
 	struct mlxsw_sp_vr *vr;
 	int i;
 
+	atomic_dec(&mlxsw_sp->router->rifs_count);
 	mlxsw_sp_router_rif_gone_sync(mlxsw_sp, rif);
 	vr = &mlxsw_sp->router->vrs[rif->vr_id];
 
@@ -9652,6 +9654,7 @@ mlxsw_sp_ul_rif_create(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_vr *vr,
 	if (err)
 		goto ul_rif_op_err;
 
+	atomic_inc(&mlxsw_sp->router->rifs_count);
 	return ul_rif;
 
 ul_rif_op_err:
@@ -9664,6 +9667,7 @@ static void mlxsw_sp_ul_rif_destroy(struct mlxsw_sp_rif *ul_rif)
 {
 	struct mlxsw_sp *mlxsw_sp = ul_rif->mlxsw_sp;
 
+	atomic_dec(&mlxsw_sp->router->rifs_count);
 	mlxsw_sp_rif_ipip_lb_ul_rif_op(ul_rif, false);
 	mlxsw_sp->router->rifs[ul_rif->rif_index] = NULL;
 	kfree(ul_rif);
@@ -9819,6 +9823,7 @@ static int mlxsw_sp_rifs_init(struct mlxsw_sp *mlxsw_sp)
 
 	idr_init(&mlxsw_sp->router->rif_mac_profiles_idr);
 	atomic_set(&mlxsw_sp->router->rif_mac_profiles_count, 0);
+	atomic_set(&mlxsw_sp->router->rifs_count, 0);
 	devlink_resource_occ_get_register(devlink,
 					  MLXSW_SP_RESOURCE_RIF_MAC_PROFILES,
 					  mlxsw_sp_rif_mac_profiles_occ_get,
@@ -9832,6 +9837,7 @@ static void mlxsw_sp_rifs_fini(struct mlxsw_sp *mlxsw_sp)
 	struct devlink *devlink = priv_to_devlink(mlxsw_sp->core);
 	int i;
 
+	WARN_ON_ONCE(atomic_read(&mlxsw_sp->router->rifs_count));
 	for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); i++)
 		WARN_ON_ONCE(mlxsw_sp->router->rifs[i]);
 
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
index f7510be1cf2d..b5c83ec7a87f 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
@@ -20,6 +20,7 @@ struct mlxsw_sp_router {
 	struct mlxsw_sp_rif **rifs;
 	struct idr rif_mac_profiles_idr;
 	atomic_t rif_mac_profiles_count;
+	atomic_t rifs_count;
 	u8 max_rif_mac_profile;
 	struct mlxsw_sp_vr *vrs;
 	struct rhashtable neigh_ht;
-- 
2.36.1



* [PATCH net-next 03/11] mlxsw: Add a resource describing number of RIFs
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 01/11] mlxsw: Trap ARP packets at layer 3 instead of layer 2 Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 02/11] mlxsw: Keep track of number of allocated RIFs Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 04/11] selftests: mirror_gre_bridge_1q_lag: Enslave port to bridge before other configurations Ido Schimmel
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Petr Machata <petrm@nvidia.com>

The Spectrum ASIC has a limit on how many L3 devices (called RIFs) can
be created. The limit depends on the ASIC and FW revision, and mlxsw
reads it from the FW. In order to communicate both the maximum number of
RIFs and how many are currently in use (i.e., the occupancy), introduce
a corresponding devlink resource.
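Once registered, the resource should be observable from user space via
the devlink CLI. A hedged sketch follows: the device name and the
size/occupancy figures are illustrative, not taken from this patch, and
the fallback line merely demonstrates the expected shape of the output:

```shell
#!/bin/bash
# Query the "rifs" resource of a (hypothetical) Spectrum device. When
# devlink or the device is unavailable, fall back to an illustrative
# line that only shows the expected output shape.
dev=pci/0000:03:00.0	# illustrative device name

out=$(devlink resource show "$dev" 2>/dev/null | grep -w rifs)
if [ -z "$out" ]; then
	out="  name rifs size 1000 occ 4 unit entry"
fi
echo "$out"
```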

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 .../net/ethernet/mellanox/mlxsw/spectrum.c    | 29 +++++++++++++++++++
 .../net/ethernet/mellanox/mlxsw/spectrum.h    |  1 +
 .../ethernet/mellanox/mlxsw/spectrum_router.c | 12 ++++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index c949005f56dc..a62887b8d98e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -3580,6 +3580,25 @@ mlxsw_sp_resources_rif_mac_profile_register(struct mlxsw_core *mlxsw_core)
 					 &size_params);
 }
 
+static int mlxsw_sp_resources_rifs_register(struct mlxsw_core *mlxsw_core)
+{
+	struct devlink *devlink = priv_to_devlink(mlxsw_core);
+	struct devlink_resource_size_params size_params;
+	u64 max_rifs;
+
+	if (!MLXSW_CORE_RES_VALID(mlxsw_core, MAX_RIFS))
+		return -EIO;
+
+	max_rifs = MLXSW_CORE_RES_GET(mlxsw_core, MAX_RIFS);
+	devlink_resource_size_params_init(&size_params, max_rifs, max_rifs,
+					  1, DEVLINK_RESOURCE_UNIT_ENTRY);
+
+	return devlink_resource_register(devlink, "rifs", max_rifs,
+					 MLXSW_SP_RESOURCE_RIFS,
+					 DEVLINK_RESOURCE_ID_PARENT_TOP,
+					 &size_params);
+}
+
 static int mlxsw_sp1_resources_register(struct mlxsw_core *mlxsw_core)
 {
 	int err;
@@ -3604,8 +3623,13 @@ static int mlxsw_sp1_resources_register(struct mlxsw_core *mlxsw_core)
 	if (err)
 		goto err_resources_rif_mac_profile_register;
 
+	err = mlxsw_sp_resources_rifs_register(mlxsw_core);
+	if (err)
+		goto err_resources_rifs_register;
+
 	return 0;
 
+err_resources_rifs_register:
 err_resources_rif_mac_profile_register:
 err_policer_resources_register:
 err_resources_counter_register:
@@ -3638,8 +3662,13 @@ static int mlxsw_sp2_resources_register(struct mlxsw_core *mlxsw_core)
 	if (err)
 		goto err_resources_rif_mac_profile_register;
 
+	err = mlxsw_sp_resources_rifs_register(mlxsw_core);
+	if (err)
+		goto err_resources_rifs_register;
+
 	return 0;
 
+err_resources_rifs_register:
 err_resources_rif_mac_profile_register:
 err_policer_resources_register:
 err_resources_counter_register:
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index a60d2bbd3aa6..36c6f5b89c71 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -68,6 +68,7 @@ enum mlxsw_sp_resource_id {
 	MLXSW_SP_RESOURCE_GLOBAL_POLICERS,
 	MLXSW_SP_RESOURCE_SINGLE_RATE_POLICERS,
 	MLXSW_SP_RESOURCE_RIF_MAC_PROFILES,
+	MLXSW_SP_RESOURCE_RIFS,
 };
 
 struct mlxsw_sp_port;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index 07d7e244dfbd..4c7721506603 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -8323,6 +8323,13 @@ static u64 mlxsw_sp_rif_mac_profiles_occ_get(void *priv)
 	return atomic_read(&mlxsw_sp->router->rif_mac_profiles_count);
 }
 
+static u64 mlxsw_sp_rifs_occ_get(void *priv)
+{
+	const struct mlxsw_sp *mlxsw_sp = priv;
+
+	return atomic_read(&mlxsw_sp->router->rifs_count);
+}
+
 static struct mlxsw_sp_rif_mac_profile *
 mlxsw_sp_rif_mac_profile_create(struct mlxsw_sp *mlxsw_sp, const char *mac,
 				struct netlink_ext_ack *extack)
@@ -9828,6 +9835,10 @@ static int mlxsw_sp_rifs_init(struct mlxsw_sp *mlxsw_sp)
 					  MLXSW_SP_RESOURCE_RIF_MAC_PROFILES,
 					  mlxsw_sp_rif_mac_profiles_occ_get,
 					  mlxsw_sp);
+	devlink_resource_occ_get_register(devlink,
+					  MLXSW_SP_RESOURCE_RIFS,
+					  mlxsw_sp_rifs_occ_get,
+					  mlxsw_sp);
 
 	return 0;
 }
@@ -9841,6 +9852,7 @@ static void mlxsw_sp_rifs_fini(struct mlxsw_sp *mlxsw_sp)
 	for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); i++)
 		WARN_ON_ONCE(mlxsw_sp->router->rifs[i]);
 
+	devlink_resource_occ_get_unregister(devlink, MLXSW_SP_RESOURCE_RIFS);
 	devlink_resource_occ_get_unregister(devlink,
 					    MLXSW_SP_RESOURCE_RIF_MAC_PROFILES);
 	WARN_ON(!idr_is_empty(&mlxsw_sp->router->rif_mac_profiles_idr));
-- 
2.36.1



* [PATCH net-next 04/11] selftests: mirror_gre_bridge_1q_lag: Enslave port to bridge before other configurations
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (2 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 03/11] mlxsw: Add a resource describing number of RIFs Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 05/11] selftests: mlxsw: resource_scale: Update scale target after test setup Ido Schimmel
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Amit Cohen <amcohen@nvidia.com>

With the mlxsw driver, configurations are only offloaded when a physical
port is enslaved to the virtual device (e.g., to a bridge). In the
'mirror_gre_bridge_1q_lag' test, the bridge gets an address and route
before there are ports in the bridge, which means that these
configurations are not offloaded.

Until now, the test passed with the mlxsw driver even though the RIF of
the bridge was not in the hardware, because the ARP packets were trapped
at layer 2 and also mirrored, so there was no real need for the RIF in
hardware. The previous patch changed the traps 'ARP_REQUEST' and
'ARP_RESPONSE' to be performed at layer 3 instead of layer 2. With this
change, the ARP packets are not trapped during the test, as the RIF is
not in the hardware because of the order of configurations.

Reorder the configurations so that they are offloaded; the test will
then pass with the changed traps.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 .../selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh   | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh
index 28d568c48a73..91e431cd919e 100755
--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh
+++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh
@@ -141,12 +141,13 @@ switch_create()
 	ip link set dev $swp4 up
 
 	ip link add name br1 type bridge vlan_filtering 1
-	ip link set dev br1 up
-	__addr_add_del br1 add 192.0.2.129/32
-	ip -4 route add 192.0.2.130/32 dev br1
 
 	team_create lag loadbalance $swp3 $swp4
 	ip link set dev lag master br1
+
+	ip link set dev br1 up
+	__addr_add_del br1 add 192.0.2.129/32
+	ip -4 route add 192.0.2.130/32 dev br1
 }
 
 switch_destroy()
-- 
2.36.1



* [PATCH net-next 05/11] selftests: mlxsw: resource_scale: Update scale target after test setup
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (3 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 04/11] selftests: mirror_gre_bridge_1q_lag: Enslave port to bridge before other configurations Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 06/11] selftests: mlxsw: resource_scale: Introduce traffic tests Ido Schimmel
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

The scale of each resource is tested in the following manner:

1. The scale target is queried.
2. The test setup is prepared.
3. The test is invoked.

In some cases, the occupancy of a resource changes as part of the second
step, requiring the test to return a scale target that takes this change
into account.

Make this more robust by re-querying the scale target after the second
step.

Another possible solution is to swap the first and second steps, but
when a test needs to be skipped (i.e., scale target is zero), the setup
would have been in vain.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
---
 .../selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh   | 3 +++
 .../selftests/drivers/net/mlxsw/spectrum/resource_scale.sh     | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
index e9f65bd2e299..22f761442bad 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
@@ -38,6 +38,9 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 		target=$(${current_test}_get_target "$should_fail")
 		${current_test}_setup_prepare
 		setup_wait $num_netifs
+		# Update target in case occupancy of a certain resource changed
+		# following the test setup.
+		target=$(${current_test}_get_target "$should_fail")
 		${current_test}_test "$target" "$should_fail"
 		${current_test}_cleanup
 		devlink_reload
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
index dea33dc93790..12201acc00b9 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
@@ -43,6 +43,9 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 			target=$(${current_test}_get_target "$should_fail")
 			${current_test}_setup_prepare
 			setup_wait $num_netifs
+			# Update target in case occupancy of a certain resource
+			# changed following the test setup.
+			target=$(${current_test}_get_target "$should_fail")
 			${current_test}_test "$target" "$should_fail"
 			${current_test}_cleanup
 			if [[ "$should_fail" -eq 0 ]]; then
-- 
2.36.1



* [PATCH net-next 06/11] selftests: mlxsw: resource_scale: Introduce traffic tests
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (4 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 05/11] selftests: mlxsw: resource_scale: Update scale target after test setup Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 07/11] selftests: mlxsw: resource_scale: Allow skipping a test Ido Schimmel
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Petr Machata <petrm@nvidia.com>

The scale tests are currently testing two things: that some number of
instances of a given resource can actually be created; and that when an
attempt is made to create more than the supported amount, the failures are
noted and handled gracefully.

However, the ability to allocate the resource does not mean that the
resource actually works when passing traffic. For that, make it possible
for a given scale test to also test traffic.

A traffic test is only run on the positive leg of the scale test (there
is no point trying to pass traffic when the expected outcome is that the
resource will not be allocated). Traffic tests are opt-in: if a given
test does not expose one, it is not run.

To this end, delay the test cleanup until after the traffic test is run.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 .../drivers/net/mlxsw/spectrum-2/resource_scale.sh   | 12 ++++++++++--
 .../drivers/net/mlxsw/spectrum/resource_scale.sh     | 11 ++++++++++-
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
index 22f761442bad..6d7814ba3c03 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
@@ -42,13 +42,21 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 		# following the test setup.
 		target=$(${current_test}_get_target "$should_fail")
 		${current_test}_test "$target" "$should_fail"
-		${current_test}_cleanup
-		devlink_reload
 		if [[ "$should_fail" -eq 0 ]]; then
 			log_test "'$current_test' $target"
+
+			if ((!RET)); then
+				tt=${current_test}_traffic_test
+				if [[ $(type -t $tt) == "function" ]]; then
+					$tt "$target"
+					log_test "'$current_test' $target traffic test"
+				fi
+			fi
 		else
 			log_test "'$current_test' overflow $target"
 		fi
+		${current_test}_cleanup
+		devlink_reload
 		RET_FIN=$(( RET_FIN || RET ))
 	done
 done
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
index 12201acc00b9..a1bc93b966ae 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
@@ -47,12 +47,21 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 			# changed following the test setup.
 			target=$(${current_test}_get_target "$should_fail")
 			${current_test}_test "$target" "$should_fail"
-			${current_test}_cleanup
 			if [[ "$should_fail" -eq 0 ]]; then
 				log_test "'$current_test' [$profile] $target"
+
+				if ((!RET)); then
+					tt=${current_test}_traffic_test
+					if [[ $(type -t $tt) == "function" ]]
+					then
+						$tt "$target"
+						log_test "'$current_test' [$profile] $target traffic test"
+					fi
+				fi
 			else
 				log_test "'$current_test' [$profile] overflow $target"
 			fi
+			${current_test}_cleanup
 			RET_FIN=$(( RET_FIN || RET ))
 		done
 	done
-- 
2.36.1



* [PATCH net-next 07/11] selftests: mlxsw: resource_scale: Allow skipping a test
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (5 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 06/11] selftests: mlxsw: resource_scale: Introduce traffic tests Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 08/11] selftests: mlxsw: resource_scale: Pass target count to cleanup Ido Schimmel
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Petr Machata <petrm@nvidia.com>

The scale tests are currently testing two things: that some number of
instances of a given resource can actually be created; and that when an
attempt is made to create more than the supported amount, the failures are
noted and handled gracefully.

Sometimes the scale test depends on more than one resource. In particular,
a following patch will add a RIF counter scale test, which depends on the
number of RIF counters that can be bound, and also on the number of RIFs
that can be created.

When the test is limited by the auxiliary resource and not by the primary
one, there's no point trying to run the overflow test, because it would be
testing exhaustion of the wrong resource.

To support this use case, when the $test_get_target yields 0, skip the test
instead.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 .../selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh | 5 +++++
 .../selftests/drivers/net/mlxsw/spectrum/resource_scale.sh   | 4 ++++
 2 files changed, 9 insertions(+)

diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
index 6d7814ba3c03..afe17b108b46 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
@@ -36,6 +36,11 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 	for should_fail in 0 1; do
 		RET=0
 		target=$(${current_test}_get_target "$should_fail")
+		if ((target == 0)); then
+			log_test_skip "'$current_test' should_fail=$should_fail test"
+			continue
+		fi
+
 		${current_test}_setup_prepare
 		setup_wait $num_netifs
 		# Update target in case occupancy of a certain resource changed
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
index a1bc93b966ae..c0da22cd7d20 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
@@ -41,6 +41,10 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 		for should_fail in 0 1; do
 			RET=0
 			target=$(${current_test}_get_target "$should_fail")
+			if ((target == 0)); then
+				log_test_skip "'$current_test' [$profile] should_fail=$should_fail test"
+				continue
+			fi
 			${current_test}_setup_prepare
 			setup_wait $num_netifs
 			# Update target in case occupancy of a certain resource
-- 
2.36.1



* [PATCH net-next 08/11] selftests: mlxsw: resource_scale: Pass target count to cleanup
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (6 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 07/11] selftests: mlxsw: resource_scale: Allow skipping a test Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 09/11] selftests: mlxsw: tc_flower_scale: Add a traffic test Ido Schimmel
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Petr Machata <petrm@nvidia.com>

The scale tests verify the behavior of mlxsw when the number of instances
of some resource reaches the ASIC capacity. That number of instances is
referred to as the "target" number.

So far, no scale test needed to know this target number in order to clean
up. E.g. the tc_flower test simply removes the clsact qdisc that all the
tested filters are hooked onto, and that takes care of removing all the
filters.

However, the RIF counter test, which is added in a later patch in this
series, creates VLAN netdevices as part of the test, and the cleanup needs
to undo them again. For that it needs to know how many there were. To
support this usage, pass the target number to the cleanup callback.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 .../selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh    | 2 +-
 .../selftests/drivers/net/mlxsw/spectrum/resource_scale.sh      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
index afe17b108b46..1a7a472edfd0 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
@@ -60,7 +60,7 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 		else
 			log_test "'$current_test' overflow $target"
 		fi
-		${current_test}_cleanup
+		${current_test}_cleanup $target
 		devlink_reload
 		RET_FIN=$(( RET_FIN || RET ))
 	done
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
index c0da22cd7d20..70c9da8fe303 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
@@ -65,7 +65,7 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
 			else
 				log_test "'$current_test' [$profile] overflow $target"
 			fi
-			${current_test}_cleanup
+			${current_test}_cleanup $target
 			RET_FIN=$(( RET_FIN || RET ))
 		done
 	done
-- 
2.36.1



* [PATCH net-next 09/11] selftests: mlxsw: tc_flower_scale: Add a traffic test
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (7 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 08/11] selftests: mlxsw: resource_scale: Pass target count to cleanup Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 10/11] selftests: mlxsw: Add a RIF counter scale test Ido Schimmel
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Petr Machata <petrm@nvidia.com>

Add a test that checks that the created filters do actually trigger on
matching traffic.

Exercising all the rules would be a very lengthy process. Instead, take a
log2-sized subset of the rules, halving the index each time. The logic
behind this is that every bit of the instantiated item's number is then
exercised. This should catch issues whether they happen at the high end,
the low end, or somewhere in between.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 .../drivers/net/mlxsw/tc_flower_scale.sh        | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
index aa74be9f47c8..d3d9e60d6ddf 100644
--- a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
@@ -77,6 +77,7 @@ tc_flower_rules_create()
 			filter add dev $h2 ingress \
 				prot ipv6 \
 				pref 1000 \
+				handle 42$i \
 				flower $tcflags dst_ip $(tc_flower_addr $i) \
 				action drop
 		EOF
@@ -121,3 +122,19 @@ tc_flower_test()
 	tcflags="skip_sw"
 	__tc_flower_test $count $should_fail
 }
+
+tc_flower_traffic_test()
+{
+	local count=$1; shift
+	local i;
+
+	for ((i = count - 1; i > 0; i /= 2)); do
+		$MZ -6 $h1 -c 1 -d 20msec -p 100 -a own -b $(mac_get $h2) \
+		    -A $(tc_flower_addr 0) -B $(tc_flower_addr $i) \
+		    -q -t udp sp=54321,dp=12345
+	done
+	for ((i = count - 1; i > 0; i /= 2)); do
+		tc_check_packets "dev $h2 ingress" 42$i 1
+		check_err $? "Traffic not seen at rule #$i"
+	done
+}
-- 
2.36.1



* [PATCH net-next 10/11] selftests: mlxsw: Add a RIF counter scale test
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (8 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 09/11] selftests: mlxsw: tc_flower_scale: Add a traffic test Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-16 10:42 ` [PATCH net-next 11/11] selftests: spectrum-2: tc_flower_scale: Dynamically set scale target Ido Schimmel
  2022-06-17 10:00 ` [PATCH net-next 00/11] mlxsw: L3 HW stats improvements patchwork-bot+netdevbpf
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

From: Petr Machata <petrm@nvidia.com>

This test creates as many RIFs as possible, ideally more than there can be
RIF counters (though that is currently only possible on Spectrum-1). It
then tries to enable L3 HW stats on each of the RIFs. It also contains a
traffic test, which runs traffic through a log2-sized subset of those
counters and checks that the traffic shows up in the counter values.

As with the tc_flower traffic test, take a log2-sized subset of the
counters. The logic behind this is that every bit of the instantiated
item's number is then exercised. This should catch issues whether they
happen at the high end, the low end, or somewhere in between.
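For illustration, the `rif_counter_addr4` helper introduced by this patch
packs each RIF index into its own /30 subnet. A standalone sketch of that
arithmetic, with a few sample values (the interpretation of `p` as the
host within the subnet is inferred from how the test uses it):

```shell
# Sketch mirroring the rif_counter_addr4 arithmetic from this patch:
# RIF index i selects a /30 subnet under 192.0.0.0, and p selects the
# host within it (p=2 is the address configured on the $h2 VLAN).
rif_counter_addr4()
{
	local i=$1; shift
	local p=$1; shift

	printf 192.0.%d.%d $((i / 64)) $(((4 * i % 256) + p))
}

rif_counter_addr4 1 2; echo    # 192.0.0.6
rif_counter_addr4 63 1; echo   # 192.0.0.253
rif_counter_addr4 64 1; echo   # 192.0.1.1
```

The third octet advances every 64 RIFs, as the fourth octet wraps around
after 64 blocks of 4 addresses.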

Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 .../drivers/net/mlxsw/rif_counter_scale.sh    | 107 ++++++++++++++++++
 .../net/mlxsw/spectrum-2/resource_scale.sh    |  11 +-
 .../net/mlxsw/spectrum-2/rif_counter_scale.sh |   1 +
 .../net/mlxsw/spectrum/resource_scale.sh      |  11 +-
 .../net/mlxsw/spectrum/rif_counter_scale.sh   |  34 ++++++
 5 files changed, 162 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/selftests/drivers/net/mlxsw/rif_counter_scale.sh
 create mode 120000 tools/testing/selftests/drivers/net/mlxsw/spectrum-2/rif_counter_scale.sh
 create mode 100644 tools/testing/selftests/drivers/net/mlxsw/spectrum/rif_counter_scale.sh

diff --git a/tools/testing/selftests/drivers/net/mlxsw/rif_counter_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/rif_counter_scale.sh
new file mode 100644
index 000000000000..a43a9926e690
--- /dev/null
+++ b/tools/testing/selftests/drivers/net/mlxsw/rif_counter_scale.sh
@@ -0,0 +1,107 @@
+# SPDX-License-Identifier: GPL-2.0
+
+RIF_COUNTER_NUM_NETIFS=2
+
+rif_counter_addr4()
+{
+	local i=$1; shift
+	local p=$1; shift
+
+	printf 192.0.%d.%d $((i / 64)) $(((4 * i % 256) + p))
+}
+
+rif_counter_addr4pfx()
+{
+	rif_counter_addr4 $@
+	printf /30
+}
+
+rif_counter_h1_create()
+{
+	simple_if_init $h1
+}
+
+rif_counter_h1_destroy()
+{
+	simple_if_fini $h1
+}
+
+rif_counter_h2_create()
+{
+	simple_if_init $h2
+}
+
+rif_counter_h2_destroy()
+{
+	simple_if_fini $h2
+}
+
+rif_counter_setup_prepare()
+{
+	h1=${NETIFS[p1]}
+	h2=${NETIFS[p2]}
+
+	vrf_prepare
+
+	rif_counter_h1_create
+	rif_counter_h2_create
+}
+
+rif_counter_cleanup()
+{
+	local count=$1; shift
+
+	pre_cleanup
+
+	for ((i = 1; i <= count; i++)); do
+		vlan_destroy $h2 $i
+	done
+
+	rif_counter_h2_destroy
+	rif_counter_h1_destroy
+
+	vrf_cleanup
+
+	if [[ -v RIF_COUNTER_BATCH_FILE ]]; then
+		rm -f $RIF_COUNTER_BATCH_FILE
+	fi
+}
+
+
+rif_counter_test()
+{
+	local count=$1; shift
+	local should_fail=$1; shift
+
+	RIF_COUNTER_BATCH_FILE="$(mktemp)"
+
+	for ((i = 1; i <= count; i++)); do
+		vlan_create $h2 $i v$h2 $(rif_counter_addr4pfx $i 2)
+	done
+	for ((i = 1; i <= count; i++)); do
+		cat >> $RIF_COUNTER_BATCH_FILE <<-EOF
+			stats set dev $h2.$i l3_stats on
+		EOF
+	done
+
+	ip -b $RIF_COUNTER_BATCH_FILE
+	check_err_fail $should_fail $? "RIF counter enablement"
+}
+
+rif_counter_traffic_test()
+{
+	local count=$1; shift
+	local i;
+
+	for ((i = count; i > 0; i /= 2)); do
+		$MZ $h1 -Q $i -c 1 -d 20msec -p 100 -a own -b $(mac_get $h2) \
+		    -A $(rif_counter_addr4 $i 1) \
+		    -B $(rif_counter_addr4 $i 2) \
+		    -q -t udp sp=54321,dp=12345
+	done
+	for ((i = count; i > 0; i /= 2)); do
+		busywait "$TC_HIT_TIMEOUT" until_counter_is "== 1" \
+			 hw_stats_get l3_stats $h2.$i rx packets > /dev/null
+		check_err $? "Traffic not seen at RIF $h2.$i"
+	done
+}
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
index 1a7a472edfd0..688338bbeb97 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh
@@ -25,7 +25,16 @@ cleanup()
 
 trap cleanup EXIT
 
-ALL_TESTS="router tc_flower mirror_gre tc_police port rif_mac_profile"
+ALL_TESTS="
+	router
+	tc_flower
+	mirror_gre
+	tc_police
+	port
+	rif_mac_profile
+	rif_counter
+"
+
 for current_test in ${TESTS:-$ALL_TESTS}; do
 	RET_FIN=0
 	source ${current_test}_scale.sh
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/rif_counter_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/rif_counter_scale.sh
new file mode 120000
index 000000000000..1f5752e8ffc0
--- /dev/null
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/rif_counter_scale.sh
@@ -0,0 +1 @@
+../spectrum/rif_counter_scale.sh
\ No newline at end of file
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
index 70c9da8fe303..95d9f710a630 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
@@ -22,7 +22,16 @@ cleanup()
 devlink_sp_read_kvd_defaults
 trap cleanup EXIT
 
-ALL_TESTS="router tc_flower mirror_gre tc_police port rif_mac_profile"
+ALL_TESTS="
+	router
+	tc_flower
+	mirror_gre
+	tc_police
+	port
+	rif_mac_profile
+	rif_counter
+"
+
 for current_test in ${TESTS:-$ALL_TESTS}; do
 	RET_FIN=0
 	source ${current_test}_scale.sh
diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/rif_counter_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/rif_counter_scale.sh
new file mode 100644
index 000000000000..d44536276e8a
--- /dev/null
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/rif_counter_scale.sh
@@ -0,0 +1,34 @@
+# SPDX-License-Identifier: GPL-2.0
+source ../rif_counter_scale.sh
+
+rif_counter_get_target()
+{
+	local should_fail=$1; shift
+	local max_cnts
+	local max_rifs
+	local target
+
+	max_rifs=$(devlink_resource_size_get rifs)
+	max_cnts=$(devlink_resource_size_get counters rif)
+
+	# Remove already allocated RIFs.
+	((max_rifs -= $(devlink_resource_occ_get rifs)))
+
+	# 10 KVD slots per counter, ingress+egress counters per RIF
+	((max_cnts /= 20))
+
+	# Pointless to run the overflow test if we don't have enough RIFs to
+	# host all the counters.
+	if ((max_cnts > max_rifs && should_fail)); then
+		echo 0
+		return
+	fi
+
+	target=$((max_rifs < max_cnts ? max_rifs : max_cnts))
+
+	if ((! should_fail)); then
+		echo $target
+	else
+		echo $((target + 1))
+	fi
+}
-- 
2.36.1



* [PATCH net-next 11/11] selftests: spectrum-2: tc_flower_scale: Dynamically set scale target
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (9 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 10/11] selftests: mlxsw: Add a RIF counter scale test Ido Schimmel
@ 2022-06-16 10:42 ` Ido Schimmel
  2022-06-17 10:00 ` [PATCH net-next 00/11] mlxsw: L3 HW stats improvements patchwork-bot+netdevbpf
  11 siblings, 0 replies; 13+ messages in thread
From: Ido Schimmel @ 2022-06-16 10:42 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw, Ido Schimmel

Instead of hard-coding the scale target in the test, dynamically set it
based on the maximum number of flow counters and their current
occupancy.
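Plugging in the numbers from the old hard-coded comment (30,720 flow
counters, six already in use for multicast routing) as purely hypothetical
inputs, the computation in the patch works out as:

```shell
# Hypothetical arithmetic mirroring tc_flower_get_target after this
# patch: subtract the counters already allocated, then halve, since each
# filter consumes two flow counters (one for packets, one for bytes).
max_cnts=30720                 # hypothetical devlink resource size
occ=6                          # hypothetical current occupancy
((max_cnts -= occ))
((max_cnts /= 2))
echo "$max_cnts"               # prints: 15357
echo "$((max_cnts + 1))"       # overflow target for should_fail=1: 15358
```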

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
---
 .../net/mlxsw/spectrum-2/tc_flower_scale.sh       | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower_scale.sh
index efd798a85931..4444bbace1a9 100644
--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower_scale.sh
@@ -4,17 +4,22 @@ source ../tc_flower_scale.sh
 tc_flower_get_target()
 {
 	local should_fail=$1; shift
+	local max_cnts
 
 	# The driver associates a counter with each tc filter, which means the
 	# number of supported filters is bounded by the number of available
 	# counters.
-	# Currently, the driver supports 30K (30,720) flow counters and six of
-	# these are used for multicast routing.
-	local target=30714
+	max_cnts=$(devlink_resource_size_get counters flow)
+
+	# Remove already allocated counters.
+	((max_cnts -= $(devlink_resource_occ_get counters flow)))
+
+	# Each rule uses two counters, for packets and bytes.
+	((max_cnts /= 2))
 
 	if ((! should_fail)); then
-		echo $target
+		echo $max_cnts
 	else
-		echo $((target + 1))
+		echo $((max_cnts + 1))
 	fi
 }
-- 
2.36.1



* Re: [PATCH net-next 00/11] mlxsw: L3 HW stats improvements
  2022-06-16 10:42 [PATCH net-next 00/11] mlxsw: L3 HW stats improvements Ido Schimmel
                   ` (10 preceding siblings ...)
  2022-06-16 10:42 ` [PATCH net-next 11/11] selftests: spectrum-2: tc_flower_scale: Dynamically set scale target Ido Schimmel
@ 2022-06-17 10:00 ` patchwork-bot+netdevbpf
  11 siblings, 0 replies; 13+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-06-17 10:00 UTC (permalink / raw)
  To: Ido Schimmel; +Cc: netdev, davem, kuba, pabeni, edumazet, petrm, amcohen, mlxsw

Hello:

This series was applied to netdev/net-next.git (master)
by David S. Miller <davem@davemloft.net>:

On Thu, 16 Jun 2022 13:42:34 +0300 you wrote:
> While testing L3 HW stats [1] on top of mlxsw, two issues were found:
> 
> 1. Stats cannot be enabled for more than 205 netdevs. This was fixed in
> commit 4b7a632ac4e7 ("mlxsw: spectrum_cnt: Reorder counter pools").
> 
> 2. ARP packets are counted as errors. Patch #1 takes care of that. See
> the commit message for details.
> 
> [...]

Here is the summary with links:
  - [net-next,01/11] mlxsw: Trap ARP packets at layer 3 instead of layer 2
    https://git.kernel.org/netdev/net-next/c/4b1cc357f843
  - [net-next,02/11] mlxsw: Keep track of number of allocated RIFs
    https://git.kernel.org/netdev/net-next/c/b9840fe035ac
  - [net-next,03/11] mlxsw: Add a resource describing number of RIFs
    https://git.kernel.org/netdev/net-next/c/4ec2feb26cc3
  - [net-next,04/11] selftests: mirror_gre_bridge_1q_lag: Enslave port to bridge before other configurations
    https://git.kernel.org/netdev/net-next/c/e386a527fc08
  - [net-next,05/11] selftests: mlxsw: resource_scale: Update scale target after test setup
    https://git.kernel.org/netdev/net-next/c/d3ffeb2dba63
  - [net-next,06/11] selftests: mlxsw: resource_scale: Introduce traffic tests
    https://git.kernel.org/netdev/net-next/c/3128b9f51ee7
  - [net-next,07/11] selftests: mlxsw: resource_scale: Allow skipping a test
    https://git.kernel.org/netdev/net-next/c/8cad339db339
  - [net-next,08/11] selftests: mlxsw: resource_scale: Pass target count to cleanup
    https://git.kernel.org/netdev/net-next/c/35d5829e86c2
  - [net-next,09/11] selftests: mlxsw: tc_flower_scale: Add a traffic test
    https://git.kernel.org/netdev/net-next/c/dd5d20e17c96
  - [net-next,10/11] selftests: mlxsw: Add a RIF counter scale test
    https://git.kernel.org/netdev/net-next/c/be00853bfd2e
  - [net-next,11/11] selftests: spectrum-2: tc_flower_scale: Dynamically set scale target
    https://git.kernel.org/netdev/net-next/c/ed62af45467a

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



