* [PATCH net-next 00/14] mlxsw: Shared buffer improvements
@ 2019-04-22 12:08 Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 01/14] net: devlink: Add extack to shared buffer operations Ido Schimmel
                   ` (15 more replies)
  0 siblings, 16 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

This patchset includes two improvements to shared buffer configuration
in mlxsw.

The first part of this patchset forbids the user from performing illegal
shared buffer configurations that can result in unnecessary packet loss.
To better communicate these configuration failures to the user, extack
is propagated from devlink towards the drivers. This is done in patches
#1-#8.

The second part of the patchset deals with the shared buffer
configuration of the CPU port. When a packet is trapped by the device,
it is sent across the PCI bus to the attached host CPU. From the
device's perspective, it is as if the packet is transmitted through the
CPU port.

While testing traffic directed at the CPU, it became apparent that for
certain packet sizes and burst sizes, the current shared buffer
configuration of the CPU port is inadequate and results in packet drops.
The configuration is adjusted by patches #9-#14, which create two new
pools - one ingress and one egress - dedicated to CPU traffic.

Ido Schimmel (14):
  net: devlink: Add extack to shared buffer operations
  mlxsw: spectrum_buffers: Add extack messages for invalid
    configurations
  mlxsw: spectrum_buffers: Use defines for pool indices
  mlxsw: spectrum_buffers: Add ability to veto pool's configuration
  mlxsw: spectrum_buffers: Add ability to veto TC's configuration
  mlxsw: spectrum_buffers: Forbid configuration of multicast pool
  mlxsw: spectrum_buffers: Forbid changing threshold type of first
    egress pool
  mlxsw: spectrum_buffers: Forbid changing multicast TCs' attributes
  mlxsw: spectrum_buffers: Remove assumption about pool order
  mlxsw: spectrum_buffers: Add pools for CPU traffic
  mlxsw: spectrum_buffers: Use new CPU ingress pool for control packets
  mlxsw: spectrum_buffers: Split business logic from
    mlxsw_sp_port_sb_pms_init()
  mlxsw: spectrum_buffers: Allow skipping ingress port quota
    configuration
  mlxsw: spectrum_buffers: Adjust CPU port shared buffer egress quotas

 drivers/net/ethernet/mellanox/mlxsw/core.c    |  16 +-
 drivers/net/ethernet/mellanox/mlxsw/core.h    |   8 +-
 .../net/ethernet/mellanox/mlxsw/spectrum.h    |   8 +-
 .../mellanox/mlxsw/spectrum_buffers.c         | 388 ++++++++++++------
 .../net/ethernet/netronome/nfp/nfp_devlink.c  |   3 +-
 include/net/devlink.h                         |   8 +-
 net/core/devlink.c                            |  22 +-
 7 files changed, 300 insertions(+), 153 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH net-next 01/14] net: devlink: Add extack to shared buffer operations
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 02/14] mlxsw: spectrum_buffers: Add extack messages for invalid configurations Ido Schimmel
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw,
	Ido Schimmel, Jakub Kicinski

Add extack to shared buffer set operations, so that meaningful error
messages can be propagated to the user.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 drivers/net/ethernet/mellanox/mlxsw/core.c    |  9 +++++---
 .../net/ethernet/netronome/nfp/nfp_devlink.c  |  3 ++-
 include/net/devlink.h                         |  8 ++++---
 net/core/devlink.c                            | 22 +++++++++++--------
 4 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index 9e8e3e92f369..a6533d42b285 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -781,7 +781,8 @@ mlxsw_devlink_sb_pool_get(struct devlink *devlink,
 static int
 mlxsw_devlink_sb_pool_set(struct devlink *devlink,
 			  unsigned int sb_index, u16 pool_index, u32 size,
-			  enum devlink_sb_threshold_type threshold_type)
+			  enum devlink_sb_threshold_type threshold_type,
+			  struct netlink_ext_ack *extack)
 {
 	struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
 	struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
@@ -829,7 +830,8 @@ static int mlxsw_devlink_sb_port_pool_get(struct devlink_port *devlink_port,
 
 static int mlxsw_devlink_sb_port_pool_set(struct devlink_port *devlink_port,
 					  unsigned int sb_index, u16 pool_index,
-					  u32 threshold)
+					  u32 threshold,
+					  struct netlink_ext_ack *extack)
 {
 	struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
 	struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
@@ -864,7 +866,8 @@ static int
 mlxsw_devlink_sb_tc_pool_bind_set(struct devlink_port *devlink_port,
 				  unsigned int sb_index, u16 tc_index,
 				  enum devlink_sb_pool_type pool_type,
-				  u16 pool_index, u32 threshold)
+				  u16 pool_index, u32 threshold,
+				  struct netlink_ext_ack *extack)
 {
 	struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
 	struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
index 8e7591241e7c..c50fce42f473 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
@@ -144,7 +144,8 @@ nfp_devlink_sb_pool_get(struct devlink *devlink, unsigned int sb_index,
 static int
 nfp_devlink_sb_pool_set(struct devlink *devlink, unsigned int sb_index,
 			u16 pool_index,
-			u32 size, enum devlink_sb_threshold_type threshold_type)
+			u32 size, enum devlink_sb_threshold_type threshold_type,
+			struct netlink_ext_ack *extack)
 {
 	struct nfp_pf *pf = devlink_priv(devlink);
 
diff --git a/include/net/devlink.h b/include/net/devlink.h
index 70c7d1ac8344..4f5e41613503 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -491,13 +491,14 @@ struct devlink_ops {
 			   struct devlink_sb_pool_info *pool_info);
 	int (*sb_pool_set)(struct devlink *devlink, unsigned int sb_index,
 			   u16 pool_index, u32 size,
-			   enum devlink_sb_threshold_type threshold_type);
+			   enum devlink_sb_threshold_type threshold_type,
+			   struct netlink_ext_ack *extack);
 	int (*sb_port_pool_get)(struct devlink_port *devlink_port,
 				unsigned int sb_index, u16 pool_index,
 				u32 *p_threshold);
 	int (*sb_port_pool_set)(struct devlink_port *devlink_port,
 				unsigned int sb_index, u16 pool_index,
-				u32 threshold);
+				u32 threshold, struct netlink_ext_ack *extack);
 	int (*sb_tc_pool_bind_get)(struct devlink_port *devlink_port,
 				   unsigned int sb_index,
 				   u16 tc_index,
@@ -507,7 +508,8 @@ struct devlink_ops {
 				   unsigned int sb_index,
 				   u16 tc_index,
 				   enum devlink_sb_pool_type pool_type,
-				   u16 pool_index, u32 threshold);
+				   u16 pool_index, u32 threshold,
+				   struct netlink_ext_ack *extack);
 	int (*sb_occ_snapshot)(struct devlink *devlink,
 			       unsigned int sb_index);
 	int (*sb_occ_max_clear)(struct devlink *devlink,
diff --git a/net/core/devlink.c b/net/core/devlink.c
index b2715a187a11..7b91605e75d6 100644
--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -1047,14 +1047,15 @@ static int devlink_nl_cmd_sb_pool_get_dumpit(struct sk_buff *msg,
 
 static int devlink_sb_pool_set(struct devlink *devlink, unsigned int sb_index,
 			       u16 pool_index, u32 size,
-			       enum devlink_sb_threshold_type threshold_type)
+			       enum devlink_sb_threshold_type threshold_type,
+			       struct netlink_ext_ack *extack)
 
 {
 	const struct devlink_ops *ops = devlink->ops;
 
 	if (ops->sb_pool_set)
 		return ops->sb_pool_set(devlink, sb_index, pool_index,
-					size, threshold_type);
+					size, threshold_type, extack);
 	return -EOPNOTSUPP;
 }
 
@@ -1082,7 +1083,8 @@ static int devlink_nl_cmd_sb_pool_set_doit(struct sk_buff *skb,
 
 	size = nla_get_u32(info->attrs[DEVLINK_ATTR_SB_POOL_SIZE]);
 	return devlink_sb_pool_set(devlink, devlink_sb->index,
-				   pool_index, size, threshold_type);
+				   pool_index, size, threshold_type,
+				   info->extack);
 }
 
 static int devlink_nl_sb_port_pool_fill(struct sk_buff *msg,
@@ -1243,14 +1245,15 @@ static int devlink_nl_cmd_sb_port_pool_get_dumpit(struct sk_buff *msg,
 
 static int devlink_sb_port_pool_set(struct devlink_port *devlink_port,
 				    unsigned int sb_index, u16 pool_index,
-				    u32 threshold)
+				    u32 threshold,
+				    struct netlink_ext_ack *extack)
 
 {
 	const struct devlink_ops *ops = devlink_port->devlink->ops;
 
 	if (ops->sb_port_pool_set)
 		return ops->sb_port_pool_set(devlink_port, sb_index,
-					     pool_index, threshold);
+					     pool_index, threshold, extack);
 	return -EOPNOTSUPP;
 }
 
@@ -1273,7 +1276,7 @@ static int devlink_nl_cmd_sb_port_pool_set_doit(struct sk_buff *skb,
 
 	threshold = nla_get_u32(info->attrs[DEVLINK_ATTR_SB_THRESHOLD]);
 	return devlink_sb_port_pool_set(devlink_port, devlink_sb->index,
-					pool_index, threshold);
+					pool_index, threshold, info->extack);
 }
 
 static int
@@ -1472,7 +1475,8 @@ devlink_nl_cmd_sb_tc_pool_bind_get_dumpit(struct sk_buff *msg,
 static int devlink_sb_tc_pool_bind_set(struct devlink_port *devlink_port,
 				       unsigned int sb_index, u16 tc_index,
 				       enum devlink_sb_pool_type pool_type,
-				       u16 pool_index, u32 threshold)
+				       u16 pool_index, u32 threshold,
+				       struct netlink_ext_ack *extack)
 
 {
 	const struct devlink_ops *ops = devlink_port->devlink->ops;
@@ -1480,7 +1484,7 @@ static int devlink_sb_tc_pool_bind_set(struct devlink_port *devlink_port,
 	if (ops->sb_tc_pool_bind_set)
 		return ops->sb_tc_pool_bind_set(devlink_port, sb_index,
 						tc_index, pool_type,
-						pool_index, threshold);
+						pool_index, threshold, extack);
 	return -EOPNOTSUPP;
 }
 
@@ -1515,7 +1519,7 @@ static int devlink_nl_cmd_sb_tc_pool_bind_set_doit(struct sk_buff *skb,
 	threshold = nla_get_u32(info->attrs[DEVLINK_ATTR_SB_THRESHOLD]);
 	return devlink_sb_tc_pool_bind_set(devlink_port, devlink_sb->index,
 					   tc_index, pool_type,
-					   pool_index, threshold);
+					   pool_index, threshold, info->extack);
 }
 
 static int devlink_nl_cmd_sb_occ_snapshot_doit(struct sk_buff *skb,
-- 
2.20.1



* [PATCH net-next 02/14] mlxsw: spectrum_buffers: Add extack messages for invalid configurations
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 01/14] net: devlink: Add extack to shared buffer operations Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 03/14] mlxsw: spectrum_buffers: Use defines for pool indices Ido Schimmel
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Add extack messages to better communicate invalid configuration to the
user.

Example:

# devlink sb pool set pci/0000:01:00.0 pool 0 size 104857600 thtype dynamic
Error: mlxsw_spectrum: Exceeded shared buffer size.
devlink answers: Invalid argument

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlxsw/core.c    |  7 +++--
 drivers/net/ethernet/mellanox/mlxsw/core.h    |  8 ++++--
 .../net/ethernet/mellanox/mlxsw/spectrum.h    |  8 ++++--
 .../mellanox/mlxsw/spectrum_buffers.c         | 28 +++++++++++++------
 4 files changed, 33 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index a6533d42b285..bcbe07ec22be 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -790,7 +790,8 @@ mlxsw_devlink_sb_pool_set(struct devlink *devlink,
 	if (!mlxsw_driver->sb_pool_set)
 		return -EOPNOTSUPP;
 	return mlxsw_driver->sb_pool_set(mlxsw_core, sb_index,
-					 pool_index, size, threshold_type);
+					 pool_index, size, threshold_type,
+					 extack);
 }
 
 static void *__dl_port(struct devlink_port *devlink_port)
@@ -841,7 +842,7 @@ static int mlxsw_devlink_sb_port_pool_set(struct devlink_port *devlink_port,
 	    !mlxsw_core_port_check(mlxsw_core_port))
 		return -EOPNOTSUPP;
 	return mlxsw_driver->sb_port_pool_set(mlxsw_core_port, sb_index,
-					      pool_index, threshold);
+					      pool_index, threshold, extack);
 }
 
 static int
@@ -878,7 +879,7 @@ mlxsw_devlink_sb_tc_pool_bind_set(struct devlink_port *devlink_port,
 		return -EOPNOTSUPP;
 	return mlxsw_driver->sb_tc_pool_bind_set(mlxsw_core_port, sb_index,
 						 tc_index, pool_type,
-						 pool_index, threshold);
+						 pool_index, threshold, extack);
 }
 
 static int mlxsw_devlink_sb_occ_snapshot(struct devlink *devlink,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index d51dfc3560b6..917be621c904 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -254,13 +254,14 @@ struct mlxsw_driver {
 			   struct devlink_sb_pool_info *pool_info);
 	int (*sb_pool_set)(struct mlxsw_core *mlxsw_core,
 			   unsigned int sb_index, u16 pool_index, u32 size,
-			   enum devlink_sb_threshold_type threshold_type);
+			   enum devlink_sb_threshold_type threshold_type,
+			   struct netlink_ext_ack *extack);
 	int (*sb_port_pool_get)(struct mlxsw_core_port *mlxsw_core_port,
 				unsigned int sb_index, u16 pool_index,
 				u32 *p_threshold);
 	int (*sb_port_pool_set)(struct mlxsw_core_port *mlxsw_core_port,
 				unsigned int sb_index, u16 pool_index,
-				u32 threshold);
+				u32 threshold, struct netlink_ext_ack *extack);
 	int (*sb_tc_pool_bind_get)(struct mlxsw_core_port *mlxsw_core_port,
 				   unsigned int sb_index, u16 tc_index,
 				   enum devlink_sb_pool_type pool_type,
@@ -268,7 +269,8 @@ struct mlxsw_driver {
 	int (*sb_tc_pool_bind_set)(struct mlxsw_core_port *mlxsw_core_port,
 				   unsigned int sb_index, u16 tc_index,
 				   enum devlink_sb_pool_type pool_type,
-				   u16 pool_index, u32 threshold);
+				   u16 pool_index, u32 threshold,
+				   struct netlink_ext_ack *extack);
 	int (*sb_occ_snapshot)(struct mlxsw_core *mlxsw_core,
 			       unsigned int sb_index);
 	int (*sb_occ_max_clear)(struct mlxsw_core *mlxsw_core,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index da6278b0caa4..8601b3041acd 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -371,13 +371,14 @@ int mlxsw_sp_sb_pool_get(struct mlxsw_core *mlxsw_core,
 			 struct devlink_sb_pool_info *pool_info);
 int mlxsw_sp_sb_pool_set(struct mlxsw_core *mlxsw_core,
 			 unsigned int sb_index, u16 pool_index, u32 size,
-			 enum devlink_sb_threshold_type threshold_type);
+			 enum devlink_sb_threshold_type threshold_type,
+			 struct netlink_ext_ack *extack);
 int mlxsw_sp_sb_port_pool_get(struct mlxsw_core_port *mlxsw_core_port,
 			      unsigned int sb_index, u16 pool_index,
 			      u32 *p_threshold);
 int mlxsw_sp_sb_port_pool_set(struct mlxsw_core_port *mlxsw_core_port,
 			      unsigned int sb_index, u16 pool_index,
-			      u32 threshold);
+			      u32 threshold, struct netlink_ext_ack *extack);
 int mlxsw_sp_sb_tc_pool_bind_get(struct mlxsw_core_port *mlxsw_core_port,
 				 unsigned int sb_index, u16 tc_index,
 				 enum devlink_sb_pool_type pool_type,
@@ -385,7 +386,8 @@ int mlxsw_sp_sb_tc_pool_bind_get(struct mlxsw_core_port *mlxsw_core_port,
 int mlxsw_sp_sb_tc_pool_bind_set(struct mlxsw_core_port *mlxsw_core_port,
 				 unsigned int sb_index, u16 tc_index,
 				 enum devlink_sb_pool_type pool_type,
-				 u16 pool_index, u32 threshold);
+				 u16 pool_index, u32 threshold,
+				 struct netlink_ext_ack *extack);
 int mlxsw_sp_sb_occ_snapshot(struct mlxsw_core *mlxsw_core,
 			     unsigned int sb_index);
 int mlxsw_sp_sb_occ_max_clear(struct mlxsw_core *mlxsw_core,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index d633bef5f105..3329f7037746 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -6,6 +6,7 @@
 #include <linux/dcbnl.h>
 #include <linux/if_ether.h>
 #include <linux/list.h>
+#include <linux/netlink.h>
 
 #include "spectrum.h"
 #include "core.h"
@@ -900,14 +901,17 @@ int mlxsw_sp_sb_pool_get(struct mlxsw_core *mlxsw_core,
 
 int mlxsw_sp_sb_pool_set(struct mlxsw_core *mlxsw_core,
 			 unsigned int sb_index, u16 pool_index, u32 size,
-			 enum devlink_sb_threshold_type threshold_type)
+			 enum devlink_sb_threshold_type threshold_type,
+			 struct netlink_ext_ack *extack)
 {
 	struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
 	u32 pool_size = mlxsw_sp_bytes_cells(mlxsw_sp, size);
 	enum mlxsw_reg_sbpr_mode mode;
 
-	if (size > MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_BUFFER_SIZE))
+	if (size > MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_BUFFER_SIZE)) {
+		NL_SET_ERR_MSG_MOD(extack, "Exceeded shared buffer size");
 		return -EINVAL;
+	}
 
 	mode = (enum mlxsw_reg_sbpr_mode) threshold_type;
 	return mlxsw_sp_sb_pr_write(mlxsw_sp, pool_index, mode,
@@ -927,7 +931,8 @@ static u32 mlxsw_sp_sb_threshold_out(struct mlxsw_sp *mlxsw_sp, u16 pool_index,
 }
 
 static int mlxsw_sp_sb_threshold_in(struct mlxsw_sp *mlxsw_sp, u16 pool_index,
-				    u32 threshold, u32 *p_max_buff)
+				    u32 threshold, u32 *p_max_buff,
+				    struct netlink_ext_ack *extack)
 {
 	struct mlxsw_sp_sb_pr *pr = mlxsw_sp_sb_pr_get(mlxsw_sp, pool_index);
 
@@ -936,8 +941,10 @@ static int mlxsw_sp_sb_threshold_in(struct mlxsw_sp *mlxsw_sp, u16 pool_index,
 
 		val = threshold + MLXSW_SP_SB_THRESHOLD_TO_ALPHA_OFFSET;
 		if (val < MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN ||
-		    val > MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX)
+		    val > MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX) {
+			NL_SET_ERR_MSG_MOD(extack, "Invalid dynamic threshold value");
 			return -EINVAL;
+		}
 		*p_max_buff = val;
 	} else {
 		*p_max_buff = mlxsw_sp_bytes_cells(mlxsw_sp, threshold);
@@ -963,7 +970,7 @@ int mlxsw_sp_sb_port_pool_get(struct mlxsw_core_port *mlxsw_core_port,
 
 int mlxsw_sp_sb_port_pool_set(struct mlxsw_core_port *mlxsw_core_port,
 			      unsigned int sb_index, u16 pool_index,
-			      u32 threshold)
+			      u32 threshold, struct netlink_ext_ack *extack)
 {
 	struct mlxsw_sp_port *mlxsw_sp_port =
 			mlxsw_core_port_driver_priv(mlxsw_core_port);
@@ -973,7 +980,7 @@ int mlxsw_sp_sb_port_pool_set(struct mlxsw_core_port *mlxsw_core_port,
 	int err;
 
 	err = mlxsw_sp_sb_threshold_in(mlxsw_sp, pool_index,
-				       threshold, &max_buff);
+				       threshold, &max_buff, extack);
 	if (err)
 		return err;
 
@@ -1004,7 +1011,8 @@ int mlxsw_sp_sb_tc_pool_bind_get(struct mlxsw_core_port *mlxsw_core_port,
 int mlxsw_sp_sb_tc_pool_bind_set(struct mlxsw_core_port *mlxsw_core_port,
 				 unsigned int sb_index, u16 tc_index,
 				 enum devlink_sb_pool_type pool_type,
-				 u16 pool_index, u32 threshold)
+				 u16 pool_index, u32 threshold,
+				 struct netlink_ext_ack *extack)
 {
 	struct mlxsw_sp_port *mlxsw_sp_port =
 			mlxsw_core_port_driver_priv(mlxsw_core_port);
@@ -1015,11 +1023,13 @@ int mlxsw_sp_sb_tc_pool_bind_set(struct mlxsw_core_port *mlxsw_core_port,
 	u32 max_buff;
 	int err;
 
-	if (dir != mlxsw_sp->sb_vals->pool_dess[pool_index].dir)
+	if (dir != mlxsw_sp->sb_vals->pool_dess[pool_index].dir) {
+		NL_SET_ERR_MSG_MOD(extack, "Binding egress TC to ingress pool and vice versa is forbidden");
 		return -EINVAL;
+	}
 
 	err = mlxsw_sp_sb_threshold_in(mlxsw_sp, pool_index,
-				       threshold, &max_buff);
+				       threshold, &max_buff, extack);
 	if (err)
 		return err;
 
-- 
2.20.1



* [PATCH net-next 03/14] mlxsw: spectrum_buffers: Use defines for pool indices
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 01/14] net: devlink: Add extack to shared buffer operations Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 02/14] mlxsw: spectrum_buffers: Add extack messages for invalid configurations Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 04/14] mlxsw: spectrum_buffers: Add ability to veto pool's configuration Ido Schimmel
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

The pool indices are currently hard-coded throughout the code, which
makes the code hard to follow and extend.

Overcome this by using defines for the pool indices.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
---
 .../mellanox/mlxsw/spectrum_buffers.c         | 182 ++++++++++--------
 1 file changed, 104 insertions(+), 78 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 3329f7037746..28f44116ad86 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -49,6 +49,11 @@ struct mlxsw_sp_sb_pool_des {
 	u8 pool;
 };
 
+#define MLXSW_SP_SB_POOL_ING		0
+#define MLXSW_SP_SB_POOL_ING_MNG	3
+#define MLXSW_SP_SB_POOL_EGR		4
+#define MLXSW_SP_SB_POOL_EGR_MC		8
+
 /* Order ingress pools before egress pools. */
 static const struct mlxsw_sp_sb_pool_des mlxsw_sp1_sb_pool_dess[] = {
 	{MLXSW_REG_SBXX_DIR_INGRESS, 0},
@@ -465,83 +470,104 @@ static int mlxsw_sp_sb_prs_init(struct mlxsw_sp *mlxsw_sp,
 		.pool_index = _pool,			\
 	}
 
+#define MLXSW_SP_SB_CM_ING(_min_buff, _max_buff)	\
+	{						\
+		.min_buff = _min_buff,			\
+		.max_buff = _max_buff,			\
+		.pool_index = MLXSW_SP_SB_POOL_ING,	\
+	}
+
+#define MLXSW_SP_SB_CM_EGR(_min_buff, _max_buff)	\
+	{						\
+		.min_buff = _min_buff,			\
+		.max_buff = _max_buff,			\
+		.pool_index = MLXSW_SP_SB_POOL_EGR,	\
+	}
+
+#define MLXSW_SP_SB_CM_EGR_MC(_min_buff, _max_buff)	\
+	{						\
+		.min_buff = _min_buff,			\
+		.max_buff = _max_buff,			\
+		.pool_index = MLXSW_SP_SB_POOL_EGR_MC,	\
+	}
+
 static const struct mlxsw_sp_sb_cm mlxsw_sp1_sb_cms_ingress[] = {
-	MLXSW_SP_SB_CM(10000, 8, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, 0, 0), /* dummy, this PG does not exist */
-	MLXSW_SP_SB_CM(20000, 1, 3),
+	MLXSW_SP_SB_CM_ING(10000, 8),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, 0), /* dummy, this PG does not exist */
+	MLXSW_SP_SB_CM(20000, 1, MLXSW_SP_SB_POOL_ING_MNG),
 };
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp2_sb_cms_ingress[] = {
-	MLXSW_SP_SB_CM(0, 7, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
-	MLXSW_SP_SB_CM(0, 0, 0), /* dummy, this PG does not exist */
-	MLXSW_SP_SB_CM(20000, 1, 3),
+	MLXSW_SP_SB_CM_ING(0, 7),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+	MLXSW_SP_SB_CM_ING(0, 0), /* dummy, this PG does not exist */
+	MLXSW_SP_SB_CM(20000, 1, MLXSW_SP_SB_POOL_ING_MNG),
 };
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp1_sb_cms_egress[] = {
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(1500, 9, 4),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(1, 0xff, 4),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR(1500, 9),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR(1, 0xff),
 };
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp2_sb_cms_egress[] = {
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
-	MLXSW_SP_SB_CM(1, 0xff, 4),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR(0, 7),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR_MC(0, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_CM_EGR(1, 0xff),
 };
 
-#define MLXSW_SP_CPU_PORT_SB_CM MLXSW_SP_SB_CM(0, 0, 4)
+#define MLXSW_SP_CPU_PORT_SB_CM MLXSW_SP_SB_CM(0, 0, MLXSW_SP_SB_POOL_EGR)
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp_cpu_port_sb_cms[] = {
 	MLXSW_SP_CPU_PORT_SB_CM,
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 4),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 4),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 4),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 4),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 4),
+	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
+	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
+	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
+	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
+	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
 	MLXSW_SP_CPU_PORT_SB_CM,
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 4),
+	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
 	MLXSW_SP_CPU_PORT_SB_CM,
 	MLXSW_SP_CPU_PORT_SB_CM,
 	MLXSW_SP_CPU_PORT_SB_CM,
@@ -700,29 +726,29 @@ static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
 	return 0;
 }
 
-#define MLXSW_SP_SB_MM(_min_buff, _max_buff, _pool)	\
+#define MLXSW_SP_SB_MM(_min_buff, _max_buff)		\
 	{						\
 		.min_buff = _min_buff,			\
 		.max_buff = _max_buff,			\
-		.pool_index = _pool,			\
+		.pool_index = MLXSW_SP_SB_POOL_EGR,	\
 	}
 
 static const struct mlxsw_sp_sb_mm mlxsw_sp_sb_mms[] = {
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
-	MLXSW_SP_SB_MM(0, 6, 4),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
+	MLXSW_SP_SB_MM(0, 6),
 };
 
 static int mlxsw_sp_sb_mms_init(struct mlxsw_sp *mlxsw_sp)
-- 
2.20.1



* [PATCH net-next 04/14] mlxsw: spectrum_buffers: Add ability to veto pool's configuration
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (2 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 03/14] mlxsw: spectrum_buffers: Use defines for pool indices Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 05/14] mlxsw: spectrum_buffers: Add ability to veto TC's configuration Ido Schimmel
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Subsequent patches are going to need to veto changes in certain pools'
size and/or threshold type (mode).

Add two fields to the pool's struct that indicate if either of these
attributes is allowed to change and enforce that.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
---
 .../ethernet/mellanox/mlxsw/spectrum_buffers.c  | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 28f44116ad86..f49b8791dcd9 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -16,6 +16,8 @@
 struct mlxsw_sp_sb_pr {
 	enum mlxsw_reg_sbpr_mode mode;
 	u32 size;
+	u8 freeze_mode:1,
+	   freeze_size:1;
 };
 
 struct mlxsw_cp_sb_occ {
@@ -932,14 +934,27 @@ int mlxsw_sp_sb_pool_set(struct mlxsw_core *mlxsw_core,
 {
 	struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
 	u32 pool_size = mlxsw_sp_bytes_cells(mlxsw_sp, size);
+	const struct mlxsw_sp_sb_pr *pr;
 	enum mlxsw_reg_sbpr_mode mode;
 
+	mode = (enum mlxsw_reg_sbpr_mode) threshold_type;
+	pr = &mlxsw_sp->sb_vals->prs[pool_index];
+
 	if (size > MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_BUFFER_SIZE)) {
 		NL_SET_ERR_MSG_MOD(extack, "Exceeded shared buffer size");
 		return -EINVAL;
 	}
 
-	mode = (enum mlxsw_reg_sbpr_mode) threshold_type;
+	if (pr->freeze_mode && pr->mode != mode) {
+		NL_SET_ERR_MSG_MOD(extack, "Changing this pool's threshold type is forbidden");
+		return -EINVAL;
+	}
+
+	if (pr->freeze_size && pr->size != size) {
+		NL_SET_ERR_MSG_MOD(extack, "Changing this pool's size is forbidden");
+		return -EINVAL;
+	}
+
 	return mlxsw_sp_sb_pr_write(mlxsw_sp, pool_index, mode,
 				    pool_size, false);
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 05/14] mlxsw: spectrum_buffers: Add ability to veto TC's configuration
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (3 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 04/14] mlxsw: spectrum_buffers: Add ability to veto pool's configuration Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 06/14] mlxsw: spectrum_buffers: Forbid configuration of multicast pool Ido Schimmel
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Subsequent patches are going to need to veto changes in certain TCs'
binding and threshold configurations.

Add fields to the TC's struct that indicate if the TC can be bound to a
different pool and whether its threshold can change and enforce that.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
---
 .../ethernet/mellanox/mlxsw/spectrum_buffers.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index f49b8791dcd9..c41f828241c2 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -30,6 +30,8 @@ struct mlxsw_sp_sb_cm {
 	u32 max_buff;
 	u16 pool_index;
 	struct mlxsw_cp_sb_occ occ;
+	u8 freeze_pool:1,
+	   freeze_thresh:1;
 };
 
 #define MLXSW_SP_SB_INFI -1U
@@ -1059,6 +1061,7 @@ int mlxsw_sp_sb_tc_pool_bind_set(struct mlxsw_core_port *mlxsw_core_port,
 			mlxsw_core_port_driver_priv(mlxsw_core_port);
 	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
 	u8 local_port = mlxsw_sp_port->local_port;
+	const struct mlxsw_sp_sb_cm *cm;
 	u8 pg_buff = tc_index;
 	enum mlxsw_reg_sbxx_dir dir = (enum mlxsw_reg_sbxx_dir) pool_type;
 	u32 max_buff;
@@ -1069,6 +1072,21 @@ int mlxsw_sp_sb_tc_pool_bind_set(struct mlxsw_core_port *mlxsw_core_port,
 		return -EINVAL;
 	}
 
+	if (dir == MLXSW_REG_SBXX_DIR_INGRESS)
+		cm = &mlxsw_sp->sb_vals->cms_ingress[tc_index];
+	else
+		cm = &mlxsw_sp->sb_vals->cms_egress[tc_index];
+
+	if (cm->freeze_pool && cm->pool_index != pool_index) {
+		NL_SET_ERR_MSG_MOD(extack, "Binding this TC to a different pool is forbidden");
+		return -EINVAL;
+	}
+
+	if (cm->freeze_thresh && cm->max_buff != threshold) {
+		NL_SET_ERR_MSG_MOD(extack, "Changing this TC's threshold is forbidden");
+		return -EINVAL;
+	}
+
 	err = mlxsw_sp_sb_threshold_in(mlxsw_sp, pool_index,
 				       threshold, &max_buff, extack);
 	if (err)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 06/14] mlxsw: spectrum_buffers: Forbid configuration of multicast pool
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (4 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 05/14] mlxsw: spectrum_buffers: Add ability to veto TC's configuration Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 07/14] mlxsw: spectrum_buffers: Forbid changing threshold type of first egress pool Ido Schimmel
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Commit e83c045e53d7 ("mlxsw: spectrum_buffers: Configure MC pool") added
a dedicated pool for multicast traffic. The pool is visible to the user
so that its occupancy can be monitored, but its configuration should be
forbidden in order to maintain its intended operation.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
---
 .../net/ethernet/mellanox/mlxsw/spectrum_buffers.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index c41f828241c2..0d82cf68ff6d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -400,6 +400,14 @@ static void mlxsw_sp_sb_ports_fini(struct mlxsw_sp *mlxsw_sp)
 		.size = _size,		\
 	}
 
+#define MLXSW_SP_SB_PR_EXT(_mode, _size, _freeze_mode, _freeze_size)	\
+	{								\
+		.mode = _mode,						\
+		.size = _size,						\
+		.freeze_mode = _freeze_mode,				\
+		.freeze_size = _freeze_size,				\
+	}
+
 #define MLXSW_SP1_SB_PR_INGRESS_SIZE	12440000
 #define MLXSW_SP1_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
 #define MLXSW_SP1_SB_PR_EGRESS_SIZE	13232000
@@ -418,7 +426,8 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
-	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI,
+			   true, true),
 };
 
 #define MLXSW_SP2_SB_PR_INGRESS_SIZE	40960000
@@ -439,7 +448,8 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
-	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI,
+			   true, true),
 };
 
 static int mlxsw_sp_sb_prs_init(struct mlxsw_sp *mlxsw_sp,
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 07/14] mlxsw: spectrum_buffers: Forbid changing threshold type of first egress pool
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (5 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 06/14] mlxsw: spectrum_buffers: Forbid configuration of multicast pool Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 08/14] mlxsw: spectrum_buffers: Forbid changing multicast TCs' attributes Ido Schimmel
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Multicast packets have three egress quotas:
* Per egress port
* Per egress port and traffic class
* Per switch priority

The per-switch-priority limits are not exposed to the user and are
specified as a dynamic threshold on the first egress pool.

Forbid changing the threshold type of the first egress pool so that
these limits are always valid.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 0d82cf68ff6d..3c08816183bc 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -421,8 +421,8 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
 		       MLXSW_SP1_SB_PR_INGRESS_MNG_SIZE),
 	/* Egress pools. */
-	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
-		       MLXSW_SP1_SB_PR_EGRESS_SIZE),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
+			   MLXSW_SP1_SB_PR_EGRESS_SIZE, true, false),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
@@ -443,8 +443,8 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
 		       MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE),
 	/* Egress pools. */
-	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
-		       MLXSW_SP2_SB_PR_EGRESS_SIZE),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
+			   MLXSW_SP2_SB_PR_EGRESS_SIZE, true, false),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 08/14] mlxsw: spectrum_buffers: Forbid changing multicast TCs' attributes
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (6 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 07/14] mlxsw: spectrum_buffers: Forbid changing threshold type of first egress pool Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 09/14] mlxsw: spectrum_buffers: Remove assumption about pool order Ido Schimmel
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Commit e83c045e53d7 ("mlxsw: spectrum_buffers: Configure MC pool")
configured the threshold of the multicast TCs as infinite so that the
admission of multicast packets depends only on the per-switch priority
threshold.

Forbid the user from changing the thresholds of these multicast TCs and
their binding to a different pool.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 3c08816183bc..6e2b701e7036 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -503,6 +503,8 @@ static int mlxsw_sp_sb_prs_init(struct mlxsw_sp *mlxsw_sp,
 		.min_buff = _min_buff,			\
 		.max_buff = _max_buff,			\
 		.pool_index = MLXSW_SP_SB_POOL_EGR_MC,	\
+		.freeze_pool = true,			\
+		.freeze_thresh = true,			\
 	}
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp1_sb_cms_ingress[] = {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 09/14] mlxsw: spectrum_buffers: Remove assumption about pool order
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (7 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 08/14] mlxsw: spectrum_buffers: Forbid changing multicast TCs' attributes Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 10/14] mlxsw: spectrum_buffers: Add pools for CPU traffic Ido Schimmel
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

The code currently assumes that ingress pools have lower indices than
egress pools. This makes it impossible to add more ingress pools
without breaking user configuration that relies on a certain pool index
to correspond to an egress pool.

Remove such assumptions from the code, so that more ingress pools could
be added by subsequent patches.
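The replacement logic simply tallies each pool by its direction, so the
index layout no longer matters. A standalone sketch (hypothetical types;
the real code walks mlxsw_sp->sb_vals->pool_dess):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the driver's pool descriptor. */
enum sb_dir { SB_DIR_INGRESS, SB_DIR_EGRESS };

struct sb_pool_des {
	enum sb_dir dir;
	int pool;
};

/* Count ingress and egress pools by direction, with no assumption about
 * their order in the descriptor array. */
static void sb_pool_count(const struct sb_pool_des *dess, size_t count,
			  size_t *ing, size_t *eg)
{
	size_t i;

	*ing = 0;
	*eg = 0;
	for (i = 0; i < count; i++) {
		if (dess[i].dir == SB_DIR_INGRESS)
			(*ing)++;
		else
			(*eg)++;
	}
}
```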

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
---
 .../mellanox/mlxsw/spectrum_buffers.c         | 31 ++++++++-----------
 1 file changed, 13 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 6e2b701e7036..6932b1d5a6bc 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -58,7 +58,6 @@ struct mlxsw_sp_sb_pool_des {
 #define MLXSW_SP_SB_POOL_EGR		4
 #define MLXSW_SP_SB_POOL_EGR_MC		8
 
-/* Order ingress pools before egress pools. */
 static const struct mlxsw_sp_sb_pool_des mlxsw_sp1_sb_pool_dess[] = {
 	{MLXSW_REG_SBXX_DIR_INGRESS, 0},
 	{MLXSW_REG_SBXX_DIR_INGRESS, 1},
@@ -412,15 +411,14 @@ static void mlxsw_sp_sb_ports_fini(struct mlxsw_sp *mlxsw_sp)
 #define MLXSW_SP1_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
 #define MLXSW_SP1_SB_PR_EGRESS_SIZE	13232000
 
+/* Order according to mlxsw_sp1_sb_pool_dess */
 static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
-	/* Ingress pools. */
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
 		       MLXSW_SP1_SB_PR_INGRESS_SIZE),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
 		       MLXSW_SP1_SB_PR_INGRESS_MNG_SIZE),
-	/* Egress pools. */
 	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
 			   MLXSW_SP1_SB_PR_EGRESS_SIZE, true, false),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
@@ -434,15 +432,14 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
 #define MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
 #define MLXSW_SP2_SB_PR_EGRESS_SIZE	40960000
 
+/* Order according to mlxsw_sp2_sb_pool_dess */
 static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
-	/* Ingress pools. */
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
 		       MLXSW_SP2_SB_PR_INGRESS_SIZE),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
 		       MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE),
-	/* Egress pools. */
 	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
 			   MLXSW_SP2_SB_PR_EGRESS_SIZE, true, false),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
@@ -691,13 +688,12 @@ static int mlxsw_sp_cpu_port_sb_cms_init(struct mlxsw_sp *mlxsw_sp)
 		.max_buff = _max_buff,		\
 	}
 
+/* Order according to mlxsw_sp1_sb_pool_dess */
 static const struct mlxsw_sp_sb_pm mlxsw_sp1_sb_pms[] = {
-	/* Ingress pools. */
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
-	/* Egress pools. */
 	MLXSW_SP_SB_PM(0, 7),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
@@ -705,13 +701,12 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp1_sb_pms[] = {
 	MLXSW_SP_SB_PM(10000, 90000),
 };
 
+/* Order according to mlxsw_sp2_sb_pool_dess */
 static const struct mlxsw_sp_sb_pm mlxsw_sp2_sb_pms[] = {
-	/* Ingress pools. */
 	MLXSW_SP_SB_PM(0, 7),
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
-	/* Egress pools. */
 	MLXSW_SP_SB_PM(0, 7),
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 0),
@@ -798,15 +793,15 @@ static void mlxsw_sp_pool_count(struct mlxsw_sp *mlxsw_sp,
 {
 	int i;
 
-	for (i = 0; i < mlxsw_sp->sb_vals->pool_count; ++i)
+	for (i = 0; i < mlxsw_sp->sb_vals->pool_count; ++i) {
 		if (mlxsw_sp->sb_vals->pool_dess[i].dir ==
-		    MLXSW_REG_SBXX_DIR_EGRESS)
-			goto out;
-	WARN(1, "No egress pools\n");
+		    MLXSW_REG_SBXX_DIR_INGRESS)
+			(*p_ingress_len)++;
+		else
+			(*p_egress_len)++;
+	}
 
-out:
-	*p_ingress_len = i;
-	*p_egress_len = mlxsw_sp->sb_vals->pool_count - i;
+	WARN(*p_egress_len == 0, "No egress pools\n");
 }
 
 const struct mlxsw_sp_sb_vals mlxsw_sp1_sb_vals = {
@@ -842,8 +837,8 @@ const struct mlxsw_sp_sb_vals mlxsw_sp2_sb_vals = {
 int mlxsw_sp_buffers_init(struct mlxsw_sp *mlxsw_sp)
 {
 	u32 max_headroom_size;
-	u16 ing_pool_count;
-	u16 eg_pool_count;
+	u16 ing_pool_count = 0;
+	u16 eg_pool_count = 0;
 	int err;
 
 	if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, CELL_SIZE))
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 10/14] mlxsw: spectrum_buffers: Add pools for CPU traffic
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (8 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 09/14] mlxsw: spectrum_buffers: Remove assumption about pool order Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 11/14] mlxsw: spectrum_buffers: Use new CPU ingress pool for control packets Ido Schimmel
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Packets that are trapped to the CPU are transmitted through the CPU port
to the attached host. The CPU port is therefore like any other port and
needs to have shared buffer configuration.

The maximum quotas configured for the CPU port are specified using
dynamic thresholds and cannot be changed by the user. In order to make
these thresholds are always valid, the configuration of the threshold
type of these pools is forbidden.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
---
 .../mellanox/mlxsw/spectrum_buffers.c         | 20 +++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 6932b1d5a6bc..89fe7cbe2ccc 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -57,6 +57,8 @@ struct mlxsw_sp_sb_pool_des {
 #define MLXSW_SP_SB_POOL_ING_MNG	3
 #define MLXSW_SP_SB_POOL_EGR		4
 #define MLXSW_SP_SB_POOL_EGR_MC		8
+#define MLXSW_SP_SB_POOL_ING_CPU	9
+#define MLXSW_SP_SB_POOL_EGR_CPU	10
 
 static const struct mlxsw_sp_sb_pool_des mlxsw_sp1_sb_pool_dess[] = {
 	{MLXSW_REG_SBXX_DIR_INGRESS, 0},
@@ -68,6 +70,8 @@ static const struct mlxsw_sp_sb_pool_des mlxsw_sp1_sb_pool_dess[] = {
 	{MLXSW_REG_SBXX_DIR_EGRESS, 2},
 	{MLXSW_REG_SBXX_DIR_EGRESS, 3},
 	{MLXSW_REG_SBXX_DIR_EGRESS, 15},
+	{MLXSW_REG_SBXX_DIR_INGRESS, 4},
+	{MLXSW_REG_SBXX_DIR_EGRESS, 4},
 };
 
 static const struct mlxsw_sp_sb_pool_des mlxsw_sp2_sb_pool_dess[] = {
@@ -80,6 +84,8 @@ static const struct mlxsw_sp_sb_pool_des mlxsw_sp2_sb_pool_dess[] = {
 	{MLXSW_REG_SBXX_DIR_EGRESS, 2},
 	{MLXSW_REG_SBXX_DIR_EGRESS, 3},
 	{MLXSW_REG_SBXX_DIR_EGRESS, 15},
+	{MLXSW_REG_SBXX_DIR_INGRESS, 4},
+	{MLXSW_REG_SBXX_DIR_EGRESS, 4},
 };
 
 #define MLXSW_SP_SB_ING_TC_COUNT 8
@@ -410,6 +416,7 @@ static void mlxsw_sp_sb_ports_fini(struct mlxsw_sp *mlxsw_sp)
 #define MLXSW_SP1_SB_PR_INGRESS_SIZE	12440000
 #define MLXSW_SP1_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
 #define MLXSW_SP1_SB_PR_EGRESS_SIZE	13232000
+#define MLXSW_SP1_SB_PR_CPU_SIZE	(256 * 1000)
 
 /* Order according to mlxsw_sp1_sb_pool_dess */
 static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
@@ -426,11 +433,16 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI,
 			   true, true),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
+			   MLXSW_SP1_SB_PR_CPU_SIZE, true, false),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
+			   MLXSW_SP1_SB_PR_CPU_SIZE, true, false),
 };
 
 #define MLXSW_SP2_SB_PR_INGRESS_SIZE	40960000
 #define MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
 #define MLXSW_SP2_SB_PR_EGRESS_SIZE	40960000
+#define MLXSW_SP2_SB_PR_CPU_SIZE	(256 * 1000)
 
 /* Order according to mlxsw_sp2_sb_pool_dess */
 static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
@@ -447,6 +459,10 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI,
 			   true, true),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
+			   MLXSW_SP2_SB_PR_CPU_SIZE, true, false),
+	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
+			   MLXSW_SP2_SB_PR_CPU_SIZE, true, false),
 };
 
 static int mlxsw_sp_sb_prs_init(struct mlxsw_sp *mlxsw_sp,
@@ -699,6 +715,8 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp1_sb_pms[] = {
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(10000, 90000),
+	MLXSW_SP_SB_PM(0, 8),	/* 50% occupancy */
+	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 };
 
 /* Order according to mlxsw_sp2_sb_pool_dess */
@@ -712,6 +730,8 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp2_sb_pms[] = {
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(10000, 90000),
+	MLXSW_SP_SB_PM(0, 8),	/* 50% occupancy */
+	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 };
 
 static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 11/14] mlxsw: spectrum_buffers: Use new CPU ingress pool for control packets
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (9 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 10/14] mlxsw: spectrum_buffers: Add pools for CPU traffic Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 12/14] mlxsw: spectrum_buffers: Split business logic from mlxsw_sp_port_sb_pms_init() Ido Schimmel
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Use the new ingress pool that was added in the previous patch for
control packets (e.g., STP, LACP) that are trapped to the CPU.

The previous management pool is no longer necessary and therefore its
size is set to 0.

The maximum quota for traffic towards the CPU is increased to 50% of the
free space in the new ingress pool, and the reserved space is therefore
halved to 10KB - in both the shared buffer and the headroom. This
allows for more efficient utilization of the shared buffer as reserved
space cannot be used for other purposes.
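The 50% figure follows from the usual dynamic-threshold (alpha) scheme,
where a single consumer may occupy up to alpha / (1 + alpha) of the free
buffer space; alpha = 1 yields 50%. A sketch of that arithmetic (the
formula is the generic dynamic-threshold model, assumed here rather than
taken from the driver; the mapping of register values to alpha is not
shown):

```c
#include <assert.h>

/* Share of the free buffer space a single consumer may occupy under a
 * dynamic threshold with factor alpha: alpha / (1 + alpha). */
static double dyn_thresh_share(double alpha)
{
	return alpha / (1.0 + alpha);
}
```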

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
---
 .../mellanox/mlxsw/spectrum_buffers.c         | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 89fe7cbe2ccc..06b852e11cb1 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -54,7 +54,6 @@ struct mlxsw_sp_sb_pool_des {
 };
 
 #define MLXSW_SP_SB_POOL_ING		0
-#define MLXSW_SP_SB_POOL_ING_MNG	3
 #define MLXSW_SP_SB_POOL_EGR		4
 #define MLXSW_SP_SB_POOL_EGR_MC		8
 #define MLXSW_SP_SB_POOL_ING_CPU	9
@@ -290,7 +289,7 @@ static int mlxsw_sp_port_pb_init(struct mlxsw_sp_port *mlxsw_sp_port)
 {
 	const u32 pbs[] = {
 		[0] = MLXSW_SP_PB_HEADROOM * mlxsw_sp_port->mapping.width,
-		[9] = 2 * MLXSW_PORT_MAX_MTU,
+		[9] = MLXSW_PORT_MAX_MTU,
 	};
 	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
 	char pbmc_pl[MLXSW_REG_PBMC_LEN];
@@ -414,7 +413,6 @@ static void mlxsw_sp_sb_ports_fini(struct mlxsw_sp *mlxsw_sp)
 	}
 
 #define MLXSW_SP1_SB_PR_INGRESS_SIZE	12440000
-#define MLXSW_SP1_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
 #define MLXSW_SP1_SB_PR_EGRESS_SIZE	13232000
 #define MLXSW_SP1_SB_PR_CPU_SIZE	(256 * 1000)
 
@@ -424,8 +422,7 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
 		       MLXSW_SP1_SB_PR_INGRESS_SIZE),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
-	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
-		       MLXSW_SP1_SB_PR_INGRESS_MNG_SIZE),
+	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
 	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
 			   MLXSW_SP1_SB_PR_EGRESS_SIZE, true, false),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
@@ -440,7 +437,6 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
 };
 
 #define MLXSW_SP2_SB_PR_INGRESS_SIZE	40960000
-#define MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
 #define MLXSW_SP2_SB_PR_EGRESS_SIZE	40960000
 #define MLXSW_SP2_SB_PR_CPU_SIZE	(256 * 1000)
 
@@ -450,8 +446,7 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
 		       MLXSW_SP2_SB_PR_INGRESS_SIZE),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
-	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
-		       MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE),
+	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR_EXT(MLXSW_REG_SBPR_MODE_DYNAMIC,
 			   MLXSW_SP2_SB_PR_EGRESS_SIZE, true, false),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
@@ -530,7 +525,7 @@ static const struct mlxsw_sp_sb_cm mlxsw_sp1_sb_cms_ingress[] = {
 	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_CM_ING(0, 0), /* dummy, this PG does not exist */
-	MLXSW_SP_SB_CM(20000, 1, MLXSW_SP_SB_POOL_ING_MNG),
+	MLXSW_SP_SB_CM(10000, 8, MLXSW_SP_SB_POOL_ING_CPU),
 };
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp2_sb_cms_ingress[] = {
@@ -543,7 +538,7 @@ static const struct mlxsw_sp_sb_cm mlxsw_sp2_sb_cms_ingress[] = {
 	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_CM_ING(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_CM_ING(0, 0), /* dummy, this PG does not exist */
-	MLXSW_SP_SB_CM(20000, 1, MLXSW_SP_SB_POOL_ING_MNG),
+	MLXSW_SP_SB_CM(10000, 8, MLXSW_SP_SB_POOL_ING_CPU),
 };
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp1_sb_cms_egress[] = {
@@ -709,7 +704,7 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp1_sb_pms[] = {
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
-	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
+	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(0, 7),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
@@ -724,7 +719,7 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp2_sb_pms[] = {
 	MLXSW_SP_SB_PM(0, 7),
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 0),
-	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
+	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 7),
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 0),
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 12/14] mlxsw: spectrum_buffers: Split business logic from mlxsw_sp_port_sb_pms_init()
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (10 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 11/14] mlxsw: spectrum_buffers: Use new CPU ingress pool for control packets Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 13/14] mlxsw: spectrum_buffers: Allow skipping ingress port quota configuration Ido Schimmel
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

The function is used to set the per-port shared buffer quotas.
Currently, these quotas are only set for front panel ports, but a
subsequent patch will configure these quotas for the CPU port as well.

The configuration required for the CPU port is a bit different from that
of the front panel ports, so split the business logic into a separate
function which will be called with different parameters for the CPU
port.

No functional changes intended.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
---
 .../mellanox/mlxsw/spectrum_buffers.c         | 21 ++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 06b852e11cb1..e324fad8b100 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -729,14 +729,13 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp2_sb_pms[] = {
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 };
 
-static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
+static int mlxsw_sp_sb_pms_init(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+				const struct mlxsw_sp_sb_pm *pms)
 {
-	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
-	int i;
-	int err;
+	int i, err;
 
 	for (i = 0; i < mlxsw_sp->sb_vals->pool_count; i++) {
-		const struct mlxsw_sp_sb_pm *pm = &mlxsw_sp->sb_vals->pms[i];
+		const struct mlxsw_sp_sb_pm *pm = &pms[i];
 		u32 max_buff;
 		u32 min_buff;
 
@@ -744,14 +743,22 @@ static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
 		max_buff = pm->max_buff;
 		if (mlxsw_sp_sb_pool_is_static(mlxsw_sp, i))
 			max_buff = mlxsw_sp_bytes_cells(mlxsw_sp, max_buff);
-		err = mlxsw_sp_sb_pm_write(mlxsw_sp, mlxsw_sp_port->local_port,
-					   i, min_buff, max_buff);
+		err = mlxsw_sp_sb_pm_write(mlxsw_sp, local_port, i, min_buff,
+					   max_buff);
 		if (err)
 			return err;
 	}
 	return 0;
 }
 
+static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+
+	return mlxsw_sp_sb_pms_init(mlxsw_sp, mlxsw_sp_port->local_port,
+				    mlxsw_sp->sb_vals->pms);
+}
+
 #define MLXSW_SP_SB_MM(_min_buff, _max_buff)		\
 	{						\
 		.min_buff = _min_buff,			\
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH net-next 13/14] mlxsw: spectrum_buffers: Allow skipping ingress port quota configuration
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (11 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 12/14] mlxsw: spectrum_buffers: Split business logic from mlxsw_sp_port_sb_pms_init() Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 12:08 ` [PATCH net-next 14/14] mlxsw: spectrum_buffers: Adjust CPU port shared buffer egress quotas Ido Schimmel
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

The CPU port is used to transmit traffic that is trapped to the host
CPU. Defining ingress quotas for it is therefore unnecessary.

Add a 'skip_ingress' argument to the function tasked with configuring
per-port quotas, so that ingress quotas can be skipped when the passed
local port is the CPU port.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index e324fad8b100..32a74cc3d6e4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -730,15 +730,21 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp2_sb_pms[] = {
 };
 
 static int mlxsw_sp_sb_pms_init(struct mlxsw_sp *mlxsw_sp, u8 local_port,
-				const struct mlxsw_sp_sb_pm *pms)
+				const struct mlxsw_sp_sb_pm *pms,
+				bool skip_ingress)
 {
 	int i, err;
 
 	for (i = 0; i < mlxsw_sp->sb_vals->pool_count; i++) {
 		const struct mlxsw_sp_sb_pm *pm = &pms[i];
+		const struct mlxsw_sp_sb_pool_des *des;
 		u32 max_buff;
 		u32 min_buff;
 
+		des = &mlxsw_sp->sb_vals->pool_dess[i];
+		if (skip_ingress && des->dir == MLXSW_REG_SBXX_DIR_INGRESS)
+			continue;
+
 		min_buff = mlxsw_sp_bytes_cells(mlxsw_sp, pm->min_buff);
 		max_buff = pm->max_buff;
 		if (mlxsw_sp_sb_pool_is_static(mlxsw_sp, i))
@@ -756,7 +762,7 @@ static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
 	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
 
 	return mlxsw_sp_sb_pms_init(mlxsw_sp, mlxsw_sp_port->local_port,
-				    mlxsw_sp->sb_vals->pms);
+				    mlxsw_sp->sb_vals->pms, false);
 }
 
 #define MLXSW_SP_SB_MM(_min_buff, _max_buff)		\
-- 
2.20.1



* [PATCH net-next 14/14] mlxsw: spectrum_buffers: Adjust CPU port shared buffer egress quotas
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (12 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 13/14] mlxsw: spectrum_buffers: Allow skipping ingress port quota configuration Ido Schimmel
@ 2019-04-22 12:08 ` Ido Schimmel
  2019-04-22 21:17 ` [PATCH net-next 00/14] mlxsw: Shared buffer improvements Jakub Kicinski
  2019-04-23  5:12 ` David Miller
  15 siblings, 0 replies; 20+ messages in thread
From: Ido Schimmel @ 2019-04-22 12:08 UTC (permalink / raw)
  To: netdev
  Cc: davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw, Ido Schimmel

Switch the CPU port to use the new dedicated egress pool instead of the
previously used egress pool, which was shared with the normal front
panel ports.

Add per-port quotas for the amount of traffic that can be buffered for
the CPU port and also adjust the per-{port, TC} quotas.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
---
 .../mellanox/mlxsw/spectrum_buffers.c         | 42 +++++++++++++++----
 1 file changed, 35 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 32a74cc3d6e4..8512dd49e420 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -108,6 +108,7 @@ struct mlxsw_sp_sb_vals {
 	unsigned int pool_count;
 	const struct mlxsw_sp_sb_pool_des *pool_dess;
 	const struct mlxsw_sp_sb_pm *pms;
+	const struct mlxsw_sp_sb_pm *pms_cpu;
 	const struct mlxsw_sp_sb_pr *prs;
 	const struct mlxsw_sp_sb_mm *mms;
 	const struct mlxsw_sp_sb_cm *cms_ingress;
@@ -581,17 +582,17 @@ static const struct mlxsw_sp_sb_cm mlxsw_sp2_sb_cms_egress[] = {
 	MLXSW_SP_SB_CM_EGR(1, 0xff),
 };
 
-#define MLXSW_SP_CPU_PORT_SB_CM MLXSW_SP_SB_CM(0, 0, MLXSW_SP_SB_POOL_EGR)
+#define MLXSW_SP_CPU_PORT_SB_CM MLXSW_SP_SB_CM(0, 0, MLXSW_SP_SB_POOL_EGR_CPU)
 
 static const struct mlxsw_sp_sb_cm mlxsw_sp_cpu_port_sb_cms[] = {
 	MLXSW_SP_CPU_PORT_SB_CM,
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
+	MLXSW_SP_SB_CM(1000, 8, MLXSW_SP_SB_POOL_EGR_CPU),
+	MLXSW_SP_SB_CM(1000, 8, MLXSW_SP_SB_POOL_EGR_CPU),
+	MLXSW_SP_SB_CM(1000, 8, MLXSW_SP_SB_POOL_EGR_CPU),
+	MLXSW_SP_SB_CM(1000, 8, MLXSW_SP_SB_POOL_EGR_CPU),
+	MLXSW_SP_SB_CM(1000, 8, MLXSW_SP_SB_POOL_EGR_CPU),
 	MLXSW_SP_CPU_PORT_SB_CM,
-	MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, MLXSW_SP_SB_POOL_EGR),
+	MLXSW_SP_SB_CM(1000, 8, MLXSW_SP_SB_POOL_EGR_CPU),
 	MLXSW_SP_CPU_PORT_SB_CM,
 	MLXSW_SP_CPU_PORT_SB_CM,
 	MLXSW_SP_CPU_PORT_SB_CM,
@@ -729,6 +730,21 @@ static const struct mlxsw_sp_sb_pm mlxsw_sp2_sb_pms[] = {
 	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
 };
 
+/* Order according to mlxsw_sp*_sb_pool_dess */
+static const struct mlxsw_sp_sb_pm mlxsw_sp_cpu_port_sb_pms[] = {
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, 90000),
+	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
+};
+
 static int mlxsw_sp_sb_pms_init(struct mlxsw_sp *mlxsw_sp, u8 local_port,
 				const struct mlxsw_sp_sb_pm *pms,
 				bool skip_ingress)
@@ -765,6 +781,12 @@ static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
 				    mlxsw_sp->sb_vals->pms, false);
 }
 
+static int mlxsw_sp_cpu_port_sb_pms_init(struct mlxsw_sp *mlxsw_sp)
+{
+	return mlxsw_sp_sb_pms_init(mlxsw_sp, 0, mlxsw_sp->sb_vals->pms_cpu,
+				    true);
+}
+
 #define MLXSW_SP_SB_MM(_min_buff, _max_buff)		\
 	{						\
 		.min_buff = _min_buff,			\
@@ -836,6 +858,7 @@ const struct mlxsw_sp_sb_vals mlxsw_sp1_sb_vals = {
 	.pool_count = ARRAY_SIZE(mlxsw_sp1_sb_pool_dess),
 	.pool_dess = mlxsw_sp1_sb_pool_dess,
 	.pms = mlxsw_sp1_sb_pms,
+	.pms_cpu = mlxsw_sp_cpu_port_sb_pms,
 	.prs = mlxsw_sp1_sb_prs,
 	.mms = mlxsw_sp_sb_mms,
 	.cms_ingress = mlxsw_sp1_sb_cms_ingress,
@@ -851,6 +874,7 @@ const struct mlxsw_sp_sb_vals mlxsw_sp2_sb_vals = {
 	.pool_count = ARRAY_SIZE(mlxsw_sp2_sb_pool_dess),
 	.pool_dess = mlxsw_sp2_sb_pool_dess,
 	.pms = mlxsw_sp2_sb_pms,
+	.pms_cpu = mlxsw_sp_cpu_port_sb_pms,
 	.prs = mlxsw_sp2_sb_prs,
 	.mms = mlxsw_sp_sb_mms,
 	.cms_ingress = mlxsw_sp2_sb_cms_ingress,
@@ -900,6 +924,9 @@ int mlxsw_sp_buffers_init(struct mlxsw_sp *mlxsw_sp)
 	err = mlxsw_sp_cpu_port_sb_cms_init(mlxsw_sp);
 	if (err)
 		goto err_sb_cpu_port_sb_cms_init;
+	err = mlxsw_sp_cpu_port_sb_pms_init(mlxsw_sp);
+	if (err)
+		goto err_sb_cpu_port_pms_init;
 	err = mlxsw_sp_sb_mms_init(mlxsw_sp);
 	if (err)
 		goto err_sb_mms_init;
@@ -917,6 +944,7 @@ int mlxsw_sp_buffers_init(struct mlxsw_sp *mlxsw_sp)
 
 err_devlink_sb_register:
 err_sb_mms_init:
+err_sb_cpu_port_pms_init:
 err_sb_cpu_port_sb_cms_init:
 err_sb_prs_init:
 	mlxsw_sp_sb_ports_fini(mlxsw_sp);
-- 
2.20.1



* Re: [PATCH net-next 00/14] mlxsw: Shared buffer improvements
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (13 preceding siblings ...)
  2019-04-22 12:08 ` [PATCH net-next 14/14] mlxsw: spectrum_buffers: Adjust CPU port shared buffer egress quotas Ido Schimmel
@ 2019-04-22 21:17 ` Jakub Kicinski
  2019-04-23  5:26   ` Ido Schimmel
  2019-04-23  5:12 ` David Miller
  15 siblings, 1 reply; 20+ messages in thread
From: Jakub Kicinski @ 2019-04-22 21:17 UTC (permalink / raw)
  To: Ido Schimmel
  Cc: netdev, davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw

On Mon, 22 Apr 2019 12:08:38 +0000, Ido Schimmel wrote:
> This patchset includes two improvements with regards to shared buffer
> configuration in mlxsw.
> 
> The first part of this patchset forbids the user from performing illegal
> shared buffer configuration that can result in unnecessary packet loss.
> In order to better communicate these configuration failures to the user,
> extack is propagated from devlink towards drivers. This is done in
> patches #1-#8.
> 
> The second part of the patchset deals with the shared buffer
> configuration of the CPU port. When a packet is trapped by the device,
> it is sent across the PCI bus to the attached host CPU. From the
> device's perspective, it is as if the packet is transmitted through the
> CPU port.
> 
> While testing traffic directed at the CPU it became apparent that for
> certain packet sizes and certain burst sizes, the current shared buffer
> configuration of the CPU port is inadequate and results in packet drops.
> The configuration is adjusted by patches #9-#14 that create two new pools
> - ingress & egress - which are dedicated for CPU traffic.

Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>

Out of curiosity - are you guys considering adding CPU flavour ports,
or is there a good reason not to have it exposed?


* Re: [PATCH net-next 00/14] mlxsw: Shared buffer improvements
  2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
                   ` (14 preceding siblings ...)
  2019-04-22 21:17 ` [PATCH net-next 00/14] mlxsw: Shared buffer improvements Jakub Kicinski
@ 2019-04-23  5:12 ` David Miller
  15 siblings, 0 replies; 20+ messages in thread
From: David Miller @ 2019-04-23  5:12 UTC (permalink / raw)
  To: idosch; +Cc: netdev, jiri, petrm, alexanderk, mlxsw

From: Ido Schimmel <idosch@mellanox.com>
Date: Mon, 22 Apr 2019 12:08:38 +0000

> This patchset includes two improvements with regards to shared buffer
> configuration in mlxsw.
> 
> The first part of this patchset forbids the user from performing illegal
> shared buffer configuration that can result in unnecessary packet loss.
> In order to better communicate these configuration failures to the user,
> extack is propagated from devlink towards drivers. This is done in
> patches #1-#8.
> 
> The second part of the patchset deals with the shared buffer
> configuration of the CPU port. When a packet is trapped by the device,
> it is sent across the PCI bus to the attached host CPU. From the
> device's perspective, it is as if the packet is transmitted through the
> CPU port.
> 
> While testing traffic directed at the CPU it became apparent that for
> certain packet sizes and certain burst sizes, the current shared buffer
> configuration of the CPU port is inadequate and results in packet drops.
> The configuration is adjusted by patches #9-#14 that create two new pools
> - ingress & egress - which are dedicated for CPU traffic.

Series applied, thank you.


* Re: [PATCH net-next 00/14] mlxsw: Shared buffer improvements
  2019-04-22 21:17 ` [PATCH net-next 00/14] mlxsw: Shared buffer improvements Jakub Kicinski
@ 2019-04-23  5:26   ` Ido Schimmel
  2019-04-23  7:28     ` Jiri Pirko
  0 siblings, 1 reply; 20+ messages in thread
From: Ido Schimmel @ 2019-04-23  5:26 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, davem, Jiri Pirko, Petr Machata, Alex Kushnarov, mlxsw

On Mon, Apr 22, 2019 at 02:17:39PM -0700, Jakub Kicinski wrote:
> Out of curiosity - are you guys considering adding CPU flavour ports,
> or is there a good reason not to have it exposed?

Yes, we are considering that. In fact, Alexander (Cc-ed) just asked me if
it is possible to monitor the occupancy in the egress CPU pool. This is
impossible without exposing the CPU port, and having it exposed would
have saved us a lot of time while debugging these issues.

Do you need this functionality for nfp as well? If so, do you have any
thoughts about it? My thinking is that it would be a devlink port
without a backing netdev.
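
For front panel ports this kind of monitoring already works through
devlink's shared-buffer interface; a sketch of the workflow that
exposing the CPU port would extend to CPU traffic (the device handle
and port index below are hypothetical placeholders):

```shell
#!/bin/sh
# Sketch of the devlink shared-buffer occupancy workflow. Take the
# real device handle from `devlink dev show`.
DEV=pci/0000:03:00.0

# The commands are echoed rather than executed so the sketch is
# self-contained; on the switch host, run them directly.
echo "devlink sb occupancy snapshot $DEV"  # record current occupancy
echo "devlink sb occupancy show $DEV/25"   # per-{pool, TC} occupancy and watermarks
echo "devlink sb occupancy clearmax $DEV"  # reset watermarks for the next run
```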


* Re: [PATCH net-next 00/14] mlxsw: Shared buffer improvements
  2019-04-23  5:26   ` Ido Schimmel
@ 2019-04-23  7:28     ` Jiri Pirko
  2019-04-23 16:31       ` Jakub Kicinski
  0 siblings, 1 reply; 20+ messages in thread
From: Jiri Pirko @ 2019-04-23  7:28 UTC (permalink / raw)
  To: Ido Schimmel
  Cc: Jakub Kicinski, netdev, davem, Jiri Pirko, Petr Machata,
	Alex Kushnarov, mlxsw

Tue, Apr 23, 2019 at 07:26:06AM CEST, idosch@mellanox.com wrote:
>On Mon, Apr 22, 2019 at 02:17:39PM -0700, Jakub Kicinski wrote:
>> Out of curiosity - are you guys considering adding CPU flavour ports,
>> or is there a good reason not to have it exposed?
>
>Yes, we are considering that. In fact, Alexander (Cc-ed) just asked me if
>it is possible to monitor the occupancy in the egress CPU pool. This is
>impossible without exposing the CPU port and it would have saved us a
>lot of time while debugging these issues.
>
>Do you need this functionality for nfp as well? If so, do you have any
>thoughts about it? My thinking is that it would be a devlink port
>without a backing netdev.

Similar to CPU port in DSA. It also does not have netdev associated with
it. We can use the same port flavour:
DEVLINK_PORT_FLAVOUR_CPU


* Re: [PATCH net-next 00/14] mlxsw: Shared buffer improvements
  2019-04-23  7:28     ` Jiri Pirko
@ 2019-04-23 16:31       ` Jakub Kicinski
  0 siblings, 0 replies; 20+ messages in thread
From: Jakub Kicinski @ 2019-04-23 16:31 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: Ido Schimmel, netdev, davem, Jiri Pirko, Petr Machata,
	Alex Kushnarov, mlxsw

On Tue, 23 Apr 2019 09:28:02 +0200, Jiri Pirko wrote:
> Tue, Apr 23, 2019 at 07:26:06AM CEST, idosch@mellanox.com wrote:
> >On Mon, Apr 22, 2019 at 02:17:39PM -0700, Jakub Kicinski wrote:  
> >> Out of curiosity - are you guys considering adding CPU flavour ports,
> >> or is there a good reason not to have it exposed?  
> >
> >Yes, we are considering that. In fact, Alexander (Cc-ed) just asked me if
> >it is possible to monitor the occupancy in the egress CPU pool. This is
> >impossible without exposing the CPU port and it would have saved us a
> >lot of time while debugging these issues.
> >
> >Do you need this functionality for nfp as well? If so, do you have any
> >thoughts about it? My thinking is that it would be a devlink port
> >without a backing netdev.  
> 
> Similar to CPU port in DSA. It also does not have netdev associated with
> it. We can use the same port flavour:
> DEVLINK_PORT_FLAVOUR_CPU

Yup, to answer Ido's question - not really; in nfp we are exposing
PCIe devices as full ports with netdevs because of multi-host and QoS
offloads.  So I never thought too much about CPU ports, but for
switches they seem to make perfect sense :)



Thread overview: 20+ messages
2019-04-22 12:08 [PATCH net-next 00/14] mlxsw: Shared buffer improvements Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 01/14] net: devlink: Add extack to shared buffer operations Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 02/14] mlxsw: spectrum_buffers: Add extack messages for invalid configurations Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 03/14] mlxsw: spectrum_buffers: Use defines for pool indices Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 04/14] mlxsw: spectrum_buffers: Add ability to veto pool's configuration Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 05/14] mlxsw: spectrum_buffers: Add ability to veto TC's configuration Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 06/14] mlxsw: spectrum_buffers: Forbid configuration of multicast pool Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 07/14] mlxsw: spectrum_buffers: Forbid changing threshold type of first egress pool Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 08/14] mlxsw: spectrum_buffers: Forbid changing multicast TCs' attributes Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 09/14] mlxsw: spectrum_buffers: Remove assumption about pool order Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 10/14] mlxsw: spectrum_buffers: Add pools for CPU traffic Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 11/14] mlxsw: spectrum_buffers: Use new CPU ingress pool for control packets Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 12/14] mlxsw: spectrum_buffers: Split business logic from mlxsw_sp_port_sb_pms_init() Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 13/14] mlxsw: spectrum_buffers: Allow skipping ingress port quota configuration Ido Schimmel
2019-04-22 12:08 ` [PATCH net-next 14/14] mlxsw: spectrum_buffers: Adjust CPU port shared buffer egress quotas Ido Schimmel
2019-04-22 21:17 ` [PATCH net-next 00/14] mlxsw: Shared buffer improvements Jakub Kicinski
2019-04-23  5:26   ` Ido Schimmel
2019-04-23  7:28     ` Jiri Pirko
2019-04-23 16:31       ` Jakub Kicinski
2019-04-23  5:12 ` David Miller
