From: Ido Schimmel <idosch@nvidia.com>
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	edumazet@google.com, petrm@nvidia.com, amcohen@nvidia.com,
	mlxsw@nvidia.com, Ido Schimmel <idosch@nvidia.com>
Subject: [PATCH net-next 03/13] mlxsw: Configure ingress RIF classification
Date: Thu, 30 Jun 2022 11:22:47 +0300	[thread overview]
Message-ID: <20220630082257.903759-4-idosch@nvidia.com> (raw)
In-Reply-To: <20220630082257.903759-1-idosch@nvidia.com>

From: Amit Cohen <amcohen@nvidia.com>

Before layer 2 forwarding, the device classifies an incoming packet to
a FID. The classification is done based on one of the following keys:

1. FID
2. VNI (after decapsulation)
3. VID / {Port, VID}

After classification, not only is the FID known, but so are all of the
FID's attributes, such as the router interface (RIF) via which a packet
that needs to be routed will ingress the router block.
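
For orientation, the last two keys correspond to SVFA mapping tables that
already exist in reg.h. The helper below is purely an illustrative sketch
of that correspondence; the function itself is hypothetical and is not
part of this patch:

	/* Hypothetical helper, for illustration only: which SVFA mapping
	 * table programs each of the last two classification keys.
	 */
	static enum mlxsw_reg_svfa_mt
	classification_key_to_svfa_mt(bool by_vni, bool port_in_virtual_mode)
	{
		if (by_vni)
			return MLXSW_REG_SVFA_MT_VNI_TO_FID;
		return port_in_virtual_mode ?
		       MLXSW_REG_SVFA_MT_PORT_VID_TO_FID :
		       MLXSW_REG_SVFA_MT_VID_TO_FID;
	}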

In the legacy model, when a RIF was created / destroyed, it was the
firmware's responsibility to update the RIF in the previously mentioned
FID classification records. In the unified bridge model, this
responsibility moved to software.

The third classification key requires iterating over the FID's {Port, VID}
list and issuing an SVFA write with the correct mapping table according to
the port's mode (virtual or not), as sketched below. We never map multiple
VLANs to the same FID using the VID->FID mapping, so such a mapping only
needs to be performed once.
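
A minimal sketch of that per-entry choice, assuming local variables such as
'port_in_virtual_mode', 'local_port', 'vid', 'fid_index' and 'rif_index'
are already known. The pack helpers and their extended signatures are the
ones added by this patch; the real per-entry update is
mlxsw_sp_fid_vid_to_fid_rif_set() in the diff below:

	char svfa_pl[MLXSW_REG_SVFA_LEN];

	/* Choose the mapping table according to the port's mode and carry
	 * the ingress RIF in the new irif_v / irif fields.
	 */
	if (port_in_virtual_mode)
		mlxsw_reg_svfa_port_vid_pack(svfa_pl, local_port, true,
					     fid_index, vid, true, rif_index);
	else
		mlxsw_reg_svfa_vid_pack(svfa_pl, true, fid_index, vid,
					true, rif_index);
	err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);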

When a new FID classification entry is configured and the FID already has
a RIF, set the RIF as part of SVFA configuration.

The reverse needs to be done when clearing a RIF from a FID. Currently,
clearing is done by issuing mlxsw_sp_fid_rif_set() with a NULL RIF pointer.
Instead, introduce mlxsw_sp_fid_rif_unset().

Note that mlxsw_sp_fid_rif_set() is called after the RIF is fully
operational, so it conforms to the internal requirement regarding
SVFA.irif_v: "Must not be set for a non-enabled RIF".
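
A hedged usage sketch of the new pair from a RIF configure / deconfigure
path (the real call sites are in spectrum_router.c in the diff below;
'example_rif_hw_configure' and 'example_rif_hw_deconfigure' are
hypothetical stand-ins for the existing per-RIF setup and teardown):

	static int example_rif_configure(struct mlxsw_sp_rif *rif)
	{
		int err;

		/* The RIF is fully operational before the FID learns about
		 * it, so SVFA.irif_v never refers to a non-enabled RIF.
		 */
		err = example_rif_hw_configure(rif);	/* hypothetical */
		if (err)
			return err;

		err = mlxsw_sp_fid_rif_set(rif->fid, rif);
		if (err)
			goto err_fid_rif_set;

		return 0;

	err_fid_rif_set:
		example_rif_hw_deconfigure(rif);	/* hypothetical */
		return err;
	}

	static void example_rif_deconfigure(struct mlxsw_sp_rif *rif)
	{
		/* Detach the FID first, then tear the RIF down. */
		mlxsw_sp_fid_rif_unset(rif->fid);
		example_rif_hw_deconfigure(rif);	/* hypothetical */
	}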

Do not set the ingress RIF for rFIDs, as the {Port, VID}->rFID entry is
configured by the firmware when the legacy model is used. A subsequent
patch will handle this configuration for rFIDs in the unified bridge
model.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlxsw/reg.h     |  17 +-
 .../net/ethernet/mellanox/mlxsw/spectrum.h    |   4 +-
 .../ethernet/mellanox/mlxsw/spectrum_fid.c    | 173 ++++++++++++++++--
 .../ethernet/mellanox/mlxsw/spectrum_router.c |  20 +-
 .../ethernet/mellanox/mlxsw/spectrum_router.h |   1 -
 5 files changed, 189 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index b0b5806a22ed..46ed2c1810be 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -1658,40 +1658,43 @@ MLXSW_ITEM32(reg, svfa, irif, 0x14, 0, 16);
 
 static inline void __mlxsw_reg_svfa_pack(char *payload,
 					 enum mlxsw_reg_svfa_mt mt, bool valid,
-					 u16 fid)
+					 u16 fid, bool irif_v, u16 irif)
 {
 	MLXSW_REG_ZERO(svfa, payload);
 	mlxsw_reg_svfa_swid_set(payload, 0);
 	mlxsw_reg_svfa_mapping_table_set(payload, mt);
 	mlxsw_reg_svfa_v_set(payload, valid);
 	mlxsw_reg_svfa_fid_set(payload, fid);
+	mlxsw_reg_svfa_irif_v_set(payload, irif_v);
+	mlxsw_reg_svfa_irif_set(payload, irif_v ? irif : 0);
 }
 
 static inline void mlxsw_reg_svfa_port_vid_pack(char *payload, u16 local_port,
-						bool valid, u16 fid, u16 vid)
+						bool valid, u16 fid, u16 vid,
+						bool irif_v, u16 irif)
 {
 	enum mlxsw_reg_svfa_mt mt = MLXSW_REG_SVFA_MT_PORT_VID_TO_FID;
 
-	__mlxsw_reg_svfa_pack(payload, mt, valid, fid);
+	__mlxsw_reg_svfa_pack(payload, mt, valid, fid, irif_v, irif);
 	mlxsw_reg_svfa_local_port_set(payload, local_port);
 	mlxsw_reg_svfa_vid_set(payload, vid);
 }
 
 static inline void mlxsw_reg_svfa_vid_pack(char *payload, bool valid, u16 fid,
-					   u16 vid)
+					   u16 vid, bool irif_v, u16 irif)
 {
 	enum mlxsw_reg_svfa_mt mt = MLXSW_REG_SVFA_MT_VID_TO_FID;
 
-	__mlxsw_reg_svfa_pack(payload, mt, valid, fid);
+	__mlxsw_reg_svfa_pack(payload, mt, valid, fid, irif_v, irif);
 	mlxsw_reg_svfa_vid_set(payload, vid);
 }
 
 static inline void mlxsw_reg_svfa_vni_pack(char *payload, bool valid, u16 fid,
-					   u32 vni)
+					   u32 vni, bool irif_v, u16 irif)
 {
 	enum mlxsw_reg_svfa_mt mt = MLXSW_REG_SVFA_MT_VNI_TO_FID;
 
-	__mlxsw_reg_svfa_pack(payload, mt, valid, fid);
+	__mlxsw_reg_svfa_pack(payload, mt, valid, fid, irif_v, irif);
 	mlxsw_reg_svfa_vni_set(payload, vni);
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 8de3bdcdf143..b1810a22a1a6 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -737,6 +737,7 @@ union mlxsw_sp_l3addr {
 	struct in6_addr addr6;
 };
 
+u16 mlxsw_sp_rif_index(const struct mlxsw_sp_rif *rif);
 int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
 			 struct netlink_ext_ack *extack);
 void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp);
@@ -1285,7 +1286,8 @@ void mlxsw_sp_fid_port_vid_unmap(struct mlxsw_sp_fid *fid,
 				 struct mlxsw_sp_port *mlxsw_sp_port, u16 vid);
 u16 mlxsw_sp_fid_index(const struct mlxsw_sp_fid *fid);
 enum mlxsw_sp_fid_type mlxsw_sp_fid_type(const struct mlxsw_sp_fid *fid);
-void mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif);
+int mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif);
+void mlxsw_sp_fid_rif_unset(struct mlxsw_sp_fid *fid);
 struct mlxsw_sp_rif *mlxsw_sp_fid_rif(const struct mlxsw_sp_fid *fid);
 enum mlxsw_sp_rif_type
 mlxsw_sp_fid_type_rif_type(const struct mlxsw_sp *mlxsw_sp,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c
index ffe8c583865d..a8fecf47eaf5 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c
@@ -404,11 +404,6 @@ enum mlxsw_sp_fid_type mlxsw_sp_fid_type(const struct mlxsw_sp_fid *fid)
 	return fid->fid_family->type;
 }
 
-void mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif)
-{
-	fid->rif = rif;
-}
-
 struct mlxsw_sp_rif *mlxsw_sp_fid_rif(const struct mlxsw_sp_fid *fid)
 {
 	return fid->rif;
@@ -465,7 +460,8 @@ static int mlxsw_sp_fid_op(const struct mlxsw_sp_fid *fid, bool valid)
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfmr), sfmr_pl);
 }
 
-static int mlxsw_sp_fid_edit_op(const struct mlxsw_sp_fid *fid)
+static int mlxsw_sp_fid_edit_op(const struct mlxsw_sp_fid *fid,
+				const struct mlxsw_sp_rif *rif)
 {
 	struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
 	enum mlxsw_reg_bridge_type bridge_type = 0;
@@ -484,32 +480,176 @@ static int mlxsw_sp_fid_edit_op(const struct mlxsw_sp_fid *fid)
 	mlxsw_reg_sfmr_vni_set(sfmr_pl, be32_to_cpu(fid->vni));
 	mlxsw_reg_sfmr_vtfp_set(sfmr_pl, fid->nve_flood_index_valid);
 	mlxsw_reg_sfmr_nve_tunnel_flood_ptr_set(sfmr_pl, fid->nve_flood_index);
+
+	if (mlxsw_sp->ubridge && rif) {
+		mlxsw_reg_sfmr_irif_v_set(sfmr_pl, true);
+		mlxsw_reg_sfmr_irif_set(sfmr_pl, mlxsw_sp_rif_index(rif));
+	}
+
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfmr), sfmr_pl);
 }
 
 static int mlxsw_sp_fid_vni_to_fid_map(const struct mlxsw_sp_fid *fid,
+				       const struct mlxsw_sp_rif *rif,
 				       bool valid)
 {
 	struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
 	char svfa_pl[MLXSW_REG_SVFA_LEN];
+	bool irif_valid;
+	u16 irif_index;
+
+	irif_valid = !!rif;
+	irif_index = rif ? mlxsw_sp_rif_index(rif) : 0;
 
 	mlxsw_reg_svfa_vni_pack(svfa_pl, valid, fid->fid_index,
-				be32_to_cpu(fid->vni));
+				be32_to_cpu(fid->vni), irif_valid, irif_index);
+	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);
+}
+
+static int mlxsw_sp_fid_to_fid_rif_update(const struct mlxsw_sp_fid *fid,
+					  const struct mlxsw_sp_rif *rif)
+{
+	return mlxsw_sp_fid_edit_op(fid, rif);
+}
+
+static int mlxsw_sp_fid_vni_to_fid_rif_update(const struct mlxsw_sp_fid *fid,
+					      const struct mlxsw_sp_rif *rif)
+{
+	if (!fid->vni_valid)
+		return 0;
+
+	return mlxsw_sp_fid_vni_to_fid_map(fid, rif, fid->vni_valid);
+}
+
+static int
+mlxsw_sp_fid_port_vid_to_fid_rif_update_one(const struct mlxsw_sp_fid *fid,
+					    struct mlxsw_sp_fid_port_vid *pv,
+					    bool irif_valid, u16 irif_index)
+{
+	struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+	char svfa_pl[MLXSW_REG_SVFA_LEN];
+
+	mlxsw_reg_svfa_port_vid_pack(svfa_pl, pv->local_port, true,
+				     fid->fid_index, pv->vid, irif_valid,
+				     irif_index);
+
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);
 }
 
+static int mlxsw_sp_fid_vid_to_fid_rif_set(const struct mlxsw_sp_fid *fid,
+					   const struct mlxsw_sp_rif *rif)
+{
+	struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+	struct mlxsw_sp_fid_port_vid *pv;
+	u16 irif_index;
+	int err;
+
+	irif_index = mlxsw_sp_rif_index(rif);
+
+	list_for_each_entry(pv, &fid->port_vid_list, list) {
+		/* If port is not in virtual mode, then it does not have any
+		 * {Port, VID}->FID mappings that need to be updated with the
+		 * ingress RIF.
+		 */
+		if (!mlxsw_sp->fid_core->port_fid_mappings[pv->local_port])
+			continue;
+
+		err = mlxsw_sp_fid_port_vid_to_fid_rif_update_one(fid, pv,
+								  true,
+								  irif_index);
+		if (err)
+			goto err_port_vid_to_fid_rif_update_one;
+	}
+
+	return 0;
+
+err_port_vid_to_fid_rif_update_one:
+	list_for_each_entry_continue_reverse(pv, &fid->port_vid_list, list) {
+		if (!mlxsw_sp->fid_core->port_fid_mappings[pv->local_port])
+			continue;
+
+		mlxsw_sp_fid_port_vid_to_fid_rif_update_one(fid, pv, false, 0);
+	}
+
+	return err;
+}
+
+static void mlxsw_sp_fid_vid_to_fid_rif_unset(const struct mlxsw_sp_fid *fid)
+{
+	struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
+	struct mlxsw_sp_fid_port_vid *pv;
+
+	list_for_each_entry(pv, &fid->port_vid_list, list) {
+		/* If port is not in virtual mode, then it does not have any
+		 * {Port, VID}->FID mappings that need to be updated.
+		 */
+		if (!mlxsw_sp->fid_core->port_fid_mappings[pv->local_port])
+			continue;
+
+		mlxsw_sp_fid_port_vid_to_fid_rif_update_one(fid, pv, false, 0);
+	}
+}
+
+int mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif)
+{
+	int err;
+
+	if (!fid->fid_family->mlxsw_sp->ubridge) {
+		fid->rif = rif;
+		return 0;
+	}
+
+	err = mlxsw_sp_fid_to_fid_rif_update(fid, rif);
+	if (err)
+		return err;
+
+	err = mlxsw_sp_fid_vni_to_fid_rif_update(fid, rif);
+	if (err)
+		goto err_vni_to_fid_rif_update;
+
+	err = mlxsw_sp_fid_vid_to_fid_rif_set(fid, rif);
+	if (err)
+		goto err_vid_to_fid_rif_set;
+
+	fid->rif = rif;
+	return 0;
+
+err_vid_to_fid_rif_set:
+	mlxsw_sp_fid_vni_to_fid_rif_update(fid, NULL);
+err_vni_to_fid_rif_update:
+	mlxsw_sp_fid_to_fid_rif_update(fid, NULL);
+	return err;
+}
+
+void mlxsw_sp_fid_rif_unset(struct mlxsw_sp_fid *fid)
+{
+	if (!fid->fid_family->mlxsw_sp->ubridge) {
+		fid->rif = NULL;
+		return;
+	}
+
+	if (!fid->rif)
+		return;
+
+	fid->rif = NULL;
+	mlxsw_sp_fid_vid_to_fid_rif_unset(fid);
+	mlxsw_sp_fid_vni_to_fid_rif_update(fid, NULL);
+	mlxsw_sp_fid_to_fid_rif_update(fid, NULL);
+}
+
 static int mlxsw_sp_fid_vni_op(const struct mlxsw_sp_fid *fid)
 {
 	struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
 	int err;
 
 	if (mlxsw_sp->ubridge) {
-		err = mlxsw_sp_fid_vni_to_fid_map(fid, fid->vni_valid);
+		err = mlxsw_sp_fid_vni_to_fid_map(fid, fid->rif,
+						  fid->vni_valid);
 		if (err)
 			return err;
 	}
 
-	err = mlxsw_sp_fid_edit_op(fid);
+	err = mlxsw_sp_fid_edit_op(fid, fid->rif);
 	if (err)
 		goto err_fid_edit_op;
 
@@ -517,7 +657,7 @@ static int mlxsw_sp_fid_vni_op(const struct mlxsw_sp_fid *fid)
 
 err_fid_edit_op:
 	if (mlxsw_sp->ubridge)
-		mlxsw_sp_fid_vni_to_fid_map(fid, !fid->vni_valid);
+		mlxsw_sp_fid_vni_to_fid_map(fid, fid->rif, !fid->vni_valid);
 	return err;
 }
 
@@ -526,9 +666,16 @@ static int __mlxsw_sp_fid_port_vid_map(const struct mlxsw_sp_fid *fid,
 {
 	struct mlxsw_sp *mlxsw_sp = fid->fid_family->mlxsw_sp;
 	char svfa_pl[MLXSW_REG_SVFA_LEN];
+	bool irif_valid = false;
+	u16 irif_index = 0;
+
+	if (mlxsw_sp->ubridge && fid->rif) {
+		irif_valid = true;
+		irif_index = mlxsw_sp_rif_index(fid->rif);
+	}
 
 	mlxsw_reg_svfa_port_vid_pack(svfa_pl, local_port, valid, fid->fid_index,
-				     vid);
+				     vid, irif_valid, irif_index);
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(svfa), svfa_pl);
 }
 
@@ -768,12 +915,12 @@ static void mlxsw_sp_fid_8021d_vni_clear(struct mlxsw_sp_fid *fid)
 
 static int mlxsw_sp_fid_8021d_nve_flood_index_set(struct mlxsw_sp_fid *fid)
 {
-	return mlxsw_sp_fid_edit_op(fid);
+	return mlxsw_sp_fid_edit_op(fid, fid->rif);
 }
 
 static void mlxsw_sp_fid_8021d_nve_flood_index_clear(struct mlxsw_sp_fid *fid)
 {
-	mlxsw_sp_fid_edit_op(fid);
+	mlxsw_sp_fid_edit_op(fid, fid->rif);
 }
 
 static void
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index 63652460c40d..4a34138985bf 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -9351,9 +9351,15 @@ static int mlxsw_sp_rif_subport_configure(struct mlxsw_sp_rif *rif,
 	if (err)
 		goto err_rif_fdb_op;
 
-	mlxsw_sp_fid_rif_set(rif->fid, rif);
+	err = mlxsw_sp_fid_rif_set(rif->fid, rif);
+	if (err)
+		goto err_fid_rif_set;
+
 	return 0;
 
+err_fid_rif_set:
+	mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
+			    mlxsw_sp_fid_index(rif->fid), false);
 err_rif_fdb_op:
 	mlxsw_sp_rif_subport_op(rif, false);
 err_rif_subport_op:
@@ -9365,7 +9371,7 @@ static void mlxsw_sp_rif_subport_deconfigure(struct mlxsw_sp_rif *rif)
 {
 	struct mlxsw_sp_fid *fid = rif->fid;
 
-	mlxsw_sp_fid_rif_set(fid, NULL);
+	mlxsw_sp_fid_rif_unset(fid);
 	mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
 			    mlxsw_sp_fid_index(fid), false);
 	mlxsw_sp_rif_macvlan_flush(rif);
@@ -9442,9 +9448,15 @@ static int mlxsw_sp_rif_fid_configure(struct mlxsw_sp_rif *rif,
 	if (err)
 		goto err_rif_fdb_op;
 
-	mlxsw_sp_fid_rif_set(rif->fid, rif);
+	err = mlxsw_sp_fid_rif_set(rif->fid, rif);
+	if (err)
+		goto err_fid_rif_set;
+
 	return 0;
 
+err_fid_rif_set:
+	mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
+			    mlxsw_sp_fid_index(rif->fid), false);
 err_rif_fdb_op:
 	mlxsw_sp_fid_flood_set(rif->fid, MLXSW_SP_FLOOD_TYPE_BC,
 			       mlxsw_sp_router_port(mlxsw_sp), false);
@@ -9464,7 +9476,7 @@ static void mlxsw_sp_rif_fid_deconfigure(struct mlxsw_sp_rif *rif)
 	struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
 	struct mlxsw_sp_fid *fid = rif->fid;
 
-	mlxsw_sp_fid_rif_set(fid, NULL);
+	mlxsw_sp_fid_rif_unset(fid);
 	mlxsw_sp_rif_fdb_op(rif->mlxsw_sp, rif->dev->dev_addr,
 			    mlxsw_sp_fid_index(fid), false);
 	mlxsw_sp_rif_macvlan_flush(rif);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
index b5c83ec7a87f..c5dfb972b433 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
@@ -82,7 +82,6 @@ struct mlxsw_sp_ipip_entry;
 
 struct mlxsw_sp_rif *mlxsw_sp_rif_by_index(const struct mlxsw_sp *mlxsw_sp,
 					   u16 rif_index);
-u16 mlxsw_sp_rif_index(const struct mlxsw_sp_rif *rif);
 u16 mlxsw_sp_ipip_lb_rif_index(const struct mlxsw_sp_rif_ipip_lb *rif);
 u16 mlxsw_sp_ipip_lb_ul_vr_id(const struct mlxsw_sp_rif_ipip_lb *rif);
 u16 mlxsw_sp_ipip_lb_ul_rif_id(const struct mlxsw_sp_rif_ipip_lb *lb_rif);
-- 
2.36.1


