netdev.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [RFC net-next v2 0/5] Add MACsec support for TJA11XX C45 PHYs
@ 2023-08-24  9:16 Radu Pirea (NXP OSS)
  2023-08-24  9:16 ` [RFC net-next v2 1/5] net: macsec: documentation for macsec_context and macsec_ops Radu Pirea (NXP OSS)
                   ` (4 more replies)
  0 siblings, 5 replies; 34+ messages in thread
From: Radu Pirea (NXP OSS) @ 2023-08-24  9:16 UTC (permalink / raw)
  To: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat
  Cc: netdev, linux-kernel, Radu Pirea (NXP OSS)

This series adds MACsec support for the TJA11XX C45 PHYs. The MACsec
block encrypts Ethernet frames on the fly and has no buffering. This
operation grows the frames by 32 bytes. If the frames are sent back to
back, the MACsec block will not have enough room to insert the SecTAG
and the ICV, and the frames will be dropped.

To mitigate this, the PHY can parse a specific ethertype followed by
some padding bytes and replace them with the SecTAG and the ICV. The
padding bytes may be dummy bytes, or they may carry information about
the TX SC that must be used to encrypt the frame.
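
For illustration, the TX-side transformation performed by the PHY looks
roughly like this (the exact tag layout is device specific):

  from the MAC: | DA | SA | tag ethertype + padding | ethertype | payload |
  on the wire:  | DA | SA | SecTAG | ethertype | payload (encrypted) | ICV |

The padding reserves the room later taken by the SecTAG and the ICV, so
the frame no longer grows while it passes through the MACsec block.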

On the RX path, the PHY can insert a similar tag that contains
information about the RX SC on which the frame was received. The RX tag
is not enabled in this patch series, but I consider this context
important for the review.

Radu P.

Changes in v2:
- added documentation
- reordered and split patches
- WARN_ON_ONCE if reg address is not properly aligned
- removed unnecessary checks in insert_tx_tag
- adjusted mdo_insert_tx_tag parameters: macsec_context replaced with
 phy_device and sk_buff
- added extscs parameter to allow the user to choose the TX SC selection
 mechanism
- improved patch descriptions

Radu Pirea (NXP OSS) (5):
  net: macsec: documentation for macsec_context and macsec_ops
  net: macsec: introduce mdo_insert_tx_tag
  net: phy: nxp-c45-tja11xx add MACsec support
  net: phy: nxp-c45-tja11xx: add MACsec statistics
  net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag

 MAINTAINERS                              |    2 +-
 drivers/net/macsec.c                     |   96 +-
 drivers/net/phy/Kconfig                  |    2 +-
 drivers/net/phy/Makefile                 |    4 +
 drivers/net/phy/nxp-c45-tja11xx-macsec.c | 1877 ++++++++++++++++++++++
 drivers/net/phy/nxp-c45-tja11xx.c        |   72 +-
 drivers/net/phy/nxp-c45-tja11xx.h        |   55 +
 include/net/macsec.h                     |   46 +
 8 files changed, 2122 insertions(+), 32 deletions(-)
 create mode 100644 drivers/net/phy/nxp-c45-tja11xx-macsec.c
 create mode 100644 drivers/net/phy/nxp-c45-tja11xx.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [RFC net-next v2 1/5] net: macsec: documentation for macsec_context and macsec_ops
  2023-08-24  9:16 [RFC net-next v2 0/5] Add MACsec support for TJA11XX C45 PHYs Radu Pirea (NXP OSS)
@ 2023-08-24  9:16 ` Radu Pirea (NXP OSS)
  2023-08-24 13:26   ` Antoine Tenart
  2023-08-24  9:16 ` [RFC net-next v2 2/5] net: macsec: introduce mdo_insert_tx_tag Radu Pirea (NXP OSS)
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 34+ messages in thread
From: Radu Pirea (NXP OSS) @ 2023-08-24  9:16 UTC (permalink / raw)
  To: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat
  Cc: netdev, linux-kernel, Radu Pirea (NXP OSS)

Add description for fields of struct macsec_context and struct
macsec_ops.

Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
---
 include/net/macsec.h | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/include/net/macsec.h b/include/net/macsec.h
index 441ed8fd4b5f..76f024727bb4 100644
--- a/include/net/macsec.h
+++ b/include/net/macsec.h
@@ -247,6 +247,20 @@ struct macsec_secy {
 
 /**
  * struct macsec_context - MACsec context for hardware offloading
+ * @netdev: pointer to the netdev if the SecY is offloaded to a MAC
+ * @phydev: pointer to the phydev if the SecY is offloaded to a PHY
+ * @offload: MACsec offload status
+ * @secy: pointer to a MACsec SecY
+ * @rx_sc: pointer to an RX SC
+ * @assoc_num: association number of the target SA
+ * @key: key of the target SA
+ * @rx_sa: pointer to an RX SA if an RX SA is added/updated/removed
+ * @tx_sa: pointer to a TX SA if a TX SA is added/updated/removed
+ * @tx_sc_stats: pointer to TX SC stats structure
+ * @tx_sa_stats: pointer to TX SA stats structure
+ * @rx_sc_stats: pointer to RX SC stats structure
+ * @rx_sa_stats: pointer to RX SA stats structure
+ * @dev_stats: pointer to dev stats structure
  */
 struct macsec_context {
 	union {
@@ -276,6 +290,28 @@ struct macsec_context {
 
 /**
  * struct macsec_ops - MACsec offloading operations
+ * @mdo_dev_open: called when the MACsec interface transitions to the up state
+ * @mdo_dev_stop: called when the MACsec interface transitions to the down
+ *	state
+ * @mdo_add_secy: called when a new SecY is added
+ * @mdo_upd_secy: called when the SecY flags are changed or the MAC address of
+ *	the MACsec interface is changed
+ * @mdo_del_secy: called when the hw offload is disabled or the MACsec
+ *	interface is removed
+ * @mdo_add_rxsc: called when a new RX SC is added
+ * @mdo_upd_rxsc: called when a certain RX SC is updated
+ * @mdo_del_rxsc: called when a certain RX SC is removed
+ * @mdo_add_rxsa: called when a new RX SA is added
+ * @mdo_upd_rxsa: called when a certain RX SA is updated
+ * @mdo_del_rxsa: called when a certain RX SA is removed
+ * @mdo_add_txsa: called when a new TX SA is added
+ * @mdo_upd_txsa: called when a certain TX SA is updated
+ * @mdo_del_txsa: called when a certain TX SA is removed
+ * @mdo_get_dev_stats: called when dev stats are read
+ * @mdo_get_tx_sc_stats: called when TX SC stats are read
+ * @mdo_get_tx_sa_stats: called when TX SA stats are read
+ * @mdo_get_rx_sc_stats: called when RX SC stats are read
+ * @mdo_get_rx_sa_stats: called when RX SA stats are read
  */
 struct macsec_ops {
 	/* Device wide */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [RFC net-next v2 2/5] net: macsec: introduce mdo_insert_tx_tag
  2023-08-24  9:16 [RFC net-next v2 0/5] Add MACsec support for TJA11XX C45 PHYs Radu Pirea (NXP OSS)
  2023-08-24  9:16 ` [RFC net-next v2 1/5] net: macsec: documentation for macsec_context and macsec_ops Radu Pirea (NXP OSS)
@ 2023-08-24  9:16 ` Radu Pirea (NXP OSS)
  2023-08-24 14:54   ` Sabrina Dubroca
  2023-08-24  9:16 ` [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support Radu Pirea (NXP OSS)
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 34+ messages in thread
From: Radu Pirea (NXP OSS) @ 2023-08-24  9:16 UTC (permalink / raw)
  To: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat
  Cc: netdev, linux-kernel, Radu Pirea (NXP OSS)

Offloading MACsec in PHYs requires inserting the SecTAG and the ICV
into the Ethernet frame. This operation increases the frame size by up
to 32 bytes. If the frames are sent at line rate, the PHY will not have
enough room to insert the SecTAG and the ICV.

Some PHYs use a hardware buffer to store a number of Ethernet frames
and, if it fills up, send a pause frame to the MAC to control the flow.
Such a hardware implementation does not need any modification in the
stack.

Other PHYs may instead rely on a specific ethertype and some padding
bytes being present in the Ethernet frame. This ethertype and its
associated bytes are then replaced by the SecTAG and the ICV.

mdo_insert_tx_tag allows PHY drivers to add such a device-specific tag
to the skb.
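
As a minimal sketch of the driver side (driver name, tag length, layout
and ethertype below are hypothetical, not the TJA11XX format), the hook
could look roughly like this:

#define FOO_TX_TAG_LEN		32
#define FOO_TX_TAG_PROTO	0x9999	/* hypothetical PHY-specific ethertype */

static int foo_mdo_insert_tx_tag(struct phy_device *phydev,
				 struct sk_buff *skb)
{
	__be16 proto = cpu_to_be16(FOO_TX_TAG_PROTO);
	u8 *ptr;

	/* Open FOO_TX_TAG_LEN bytes between the MAC addresses and the
	 * original ethertype. The MACsec core guarantees needed_headroom
	 * bytes of headroom before calling this hook.
	 */
	ptr = skb_push(skb, FOO_TX_TAG_LEN);
	memmove(ptr, ptr + FOO_TX_TAG_LEN, 2 * ETH_ALEN);
	ptr += 2 * ETH_ALEN;

	/* Dedicated ethertype followed by padding bytes that the PHY
	 * replaces with the SecTAG and the ICV.
	 */
	memcpy(ptr, &proto, sizeof(proto));
	memset(ptr + sizeof(proto), 0, FOO_TX_TAG_LEN - sizeof(proto));

	return 0;
}

static const struct macsec_ops foo_macsec_ops = {
	/* ... mdo_* SecY/SC/SA callbacks ... */
	.mdo_insert_tx_tag	= foo_mdo_insert_tx_tag,
	.needed_headroom	= FOO_TX_TAG_LEN,
};

Before the hook runs, insert_tx_tag() in this patch makes the skb
writable and ensures the advertised head/tailroom via skb_unshare() or
skb_copy_expand().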

Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
---
 drivers/net/macsec.c | 96 +++++++++++++++++++++++++++++++++++++++++++-
 include/net/macsec.h | 10 +++++
 2 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index ae60817ec5c2..5541aaced61f 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -93,6 +93,7 @@ struct pcpu_secy_stats {
  * @secys: linked list of SecY's on the underlying device
  * @gro_cells: pointer to the Generic Receive Offload cell
  * @offload: status of offloading on the MACsec device
+ * @insert_tx_tag: when true, the device-specific TX tag is inserted into the skb
  */
 struct macsec_dev {
 	struct macsec_secy secy;
@@ -102,6 +103,7 @@ struct macsec_dev {
 	struct list_head secys;
 	struct gro_cells gro_cells;
 	enum macsec_offload offload;
+	bool insert_tx_tag;
 };
 
 /**
@@ -2582,6 +2584,33 @@ static bool macsec_is_configured(struct macsec_dev *macsec)
 	return false;
 }
 
+static bool macsec_can_insert_tx_tag(struct macsec_dev *macsec,
+				     const struct macsec_ops *ops)
+{
+	return macsec->offload == MACSEC_OFFLOAD_PHY &&
+		ops->mdo_insert_tx_tag;
+}
+
+static void macsec_adjust_room(struct net_device *dev,
+			       const struct macsec_ops *ops)
+{
+	struct macsec_dev *macsec = macsec_priv(dev);
+
+	if (macsec_is_offloaded(macsec)) {
+		dev->needed_headroom -= MACSEC_NEEDED_HEADROOM;
+		dev->needed_headroom += ops->needed_headroom;
+		dev->needed_tailroom -= MACSEC_NEEDED_TAILROOM;
+		dev->needed_tailroom += ops->needed_tailroom;
+
+		return;
+	}
+
+	dev->needed_headroom -= ops->needed_headroom;
+	dev->needed_headroom += MACSEC_NEEDED_HEADROOM;
+	dev->needed_tailroom -= ops->needed_tailroom;
+	dev->needed_tailroom += MACSEC_NEEDED_TAILROOM;
+}
+
 static int macsec_update_offload(struct net_device *dev, enum macsec_offload offload)
 {
 	enum macsec_offload prev_offload;
@@ -2619,9 +2648,15 @@ static int macsec_update_offload(struct net_device *dev, enum macsec_offload off
 	ctx.secy = &macsec->secy;
 	ret = offload == MACSEC_OFFLOAD_OFF ? macsec_offload(ops->mdo_del_secy, &ctx)
 					    : macsec_offload(ops->mdo_add_secy, &ctx);
-	if (ret)
+	if (ret) {
 		macsec->offload = prev_offload;
+		goto out;
+	}
+
+	macsec_adjust_room(dev, ops);
+	macsec->insert_tx_tag = macsec_can_insert_tx_tag(macsec, ops);
 
+out:
 	return ret;
 }
 
@@ -3378,6 +3413,55 @@ static struct genl_family macsec_fam __ro_after_init = {
 	.resv_start_op	= MACSEC_CMD_UPD_OFFLOAD + 1,
 };
 
+static struct sk_buff *insert_tx_tag(struct sk_buff *skb,
+				     struct net_device *dev)
+{
+	struct macsec_dev *macsec = macsec_priv(dev);
+	const struct macsec_ops *ops;
+	struct phy_device *phydev;
+	struct macsec_context ctx;
+	int err;
+
+	if (!macsec->insert_tx_tag)
+		return skb;
+
+	ops = macsec_get_ops(macsec, &ctx);
+	phydev = macsec->real_dev->phydev;
+
+	if (unlikely(skb_headroom(skb) < ops->needed_headroom ||
+		     skb_tailroom(skb) < ops->needed_tailroom)) {
+		struct sk_buff *nskb = skb_copy_expand(skb,
+						       ops->needed_headroom,
+						       ops->needed_tailroom,
+						       GFP_ATOMIC);
+		if (likely(nskb)) {
+			consume_skb(skb);
+			skb = nskb;
+		} else {
+			err = -ENOMEM;
+			goto cleanup;
+		}
+	} else {
+		skb = skb_unshare(skb, GFP_ATOMIC);
+		if (!skb)
+			return ERR_PTR(-ENOMEM);
+	}
+
+	err = ops->mdo_insert_tx_tag(phydev, skb);
+	if (unlikely(err))
+		goto cleanup;
+
+	if (unlikely(skb->len - ETH_HLEN > macsec_priv(dev)->real_dev->mtu)) {
+		err = -EINVAL;
+		goto cleanup;
+	}
+
+	return skb;
+cleanup:
+	kfree_skb(skb);
+	return ERR_PTR(err);
+}
+
 static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
 				     struct net_device *dev)
 {
@@ -3392,6 +3476,13 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
 		skb_dst_drop(skb);
 		dst_hold(&md_dst->dst);
 		skb_dst_set(skb, &md_dst->dst);
+
+		skb = insert_tx_tag(skb, dev);
+		if (IS_ERR(skb)) {
+			DEV_STATS_INC(dev, tx_dropped);
+			return NETDEV_TX_OK;
+		}
+
 		skb->dev = macsec->real_dev;
 		return dev_queue_xmit(skb);
 	}
@@ -4125,6 +4216,9 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
 			err = macsec_offload(ops->mdo_add_secy, &ctx);
 			if (err)
 				goto del_dev;
+
+			macsec_adjust_room(dev, ops);
+			macsec->insert_tx_tag = macsec_can_insert_tx_tag(macsec, ops);
 		}
 	}
 
diff --git a/include/net/macsec.h b/include/net/macsec.h
index 76f024727bb4..9577921897f9 100644
--- a/include/net/macsec.h
+++ b/include/net/macsec.h
@@ -312,6 +312,11 @@ struct macsec_context {
  * @mdo_get_tx_sa_stats: called when TX SA stats are read
  * @mdo_get_rx_sc_stats: called when RX SC stats are read
  * @mdo_get_rx_sa_stats: called when RX SA stats are read
+ * @mdo_insert_tx_tag: called to insert the TX offload tag
+ * @needed_headroom: number of bytes reserved at the beginning of the sk_buff
+ *	for the TX tag
+ * @needed_tailroom: number of bytes reserved at the end of the sk_buff for the
+ *	TX tag
  */
 struct macsec_ops {
 	/* Device wide */
@@ -338,6 +343,11 @@ struct macsec_ops {
 	int (*mdo_get_tx_sa_stats)(struct macsec_context *ctx);
 	int (*mdo_get_rx_sc_stats)(struct macsec_context *ctx);
 	int (*mdo_get_rx_sa_stats)(struct macsec_context *ctx);
+	/* Offload tag */
+	int (*mdo_insert_tx_tag)(struct phy_device *phydev,
+				 struct sk_buff *skb);
+	unsigned int needed_headroom;
+	unsigned int needed_tailroom;
 };
 
 void macsec_pn_wrapped(struct macsec_secy *secy, struct macsec_tx_sa *tx_sa);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-24  9:16 [RFC net-next v2 0/5] Add MACsec support for TJA11XX C45 PHYs Radu Pirea (NXP OSS)
  2023-08-24  9:16 ` [RFC net-next v2 1/5] net: macsec: documentation for macsec_context and macsec_ops Radu Pirea (NXP OSS)
  2023-08-24  9:16 ` [RFC net-next v2 2/5] net: macsec: introduce mdo_insert_tx_tag Radu Pirea (NXP OSS)
@ 2023-08-24  9:16 ` Radu Pirea (NXP OSS)
  2023-08-25 12:52   ` Sabrina Dubroca
  2023-08-27  8:03   ` Simon Horman
  2023-08-24  9:16 ` [RFC net-next v2 4/5] net: phy: nxp-c45-tja11xx: add MACsec statistics Radu Pirea (NXP OSS)
  2023-08-24  9:16 ` [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag Radu Pirea (NXP OSS)
  4 siblings, 2 replies; 34+ messages in thread
From: Radu Pirea (NXP OSS) @ 2023-08-24  9:16 UTC (permalink / raw)
  To: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat
  Cc: netdev, linux-kernel, Radu Pirea (NXP OSS)

Add MACsec support.
The MACsec block has four TX SCs and four RX SCs. The driver supports up
to four SecYs, each with one TX SC and one RX SC.
The RX SCs can have two keys, key A and key B, written in hardware and
enabled at the same time.
The TX SCs can have two keys written in hardware, but only one can be
active at a given time.
On TX, the SC is selected using the MAC source address. Due to this
selection mechanism, each offloaded netdev must have a unique MAC
address.
On RX, the SC is selected by the SCI (found in the SecTAG or calculated
from the MAC SA), or RX SC 0 is used implicitly.
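
For reference, when user space does not provide an explicit SCI, the
macsec core derives it from the netdev MAC address plus a port number
(sketch mirroring make_sci() in drivers/net/macsec.c), so for the common
port 1 case the MAC source address alone identifies the SecY:

static sci_t example_make_sci(const u8 *addr, __be16 port)
{
	sci_t sci;

	/* 6-byte station (MAC) address followed by a 2-byte port number */
	memcpy(&sci, addr, ETH_ALEN);
	memcpy(((char *)&sci) + ETH_ALEN, &port, sizeof(port));

	return sci;
}

Two offloaded netdevs sharing a MAC address would be indistinguishable
to the TX filter, hence the uniqueness requirement above.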

Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
---
 MAINTAINERS                              |    2 +-
 drivers/net/phy/Kconfig                  |    2 +-
 drivers/net/phy/Makefile                 |    4 +
 drivers/net/phy/nxp-c45-tja11xx-macsec.c | 1397 ++++++++++++++++++++++
 drivers/net/phy/nxp-c45-tja11xx.c        |   72 +-
 drivers/net/phy/nxp-c45-tja11xx.h        |   55 +
 6 files changed, 1501 insertions(+), 31 deletions(-)
 create mode 100644 drivers/net/phy/nxp-c45-tja11xx-macsec.c
 create mode 100644 drivers/net/phy/nxp-c45-tja11xx.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 9cc15c50c2c6..3d1e2c9c278b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15173,7 +15173,7 @@ NXP C45 TJA11XX PHY DRIVER
 M:	Radu Pirea <radu-nicolae.pirea@oss.nxp.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
-F:	drivers/net/phy/nxp-c45-tja11xx.c
+F:	drivers/net/phy/nxp-c45-tja11xx*
 
 NXP FSPI DRIVER
 M:	Han Xu <han.xu@nxp.com>
diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
index 107880d13d21..79f54f773af2 100644
--- a/drivers/net/phy/Kconfig
+++ b/drivers/net/phy/Kconfig
@@ -306,7 +306,7 @@ config NXP_C45_TJA11XX_PHY
 	depends on PTP_1588_CLOCK_OPTIONAL
 	help
 	  Enable support for NXP C45 TJA11XX PHYs.
-	  Currently supports the TJA1103 and TJA1120 PHYs.
+	  Currently supports the TJA1103, TJA1104 and TJA1120 PHYs.
 
 config NXP_TJA11XX_PHY
 	tristate "NXP TJA11xx PHYs support"
diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
index c945ed9bd14b..ee53e2fdb968 100644
--- a/drivers/net/phy/Makefile
+++ b/drivers/net/phy/Makefile
@@ -83,6 +83,10 @@ obj-$(CONFIG_MICROSEMI_PHY)	+= mscc/
 obj-$(CONFIG_MOTORCOMM_PHY)	+= motorcomm.o
 obj-$(CONFIG_NATIONAL_PHY)	+= national.o
 obj-$(CONFIG_NCN26000_PHY)	+= ncn26000.o
+nxp-c45-tja11xx-objs		+= nxp-c45-tja11xx.o
+ifdef CONFIG_MACSEC
+nxp-c45-tja11xx-objs		+= nxp-c45-tja11xx-macsec.o
+endif
 obj-$(CONFIG_NXP_C45_TJA11XX_PHY)	+= nxp-c45-tja11xx.o
 obj-$(CONFIG_NXP_CBTX_PHY)	+= nxp-cbtx.o
 obj-$(CONFIG_NXP_TJA11XX_PHY)	+= nxp-tja11xx.o
diff --git a/drivers/net/phy/nxp-c45-tja11xx-macsec.c b/drivers/net/phy/nxp-c45-tja11xx-macsec.c
new file mode 100644
index 000000000000..1567865b8de4
--- /dev/null
+++ b/drivers/net/phy/nxp-c45-tja11xx-macsec.c
@@ -0,0 +1,1397 @@
+// SPDX-License-Identifier: GPL-2.0
+/* NXP C45 PHY driver MACsec interface
+ * Copyright 2023 NXP
+ * Author: Radu Pirea <radu-nicolae.pirea@oss.nxp.com>
+ */
+
+#include <linux/delay.h>
+#include <linux/ethtool_netlink.h>
+#include <linux/kernel.h>
+#include <linux/mii.h>
+#include <linux/module.h>
+#include <linux/phy.h>
+#include <linux/processor.h>
+#include <net/macsec.h>
+
+#include "nxp-c45-tja11xx.h"
+
+#define MACSEC_REG_SIZE			32
+#define TX_SC_MAX			4
+
+#define TX_SC_BIT(secy_id) BIT(MACSEC_REG_SIZE - (secy_id) - 1)
+
+#define VEND1_MACSEC_BASE		0x9000
+
+#define MACSEC_CFG			0x0000
+#define MACSEC_CFG_BYPASS		BIT(1)
+#define MACSEC_CFG_S0I			BIT(0)
+
+#define MACSEC_TPNET			0x0044
+#define PN_WRAP_THRESHOLD		0xffffffff
+
+#define MACSEC_RXSCA			0x0080
+#define MACSEC_RXSCKA			0x0084
+
+#define MACSEC_TXSCA			0x00C0
+#define MACSEC_TXSCKA			0x00C4
+
+#define MACSEC_RXSC_SCI_1H		0x0100
+#define MACSEC_RXSC_SCI_2H		0x0104
+
+#define MACSEC_RXSC_CFG			0x0128
+#define MACSEC_RXSC_CFG_XPN		BIT(25)
+#define MACSEC_RXSC_CFG_AES_256		BIT(24)
+#define MACSEC_RXSC_CFG_SCI_EN		BIT(11)
+#define MACSEC_RXSC_CFG_RP		BIT(10)
+#define MACSEC_RXSC_CFG_VF_MASK		GENMASK(9, 8)
+#define MACSEC_RXSC_CFG_VF_OFF		8
+
+#define MACSEC_RPW			0x012C
+
+#define MACSEC_RXSA_A_CS		0x0180
+#define MACSEC_RXSA_A_NPN		0x0184
+#define MACSEC_RXSA_A_XNPN		0x0188
+#define MACSEC_RXSA_A_LNPN		0x018C
+#define MACSEC_RXSA_A_LXNPN		0x0190
+
+#define MACSEC_RXSA_B_CS		0x01C0
+#define MACSEC_RXSA_B_NPN		0x01C4
+#define MACSEC_RXSA_B_XNPN		0x01C8
+#define MACSEC_RXSA_B_LNPN		0x01CC
+#define MACSEC_RXSA_B_LXNPN		0x01D0
+
+#define MACSEC_RXSA_CS_A		BIT(31)
+#define MACSEC_RXSA_CS_AN_OFF		1
+#define MACSEC_RXSA_CS_EN		BIT(0)
+
+#define MACSEC_TXSC_SCI_1H		0x0200
+#define MACSEC_TXSC_SCI_2H		0x0204
+#define MACSEC_TXSC_CFG			0x0228
+#define MACSEC_TXSC_CFG_XPN		BIT(25)
+#define MACSEC_TXSC_CFG_AES_256		BIT(24)
+#define MACSEC_TXSC_CFG_AN_MASK		GENMASK(19, 18)
+#define MACSEC_TXSC_CFG_AN_OFF		18
+#define MACSEC_TXSC_CFG_ASA		BIT(17)
+#define MACSEC_TXSC_CFG_SCE		BIT(16)
+#define MACSEC_TXSC_CFG_ENCRYPT		BIT(4)
+#define MACSEC_TXSC_CFG_PROTECT		BIT(3)
+#define MACSEC_TXSC_CFG_SEND_SCI	BIT(2)
+#define MACSEC_TXSC_CFG_END_STATION	BIT(1)
+#define MACSEC_TXSC_CFG_SCI		BIT(0)
+
+#define MACSEC_TXSA_A_CS		0x0280
+#define MACSEC_TXSA_A_NPN		0x0284
+#define MACSEC_TXSA_A_XNPN		0x0288
+
+#define MACSEC_TXSA_B_CS		0x02C0
+#define MACSEC_TXSA_B_NPN		0x02C4
+#define MACSEC_TXSA_B_XNPN		0x02C8
+
+#define MACSEC_TXSA_CS_A		BIT(31)
+
+#define MACSEC_EVR			0x0400
+#define MACSEC_EVER			0x0404
+
+#define MACSEC_RXSA_A_KA		0x0700
+#define MACSEC_RXSA_A_SSCI		0x0720
+#define MACSEC_RXSA_A_SALT		0x0724
+
+#define MACSEC_RXSA_B_KA		0x0740
+#define MACSEC_RXSA_B_SSCI		0x0760
+#define MACSEC_RXSA_B_SALT		0x0764
+
+#define MACSEC_TXSA_A_KA		0x0780
+#define MACSEC_TXSA_A_SSCI		0x07A0
+#define MACSEC_TXSA_A_SALT		0x07A4
+
+#define MACSEC_TXSA_B_KA		0x07C0
+#define MACSEC_TXSA_B_SSCI		0x07E0
+#define MACSEC_TXSA_B_SALT		0x07E4
+
+#define MACSEC_UPFR0D2			0x0A08
+#define MACSEC_UPFR0M1			0x0A10
+#define MACSEC_OVP			BIT(12)
+
+#define	MACSEC_UPFR0M2			0x0A14
+#define ETYPE_MASK			0xffff
+
+#define MACSEC_UPFR0R			0x0A18
+#define MACSEC_UPFR_EN			BIT(0)
+
+#define ADPTR_CNTRL			0x0F00
+#define ADPTR_CNTRL_CONFIG_EN		BIT(14)
+#define ADPTR_CNTRL_ADPTR_EN		BIT(12)
+
+#define TX_SC_FLT_BASE			0x800
+#define TX_SC_FLT_SIZE			0x10
+#define TX_FLT_BASE(flt_id)		(TX_SC_FLT_BASE + \
+	TX_SC_FLT_SIZE * (flt_id))
+
+#define TX_SC_FLT_OFF_MAC_DA_SA		0x04
+#define TX_SC_FLT_OFF_MAC_SA		0x08
+#define TX_SC_FLT_OFF_MAC_CFG		0x0C
+#define TX_SC_FLT_BY_SA			BIT(14)
+#define TX_SC_FLT_EN			BIT(8)
+
+#define TX_SC_FLT_MAC_DA_SA(base)	((base) + TX_SC_FLT_OFF_MAC_DA_SA)
+#define TX_SC_FLT_MAC_SA(base)		((base) + TX_SC_FLT_OFF_MAC_SA)
+#define TX_SC_FLT_MAC_CFG(base)		((base) + TX_SC_FLT_OFF_MAC_CFG)
+
+#define ADAPTER_EN	BIT(6)
+#define MACSEC_EN	BIT(5)
+
+struct nxp_c45_rx_sc {
+	struct macsec_rx_sc *rx_sc;
+	struct macsec_rx_sa *rx_sa_a;
+	struct macsec_rx_sa *rx_sa_b;
+};
+
+struct nxp_c45_tx_sa {
+	struct macsec_tx_sa *tx_sa;
+	u8 key[MACSEC_MAX_KEY_LEN];
+	u8 salt[MACSEC_SALT_LEN];
+	u8 an;
+	u64 next_pn;
+	bool is_enabled;
+	bool is_key_a;
+};
+
+struct nxp_c45_secy {
+	struct macsec_secy *secy;
+	struct nxp_c45_tx_sa *tx_sa[MACSEC_NUM_AN];
+	struct nxp_c45_rx_sc *rx_sc;
+	int enabled_an;
+	int secy_id;
+	bool tx_sa_key_a;
+	bool point_to_point;
+	struct list_head list;
+};
+
+struct nxp_c45_macsec {
+	struct list_head secy_list;
+	DECLARE_BITMAP(secy_bitmap, TX_SC_MAX);
+	DECLARE_BITMAP(tx_sc_bitmap, TX_SC_MAX);
+};
+
+struct nxp_c45_macsec_sa_regs {
+	u16 rxsa_cs;
+	u16 rxsa_npn;
+	u16 rxsa_xnpn;
+	u16 rxsa_lnpn;
+	u16 rxsa_lxnpn;
+	u16 txsa_cs;
+	u16 txsa_npn;
+	u16 txsa_xnpn;
+	u16 rxsa_ka;
+	u16 rxsa_ssci;
+	u16 rxsa_salt;
+	u16 txsa_ka;
+	u16 txsa_ssci;
+	u16 txsa_salt;
+};
+
+static const struct nxp_c45_macsec_sa_regs sa_a_regs = {
+	.rxsa_cs	=	MACSEC_RXSA_A_CS,
+	.rxsa_npn	=	MACSEC_RXSA_A_NPN,
+	.rxsa_xnpn	=	MACSEC_RXSA_A_XNPN,
+	.rxsa_lnpn	=	MACSEC_RXSA_A_LNPN,
+	.rxsa_lxnpn	=	MACSEC_RXSA_A_LXNPN,
+	.txsa_cs	=	MACSEC_TXSA_A_CS,
+	.txsa_npn	=	MACSEC_TXSA_A_NPN,
+	.txsa_xnpn	=	MACSEC_TXSA_A_XNPN,
+	.rxsa_ka	=	MACSEC_RXSA_A_KA,
+	.rxsa_ssci	=	MACSEC_RXSA_A_SSCI,
+	.rxsa_salt	=	MACSEC_RXSA_A_SALT,
+	.txsa_ka	=	MACSEC_TXSA_A_KA,
+	.txsa_ssci	=	MACSEC_TXSA_A_SSCI,
+	.txsa_salt	=	MACSEC_TXSA_A_SALT,
+};
+
+static const struct nxp_c45_macsec_sa_regs sa_b_regs = {
+	.rxsa_cs	=	MACSEC_RXSA_B_CS,
+	.rxsa_npn	=	MACSEC_RXSA_B_NPN,
+	.rxsa_xnpn	=	MACSEC_RXSA_B_XNPN,
+	.rxsa_lnpn	=	MACSEC_RXSA_B_LNPN,
+	.rxsa_lxnpn	=	MACSEC_RXSA_B_LXNPN,
+	.txsa_cs	=	MACSEC_TXSA_B_CS,
+	.txsa_npn	=	MACSEC_TXSA_B_NPN,
+	.txsa_xnpn	=	MACSEC_TXSA_B_XNPN,
+	.rxsa_ka	=	MACSEC_RXSA_B_KA,
+	.rxsa_ssci	=	MACSEC_RXSA_B_SSCI,
+	.rxsa_salt	=	MACSEC_RXSA_B_SALT,
+	.txsa_ka	=	MACSEC_TXSA_B_KA,
+	.txsa_ssci	=	MACSEC_TXSA_B_SSCI,
+	.txsa_salt	=	MACSEC_TXSA_B_SALT,
+};
+
+static const
+struct nxp_c45_macsec_sa_regs *nxp_c45_get_macsec_sa_regs(bool key_a)
+{
+	if (key_a)
+		return &sa_a_regs;
+
+	return &sa_b_regs;
+}
+
+static int nxp_c45_macsec_write(struct phy_device *phydev, u16 reg, u32 val)
+{
+	WARN_ON_ONCE(reg % 4);
+
+	reg = reg / 2;
+	phy_write_mmd(phydev, MDIO_MMD_VEND2,
+		      VEND1_MACSEC_BASE + reg, val);
+	phy_write_mmd(phydev, MDIO_MMD_VEND2,
+		      VEND1_MACSEC_BASE + reg + 1, val >> 16);
+	return 0;
+}
+
+static int nxp_c45_macsec_read(struct phy_device *phydev, u16 reg, u32 *value)
+{
+	u32 lvalue;
+	int ret;
+
+	WARN_ON_ONCE(reg % 4);
+
+	reg = reg / 2;
+	ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, VEND1_MACSEC_BASE + reg);
+	if (ret < 0)
+		return ret;
+
+	lvalue = (u32)ret & 0xffff;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, VEND1_MACSEC_BASE + reg + 1);
+	if (ret < 0)
+		return ret;
+
+	lvalue |= (u32)ret << 16;
+	*value = lvalue;
+
+	return 0;
+}
+
+static void nxp_c45_select_secy(struct phy_device *phydev, u8 id)
+{
+	nxp_c45_macsec_write(phydev, MACSEC_RXSCA, id);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSCKA, id);
+	nxp_c45_macsec_write(phydev, MACSEC_TXSCA, id);
+	nxp_c45_macsec_write(phydev, MACSEC_TXSCKA, id);
+}
+
+void nxp_c45_macsec_config_init(struct phy_device *phydev)
+{
+	if (!phydev->macsec_ops)
+		return;
+
+	phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, VEND1_PORT_FUNC_ENABLES,
+			 MACSEC_EN | ADAPTER_EN);
+
+	nxp_c45_macsec_write(phydev, ADPTR_CNTRL, ADPTR_CNTRL_CONFIG_EN |
+			     ADPTR_CNTRL_ADPTR_EN);
+	nxp_c45_macsec_write(phydev, ADPTR_CNTRL, ADPTR_CNTRL_ADPTR_EN);
+
+	nxp_c45_macsec_write(phydev, MACSEC_TPNET, PN_WRAP_THRESHOLD);
+
+	/* Set MKA filter. */
+	nxp_c45_macsec_write(phydev, MACSEC_UPFR0D2, ETH_P_PAE);
+	nxp_c45_macsec_write(phydev, MACSEC_UPFR0M1, MACSEC_OVP);
+	nxp_c45_macsec_write(phydev, MACSEC_UPFR0M2, ETYPE_MASK);
+	nxp_c45_macsec_write(phydev, MACSEC_UPFR0R, MACSEC_UPFR_EN);
+}
+
+static void nxp_c45_macsec_cfg_ptp(struct phy_device *phydev, bool enable)
+{
+	u32 reg = 0;
+
+	nxp_c45_macsec_read(phydev, MACSEC_CFG, &reg);
+	if (enable)
+		reg |= MACSEC_CFG_S0I;
+	else
+		reg &= ~MACSEC_CFG_S0I;
+	nxp_c45_macsec_write(phydev, MACSEC_CFG, reg);
+}
+
+static bool nxp_c45_mac_addr_free(struct macsec_context *ctx)
+{
+	struct nxp_c45_phy *priv = ctx->phydev->priv;
+	struct nxp_c45_secy *pos, *tmp;
+
+	list_for_each_entry_safe(pos, tmp, &priv->macsec->secy_list, list) {
+		if (pos->secy == ctx->secy)
+			continue;
+
+		if (memcmp(pos->secy->netdev->dev_addr,
+			   ctx->secy->netdev->dev_addr, ETH_ALEN) == 0)
+			return false;
+	}
+
+	return true;
+}
+
+static bool nxp_c45_is_macsec_ptp_enabled(struct list_head *secy_list)
+{
+	struct nxp_c45_secy *pos, *tmp;
+
+	list_for_each_entry_safe(pos, tmp, secy_list, list)
+		if (pos->point_to_point)
+			return pos->point_to_point;
+
+	return false;
+}
+
+static struct nxp_c45_secy *nxp_c45_find_secy(struct list_head *secy_list,
+					      sci_t sci)
+{
+	struct nxp_c45_secy *pos, *tmp;
+
+	list_for_each_entry_safe(pos, tmp, secy_list, list)
+		if (pos->secy->sci == sci)
+			return pos;
+
+	return ERR_PTR(-ENOENT);
+}
+
+static void nxp_c45_rx_sc_en(struct phy_device *phydev,
+			     struct nxp_c45_rx_sc *rx_sc,
+			     bool en)
+{
+	u32 reg = 0;
+
+	nxp_c45_macsec_read(phydev, MACSEC_RXSC_CFG, &reg);
+	if (rx_sc->rx_sc->active && en)
+		reg |= MACSEC_RXSC_CFG_SCI_EN;
+	else
+		reg &= ~MACSEC_RXSC_CFG_SCI_EN;
+	nxp_c45_macsec_write(phydev, MACSEC_RXSC_CFG, reg);
+}
+
+static int nxp_c45_tx_sc_en_flt(struct phy_device *phydev, int secy_id, bool en)
+{
+	u32 tx_flt_base = TX_FLT_BASE(secy_id);
+	u32 reg = 0;
+
+	nxp_c45_macsec_read(phydev, TX_SC_FLT_MAC_CFG(tx_flt_base), &reg);
+	if (en)
+		reg |= TX_SC_FLT_EN;
+	else
+		reg &= ~TX_SC_FLT_EN;
+	nxp_c45_macsec_write(phydev, TX_SC_FLT_MAC_CFG(tx_flt_base), reg);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_dev_open(struct macsec_context *ctx)
+{
+	struct nxp_c45_phy *priv = ctx->phydev->priv;
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_secy *phy_secy;
+	int any_bit_set;
+	u32 reg = 0;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	nxp_c45_tx_sc_en_flt(phydev, phy_secy->secy_id, true);
+	nxp_c45_macsec_cfg_ptp(phydev, phy_secy->point_to_point);
+	if (phy_secy->rx_sc)
+		nxp_c45_rx_sc_en(phydev, phy_secy->rx_sc, true);
+
+	any_bit_set = find_first_bit(priv->macsec->secy_bitmap, TX_SC_MAX);
+	if (any_bit_set == TX_SC_MAX) {
+		nxp_c45_macsec_read(phydev, MACSEC_CFG, &reg);
+		reg |= MACSEC_CFG_BYPASS;
+		nxp_c45_macsec_write(phydev, MACSEC_CFG, reg);
+	}
+
+	set_bit(phy_secy->secy_id, priv->macsec->secy_bitmap);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_dev_stop(struct macsec_context *ctx)
+{
+	struct nxp_c45_phy *priv = ctx->phydev->priv;
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_secy *phy_secy;
+	int any_bit_set;
+	u32 reg = 0;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	nxp_c45_tx_sc_en_flt(phydev, phy_secy->secy_id, false);
+	if (phy_secy->rx_sc)
+		nxp_c45_rx_sc_en(phydev, phy_secy->rx_sc, false);
+	nxp_c45_macsec_cfg_ptp(phydev, false);
+
+	clear_bit(phy_secy->secy_id, priv->macsec->secy_bitmap);
+	any_bit_set = find_first_bit(priv->macsec->secy_bitmap, TX_SC_MAX);
+	if (any_bit_set == TX_SC_MAX) {
+		nxp_c45_macsec_read(phydev, MACSEC_CFG, &reg);
+		reg &= ~MACSEC_CFG_BYPASS;
+		nxp_c45_macsec_write(phydev, MACSEC_CFG, reg);
+	}
+
+	return 0;
+}
+
+static int nxp_c45_tx_sc_set_flt(struct macsec_context *ctx, int secy_id)
+{
+	const u8 *dev_addr = ctx->secy->netdev->dev_addr;
+	u32 tx_flt_base = TX_FLT_BASE(secy_id);
+	u32 mac_sa;
+
+	mac_sa = dev_addr[0] << 8 | dev_addr[1];
+	nxp_c45_macsec_write(ctx->phydev, TX_SC_FLT_MAC_DA_SA(tx_flt_base),
+			     mac_sa);
+	mac_sa = dev_addr[5] | dev_addr[4] << 8 |
+		dev_addr[3] << 16 | dev_addr[2] << 24;
+
+	nxp_c45_macsec_write(ctx->phydev, TX_SC_FLT_MAC_SA(tx_flt_base),
+			     mac_sa);
+	nxp_c45_macsec_write(ctx->phydev, TX_SC_FLT_MAC_CFG(tx_flt_base),
+			     TX_SC_FLT_BY_SA | TX_SC_FLT_EN | secy_id);
+
+	return 0;
+}
+
+static bool nxp_c45_port_valid(struct nxp_c45_secy *phy_secy, u16 port)
+{
+	if (phy_secy->secy->tx_sc.end_station &&
+	    __be16_to_cpu((__force __be16)port) != 1)
+		return false;
+
+	return true;
+}
+
+static bool nxp_c45_rx_sc_valid(struct nxp_c45_secy *phy_secy,
+				struct macsec_rx_sc *rx_sc)
+{
+	u16 port = (__force u64)rx_sc->sci >> (ETH_ALEN * 8);
+
+	if (phy_secy->point_to_point && phy_secy->secy_id != 0)
+		return false;
+
+	return nxp_c45_port_valid(phy_secy, port);
+}
+
+static bool nxp_c45_secy_cfg_valid(struct nxp_c45_secy *phy_secy, bool can_ptp)
+{
+	u16 port = (__force u64)phy_secy->secy->sci >> (ETH_ALEN * 8);
+
+	if (phy_secy->secy->tx_sc.scb)
+		return false;
+
+	if (phy_secy->secy->tx_sc.send_sci && phy_secy->secy->tx_sc.end_station)
+		return false;
+
+	if (!phy_secy->secy->tx_sc.send_sci &&
+	    !phy_secy->secy->tx_sc.end_station) {
+		if (!can_ptp)
+			return false;
+
+		if (phy_secy->secy_id != 0)
+			return false;
+
+		phy_secy->point_to_point = true;
+	} else {
+		phy_secy->point_to_point = false;
+	}
+
+	return nxp_c45_port_valid(phy_secy, port);
+}
+
+static int nxp_c45_update_tx_sc_secy_cfg(struct phy_device *phydev,
+					 struct nxp_c45_secy *phy_secy)
+{
+	u32 cfg = 0;
+
+	nxp_c45_macsec_read(phydev, MACSEC_TXSC_CFG, &cfg);
+
+	phydev_dbg(phydev, "XPN %s\n", phy_secy->secy->xpn ? "on" : "off");
+	if (phy_secy->secy->xpn)
+		cfg |= MACSEC_TXSC_CFG_XPN;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_XPN;
+
+	phydev_dbg(phydev, "key len %u\n", phy_secy->secy->key_len);
+	if (phy_secy->secy->key_len == 32)
+		cfg |= MACSEC_TXSC_CFG_AES_256;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_AES_256;
+
+	phydev_dbg(phydev, "encryption %s\n",
+		   phy_secy->secy->tx_sc.encrypt ? "on" : "off");
+	if (phy_secy->secy->tx_sc.encrypt)
+		cfg |= MACSEC_TXSC_CFG_ENCRYPT;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_ENCRYPT;
+
+	phydev_dbg(phydev, "protect frames %s\n",
+		   phy_secy->secy->protect_frames ? "on" : "off");
+	if (phy_secy->secy->protect_frames)
+		cfg |= MACSEC_TXSC_CFG_PROTECT;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_PROTECT;
+
+	phydev_dbg(phydev, "send sci %s\n",
+		   phy_secy->secy->tx_sc.send_sci ? "on" : "off");
+	if (phy_secy->secy->tx_sc.send_sci)
+		cfg |= MACSEC_TXSC_CFG_SEND_SCI;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_SEND_SCI;
+
+	phydev_dbg(phydev, "end station %s\n",
+		   phy_secy->secy->tx_sc.end_station ? "on" : "off");
+	if (phy_secy->secy->tx_sc.end_station)
+		cfg |= MACSEC_TXSC_CFG_END_STATION;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_END_STATION;
+
+	phydev_dbg(phydev, "scb %s\n",
+		   phy_secy->secy->tx_sc.scb ? "on" : "off");
+	if (phy_secy->secy->tx_sc.scb)
+		cfg |= MACSEC_TXSC_CFG_SCI;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_SCI;
+
+	nxp_c45_macsec_write(phydev, MACSEC_TXSC_CFG, cfg);
+
+	return 0;
+}
+
+static int nxp_c45_update_rx_sc_secy_cfg(struct phy_device *phydev,
+					 struct nxp_c45_secy *phy_secy)
+{
+	struct nxp_c45_rx_sc *rx_sc = phy_secy->rx_sc;
+	struct nxp_c45_phy *priv = phydev->priv;
+	u32 cfg = 0;
+
+	nxp_c45_macsec_read(phydev, MACSEC_RXSC_CFG, &cfg);
+	cfg &= ~MACSEC_RXSC_CFG_VF_MASK;
+	cfg |= phy_secy->secy->validate_frames << MACSEC_RXSC_CFG_VF_OFF;
+
+	phydev_dbg(phydev, "validate frames %u\n",
+		   phy_secy->secy->validate_frames);
+	phydev_dbg(phydev, "replay_protect %s window %u\n",
+		   phy_secy->secy->replay_protect ? "on" : "off",
+		   phy_secy->secy->replay_window);
+	if (phy_secy->secy->replay_protect) {
+		cfg |= MACSEC_RXSC_CFG_RP;
+		if (cfg & MACSEC_RXSC_CFG_SCI_EN) {
+			phydev_dbg(phydev, "RX SC enabled, window will not be updated\n");
+		} else {
+			phydev_dbg(phydev, "RX SC disabled, window will be updated\n");
+			nxp_c45_macsec_write(phydev, MACSEC_RPW,
+					     phy_secy->secy->replay_window);
+		}
+	} else {
+		cfg &= ~MACSEC_RXSC_CFG_RP;
+	}
+
+	phydev_dbg(phydev, "rx_sc->active %s\n",
+		   rx_sc->rx_sc->active ? "on" : "off");
+	if (rx_sc->rx_sc->active &&
+	    test_bit(phy_secy->secy_id, priv->macsec->secy_bitmap))
+		cfg |= MACSEC_RXSC_CFG_SCI_EN;
+	else
+		cfg &= ~MACSEC_RXSC_CFG_SCI_EN;
+
+	phydev_dbg(phydev, "key len %u\n", phy_secy->secy->key_len);
+	if (phy_secy->secy->key_len == 32)
+		cfg |= MACSEC_RXSC_CFG_AES_256;
+	else
+		cfg &= ~MACSEC_RXSC_CFG_AES_256;
+
+	phydev_dbg(phydev, "XPN %s\n", phy_secy->secy->xpn ? "on" : "off");
+	if (phy_secy->secy->xpn)
+		cfg |= MACSEC_RXSC_CFG_XPN;
+	else
+		cfg &= ~MACSEC_RXSC_CFG_XPN;
+
+	nxp_c45_macsec_write(phydev, MACSEC_RXSC_CFG, cfg);
+	return 0;
+}
+
+static int nxp_c45_update_key_status(struct phy_device *phydev,
+				     struct nxp_c45_tx_sa *tx_sa)
+{
+	bool key_a = tx_sa->is_key_a;
+	u32 cfg = 0;
+
+	nxp_c45_macsec_read(phydev, MACSEC_TXSC_CFG, &cfg);
+
+	cfg &= ~MACSEC_TXSC_CFG_AN_MASK;
+	cfg |= tx_sa->an << MACSEC_TXSC_CFG_AN_OFF;
+
+	if (!key_a)
+		cfg |= MACSEC_TXSC_CFG_ASA;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_ASA;
+
+	tx_sa->is_enabled = tx_sa->tx_sa->active;
+	if (tx_sa->tx_sa->active)
+		cfg |= MACSEC_TXSC_CFG_SCE;
+	else
+		cfg &= ~MACSEC_TXSC_CFG_SCE;
+
+	nxp_c45_macsec_write(phydev, MACSEC_TXSC_CFG, cfg);
+
+	return 0;
+}
+
+static int nxp_c45_tx_sa_disable(struct phy_device *phydev,
+				 struct nxp_c45_secy *phy_secy)
+{
+	u32 cfg = 0;
+
+	nxp_c45_macsec_read(phydev, MACSEC_TXSC_CFG, &cfg);
+	cfg &= ~MACSEC_TXSC_CFG_SCE;
+	nxp_c45_macsec_write(phydev, MACSEC_TXSC_CFG, cfg);
+
+	return 0;
+}
+
+static int nxp_c45_txsa_set_pn(struct phy_device *phydev,
+			       struct nxp_c45_tx_sa *tx_sa)
+{
+	const struct nxp_c45_macsec_sa_regs *sa_regs;
+
+	sa_regs = nxp_c45_get_macsec_sa_regs(tx_sa->is_key_a);
+
+	nxp_c45_macsec_write(phydev, sa_regs->txsa_npn, tx_sa->next_pn);
+	nxp_c45_macsec_write(phydev, sa_regs->txsa_xnpn, tx_sa->next_pn >> 32);
+
+	return 0;
+}
+
+static int nxp_c45_txsa_get_pn(struct phy_device *phydev,
+			       struct nxp_c45_tx_sa *tx_sa)
+{
+	const struct nxp_c45_macsec_sa_regs *sa_regs;
+	u32 reg = 0;
+
+	sa_regs = nxp_c45_get_macsec_sa_regs(tx_sa->is_key_a);
+
+	nxp_c45_macsec_read(phydev, sa_regs->txsa_npn, &reg);
+	tx_sa->next_pn = reg;
+	nxp_c45_macsec_read(phydev, sa_regs->txsa_xnpn, &reg);
+	tx_sa->next_pn |= (u64)reg << 32;
+
+	return 0;
+}
+
+static int nxp_c45_set_rxsa_key_cfg(struct macsec_context *ctx,
+				    bool key_a, bool upd)
+{
+	const struct nxp_c45_macsec_sa_regs *sa_regs;
+	u64 npn = ctx->sa.rx_sa->next_pn;
+	u32 cfg;
+
+	sa_regs = nxp_c45_get_macsec_sa_regs(key_a);
+
+	if (npn && !upd) {
+		nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_npn, npn);
+		nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_lnpn, npn);
+		if (ctx->secy->xpn) {
+			nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_xnpn,
+					     npn >> 32);
+			nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_lxnpn,
+					     npn >> 32);
+		}
+	} else if (npn && upd) {
+		if (npn > ctx->secy->replay_window)
+			npn -= ctx->secy->replay_window;
+		else
+			npn = 1;
+
+		nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_lnpn, npn);
+		if (ctx->secy->xpn)
+			nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_lxnpn,
+					     npn >> 32);
+	}
+
+	cfg = MACSEC_RXSA_CS_A | (ctx->sa.assoc_num << MACSEC_RXSA_CS_AN_OFF);
+	cfg |= ctx->sa.rx_sa->active ? MACSEC_RXSA_CS_EN : 0;
+	nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_cs, cfg);
+
+	return 0;
+}
+
+static int nxp_c45_txsa_set_key(struct macsec_context *ctx,
+				struct nxp_c45_tx_sa *tx_sa)
+{
+	const struct nxp_c45_macsec_sa_regs *sa_regs;
+	u32 ssci = (__force u32)tx_sa->tx_sa->ssci;
+	u32 key_size = ctx->secy->key_len / 4;
+	u32 salt_size = MACSEC_SALT_LEN / 4;
+	u32 *salt = (u32 *)tx_sa->salt;
+	u32 *key = (u32 *)tx_sa->key;
+	u32 reg;
+	int i;
+
+	sa_regs = nxp_c45_get_macsec_sa_regs(tx_sa->is_key_a);
+
+	for (i = 0; i < key_size; i++) {
+		reg = sa_regs->txsa_ka + i * 4;
+		nxp_c45_macsec_write(ctx->phydev, reg,
+				     (__force u32)cpu_to_be32(key[i]));
+	}
+
+	if (ctx->secy->xpn) {
+		for (i = 0; i < salt_size; i++) {
+			reg = sa_regs->txsa_salt + (2 - i) * 4;
+			nxp_c45_macsec_write(ctx->phydev, reg,
+					     (__force u32)cpu_to_be32(salt[i]));
+		}
+
+		nxp_c45_macsec_write(ctx->phydev, sa_regs->txsa_ssci,
+				     (__force u32)cpu_to_be32(ssci));
+	}
+
+	nxp_c45_macsec_write(ctx->phydev, sa_regs->txsa_cs, MACSEC_TXSA_CS_A);
+
+	return 0;
+}
+
+static int nxp_c45_commit_rx_sc_cfg(struct phy_device *phydev,
+				    struct nxp_c45_secy *phy_secy)
+{
+	struct nxp_c45_rx_sc *rx_sc = phy_secy->rx_sc;
+	u64 sci = (__force u64)rx_sc->rx_sc->sci;
+
+	nxp_c45_macsec_write(phydev, MACSEC_RXSC_SCI_1H,
+			     (__force u32)cpu_to_be32(sci));
+	nxp_c45_macsec_write(phydev, MACSEC_RXSC_SCI_2H,
+			     (__force u32)cpu_to_be32(sci >> 32));
+
+	return nxp_c45_update_rx_sc_secy_cfg(phydev, phy_secy);
+}
+
+static int nxp_c45_disable_rxsa_key(struct phy_device *phydev, bool key_a)
+{
+	const struct nxp_c45_macsec_sa_regs *sa_regs;
+	u32 reg = 0;
+
+	sa_regs = nxp_c45_get_macsec_sa_regs(key_a);
+
+	nxp_c45_macsec_read(phydev, sa_regs->rxsa_cs, &reg);
+	reg &= ~MACSEC_RXSA_CS_EN;
+	nxp_c45_macsec_write(phydev, sa_regs->rxsa_cs, reg);
+
+	return 0;
+}
+
+static int nxp_c45_rx_sc_del(struct phy_device *phydev,
+			     struct nxp_c45_rx_sc *rx_sc)
+{
+	nxp_c45_macsec_write(phydev, MACSEC_RXSC_CFG, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSC_SCI_1H, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSC_SCI_2H, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RPW, 0);
+
+	if (rx_sc->rx_sa_a)
+		nxp_c45_disable_rxsa_key(phydev, true);
+
+	if (rx_sc->rx_sa_b)
+		nxp_c45_disable_rxsa_key(phydev, false);
+
+	return 0;
+}
+
+static int nxp_c45_set_rxsa_key(struct macsec_context *ctx, bool key_a)
+{
+	u32 *salt = (u32 *)ctx->sa.rx_sa->key.salt.bytes;
+	const struct nxp_c45_macsec_sa_regs *sa_regs;
+	u32 ssci = (__force u32)ctx->sa.rx_sa->ssci;
+	u32 key_size = ctx->secy->key_len / 4;
+	u32 salt_size = MACSEC_SALT_LEN / 4;
+	u32 *key = (u32 *)ctx->sa.key;
+	u32 reg;
+	int i;
+
+	sa_regs = nxp_c45_get_macsec_sa_regs(key_a);
+
+	for (i = 0; i < key_size; i++) {
+		reg = sa_regs->rxsa_ka + i * 4;
+		nxp_c45_macsec_write(ctx->phydev, reg,
+				     (__force u32)cpu_to_be32(key[i]));
+	}
+
+	if (ctx->secy->xpn) {
+		for (i = 0; i < salt_size; i++) {
+			reg = sa_regs->rxsa_salt + (2 - i) * 4;
+			nxp_c45_macsec_write(ctx->phydev, reg,
+					     (__force u32)cpu_to_be32(salt[i]));
+		}
+		nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_ssci,
+				     (__force u32)cpu_to_be32(ssci));
+	}
+
+	nxp_c45_set_rxsa_key_cfg(ctx, key_a, false);
+
+	return 0;
+}
+
+static void nxp_c45_tx_sc_clear(struct nxp_c45_secy *phy_secy)
+{
+	struct nxp_c45_tx_sa **tx_sa;
+	u8 i;
+
+	tx_sa = phy_secy->tx_sa;
+	for (i = 0; i < ARRAY_SIZE(phy_secy->tx_sa); i++) {
+		kfree(tx_sa[i]);
+		tx_sa[i] = NULL;
+	}
+}
+
+static int nxp_c45_mdo_add_secy(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	u64 sci = (__force u64)ctx->secy->sci;
+	struct nxp_c45_secy *phy_secy;
+	bool can_ptp;
+	int idx;
+	u32 reg;
+
+	phydev_dbg(ctx->phydev, "add secy SCI %llu\n", ctx->secy->sci);
+
+	if (!nxp_c45_mac_addr_free(ctx))
+		return -EBUSY;
+
+	if (nxp_c45_is_macsec_ptp_enabled(&priv->macsec->secy_list))
+		return -EBUSY;
+
+	idx = find_first_zero_bit(priv->macsec->tx_sc_bitmap, TX_SC_MAX);
+	if (idx == TX_SC_MAX)
+		return -EBUSY;
+
+	phy_secy = kzalloc(sizeof(*phy_secy), GFP_KERNEL);
+	if (!phy_secy)
+		return -ENOMEM;
+
+	phy_secy->secy = ctx->secy;
+	phy_secy->secy_id = idx;
+	phy_secy->enabled_an = ctx->secy->tx_sc.encoding_sa;
+	phy_secy->tx_sa_key_a = true;
+
+	/* If point-to-point mode is to be enabled, only one SecY may be
+	 * enabled, namely the new one.
+	 */
+	can_ptp = list_count_nodes(&priv->macsec->secy_list) == 0;
+	if (!nxp_c45_secy_cfg_valid(phy_secy, can_ptp)) {
+		kfree(phy_secy);
+		return -EINVAL;
+	}
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	nxp_c45_macsec_write(phydev, MACSEC_TXSC_SCI_1H,
+			     (__force u32)cpu_to_be32(sci));
+	nxp_c45_macsec_write(phydev, MACSEC_TXSC_SCI_2H,
+			     (__force u32)cpu_to_be32(sci >> 32));
+	nxp_c45_tx_sc_set_flt(ctx, phy_secy->secy_id);
+	nxp_c45_update_tx_sc_secy_cfg(phydev, phy_secy);
+	if (phy_interrupt_is_valid(phydev)) {
+		nxp_c45_macsec_read(phydev, MACSEC_EVER, &reg);
+		reg |= TX_SC_BIT(phy_secy->secy_id);
+		nxp_c45_macsec_write(phydev, MACSEC_EVER, reg);
+	}
+	set_bit(idx, priv->macsec->tx_sc_bitmap);
+	list_add_tail(&phy_secy->list, &priv->macsec->secy_list);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_upd_secy(struct macsec_context *ctx)
+{
+	struct nxp_c45_tx_sa *new_tx_sa, *old_tx_sa;
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	bool can_ptp;
+
+	phydev_dbg(phydev, "update secy SCI %llu\n", ctx->secy->sci);
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	if (!nxp_c45_mac_addr_free(ctx))
+		return -EBUSY;
+
+	/* If point-to-point mode is to be enabled, only one SecY may be
+	 * enabled, namely the one being updated.
+	 */
+	can_ptp = list_count_nodes(&priv->macsec->secy_list) == 1;
+	if (!nxp_c45_secy_cfg_valid(phy_secy, can_ptp))
+		return -EINVAL;
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	nxp_c45_tx_sc_set_flt(ctx, phy_secy->secy_id);
+	nxp_c45_update_tx_sc_secy_cfg(phydev, phy_secy);
+
+	if (phy_secy->enabled_an != ctx->secy->tx_sc.encoding_sa) {
+		old_tx_sa = phy_secy->tx_sa[phy_secy->enabled_an];
+		phy_secy->enabled_an = ctx->secy->tx_sc.encoding_sa;
+		new_tx_sa = phy_secy->tx_sa[phy_secy->enabled_an];
+		if (!new_tx_sa) {
+			nxp_c45_tx_sa_disable(phydev, phy_secy);
+			goto disable_old_tx_sa;
+		}
+
+		if (!new_tx_sa->tx_sa->active) {
+			nxp_c45_tx_sa_disable(phydev, phy_secy);
+			goto disable_old_tx_sa;
+		}
+
+		new_tx_sa->is_key_a = phy_secy->tx_sa_key_a;
+		phy_secy->tx_sa_key_a = !phy_secy->tx_sa_key_a;
+		nxp_c45_txsa_set_key(ctx, new_tx_sa);
+		nxp_c45_txsa_set_pn(phydev, new_tx_sa);
+		nxp_c45_update_key_status(phydev, new_tx_sa);
+
+disable_old_tx_sa:
+		if (old_tx_sa) {
+			old_tx_sa->is_enabled = false;
+			nxp_c45_txsa_get_pn(phydev, old_tx_sa);
+		}
+	}
+
+	if (test_bit(phy_secy->secy_id, priv->macsec->secy_bitmap))
+		nxp_c45_macsec_cfg_ptp(phydev, phy_secy->point_to_point);
+
+	if (phy_secy->rx_sc)
+		nxp_c45_update_rx_sc_secy_cfg(phydev, phy_secy);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_del_secy(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	u32 reg;
+
+	phydev_dbg(phydev, "delete secy SCI %llu\n", ctx->secy->sci);
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	nxp_c45_mdo_dev_stop(ctx);
+	nxp_c45_tx_sa_disable(phydev, phy_secy);
+	nxp_c45_tx_sc_clear(phy_secy);
+	if (phy_secy->rx_sc) {
+		nxp_c45_rx_sc_del(phydev, phy_secy->rx_sc);
+		kfree(phy_secy->rx_sc);
+	}
+
+	if (phy_interrupt_is_valid(phydev)) {
+		nxp_c45_macsec_read(phydev, MACSEC_EVER, &reg);
+		reg &= ~TX_SC_BIT(phy_secy->secy_id);
+		nxp_c45_macsec_write(phydev, MACSEC_EVER, reg);
+	}
+
+	clear_bit(phy_secy->secy_id, priv->macsec->tx_sc_bitmap);
+	list_del(&phy_secy->list);
+	kfree(phy_secy);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_add_rxsc(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+
+	phydev_dbg(phydev, "add RX SC %s\n",
+		   ctx->rx_sc->active ? "enabled" : "disabled");
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	if (phy_secy->rx_sc)
+		return -ENOMEM;
+
+	if (!nxp_c45_rx_sc_valid(phy_secy, ctx->rx_sc))
+		return -EINVAL;
+
+	rx_sc = kzalloc(sizeof(*rx_sc), GFP_KERNEL);
+	if (!rx_sc)
+		return -ENOMEM;
+
+	phy_secy->rx_sc = rx_sc;
+	rx_sc->rx_sc = ctx->rx_sc;
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	nxp_c45_commit_rx_sc_cfg(phydev, phy_secy);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_upd_rxsc(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+
+	phydev_dbg(phydev, "update RX SC %llu %s\n", ctx->rx_sc->sci,
+		   ctx->rx_sc->active ? "enabled" : "disabled");
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	rx_sc = phy_secy->rx_sc;
+	if (rx_sc->rx_sc != ctx->rx_sc)
+		return -EINVAL;
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	nxp_c45_commit_rx_sc_cfg(phydev, phy_secy);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_del_rxsc(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+
+	phydev_dbg(phydev, "delete RX SC %llu %s\n", ctx->rx_sc->sci,
+		   ctx->rx_sc->active ? "enabled" : "disabled");
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	rx_sc = phy_secy->rx_sc;
+	if (rx_sc->rx_sc != ctx->rx_sc)
+		return -EINVAL;
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	nxp_c45_rx_sc_del(phydev, rx_sc);
+	kfree(rx_sc);
+	phy_secy->rx_sc = NULL;
+
+	return 0;
+}
+
+static int nxp_c45_mdo_add_rxsa(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+	struct macsec_rx_sa *rx_sa;
+	u8 an = ctx->sa.assoc_num;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	rx_sc = phy_secy->rx_sc;
+	if (rx_sc->rx_sc != ctx->sa.rx_sa->sc)
+		return -EINVAL;
+
+	rx_sa = ctx->sa.rx_sa;
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	if (!rx_sc->rx_sa_a) {
+		phydev_dbg(phydev, "add RX SA A %u %s\n",
+			   an, rx_sa->active ? "enabled" : "disabled");
+		nxp_c45_set_rxsa_key(ctx, true);
+		rx_sc->rx_sa_a = rx_sa;
+		return 0;
+	}
+
+	if (!rx_sc->rx_sa_b) {
+		phydev_dbg(phydev, "add RX SA B %u %s\n",
+			   an, rx_sa->active ? "enabled" : "disabled");
+		nxp_c45_set_rxsa_key(ctx, false);
+		rx_sc->rx_sa_b = rx_sa;
+		return 0;
+	}
+
+	return -ENOMEM;
+}
+
+static int nxp_c45_mdo_upd_rxsa(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+	struct macsec_rx_sa *rx_sa;
+	u8 an = ctx->sa.assoc_num;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	rx_sc = phy_secy->rx_sc;
+	if (rx_sc->rx_sc != ctx->sa.rx_sa->sc)
+		return -EINVAL;
+
+	rx_sa = ctx->sa.rx_sa;
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	if (rx_sc->rx_sa_a == rx_sa) {
+		phydev_dbg(phydev, "update RX SA A %u %s\n",
+			   an, rx_sa->active ? "enabled" : "disabled");
+		nxp_c45_set_rxsa_key_cfg(ctx, true, true);
+		return 0;
+	}
+
+	if (rx_sc->rx_sa_b == rx_sa) {
+		phydev_dbg(phydev, "update RX SA B %u %s\n",
+			   an, rx_sa->active ? "enabled" : "disabled");
+		nxp_c45_set_rxsa_key_cfg(ctx, false, true);
+		return 0;
+	}
+
+	return -ENOENT;
+}
+
+static int nxp_c45_mdo_del_rxsa(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+	struct macsec_rx_sa *rx_sa;
+	u8 an = ctx->sa.assoc_num;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	rx_sc = phy_secy->rx_sc;
+	if (rx_sc->rx_sc != ctx->sa.rx_sa->sc)
+		return -EINVAL;
+
+	rx_sa = ctx->sa.rx_sa;
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	if (rx_sc->rx_sa_a == rx_sa) {
+		phydev_dbg(phydev, "delete RX SA A %u %s\n",
+			   an, rx_sa->active ? "enabled" : "disabled");
+		nxp_c45_disable_rxsa_key(phydev, true);
+		rx_sc->rx_sa_a = NULL;
+		return 0;
+	}
+
+	if (rx_sc->rx_sa_b == rx_sa) {
+		phydev_dbg(phydev, "delete RX SA B %u %s\n",
+			   an, rx_sa->active ? "enabled" : "disabled");
+		nxp_c45_disable_rxsa_key(phydev, false);
+		rx_sc->rx_sa_b = NULL;
+		return 0;
+	}
+
+	return -ENOENT;
+}
+
+static int nxp_c45_mdo_add_txsa(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_tx_sa *tx_sa;
+	u8 sa = ctx->sa.assoc_num;
+
+	phydev_dbg(phydev, "add TX SA %u %s\n", ctx->sa.assoc_num,
+		   ctx->sa.tx_sa->active ? "enabled" : "disabled");
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	if (phy_secy->tx_sa[sa])
+		return -EBUSY;
+
+	tx_sa = kzalloc(sizeof(*tx_sa), GFP_KERNEL);
+	tx_sa->an = ctx->sa.assoc_num;
+	memcpy(tx_sa->key, ctx->sa.key, MACSEC_MAX_KEY_LEN);
+	memcpy(tx_sa->salt, ctx->sa.tx_sa->key.salt.bytes, MACSEC_SALT_LEN);
+	tx_sa->tx_sa = ctx->sa.tx_sa;
+	tx_sa->next_pn = ctx->sa.tx_sa->next_pn;
+	phy_secy->tx_sa[sa] = tx_sa;
+
+	if (tx_sa->an == phy_secy->enabled_an && tx_sa->tx_sa->active) {
+		nxp_c45_select_secy(phydev, phy_secy->secy_id);
+		tx_sa->is_key_a = phy_secy->tx_sa_key_a;
+		phy_secy->tx_sa_key_a = !phy_secy->tx_sa_key_a;
+		nxp_c45_txsa_set_key(ctx, tx_sa);
+		nxp_c45_txsa_set_pn(phydev, tx_sa);
+		nxp_c45_update_key_status(phydev, tx_sa);
+	}
+
+	return 0;
+}
+
+static int nxp_c45_mdo_upd_txsa(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	u64 next_pn = ctx->sa.tx_sa->next_pn;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_tx_sa *tx_sa;
+	u8 sa = ctx->sa.assoc_num;
+
+	phydev_dbg(phydev, "update TX SA %u %s\n", ctx->sa.assoc_num,
+		   ctx->sa.tx_sa->active ? "enabled" : "disabled");
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	tx_sa = phy_secy->tx_sa[sa];
+	if (!tx_sa)
+		return -EINVAL;
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	if (tx_sa->is_enabled && tx_sa->tx_sa->active && next_pn) {
+		tx_sa->next_pn = next_pn;
+		nxp_c45_txsa_set_pn(phydev, tx_sa);
+
+		return 0;
+	}
+
+	if (tx_sa->is_enabled && !tx_sa->tx_sa->active) {
+		if (next_pn)
+			tx_sa->next_pn = next_pn;
+		else
+			nxp_c45_txsa_get_pn(phydev, tx_sa);
+
+		nxp_c45_update_key_status(phydev, tx_sa);
+
+		return 0;
+	}
+
+	if (!tx_sa->is_enabled && tx_sa->tx_sa->active &&
+	    tx_sa->an == phy_secy->enabled_an) {
+		if (next_pn)
+			tx_sa->next_pn = next_pn;
+
+		tx_sa->is_key_a = phy_secy->tx_sa_key_a;
+		phy_secy->tx_sa_key_a = !phy_secy->tx_sa_key_a;
+		nxp_c45_txsa_set_key(ctx, tx_sa);
+		nxp_c45_txsa_set_pn(phydev, tx_sa);
+		nxp_c45_update_key_status(phydev, tx_sa);
+
+		return 0;
+	}
+
+	if (!tx_sa->is_enabled && !tx_sa->tx_sa->active)
+		tx_sa->next_pn = next_pn;
+
+	return 0;
+}
+
+static int nxp_c45_mdo_del_txsa(struct macsec_context *ctx)
+{
+	struct macsec_tx_sa *ctx_sa = ctx->sa.tx_sa;
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_tx_sa *tx_sa;
+	u8 sa = ctx->sa.assoc_num;
+
+	phydev_dbg(phydev, "delete TX SA %u %s\n", sa,
+		   ctx_sa->active ? "enabled" : "disabled");
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	tx_sa = phy_secy->tx_sa[sa];
+	if (!tx_sa)
+		return -EINVAL;
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	if (tx_sa->is_enabled)
+		nxp_c45_update_key_status(phydev, tx_sa);
+
+	phy_secy->tx_sa[sa] = NULL;
+	kfree(tx_sa);
+
+	return 0;
+}
+
+static const struct macsec_ops nxp_c45_macsec_ops = {
+	.mdo_dev_open = nxp_c45_mdo_dev_open,
+	.mdo_dev_stop = nxp_c45_mdo_dev_stop,
+	.mdo_add_secy = nxp_c45_mdo_add_secy,
+	.mdo_upd_secy = nxp_c45_mdo_upd_secy,
+	.mdo_del_secy = nxp_c45_mdo_del_secy,
+	.mdo_add_rxsc = nxp_c45_mdo_add_rxsc,
+	.mdo_upd_rxsc = nxp_c45_mdo_upd_rxsc,
+	.mdo_del_rxsc = nxp_c45_mdo_del_rxsc,
+	.mdo_add_rxsa = nxp_c45_mdo_add_rxsa,
+	.mdo_upd_rxsa = nxp_c45_mdo_upd_rxsa,
+	.mdo_del_rxsa = nxp_c45_mdo_del_rxsa,
+	.mdo_add_txsa = nxp_c45_mdo_add_txsa,
+	.mdo_upd_txsa = nxp_c45_mdo_upd_txsa,
+	.mdo_del_txsa = nxp_c45_mdo_del_txsa,
+};
+
+int nxp_c45_macsec_probe(struct phy_device *phydev)
+{
+	struct nxp_c45_phy *priv = phydev->priv;
+
+	priv->macsec = kzalloc(sizeof(*priv->macsec), GFP_KERNEL);
+	if (!priv->macsec)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&priv->macsec->secy_list);
+	phydev->macsec_ops = &nxp_c45_macsec_ops;
+
+	return 0;
+}
+
+void nxp_c45_handle_macsec_interrupt(struct phy_device *phydev,
+				     irqreturn_t *ret)
+{
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct nxp_c45_secy *pos, *tmp;
+	struct nxp_c45_tx_sa *tx_sa;
+	int secy_id;
+	u32 reg = 0;
+
+	if (!phydev->macsec_ops)
+		return;
+
+	do {
+		nxp_c45_macsec_read(phydev, MACSEC_EVR, &reg);
+		if (!reg)
+			return;
+
+		secy_id = MACSEC_REG_SIZE - ffs(reg);
+		list_for_each_entry_safe(pos, tmp, &priv->macsec->secy_list,
+					 list)
+			if (pos->secy_id == secy_id)
+				break;
+
+		phydev_dbg(phydev, "pn_wrapped: tx sc %d, tx sa an %u\n",
+			   pos->secy_id, pos->enabled_an);
+		tx_sa = pos->tx_sa[pos->enabled_an];
+		macsec_pn_wrapped(pos->secy, tx_sa->tx_sa);
+		nxp_c45_macsec_write(phydev, MACSEC_EVR,
+				     TX_SC_BIT(pos->secy_id));
+		*ret = IRQ_HANDLED;
+	} while (reg);
+}
diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
index 7ab080ff02df..5bf7caa4e63d 100644
--- a/drivers/net/phy/nxp-c45-tja11xx.c
+++ b/drivers/net/phy/nxp-c45-tja11xx.c
@@ -14,9 +14,10 @@
 #include <linux/processor.h>
 #include <linux/property.h>
 #include <linux/ptp_classify.h>
-#include <linux/ptp_clock_kernel.h>
 #include <linux/net_tstamp.h>
 
+#include "nxp-c45-tja11xx.h"
+
 #define PHY_ID_TJA_1103			0x001BB010
 #define PHY_ID_TJA_1120			0x001BB031
 
@@ -75,9 +76,11 @@
 #define PORT_CONTROL_EN			BIT(14)
 
 #define VEND1_PORT_ABILITIES		0x8046
+#define MACSEC_ABILITY			BIT(5)
 #define PTP_ABILITY			BIT(3)
 
 #define VEND1_PORT_FUNC_IRQ_EN		0x807A
+#define MACSEC_IRQS			BIT(5)
 #define PTP_IRQS			BIT(3)
 
 #define VEND1_PTP_IRQ_ACK		0x9008
@@ -148,7 +151,6 @@
 
 #define TS_SEC_MASK			GENMASK(1, 0)
 
-#define VEND1_PORT_FUNC_ENABLES		0x8048
 #define PTP_ENABLE			BIT(3)
 #define PHY_TEST_ENABLE			BIT(0)
 
@@ -281,25 +283,6 @@ struct nxp_c45_phy_data {
 			    irqreturn_t *irq_status);
 };
 
-struct nxp_c45_phy {
-	const struct nxp_c45_phy_data *phy_data;
-	struct phy_device *phydev;
-	struct mii_timestamper mii_ts;
-	struct ptp_clock *ptp_clock;
-	struct ptp_clock_info caps;
-	struct sk_buff_head tx_queue;
-	struct sk_buff_head rx_queue;
-	/* used to access the PTP registers atomic */
-	struct mutex ptp_lock;
-	int hwts_tx;
-	int hwts_rx;
-	u32 tx_delay;
-	u32 rx_delay;
-	struct timespec64 extts_ts;
-	int extts_index;
-	bool extts;
-};
-
 static const
 struct nxp_c45_phy_data *nxp_c45_get_data(struct phy_device *phydev)
 {
@@ -1218,12 +1201,25 @@ static int nxp_c45_start_op(struct phy_device *phydev)
 
 static int nxp_c45_config_intr(struct phy_device *phydev)
 {
-	if (phydev->interrupts == PHY_INTERRUPT_ENABLED)
+	int ret;
+
+	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+		ret = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1,
+				       VEND1_PORT_FUNC_IRQ_EN, MACSEC_IRQS);
+		if (ret)
+			return ret;
+
 		return phy_set_bits_mmd(phydev, MDIO_MMD_VEND1,
 					VEND1_PHY_IRQ_EN, PHY_IRQ_LINK_EVENT);
-	else
-		return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
-					  VEND1_PHY_IRQ_EN, PHY_IRQ_LINK_EVENT);
+	}
+
+	ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
+				 VEND1_PORT_FUNC_IRQ_EN, MACSEC_IRQS);
+	if (ret)
+		return ret;
+
+	return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
+				  VEND1_PHY_IRQ_EN, PHY_IRQ_LINK_EVENT);
 }
 
 static int tja1103_config_intr(struct phy_device *phydev)
@@ -1289,6 +1285,7 @@ static irqreturn_t nxp_c45_handle_interrupt(struct phy_device *phydev)
 	}
 
 	data->nmi_handler(phydev, &ret);
+	nxp_c45_handle_macsec_interrupt(phydev, &ret);
 
 	return ret;
 }
@@ -1614,6 +1611,7 @@ static int nxp_c45_config_init(struct phy_device *phydev)
 
 	nxp_c45_counters_enable(phydev);
 	nxp_c45_ptp_init(phydev);
+	nxp_c45_macsec_config_init(phydev);
 
 	return nxp_c45_start_op(phydev);
 }
@@ -1629,7 +1627,9 @@ static int nxp_c45_get_features(struct phy_device *phydev)
 static int nxp_c45_probe(struct phy_device *phydev)
 {
 	struct nxp_c45_phy *priv;
-	int ptp_ability;
+	bool macsec_ability;
+	int phy_abilities;
+	bool ptp_ability;
 	int ret = 0;
 
 	priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
@@ -1645,9 +1645,9 @@ static int nxp_c45_probe(struct phy_device *phydev)
 
 	mutex_init(&priv->ptp_lock);
 
-	ptp_ability = phy_read_mmd(phydev, MDIO_MMD_VEND1,
-				   VEND1_PORT_ABILITIES);
-	ptp_ability = !!(ptp_ability & PTP_ABILITY);
+	phy_abilities = phy_read_mmd(phydev, MDIO_MMD_VEND1,
+				     VEND1_PORT_ABILITIES);
+	ptp_ability = !!(phy_abilities & PTP_ABILITY);
 	if (!ptp_ability) {
 		phydev_dbg(phydev, "the phy does not support PTP");
 		goto no_ptp_support;
@@ -1666,6 +1666,20 @@ static int nxp_c45_probe(struct phy_device *phydev)
 	}
 
 no_ptp_support:
+	macsec_ability = !!(phy_abilities & MACSEC_ABILITY);
+	if (!macsec_ability) {
+		phydev_info(phydev, "the phy does not support MACsec\n");
+		goto no_macsec_support;
+	}
+
+	if (IS_ENABLED(CONFIG_MACSEC)) {
+		ret = nxp_c45_macsec_probe(phydev);
+		phydev_dbg(phydev, "MACsec support enabled.");
+	} else {
+		phydev_dbg(phydev, "MACsec support not enabled even if the phy supports it");
+	}
+
+no_macsec_support:
 
 	return ret;
 }
diff --git a/drivers/net/phy/nxp-c45-tja11xx.h b/drivers/net/phy/nxp-c45-tja11xx.h
new file mode 100644
index 000000000000..905c5afb0a5e
--- /dev/null
+++ b/drivers/net/phy/nxp-c45-tja11xx.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* NXP C45 PHY driver header file
+ * Copyright 2023 NXP
+ * Author: Radu Pirea <radu-nicolae.pirea@oss.nxp.com>
+ */
+
+#include <linux/ptp_clock_kernel.h>
+
+#define VEND1_PORT_FUNC_ENABLES		0x8048
+
+struct nxp_c45_macsec;
+
+struct nxp_c45_phy {
+	const struct nxp_c45_phy_data *phy_data;
+	struct phy_device *phydev;
+	struct mii_timestamper mii_ts;
+	struct ptp_clock *ptp_clock;
+	struct ptp_clock_info caps;
+	struct sk_buff_head tx_queue;
+	struct sk_buff_head rx_queue;
+	/* used to access the PTP registers atomic */
+	struct mutex ptp_lock;
+	int hwts_tx;
+	int hwts_rx;
+	u32 tx_delay;
+	u32 rx_delay;
+	struct timespec64 extts_ts;
+	int extts_index;
+	bool extts;
+	struct nxp_c45_macsec *macsec;
+};
+
+#if IS_ENABLED(CONFIG_MACSEC)
+void nxp_c45_macsec_config_init(struct phy_device *phydev);
+void nxp_c45_handle_macsec_interrupt(struct phy_device *phydev,
+				     irqreturn_t *ret);
+int nxp_c45_macsec_probe(struct phy_device *phydev);
+#else
+static inline
+void nxp_c45_macsec_config_init(struct phy_device *phydev)
+{
+}
+
+static inline
+void nxp_c45_handle_macsec_interrupt(struct phy_device *phydev,
+				     irqreturn_t *ret)
+{
+}
+
+static inline
+int nxp_c45_macsec_probe(struct phy_device *phydev)
+{
+	return 0;
+}
+#endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [RFC net-next v2 4/5] net: phy: nxp-c45-tja11xx: add MACsec statistics
  2023-08-24  9:16 [RFC net-next v2 0/5] Add MACsec support for TJA11XX C45 PHYs Radu Pirea (NXP OSS)
                   ` (2 preceding siblings ...)
  2023-08-24  9:16 ` [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support Radu Pirea (NXP OSS)
@ 2023-08-24  9:16 ` Radu Pirea (NXP OSS)
  2023-08-25 13:41   ` Sabrina Dubroca
  2023-08-24  9:16 ` [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag Radu Pirea (NXP OSS)
  4 siblings, 1 reply; 34+ messages in thread
From: Radu Pirea (NXP OSS) @ 2023-08-24  9:16 UTC (permalink / raw)
  To: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat
  Cc: netdev, linux-kernel, Radu Pirea (NXP OSS)

Add MACsec statistics callbacks.
The statistics registers must be cleared (set to 0) when the SC/SA is
deleted so that relevant values are read the next time the SC/SA is used.

Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
---
 drivers/net/phy/nxp-c45-tja11xx-macsec.c | 416 ++++++++++++++++++++++-
 1 file changed, 415 insertions(+), 1 deletion(-)

diff --git a/drivers/net/phy/nxp-c45-tja11xx-macsec.c b/drivers/net/phy/nxp-c45-tja11xx-macsec.c
index 1567865b8de4..edfdd2540d39 100644
--- a/drivers/net/phy/nxp-c45-tja11xx-macsec.c
+++ b/drivers/net/phy/nxp-c45-tja11xx-macsec.c
@@ -140,6 +140,33 @@
 #define ADAPTER_EN	BIT(6)
 #define MACSEC_EN	BIT(5)
 
+#define MACSEC_INOV1HS			0x0140
+#define MACSEC_INOV2HS			0x0144
+#define MACSEC_INOD1HS			0x0148
+#define MACSEC_INOD2HS			0x014C
+#define MACSEC_RXSCIPUS			0x0150
+#define MACSEC_RXSCIPDS			0x0154
+#define MACSEC_RXSCIPLS			0x0158
+#define MACSEC_RXAN0INUSS		0x0160
+#define MACSEC_RXAN0IPUSS		0x0170
+#define MACSEC_RXSAAIPOS		0x0194
+#define MACSEC_RXSAAIPIS		0x01B0
+#define MACSEC_RXSAAIPNVS		0x01B4
+#define MACSEC_RXSABIPOS		0x01D4
+#define MACSEC_RXSABIPIS		0x01F0
+#define MACSEC_RXSABIPNVS		0x01F4
+#define MACSEC_OPUS			0x021C
+#define MACSEC_OPTLS			0x022C
+#define MACSEC_OOE1HS			0x0248
+#define MACSEC_OOE2HS			0x024C
+#define MACSEC_TXSAAOPPS		0x028C
+#define MACSEC_TXSAAOPES		0x0290
+#define MACSEC_TXSABOPPS		0x02CC
+#define MACSEC_TXSABOPES		0x02D0
+#define MACSEC_INPWTS			0x0630
+#define MACSEC_INPBTS			0x0638
+#define MACSEC_IPSNFS			0x063C
+
 struct nxp_c45_rx_sc {
 	struct macsec_rx_sc *rx_sc;
 	struct macsec_rx_sa *rx_sa_a;
@@ -148,6 +175,7 @@ struct nxp_c45_rx_sc {
 
 struct nxp_c45_tx_sa {
 	struct macsec_tx_sa *tx_sa;
+	struct macsec_tx_sa_stats stats;
 	u8 key[MACSEC_MAX_KEY_LEN];
 	u8 salt[MACSEC_SALT_LEN];
 	u8 an;
@@ -787,6 +815,113 @@ static int nxp_c45_disable_rxsa_key(struct phy_device *phydev, bool key_a)
 	return 0;
 }
 
+static void nxp_c45_clear_dev_stats(struct phy_device *phydev,
+				    struct nxp_c45_secy *phy_secy)
+{
+	nxp_c45_macsec_write(phydev, MACSEC_OPUS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_OPTLS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_OOE1HS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_OOE2HS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_TXSAAOPES, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_TXSABOPES, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_TXSAAOPPS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_TXSABOPPS, 0);
+
+	if (phy_secy->rx_sc) {
+		nxp_c45_macsec_write(phydev, MACSEC_INPBTS, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_INPWTS, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_IPSNFS, 0);
+	}
+}
+
+static void nxp_c45_clear_rx_sc_stats(struct phy_device *phydev)
+{
+	int i;
+
+	nxp_c45_macsec_write(phydev, MACSEC_RXSAAIPIS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSAAIPNVS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSAAIPOS, 0);
+
+	nxp_c45_macsec_write(phydev, MACSEC_RXSABIPIS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSABIPNVS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSABIPOS, 0);
+
+	nxp_c45_macsec_write(phydev, MACSEC_INOD1HS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_INOD2HS, 0);
+
+	nxp_c45_macsec_write(phydev, MACSEC_INOV1HS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_INOV2HS, 0);
+
+	nxp_c45_macsec_write(phydev, MACSEC_RXSCIPDS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSCIPLS, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXSCIPUS, 0);
+
+	for (i = 0; i < MACSEC_NUM_AN; i++)
+		nxp_c45_macsec_write(phydev, MACSEC_RXAN0INUSS + i * 4, 0);
+
+	for (i = 0; i < MACSEC_NUM_AN; i++)
+		nxp_c45_macsec_write(phydev, MACSEC_RXAN0IPUSS + i * 4, 0);
+}
+
+static void nxp_c45_tx_sa_stats_read(struct phy_device *phydev,
+				     struct nxp_c45_tx_sa *tx_sa,
+				     struct macsec_tx_sa_stats *stats)
+{
+	u32 reg = 0;
+
+	if (tx_sa->is_key_a) {
+		nxp_c45_macsec_read(phydev, MACSEC_TXSAAOPES, &reg);
+		stats->OutPktsEncrypted = reg;
+		nxp_c45_macsec_read(phydev, MACSEC_TXSAAOPPS, &reg);
+		stats->OutPktsProtected = reg;
+	} else {
+		nxp_c45_macsec_read(phydev, MACSEC_TXSABOPES, &reg);
+		stats->OutPktsEncrypted = reg;
+		nxp_c45_macsec_read(phydev, MACSEC_TXSABOPPS, &reg);
+		stats->OutPktsProtected = reg;
+	}
+}
+
+static void nxp_c45_tx_sa_stats_clear(struct phy_device *phydev,
+				      struct nxp_c45_tx_sa *tx_sa)
+{
+	if (tx_sa->is_key_a) {
+		nxp_c45_macsec_write(phydev, MACSEC_TXSAAOPES, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_TXSAAOPPS, 0);
+	} else {
+		nxp_c45_macsec_write(phydev, MACSEC_TXSABOPES, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_TXSABOPPS, 0);
+	}
+}
+
+static void nxp_c45_tx_sa_stats_backup(struct phy_device *phydev,
+				       struct nxp_c45_tx_sa *tx_sa)
+{
+	struct macsec_tx_sa_stats stats;
+
+	nxp_c45_tx_sa_stats_read(phydev, tx_sa, &stats);
+	tx_sa->stats.OutPktsEncrypted += stats.OutPktsEncrypted;
+	tx_sa->stats.OutPktsProtected += stats.OutPktsProtected;
+	nxp_c45_tx_sa_stats_clear(phydev, tx_sa);
+}
+
+static void nxp_c45_clear_rx_sa_stats(struct phy_device *phydev,
+				      u8 an, bool is_key_a)
+{
+	if (is_key_a) {
+		nxp_c45_macsec_write(phydev, MACSEC_RXSAAIPIS, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_RXSAAIPNVS, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_RXSAAIPOS, 0);
+	} else {
+		nxp_c45_macsec_write(phydev, MACSEC_RXSABIPIS, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_RXSABIPNVS, 0);
+		nxp_c45_macsec_write(phydev, MACSEC_RXSABIPOS, 0);
+	}
+
+	nxp_c45_macsec_write(phydev, MACSEC_RXAN0INUSS + an * 4, 0);
+	nxp_c45_macsec_write(phydev, MACSEC_RXAN0IPUSS + an * 4, 0);
+}
+
 static int nxp_c45_rx_sc_del(struct phy_device *phydev,
 			     struct nxp_c45_rx_sc *rx_sc)
 {
@@ -801,6 +936,8 @@ static int nxp_c45_rx_sc_del(struct phy_device *phydev,
 	if (rx_sc->rx_sa_b)
 		nxp_c45_disable_rxsa_key(phydev, false);
 
+	nxp_c45_clear_rx_sc_stats(phydev);
+
 	return 0;
 }
 
@@ -961,6 +1098,7 @@ static int nxp_c45_mdo_upd_secy(struct macsec_context *ctx)
 		if (old_tx_sa) {
 			old_tx_sa->is_enabled = false;
 			nxp_c45_txsa_get_pn(phydev, old_tx_sa);
+			nxp_c45_tx_sa_stats_backup(phydev, old_tx_sa);
 		}
 	}
 
@@ -979,6 +1117,7 @@ static int nxp_c45_mdo_del_secy(struct macsec_context *ctx)
 	struct nxp_c45_phy *priv = phydev->priv;
 	struct nxp_c45_secy *phy_secy;
 	u32 reg;
+	int i;
 
 	phydev_dbg(phydev, "delete secy SCI %llu\n", ctx->secy->sci);
 
@@ -990,11 +1129,16 @@ static int nxp_c45_mdo_del_secy(struct macsec_context *ctx)
 	nxp_c45_mdo_dev_stop(ctx);
 	nxp_c45_tx_sa_disable(phydev, phy_secy);
 	nxp_c45_tx_sc_clear(phy_secy);
+	nxp_c45_clear_dev_stats(phydev, phy_secy);
 	if (phy_secy->rx_sc) {
 		nxp_c45_rx_sc_del(phydev, phy_secy->rx_sc);
 		kfree(phy_secy->rx_sc);
 	}
 
+	for (i = 0; i < ARRAY_SIZE(phy_secy->tx_sa); i++)
+		if (phy_secy->tx_sa[i] && phy_secy->tx_sa[i]->is_enabled)
+			nxp_c45_tx_sa_stats_clear(phydev, phy_secy->tx_sa[i]);
+
 	if (phy_interrupt_is_valid(phydev)) {
 		nxp_c45_macsec_read(phydev, MACSEC_EVER, &reg);
 		reg &= ~TX_SC_BIT(phy_secy->secy_id);
@@ -1188,6 +1332,7 @@ static int nxp_c45_mdo_del_rxsa(struct macsec_context *ctx)
 		phydev_dbg(phydev, "delete RX SA A %u %s\n",
 			   an, rx_sa->active ? "enabled" : "disabled");
 		nxp_c45_disable_rxsa_key(phydev, true);
+		nxp_c45_clear_rx_sa_stats(phydev, an, true);
 		rx_sc->rx_sa_a = NULL;
 		return 0;
 	}
@@ -1196,6 +1341,7 @@ static int nxp_c45_mdo_del_rxsa(struct macsec_context *ctx)
 		phydev_dbg(phydev, "delete RX SA B %u %s\n",
 			   an, rx_sa->active ? "enabled" : "disabled");
 		nxp_c45_disable_rxsa_key(phydev, false);
+		nxp_c45_clear_rx_sa_stats(phydev, an, false);
 		rx_sc->rx_sa_b = NULL;
 		return 0;
 	}
@@ -1323,8 +1469,10 @@ static int nxp_c45_mdo_del_txsa(struct macsec_context *ctx)
 
 	nxp_c45_select_secy(phydev, phy_secy->secy_id);
 
-	if (tx_sa->is_enabled)
+	if (tx_sa->is_enabled) {
 		nxp_c45_update_key_status(phydev, tx_sa);
+		nxp_c45_tx_sa_stats_clear(phydev, tx_sa);
+	}
 
 	phy_secy->tx_sa[sa] = NULL;
 	kfree(tx_sa);
@@ -1332,6 +1480,267 @@ static int nxp_c45_mdo_del_txsa(struct macsec_context *ctx)
 	return 0;
 }
 
+static int nxp_c45_mdo_get_dev_stats(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct macsec_dev_stats  *dev_stats;
+	struct nxp_c45_secy *phy_secy;
+	u32 reg = 0;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	dev_stats = ctx->stats.dev_stats;
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	nxp_c45_macsec_read(phydev, MACSEC_OPUS, &reg);
+	dev_stats->OutPktsUntagged = reg;
+	nxp_c45_macsec_read(phydev, MACSEC_OPTLS, &reg);
+	dev_stats->OutPktsTooLong = reg;
+
+	dev_stats->InPktsUntagged = 0;
+	dev_stats->InPktsNoTag = 0;
+	dev_stats->InPktsBadTag = 0;
+	dev_stats->InPktsUnknownSCI = 0;
+	dev_stats->InPktsNoSCI = 0;
+
+	if (phy_secy->rx_sc) {
+		nxp_c45_macsec_read(phydev, MACSEC_INPBTS, &reg);
+		dev_stats->InPktsBadTag = reg;
+
+		nxp_c45_macsec_read(phydev, MACSEC_INPWTS, &reg);
+		if (phy_secy->secy->validate_frames == MACSEC_VALIDATE_STRICT)
+			dev_stats->InPktsNoTag += reg;
+		else
+			dev_stats->InPktsUntagged += reg;
+
+		nxp_c45_macsec_read(phydev, MACSEC_IPSNFS, &reg);
+		if (phy_secy->secy->validate_frames == MACSEC_VALIDATE_STRICT)
+			dev_stats->InPktsNoSCI += reg;
+		else
+			dev_stats->InPktsUnknownSCI += reg;
+	}
+
+	/* Always 0. */
+	dev_stats->InPktsOverrun = 0;
+
+	return 0;
+}
+
+static int nxp_c45_mdo_get_tx_sc_stats(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct macsec_tx_sc_stats *tx_sc_stats;
+	struct macsec_tx_sa_stats tx_sa_stats;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_tx_sa *tx_sa;
+	u32 reg = 0;
+	u64 stat;
+	int i;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	tx_sc_stats = ctx->stats.tx_sc_stats;
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	nxp_c45_macsec_read(phydev, MACSEC_OOE1HS, &reg);
+	stat = (u64)reg << 32;
+	nxp_c45_macsec_read(phydev, MACSEC_OOE2HS, &reg);
+	stat |= reg;
+	if (ctx->secy->tx_sc.encrypt)
+		tx_sc_stats->OutOctetsEncrypted = stat;
+	else
+		tx_sc_stats->OutOctetsEncrypted = 0;
+
+	if (ctx->secy->protect_frames)
+		tx_sc_stats->OutOctetsProtected = stat;
+	else
+		tx_sc_stats->OutOctetsProtected = 0;
+
+	tx_sc_stats->OutPktsEncrypted = 0;
+	tx_sc_stats->OutPktsProtected = 0;
+
+	for (i = 0; i < ARRAY_SIZE(phy_secy->tx_sa); i++) {
+		tx_sa = phy_secy->tx_sa[i];
+		if (!tx_sa)
+			continue;
+
+		if (tx_sa->is_enabled) {
+			nxp_c45_tx_sa_stats_read(phydev, tx_sa, &tx_sa_stats);
+			tx_sc_stats->OutPktsEncrypted +=
+				tx_sa_stats.OutPktsEncrypted;
+			tx_sc_stats->OutPktsProtected +=
+				tx_sa_stats.OutPktsProtected;
+			continue;
+		}
+
+		tx_sc_stats->OutPktsEncrypted += tx_sa->stats.OutPktsEncrypted;
+		tx_sc_stats->OutPktsProtected += tx_sa->stats.OutPktsProtected;
+	}
+
+	return 0;
+}
+
+static int nxp_c45_mdo_get_tx_sa_stats(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct macsec_tx_sa_stats *tx_sa_stats;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_tx_sa *tx_sa;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	tx_sa = phy_secy->tx_sa[ctx->sa.assoc_num];
+	if (!tx_sa)
+		return -EINVAL;
+
+	tx_sa_stats = ctx->stats.tx_sa_stats;
+
+	if (!tx_sa->is_enabled) {
+		tx_sa_stats->OutPktsEncrypted = tx_sa->stats.OutPktsEncrypted;
+		tx_sa_stats->OutPktsProtected = tx_sa->stats.OutPktsProtected;
+		return 0;
+	}
+
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+	nxp_c45_tx_sa_stats_read(phydev, tx_sa, tx_sa_stats);
+
+	return 0;
+}
+
+static int nxp_c45_mdo_get_rx_sc_stats(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct macsec_rx_sc_stats *rx_sc_stats;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+	u32 reg = 0;
+	int i;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	rx_sc = phy_secy->rx_sc;
+	if (rx_sc->rx_sc != ctx->rx_sc)
+		return -EINVAL;
+
+	rx_sc_stats = ctx->stats.rx_sc_stats;
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	rx_sc_stats->InPktsInvalid = 0;
+	rx_sc_stats->InPktsNotValid = 0;
+	rx_sc_stats->InPktsOK = 0;
+
+	if (rx_sc->rx_sa_a) {
+		nxp_c45_macsec_read(phydev, MACSEC_RXSAAIPIS, &reg);
+		rx_sc_stats->InPktsInvalid += reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSAAIPNVS, &reg);
+		rx_sc_stats->InPktsNotValid += reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSAAIPOS, &reg);
+		rx_sc_stats->InPktsOK += reg;
+	}
+
+	if (rx_sc->rx_sa_b) {
+		nxp_c45_macsec_read(phydev, MACSEC_RXSABIPIS, &reg);
+		rx_sc_stats->InPktsInvalid += reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSABIPNVS, &reg);
+		rx_sc_stats->InPktsNotValid += reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSABIPOS, &reg);
+		rx_sc_stats->InPktsOK += reg;
+	}
+
+	rx_sc_stats->InPktsNotUsingSA = 0;
+	for (i = 0; i < MACSEC_NUM_AN; i++) {
+		nxp_c45_macsec_read(phydev, MACSEC_RXAN0INUSS + i * 4, &reg);
+		rx_sc_stats->InPktsNotUsingSA += reg;
+	}
+
+	rx_sc_stats->InPktsUnusedSA = 0;
+	for (i = 0; i < MACSEC_NUM_AN; i++) {
+		nxp_c45_macsec_read(phydev, MACSEC_RXAN0IPUSS + i * 4, &reg);
+		rx_sc_stats->InPktsUnusedSA += reg;
+	}
+
+	nxp_c45_macsec_read(phydev, MACSEC_INOD1HS, &reg);
+	rx_sc_stats->InOctetsDecrypted = (u64)reg << 32;
+	nxp_c45_macsec_read(phydev, MACSEC_INOD2HS, &reg);
+	rx_sc_stats->InOctetsDecrypted |= reg;
+
+	nxp_c45_macsec_read(phydev, MACSEC_INOV1HS, &reg);
+	rx_sc_stats->InOctetsValidated = (u64)reg << 32;
+	nxp_c45_macsec_read(phydev, MACSEC_INOV2HS, &reg);
+	rx_sc_stats->InOctetsValidated |= reg;
+
+	nxp_c45_macsec_read(phydev, MACSEC_RXSCIPDS, &reg);
+	rx_sc_stats->InPktsDelayed = reg;
+	nxp_c45_macsec_read(phydev, MACSEC_RXSCIPLS, &reg);
+	rx_sc_stats->InPktsLate = reg;
+	nxp_c45_macsec_read(phydev, MACSEC_RXSCIPUS, &reg);
+	rx_sc_stats->InPktsUnchecked = reg;
+
+	return 0;
+}
+
+static int nxp_c45_mdo_get_rx_sa_stats(struct macsec_context *ctx)
+{
+	struct phy_device *phydev = ctx->phydev;
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct macsec_rx_sa_stats *rx_sa_stats;
+	struct nxp_c45_secy *phy_secy;
+	struct nxp_c45_rx_sc *rx_sc;
+	u8 an = ctx->sa.assoc_num;
+	u32 reg = 0;
+
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, ctx->secy->sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	rx_sc = phy_secy->rx_sc;
+	if (rx_sc->rx_sc != ctx->sa.rx_sa->sc)
+		return -EINVAL;
+
+	if (!rx_sc->rx_sa_a && !rx_sc->rx_sa_b)
+		return -EINVAL;
+
+	rx_sa_stats = ctx->stats.rx_sa_stats;
+	nxp_c45_select_secy(phydev, phy_secy->secy_id);
+
+	if (rx_sc->rx_sa_a == ctx->sa.rx_sa) {
+		nxp_c45_macsec_read(phydev, MACSEC_RXSAAIPIS, &reg);
+		rx_sa_stats->InPktsInvalid = reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSAAIPNVS, &reg);
+		rx_sa_stats->InPktsNotValid = reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSAAIPOS, &reg);
+		rx_sa_stats->InPktsOK = reg;
+	}
+
+	if (rx_sc->rx_sa_b == ctx->sa.rx_sa) {
+		nxp_c45_macsec_read(phydev, MACSEC_RXSABIPIS, &reg);
+		rx_sa_stats->InPktsInvalid = reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSABIPNVS, &reg);
+		rx_sa_stats->InPktsNotValid = reg;
+		nxp_c45_macsec_read(phydev, MACSEC_RXSABIPOS, &reg);
+		rx_sa_stats->InPktsOK = reg;
+	}
+
+	nxp_c45_macsec_read(phydev, MACSEC_RXAN0INUSS + an * 4, &reg);
+	rx_sa_stats->InPktsNotUsingSA = reg;
+	nxp_c45_macsec_read(phydev, MACSEC_RXAN0IPUSS + an * 4, &reg);
+	rx_sa_stats->InPktsUnusedSA = reg;
+
+	return 0;
+}
+
 static const struct macsec_ops nxp_c45_macsec_ops = {
 	.mdo_dev_open = nxp_c45_mdo_dev_open,
 	.mdo_dev_stop = nxp_c45_mdo_dev_stop,
@@ -1347,6 +1756,11 @@ static const struct macsec_ops nxp_c45_macsec_ops = {
 	.mdo_add_txsa = nxp_c45_mdo_add_txsa,
 	.mdo_upd_txsa = nxp_c45_mdo_upd_txsa,
 	.mdo_del_txsa = nxp_c45_mdo_del_txsa,
+	.mdo_get_dev_stats = nxp_c45_mdo_get_dev_stats,
+	.mdo_get_tx_sc_stats = nxp_c45_mdo_get_tx_sc_stats,
+	.mdo_get_tx_sa_stats = nxp_c45_mdo_get_tx_sa_stats,
+	.mdo_get_rx_sc_stats = nxp_c45_mdo_get_rx_sc_stats,
+	.mdo_get_rx_sa_stats = nxp_c45_mdo_get_rx_sa_stats,
 };
 
 int nxp_c45_macsec_probe(struct phy_device *phydev)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-08-24  9:16 [RFC net-next v2 0/5] Add MACsec support for TJA11XX C45 PHYs Radu Pirea (NXP OSS)
                   ` (3 preceding siblings ...)
  2023-08-24  9:16 ` [RFC net-next v2 4/5] net: phy: nxp-c45-tja11xx: add MACsec statistics Radu Pirea (NXP OSS)
@ 2023-08-24  9:16 ` Radu Pirea (NXP OSS)
  2023-08-27  8:05   ` Simon Horman
  2023-08-28 10:17   ` Sabrina Dubroca
  4 siblings, 2 replies; 34+ messages in thread
From: Radu Pirea (NXP OSS) @ 2023-08-24  9:16 UTC (permalink / raw)
  To: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat
  Cc: netdev, linux-kernel, Radu Pirea (NXP OSS)

Implement mdo_insert_tx_tag to insert the TLV header in the ethernet
frame.

If the extscs parameter is set to 1, the TLV header will contain the
TX SC that will be used to encrypt the frame; otherwise, the TX SC will
be selected using the MAC source address.

Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
---
 drivers/net/phy/nxp-c45-tja11xx-macsec.c | 66 ++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/drivers/net/phy/nxp-c45-tja11xx-macsec.c b/drivers/net/phy/nxp-c45-tja11xx-macsec.c
index edfdd2540d39..f06505c04b13 100644
--- a/drivers/net/phy/nxp-c45-tja11xx-macsec.c
+++ b/drivers/net/phy/nxp-c45-tja11xx-macsec.c
@@ -11,6 +11,7 @@
 #include <linux/module.h>
 #include <linux/phy.h>
 #include <linux/processor.h>
+#include <net/dst_metadata.h>
 #include <net/macsec.h>
 
 #include "nxp-c45-tja11xx.h"
@@ -23,6 +24,7 @@
 #define VEND1_MACSEC_BASE		0x9000
 
 #define MACSEC_CFG			0x0000
+#define MACSEC_CFG_EXTSCS		BIT(26)
 #define MACSEC_CFG_BYPASS		BIT(1)
 #define MACSEC_CFG_S0I			BIT(0)
 
@@ -121,6 +123,8 @@
 #define ADPTR_CNTRL			0x0F00
 #define ADPTR_CNTRL_CONFIG_EN		BIT(14)
 #define ADPTR_CNTRL_ADPTR_EN		BIT(12)
+#define ADPTR_TX_TAG_CNTRL		0x0F0C
+#define ADPTR_TX_TAG_CNTRL_ENA		BIT(31)
 
 #define TX_SC_FLT_BASE			0x800
 #define TX_SC_FLT_SIZE			0x10
@@ -167,6 +171,18 @@
 #define MACSEC_INPBTS			0x0638
 #define MACSEC_IPSNFS			0x063C
 
+#define TJA11XX_TLV_TX_NEEDED_HEADROOM	(32)
+#define TJA11XX_TLV_NEEDED_TAILROOM	(0)
+
+#define MACSEC_TLV_CP			BIT(0)
+#define MACSEC_TLV_SC_ID_OFF		(2)
+
+#define ETH_P_TJA11XX_TLV		(0x4e58)
+
+bool extscs;
+module_param(extscs, bool, 0);
+MODULE_PARM_DESC(extscs, "Select the TX SC using TLV header information.");
+
 struct nxp_c45_rx_sc {
 	struct macsec_rx_sc *rx_sc;
 	struct macsec_rx_sa *rx_sa_a;
@@ -315,6 +331,8 @@ void nxp_c45_macsec_config_init(struct phy_device *phydev)
 
 	nxp_c45_macsec_write(phydev, ADPTR_CNTRL, ADPTR_CNTRL_CONFIG_EN |
 			     ADPTR_CNTRL_ADPTR_EN);
+	nxp_c45_macsec_write(phydev, ADPTR_TX_TAG_CNTRL,
+			     ADPTR_TX_TAG_CNTRL_ENA);
 	nxp_c45_macsec_write(phydev, ADPTR_CNTRL, ADPTR_CNTRL_ADPTR_EN);
 
 	nxp_c45_macsec_write(phydev, MACSEC_TPNET, PN_WRAP_THRESHOLD);
@@ -324,6 +342,9 @@ void nxp_c45_macsec_config_init(struct phy_device *phydev)
 	nxp_c45_macsec_write(phydev, MACSEC_UPFR0M1, MACSEC_OVP);
 	nxp_c45_macsec_write(phydev, MACSEC_UPFR0M2, ETYPE_MASK);
 	nxp_c45_macsec_write(phydev, MACSEC_UPFR0R, MACSEC_UPFR_EN);
+
+	if (extscs)
+		nxp_c45_macsec_write(phydev, MACSEC_CFG, MACSEC_CFG_EXTSCS);
 }
 
 static void nxp_c45_macsec_cfg_ptp(struct phy_device *phydev, bool enable)
@@ -1741,6 +1762,48 @@ static int nxp_c45_mdo_get_rx_sa_stats(struct macsec_context *ctx)
 	return 0;
 }
 
+struct tja11xx_tlv_header {
+	struct ethhdr eth;
+	u8 subtype;
+	u8 len;
+	u8 payload[28];
+};
+
+static int nxp_c45_mdo_insert_tx_tag(struct phy_device *phydev,
+				     struct sk_buff *skb)
+{
+	struct nxp_c45_phy *priv = phydev->priv;
+	struct tja11xx_tlv_header *tlv;
+	struct nxp_c45_secy *phy_secy;
+	struct metadata_dst *md_dst;
+	struct ethhdr *eth;
+	sci_t sci;
+
+	eth = eth_hdr(skb);
+	tlv = skb_push(skb, TJA11XX_TLV_TX_NEEDED_HEADROOM);
+	memmove(tlv, eth, sizeof(*eth));
+	skb_reset_mac_header(skb);
+	tlv->eth.h_proto = htons(ETH_P_TJA11XX_TLV);
+	tlv->subtype = 1;
+	tlv->len = sizeof(tlv->payload);
+	memset(tlv->payload, 0, sizeof(tlv->payload));
+
+	if (!extscs)
+		return 0;
+
+	/* md_dst should be always set if MACsec is offloaded. */
+	md_dst = skb_metadata_dst(skb);
+	sci = md_dst->u.macsec_info.sci;
+	phy_secy = nxp_c45_find_secy(&priv->macsec->secy_list, sci);
+	if (IS_ERR(phy_secy))
+		return PTR_ERR(phy_secy);
+
+	tlv->payload[3] = phy_secy->secy_id << MACSEC_TLV_SC_ID_OFF |
+		MACSEC_TLV_CP;
+
+	return 0;
+}
+
 static const struct macsec_ops nxp_c45_macsec_ops = {
 	.mdo_dev_open = nxp_c45_mdo_dev_open,
 	.mdo_dev_stop = nxp_c45_mdo_dev_stop,
@@ -1761,6 +1824,9 @@ static const struct macsec_ops nxp_c45_macsec_ops = {
 	.mdo_get_tx_sa_stats = nxp_c45_mdo_get_tx_sa_stats,
 	.mdo_get_rx_sc_stats = nxp_c45_mdo_get_rx_sc_stats,
 	.mdo_get_rx_sa_stats = nxp_c45_mdo_get_rx_sa_stats,
+	.mdo_insert_tx_tag = nxp_c45_mdo_insert_tx_tag,
+	.needed_headroom = TJA11XX_TLV_TX_NEEDED_HEADROOM,
+	.needed_tailroom = TJA11XX_TLV_NEEDED_TAILROOM,
 };
 
 int nxp_c45_macsec_probe(struct phy_device *phydev)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 1/5] net: macsec: documentation for macsec_context and macsec_ops
  2023-08-24  9:16 ` [RFC net-next v2 1/5] net: macsec: documentation for macsec_context and macsec_ops Radu Pirea (NXP OSS)
@ 2023-08-24 13:26   ` Antoine Tenart
  0 siblings, 0 replies; 34+ messages in thread
From: Antoine Tenart @ 2023-08-24 13:26 UTC (permalink / raw)
  To: Radu Pirea, andrew, davem, edumazet, hkallweit1, kuba, linux,
	pabeni, richardcochran, sd, sebastian.tobuschat
  Cc: netdev, linux-kernel, Radu Pirea

Hello,

Quoting Radu Pirea (NXP OSS) (2023-08-24 11:16:11)
>  
>  /**
>   * struct macsec_context - MACsec context for hardware offloading
> + * @netdev: pointer to the netdev if the SecY is offloaded to a MAC
> + * @phydev: pointer to the phydev if the SecY is offloaded to a PHY
> + * @offload: MACsec offload status

As this selects where the offload happens and how the two previous
pointers can be accessed, it might be nice to be a bit more explicit in
the comments.
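
For example, something along these lines (wording is only a suggestion,
adjust to the actual semantics):

 * @netdev: pointer to the net_device doing the offload, only valid when
 *	@offload is MACSEC_OFFLOAD_MAC
 * @phydev: pointer to the phy_device doing the offload, only valid when
 *	@offload is MACSEC_OFFLOAD_PHY
 * @offload: where the offload happens (MAC or PHY); selects which of
 *	@netdev/@phydev can be used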

> + * @secy: pointer to a MACsec SecY
> + * @rx_sc: pointer to a RX SC
> + * @assoc_num: association number of the target SA
> + * @key: key of the target SA
> + * @rx_sa: pointer to an RX SA if a RX SA is added/updated/removed
> + * @tx_sa: pointer to an TX SA if a TX SA is added/updated/removed
> + * @tx_sc_stats: pointer to TX SC stats structure
> + * @tx_sa_stats: pointer to TX SA stats structure
> + * @rx_sc_stats: pointer to TX SC stats structure

s/TX/RX/

Thanks,
Antoine

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 2/5] net: macsec: introduce mdo_insert_tx_tag
  2023-08-24  9:16 ` [RFC net-next v2 2/5] net: macsec: introduce mdo_insert_tx_tag Radu Pirea (NXP OSS)
@ 2023-08-24 14:54   ` Sabrina Dubroca
  2023-08-25 10:01     ` Radu Pirea (OSS)
  0 siblings, 1 reply; 34+ messages in thread
From: Sabrina Dubroca @ 2023-08-24 14:54 UTC (permalink / raw)
  To: Radu Pirea (NXP OSS)
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel

2023-08-24, 12:16:12 +0300, Radu Pirea (NXP OSS) wrote:
> Offloading MACsec in PHYs requires inserting the SecTAG and the ICV in
> the ethernet frame. This operation will increase the frame size with up
> to 32 bytes. If the frames are sent at line rate, the PHY will not have
> enough room to insert the SecTAG and the ICV.
> 
> Some PHYs use a hardware buffer to store a number of ethernet frames and,
> if it fills up, a pause frame is sent to the MAC to control the flow.
> This HW implementation does not need any modification in the stack.
> 
> Other PHYs might offer to use a specific ethertype with some padding
> bytes present in the ethernet frame. This ethertype and its associated
> bytes will be replaced by the SecTAG and ICV.
> 
> mdo_insert_tx_tag allows the PHY drivers to add any specific tag in the
> skb.

Please add a per-patch changelog between versions. For example:

v2:
 - add doc for the new fields in macsec_ops
 - add insert_tx_tag to macsec_dev
 - use unsigned int for macsec_ops.needed_{head,tail}room
[etc]

> Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
> ---
>  drivers/net/macsec.c | 96 +++++++++++++++++++++++++++++++++++++++++++-
>  include/net/macsec.h | 10 +++++
>  2 files changed, 105 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
> index ae60817ec5c2..5541aaced61f 100644
> --- a/drivers/net/macsec.c
> +++ b/drivers/net/macsec.c
> @@ -93,6 +93,7 @@ struct pcpu_secy_stats {
>   * @secys: linked list of SecY's on the underlying device
>   * @gro_cells: pointer to the Generic Receive Offload cell
>   * @offload: status of offloading on the MACsec device
> + * @insert_tx_tag: insert tx tag if true

(probably a bit nitpicky)
Maybe briefly mention something about offloading and why devices might
need that tag? Otherwise this doc feels a bit like it's there just
to make a checker happy; it doesn't say anything that "bool
insert_tx_tag" doesn't already tell us.

Maybe something like:
"when offloading, device requires to insert an additional tag"

>   */
>  struct macsec_dev {
>  	struct macsec_secy secy;
> @@ -102,6 +103,7 @@ struct macsec_dev {
>  	struct list_head secys;
>  	struct gro_cells gro_cells;
>  	enum macsec_offload offload;
> +	bool insert_tx_tag;
>  };
>  
>  /**
> @@ -2582,6 +2584,33 @@ static bool macsec_is_configured(struct macsec_dev *macsec)
>  	return false;
>  }
>  
> +static bool macsec_can_insert_tx_tag(struct macsec_dev *macsec,

It feels more like a "needs" than a "can" situation to me. The device
needs this tag inserted in order to fully work.

> +				     const struct macsec_ops *ops)
> +{
> +	return macsec->offload == MACSEC_OFFLOAD_PHY &&
> +		ops->mdo_insert_tx_tag;
> +}
> +
> +static void macsec_adjust_room(struct net_device *dev,
> +			       const struct macsec_ops *ops)
> +{
> +	struct macsec_dev *macsec = macsec = macsec_priv(dev);

duplicate "macsec = macsec = ..."

> +
> +	if (macsec_is_offloaded(macsec)) {

Shouldn't that whole adjustment (in both directions) depend on
->insert_tx_tag?

> +		dev->needed_headroom -= MACSEC_NEEDED_HEADROOM;
> +		dev->needed_headroom += ops->needed_headroom;

I would compute "diff = ops->needed_headroom - MACSEC_NEEDED_HEADROOM"
at the start and then we can simply do "+= diff" or "-= diff" (and
same for tailroom).
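
Something like this (untested sketch, reusing the names from the patch):

	static void macsec_adjust_room(struct net_device *dev,
				       const struct macsec_ops *ops)
	{
		struct macsec_dev *macsec = macsec_priv(dev);
		/* how much more (or less) room the offload tag needs
		 * compared to the software implementation
		 */
		int diff_headroom = ops->needed_headroom - MACSEC_NEEDED_HEADROOM;
		int diff_tailroom = ops->needed_tailroom - MACSEC_NEEDED_TAILROOM;

		if (macsec_is_offloaded(macsec)) {
			dev->needed_headroom += diff_headroom;
			dev->needed_tailroom += diff_tailroom;
		} else {
			dev->needed_headroom -= diff_headroom;
			dev->needed_tailroom -= diff_tailroom;
		}
	}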

> +		dev->needed_tailroom -= MACSEC_NEEDED_TAILROOM;
> +		dev->needed_tailroom += ops->needed_tailroom;
> +
> +		return;
> +	}

nit: else instead of the early return would make things more
symmetrical.

> +
> +	dev->needed_headroom -= ops->needed_headroom;
> +	dev->needed_headroom += MACSEC_NEEDED_HEADROOM;
> +	dev->needed_tailroom -= ops->needed_tailroom;
> +	dev->needed_tailroom += MACSEC_NEEDED_TAILROOM;
> +}
> +
>  static int macsec_update_offload(struct net_device *dev, enum macsec_offload offload)
>  {
>  	enum macsec_offload prev_offload;
> @@ -2619,9 +2648,15 @@ static int macsec_update_offload(struct net_device *dev, enum macsec_offload off
>  	ctx.secy = &macsec->secy;
>  	ret = offload == MACSEC_OFFLOAD_OFF ? macsec_offload(ops->mdo_del_secy, &ctx)
>  					    : macsec_offload(ops->mdo_add_secy, &ctx);
> -	if (ret)
> +	if (ret) {
>  		macsec->offload = prev_offload;
> +		goto out;

I would prefer a direct return right here instead of this goto.

> +	}
> +
> +	macsec_adjust_room(dev, ops);
> +	macsec->insert_tx_tag = macsec_can_insert_tx_tag(macsec, ops);
>  
> +out:
>  	return ret;
>  }
>  
> @@ -3378,6 +3413,55 @@ static struct genl_family macsec_fam __ro_after_init = {
>  	.resv_start_op	= MACSEC_CMD_UPD_OFFLOAD + 1,
>  };
>  
> +static struct sk_buff *insert_tx_tag(struct sk_buff *skb,
> +				     struct net_device *dev)
> +{
> +	struct macsec_dev *macsec = macsec_priv(dev);
> +	const struct macsec_ops *ops;
> +	struct phy_device *phydev;
> +	struct macsec_context ctx;
> +	int err;
> +
> +	if (!macsec->insert_tx_tag)
> +		return skb;

I think it would look a bit nicer if this test was moved out, before
calling insert_tx_tag(). Then if we call insert_tx_tag(), we know we
have to insert it.
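
i.e. roughly, at the call site (error handling omitted in this sketch):

	/* only call when the offloading device actually needs the TX tag */
	if (macsec->insert_tx_tag)
		skb = insert_tx_tag(skb, dev);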

> +	ops = macsec_get_ops(macsec, &ctx);
> +	phydev = macsec->real_dev->phydev;
> +

[...]
> @@ -4125,6 +4216,9 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
>  			err = macsec_offload(ops->mdo_add_secy, &ctx);
>  			if (err)
>  				goto del_dev;
> +
> +			macsec_adjust_room(dev, ops);
> +			macsec->insert_tx_tag = macsec_can_insert_tx_tag(macsec, ops);
>  		}
>  	}
>  
> diff --git a/include/net/macsec.h b/include/net/macsec.h
> index 76f024727bb4..9577921897f9 100644
> --- a/include/net/macsec.h
> +++ b/include/net/macsec.h
> @@ -312,6 +312,11 @@ struct macsec_context {
>   * @mdo_get_tx_sa_stats: called when TX SA stats are read
>   * @mdo_get_rx_sc_stats: called when RX SC stats are read
>   * @mdo_get_rx_sa_stats: called when RX SA stats are read
> + * @mdo_insert_tx_tag: called to insert the TX offload tag
> + * @needed_headroom: number of bytes reserved at the beginning of the sk_buff
> + *	for the TX Tag
> + * @needed_tailroom: number of bytes reserved at the end of the sk_buff for the
> + *	TX Tag

It would be nice to use a consistent name (either "TX offload tag" or
"TX tag") and case in those 3 descriptions (slight preference for
"tag" over "Tag" on my side).

I'd also add ", to be filled by mdo_insert_tx_tag" (not sure whether
that needs to be @mdo_insert_tx_tag or just mdo_insert_tx_tag) to the
needed_headroom/needed_tailroom descriptions, just to be really clear.

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 2/5] net: macsec: introduce mdo_insert_tx_tag
  2023-08-24 14:54   ` Sabrina Dubroca
@ 2023-08-25 10:01     ` Radu Pirea (OSS)
  0 siblings, 0 replies; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-08-25 10:01 UTC (permalink / raw)
  To: Sabrina Dubroca
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel


On 24.08.2023 17:54, Sabrina Dubroca wrote:
> 2023-08-24, 12:16:12 +0300, Radu Pirea (NXP OSS) wrote:
> 
>> +
>> +	if (macsec_is_offloaded(macsec)) {
> 
> Shouldn't that whole adjustment (in both directions) depend on
> ->insert_tx_tag?

I asked myself the same thing and the answer was "It depends". This 
adjustment can be restricted only to the offloaded MACsec devs that 
require a special TX tag. However, the offloaded MACsec devs do not need 
additional headroom/tailroom.
>> +
>> +	if (!macsec->insert_tx_tag)
>> +		return skb;
> 
> I think it would look a bit nicer if this test was moved out, before
> calling insert_tx_tag(). Then if we call insert_tx_tag(), we know we
> have to insert it.
> 

I expected this suggestion :)

-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-24  9:16 ` [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support Radu Pirea (NXP OSS)
@ 2023-08-25 12:52   ` Sabrina Dubroca
  2023-08-25 13:29     ` Andrew Lunn
  2023-08-27  8:03   ` Simon Horman
  1 sibling, 1 reply; 34+ messages in thread
From: Sabrina Dubroca @ 2023-08-25 12:52 UTC (permalink / raw)
  To: Radu Pirea (NXP OSS)
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel

[Some of the questions I'm asking are probably dumb since I don't know
anything about phy drivers. Sorry if that's the case.]

General code organization nit: I think it would be easier to review
the code if helpers functions were grouped by the type of object they
work on. All the RXSA-related functions together, all the TXSA
functions together, same for RXSC and then TXSC/SecY. Right now I see
some RXSA functions in a group of TXSA functions, another in the
middle of a group of RXSC functions. It makes navigating through the
code a bit less convenient.

Another nit: for consistency, it would be nice to stick to either
"tx_sa" or "txsa" (same for rxsa and rxsc) in function names.

2023-08-24, 12:16:13 +0300, Radu Pirea (NXP OSS) wrote:
> +static int nxp_c45_macsec_write(struct phy_device *phydev, u16 reg, u32 val)
> +{
> +	WARN_ON_ONCE(reg % 4);
> +
> +	reg = reg / 2;
> +	phy_write_mmd(phydev, MDIO_MMD_VEND2,
> +		      VEND1_MACSEC_BASE + reg, val);
> +	phy_write_mmd(phydev, MDIO_MMD_VEND2,
> +		      VEND1_MACSEC_BASE + reg + 1, val >> 16);

Can these calls fail? ie, do you need to handle errors like in
nxp_c45_macsec_read (and then in callers of nxp_c45_macsec_write)?

I see that no caller of nxp_c45_macsec_read actually checks the return
value, so maybe those errors don't matter.


[...]
> +void nxp_c45_macsec_config_init(struct phy_device *phydev)
> +{
> +	if (!phydev->macsec_ops)
> +		return;
> +
> +	phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, VEND1_PORT_FUNC_ENABLES,
> +			 MACSEC_EN | ADAPTER_EN);

The calls to phy_set_bits_mmd() in nxp_c45_config_intr() have error
handling. Does this need error handling as well?

[...]
> +static bool nxp_c45_port_valid(struct nxp_c45_secy *phy_secy, u16 port)
> +{
> +	if (phy_secy->secy->tx_sc.end_station &&
> +	    __be16_to_cpu((__force __be16)port) != 1)
> +		return false;
> +
> +	return true;
> +}
> +
> +static bool nxp_c45_rx_sc_valid(struct nxp_c45_secy *phy_secy,
> +				struct macsec_rx_sc *rx_sc)
> +{
> +	u16 port =  (__force u64)rx_sc->sci >> (ETH_ALEN * 8);

u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);
u16 port = (u16)sci;

And then drop the __be16_to_cpu conversion from nxp_c45_port_valid

> +
> +	if (phy_secy->point_to_point && phy_secy->secy_id != 0)
> +		return false;
> +
> +	return nxp_c45_port_valid(phy_secy, port);
> +}
> +
> +static bool nxp_c45_secy_cfg_valid(struct nxp_c45_secy *phy_secy, bool can_ptp)
> +{
> +	u16 port =  (__force u64)phy_secy->secy->sci >> (ETH_ALEN * 8);

u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);
u16 port = (u16)sci;

> +	if (phy_secy->secy->tx_sc.scb)
> +		return false;

[...]
> +static int nxp_c45_update_tx_sc_secy_cfg(struct phy_device *phydev,
> +					 struct nxp_c45_secy *phy_secy)
> +{
[...]
> +	phydev_dbg(phydev, "scb %s\n",
> +		   phy_secy->secy->tx_sc.scb ? "on" : "off");
> +	if (phy_secy->secy->tx_sc.scb)
> +		cfg |= MACSEC_TXSC_CFG_SCI;
> +	else
> +		cfg &= ~MACSEC_TXSC_CFG_SCI;

Should that be called MACSEC_TXSC_CFG_SCB? I had to check that it
wasn't using the wrong constant; using "SCI" for "SCB" (when SCI is
already a well-defined thing in macsec) confused me.

> +
> +	nxp_c45_macsec_write(phydev, MACSEC_TXSC_CFG, cfg);
> +
> +	return 0;
> +}
> +
[...]
> +static int nxp_c45_set_rxsa_key(struct macsec_context *ctx, bool key_a)
> +{
> +	u32 *salt = (u32 *)ctx->sa.rx_sa->key.salt.bytes;
> +	const struct nxp_c45_macsec_sa_regs *sa_regs;
> +	u32 ssci = (__force u32)ctx->sa.rx_sa->ssci;
> +	u32 key_size = ctx->secy->key_len / 4;
> +	u32 salt_size = MACSEC_SALT_LEN / 4;
> +	u32 *key = (u32 *)ctx->sa.key;
> +	u32 reg;
> +	int i;
> +
> +	sa_regs = nxp_c45_get_macsec_sa_regs(key_a);
> +
> +	for (i = 0; i < key_size; i++) {
> +		reg = sa_regs->rxsa_ka + i * 4;
> +		nxp_c45_macsec_write(ctx->phydev, reg,
> +				     (__force u32)cpu_to_be32(key[i]));
> +	}
> +
> +	if (ctx->secy->xpn) {
> +		for (i = 0; i < salt_size; i++) {
> +			reg = sa_regs->rxsa_salt + (2 - i) * 4;
> +			nxp_c45_macsec_write(ctx->phydev, reg,
> +					     (__force u32)cpu_to_be32(salt[i]));
> +		}
> +		nxp_c45_macsec_write(ctx->phydev, sa_regs->rxsa_ssci,
> +				     (__force u32)cpu_to_be32(ssci));
> +	}

This looks basically identical to nxp_c45_txsa_set_key except for the
registers it writes to. It could be turned into 2 or 3 small helpers
(one for the key, then salt and ssci).
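
For example, the key programming could become a shared helper (sketch,
with the destination register passed in by the caller):

	static void nxp_c45_sa_set_key(struct macsec_context *ctx, u32 key_reg)
	{
		u32 key_size = ctx->secy->key_len / 4;
		u32 *key = (u32 *)ctx->sa.key;
		int i;

		for (i = 0; i < key_size; i++)
			nxp_c45_macsec_write(ctx->phydev, key_reg + i * 4,
					     (__force u32)cpu_to_be32(key[i]));
	}

with similar helpers taking the salt/SSCI registers and the per-SA
salt/ssci values as arguments.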

> +
> +	nxp_c45_set_rxsa_key_cfg(ctx, key_a, false);
> +
> +	return 0;
> +}

[...]
> +static int nxp_c45_mdo_add_secy(struct macsec_context *ctx)
> +{
> +	struct phy_device *phydev = ctx->phydev;
> +	struct nxp_c45_phy *priv = phydev->priv;
> +	u64 sci = (__force u64)ctx->secy->sci;
> +	struct nxp_c45_secy *phy_secy;
> +	bool can_ptp;
> +	int idx;
> +	u32 reg;
> +
> +	phydev_dbg(ctx->phydev, "add secy SCI %llu\n", ctx->secy->sci);

nit: %016llx feels more natural for SCIs since they can be broken down
into address+port.

And since it's stored in network byte order, you'll want to convert it
via be64_to_cpu before you print it out.

I'd suggest doing that directly into the sci variable:

    u64 sci = be64_to_cpu((__force __be64)ctx->secy->sci);

and then adapt the uses of sci further down.

Feel free to move the sci_to_cpu function from
drivers/net/netdevsim/macsec.c to include/net/macsec.h and reuse it.
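
For reference, that helper is just a thin wrapper, something like:

	static u64 sci_to_cpu(sci_t sci)
	{
		return be64_to_cpu((__force __be64)sci);
	}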


[...]
> +static int nxp_c45_mdo_upd_secy(struct macsec_context *ctx)
> +{
[...]
> +	if (phy_secy->enabled_an != ctx->secy->tx_sc.encoding_sa) {
> +		old_tx_sa = phy_secy->tx_sa[phy_secy->enabled_an];
> +		phy_secy->enabled_an = ctx->secy->tx_sc.encoding_sa;
> +		new_tx_sa = phy_secy->tx_sa[phy_secy->enabled_an];
> +		if (!new_tx_sa) {
> +			nxp_c45_tx_sa_disable(phydev, phy_secy);
> +			goto disable_old_tx_sa;
> +		}
> +
> +		if (!new_tx_sa->tx_sa->active) {
> +			nxp_c45_tx_sa_disable(phydev, phy_secy);
> +			goto disable_old_tx_sa;
> +		}

You can combine those two conditions into

		if (!new_tx_sa || !new_tx_sa->tx_sa->active) {
			nxp_c45_tx_sa_disable(phydev, phy_secy);
			goto disable_old_tx_sa;
		}

> +
> +		new_tx_sa->is_key_a = phy_secy->tx_sa_key_a;
> +		phy_secy->tx_sa_key_a = phy_secy->tx_sa_key_a;

Is this missing a ! on the right side?

Maybe worth creating a "next_sa_key_id" helper (or something like
that) that returns the current value and updates tx_sa_key_a, since
you use this pattern a few times.
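
e.g. a small sketch (name/placement up to you):

	static bool nxp_c45_secy_use_key_a(struct nxp_c45_secy *phy_secy)
	{
		bool key_a = phy_secy->tx_sa_key_a;

		/* alternate between key A and key B for the next TX SA */
		phy_secy->tx_sa_key_a = !phy_secy->tx_sa_key_a;

		return key_a;
	}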


[...]
> +static int nxp_c45_mdo_add_rxsc(struct macsec_context *ctx)
> +{
> +	struct phy_device *phydev = ctx->phydev;
> +	struct nxp_c45_phy *priv = phydev->priv;
> +	struct nxp_c45_secy *phy_secy;
> +	struct nxp_c45_rx_sc *rx_sc;
> +
> +	phydev_dbg(phydev, "add RX SC %s\n",
> +		   ctx->rx_sc->active ? "enabled" : "disabled");

If the HW/driver supports multiple TXSC/RXSC on the same device, it
would probably be helpful to add their SCIs to this debug message (and
the update/delete ones, also for the mdo_*_rxsa and mdo_*_txsa
functions).

[...]
> +static int nxp_c45_mdo_add_rxsa(struct macsec_context *ctx)
> +{
[...]
> +	if (!rx_sc->rx_sa_b) {
> +		phydev_dbg(phydev, "add RX SA B %u %s\n",
> +			   an, rx_sa->active ? "enabled" : "disabled");
> +		nxp_c45_set_rxsa_key(ctx, false);
> +		rx_sc->rx_sa_b = rx_sa;
> +		return 0;
> +	}
> +
> +	return -ENOMEM;

maybe -ENOSPC would fit better?

> +}
> +

[...]
> +static int nxp_c45_mdo_add_txsa(struct macsec_context *ctx)
> +{
[...]
> +	if (phy_secy->tx_sa[sa])
> +		return -EBUSY;
> +
> +	tx_sa = kzalloc(sizeof(*tx_sa), GFP_KERNEL);

missing NULL check

[...]
> +static int nxp_c45_mdo_del_txsa(struct macsec_context *ctx)
> +{
[...]
> +
> +	phy_secy->tx_sa[sa] = NULL;
> +	kfree(tx_sa);

tx_sa contains the key, so this needs to be kfree_sensitive, or add a
memzero_explicit(tx_sa->key) before freeing. Or if possible, don't
copy the key at all into tx_sa.

similar changes in the mscc driver:
1b16b3fdf675 ("net: phy: mscc: macsec: clear encryption keys when freeing a flow")
0dc33c65835d ("net: phy: mscc: macsec: do not copy encryption keys")
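
i.e. something like:

	phy_secy->tx_sa[sa] = NULL;
	kfree_sensitive(tx_sa);

or keep kfree() and add memzero_explicit(tx_sa->key, sizeof(tx_sa->key))
just before it.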


[...]
> diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
> index 7ab080ff02df..5bf7caa4e63d 100644
> --- a/drivers/net/phy/nxp-c45-tja11xx.c
> +++ b/drivers/net/phy/nxp-c45-tja11xx.c
[...]
> @@ -1218,12 +1201,25 @@ static int nxp_c45_start_op(struct phy_device *phydev)
>  
>  static int nxp_c45_config_intr(struct phy_device *phydev)
>  {
> -	if (phydev->interrupts == PHY_INTERRUPT_ENABLED)
> +	int ret;
> +
> +	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
> +		ret = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1,
> +				       VEND1_PORT_FUNC_IRQ_EN, MACSEC_IRQS);
> +		if (ret)
> +			return ret;
> +
>  		return phy_set_bits_mmd(phydev, MDIO_MMD_VEND1,
>  					VEND1_PHY_IRQ_EN, PHY_IRQ_LINK_EVENT);

Maybe a dumb question: should we be clearing the MACSEC_IRQS bits when
this 2nd call to phy_set_bits_mmd fails? (and same below, reset when
the 2nd clear fails)
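
i.e. something along these lines (sketch):

	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
		ret = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1,
				       VEND1_PORT_FUNC_IRQ_EN, MACSEC_IRQS);
		if (ret)
			return ret;

		ret = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1,
				       VEND1_PHY_IRQ_EN, PHY_IRQ_LINK_EVENT);
		if (ret)
			/* undo the MACsec IRQ enable on failure */
			phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
					   VEND1_PORT_FUNC_IRQ_EN, MACSEC_IRQS);

		return ret;
	}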


> -	else
> -		return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
> -					  VEND1_PHY_IRQ_EN, PHY_IRQ_LINK_EVENT);
> +	}
> +
> +	ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
> +				 VEND1_PORT_FUNC_IRQ_EN, MACSEC_IRQS);
> +	if (ret)
> +		return ret;
> +
> +	return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
> +				  VEND1_PHY_IRQ_EN, PHY_IRQ_LINK_EVENT);
>  }

[...]
> @@ -1666,6 +1666,20 @@ static int nxp_c45_probe(struct phy_device *phydev)
>  	}
>  
>  no_ptp_support:
> +	macsec_ability = !!(phy_abilities & MACSEC_ABILITY);
> +	if (!macsec_ability) {
> +		phydev_info(phydev, "the phy does not support MACsec\n");
> +		goto no_macsec_support;
> +	}
> +
> +	if (IS_ENABLED(CONFIG_MACSEC)) {
> +		ret = nxp_c45_macsec_probe(phydev);

I don't know how this probing is handled so maybe another dumb
question: if that fails, are we going to leak resources allocated
earlier?  (devm_kzalloc for example)

> +		phydev_dbg(phydev, "MACsec support enabled.");
> +	} else {
> +		phydev_dbg(phydev, "MACsec support not enabled even if the phy supports it");
> +	}
> +
> +no_macsec_support:
>  
>  	return ret;
>  }

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-25 12:52   ` Sabrina Dubroca
@ 2023-08-25 13:29     ` Andrew Lunn
  2023-08-25 13:44       ` Radu Pirea (OSS)
  2023-08-28 10:43       ` Sabrina Dubroca
  0 siblings, 2 replies; 34+ messages in thread
From: Andrew Lunn @ 2023-08-25 13:29 UTC (permalink / raw)
  To: Sabrina Dubroca
  Cc: Radu Pirea (NXP OSS),
	hkallweit1, linux, davem, edumazet, kuba, pabeni, richardcochran,
	sebastian.tobuschat, netdev, linux-kernel

On Fri, Aug 25, 2023 at 02:52:57PM +0200, Sabrina Dubroca wrote:
> [Some of the questions I'm asking are probably dumb since I don't know
> anything about phy drivers. Sorry if that's the case.]
> 
> General code organization nit: I think it would be easier to review
> the code if helpers functions were grouped by the type of object they
> work on. All the RXSA-related functions together, all the TXSA
> functions together, same for RXSC and then TXSC/SecY. Right now I see
> some RXSA functions in a group of TXSA functions, another in the
> middle of a group of RXSC functions. It makes navigating through the
> code a bit less convenient.

For networking, and Linux in general, forward declarations are not
liked. Functions should appear before they are used. That places some
restrictions on ordering, but in general you can still group code
in meaningful ways.

> 2023-08-24, 12:16:13 +0300, Radu Pirea (NXP OSS) wrote:
> > +static int nxp_c45_macsec_write(struct phy_device *phydev, u16 reg, u32 val)
> > +{
> > +	WARN_ON_ONCE(reg % 4);
> > +
> > +	reg = reg / 2;
> > +	phy_write_mmd(phydev, MDIO_MMD_VEND2,
> > +		      VEND1_MACSEC_BASE + reg, val);
> > +	phy_write_mmd(phydev, MDIO_MMD_VEND2,
> > +		      VEND1_MACSEC_BASE + reg + 1, val >> 16);
> 
> Can these calls fail? ie, do you need to handle errors like in
> nxp_c45_macsec_read (and then in callers of nxp_c45_macsec_write)?

Access to PHY devices can fail, but if it does, such failures are
generally fatal and there is no real recovery; the next read/write is
likely to fail as well. So we do recommend checking return codes and
just returning the error up the stack. That failure might get trapped
up the stack and turned into a phy_error() call, which will disable
the PHY.
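
So e.g. (sketch, based on the helper quoted above):

	static int nxp_c45_macsec_write(struct phy_device *phydev, u16 reg, u32 val)
	{
		int ret;

		WARN_ON_ONCE(reg % 4);

		reg = reg / 2;
		ret = phy_write_mmd(phydev, MDIO_MMD_VEND2,
				    VEND1_MACSEC_BASE + reg, val);
		if (ret)
			return ret;

		return phy_write_mmd(phydev, MDIO_MMD_VEND2,
				     VEND1_MACSEC_BASE + reg + 1, val >> 16);
	}

and the callers just propagate the error.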

> > +static bool nxp_c45_rx_sc_valid(struct nxp_c45_secy *phy_secy,
> > +				struct macsec_rx_sc *rx_sc)
> > +{
> > +	u16 port =  (__force u64)rx_sc->sci >> (ETH_ALEN * 8);
> 
> u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);

why is the __force needed? What happens with a normal cast?

    Andrew

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 4/5] net: phy: nxp-c45-tja11xx: add MACsec statistics
  2023-08-24  9:16 ` [RFC net-next v2 4/5] net: phy: nxp-c45-tja11xx: add MACsec statistics Radu Pirea (NXP OSS)
@ 2023-08-25 13:41   ` Sabrina Dubroca
  2023-08-25 14:22     ` Radu Pirea (OSS)
  0 siblings, 1 reply; 34+ messages in thread
From: Sabrina Dubroca @ 2023-08-25 13:41 UTC (permalink / raw)
  To: Radu Pirea (NXP OSS)
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel

2023-08-24, 12:16:14 +0300, Radu Pirea (NXP OSS) wrote:
> @@ -1323,8 +1469,10 @@ static int nxp_c45_mdo_del_txsa(struct macsec_context *ctx)
>  
>  	nxp_c45_select_secy(phydev, phy_secy->secy_id);
>  
> -	if (tx_sa->is_enabled)
> +	if (tx_sa->is_enabled) {
>  		nxp_c45_update_key_status(phydev, tx_sa);
> +		nxp_c45_tx_sa_stats_clear(phydev, tx_sa);

If the TXSA was already disabled (via mdo_upd_txsa), don't you need to
clear the stats as well?

[...]
> +static int nxp_c45_mdo_get_tx_sc_stats(struct macsec_context *ctx)
> +{
[...]
> +	nxp_c45_macsec_read(phydev, MACSEC_OOE1HS, &reg);
> +	stat = (u64)reg << 32;
> +	nxp_c45_macsec_read(phydev, MACSEC_OOE2HS, &reg);
> +	stat |= reg;
> +	if (ctx->secy->tx_sc.encrypt)
> +		tx_sc_stats->OutOctetsEncrypted = stat;
> +	else
> +		tx_sc_stats->OutOctetsEncrypted = 0;
> +
> +	if (ctx->secy->protect_frames)
> +		tx_sc_stats->OutOctetsProtected = stat;
> +	else
> +		tx_sc_stats->OutOctetsProtected = 0;

This doesn't look consistent with macsec_count_tx
(drivers/net/macsec.c).
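
(For reference, the software path counts a frame as either encrypted or
protected, not both -- paraphrasing macsec_count_tx from memory:

	if (tx_sc->encrypt) {
		/* OutOctetsEncrypted += len; OutPktsEncrypted++; */
	} else {
		/* OutOctetsProtected += len; OutPktsProtected++; */
	}

so filling both OutOctets counters from the same register when encrypt
and protect_frames are both set doesn't match that model.)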

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-25 13:29     ` Andrew Lunn
@ 2023-08-25 13:44       ` Radu Pirea (OSS)
  2023-08-25 13:50         ` Andrew Lunn
  2023-08-28 10:43       ` Sabrina Dubroca
  1 sibling, 1 reply; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-08-25 13:44 UTC (permalink / raw)
  To: Andrew Lunn, Sabrina Dubroca
  Cc: hkallweit1, linux, davem, edumazet, kuba, pabeni, richardcochran,
	sebastian.tobuschat, netdev, linux-kernel



On 25.08.2023 16:29, Andrew Lunn wrote:

>>> +static bool nxp_c45_rx_sc_valid(struct nxp_c45_secy *phy_secy,
>>> +				struct macsec_rx_sc *rx_sc)
>>> +{
>>> +	u16 port =  (__force u64)rx_sc->sci >> (ETH_ALEN * 8);
>>
>> u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);
> 
> why is the __force needed? What happens with a normal cast?
> 

Sparse will print warnings if __force is missing.

>      Andrew
> 

-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-25 13:44       ` Radu Pirea (OSS)
@ 2023-08-25 13:50         ` Andrew Lunn
  2023-08-25 14:12           ` Radu Pirea (OSS)
  2023-08-30 12:06           ` Russell King (Oracle)
  0 siblings, 2 replies; 34+ messages in thread
From: Andrew Lunn @ 2023-08-25 13:50 UTC (permalink / raw)
  To: Radu Pirea (OSS)
  Cc: Sabrina Dubroca, hkallweit1, linux, davem, edumazet, kuba,
	pabeni, richardcochran, sebastian.tobuschat, netdev,
	linux-kernel

> > > > +static bool nxp_c45_rx_sc_valid(struct nxp_c45_secy *phy_secy,
> > > > +				struct macsec_rx_sc *rx_sc)
> > > > +{
> > > > +	u16 port =  (__force u64)rx_sc->sci >> (ETH_ALEN * 8);
> > > 
> > > u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);
> > 
> > why is the __force needed? What happens with a normal cast?
> > 
> 
> Sparse will print warnings if __force is missing.

What is the warning? I just want to make sure __force is the correct
solution, not that something has the wrong type and we should be
fixing a design issue.

       Andrew
 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-25 13:50         ` Andrew Lunn
@ 2023-08-25 14:12           ` Radu Pirea (OSS)
  2023-08-30 12:06           ` Russell King (Oracle)
  1 sibling, 0 replies; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-08-25 14:12 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: Sabrina Dubroca, hkallweit1, linux, davem, edumazet, kuba,
	pabeni, richardcochran, sebastian.tobuschat, netdev,
	linux-kernel



On 25.08.2023 16:50, Andrew Lunn wrote:
>>>>> +static bool nxp_c45_rx_sc_valid(struct nxp_c45_secy *phy_secy,
>>>>> +				struct macsec_rx_sc *rx_sc)
>>>>> +{
>>>>> +	u16 port =  (__force u64)rx_sc->sci >> (ETH_ALEN * 8);
>>>>
>>>> u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);
>>>
>>> why is the __force needed? What happens with a normal cast?
>>>
>>
>> Sparse will print warnings if __force is missing.
> 
> What is the warning? I just want to make sure __force is the correct
> solution, not that something has the wrong type and we should be
> fixing a design issue.

Let's consider the following example:
Function declaration:
static int nxp_c45_macsec_write(struct phy_device *phydev, u16 reg,
				u32 val)
Call without __force:
nxp_c45_macsec_write(ctx->phydev, sa_regs->txsa_ssci,
		     (u32)cpu_to_be32(ssci));

Warning:
drivers/net/phy/nxp-c45-tja11xx-macsec.c:803:39: warning: cast from 
restricted __be32

Even if I write another function that takes a __be32 as a parameter,
I will still need to silence sparse for the phy_write_mmd calls.

And in the following example sparse will complain about the sci_t to __be64 conversion:
u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);


>         Andrew
>   

-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 4/5] net: phy: nxp-c45-tja11xx: add MACsec statistics
  2023-08-25 13:41   ` Sabrina Dubroca
@ 2023-08-25 14:22     ` Radu Pirea (OSS)
  0 siblings, 0 replies; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-08-25 14:22 UTC (permalink / raw)
  To: Sabrina Dubroca
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel



On 25.08.2023 16:41, Sabrina Dubroca wrote:
> 2023-08-24, 12:16:14 +0300, Radu Pirea (NXP OSS) wrote:
>> @@ -1323,8 +1469,10 @@ static int nxp_c45_mdo_del_txsa(struct macsec_context *ctx)
>>   
>>   	nxp_c45_select_secy(phydev, phy_secy->secy_id);
>>   
>> -	if (tx_sa->is_enabled)
>> +	if (tx_sa->is_enabled) {
>>   		nxp_c45_update_key_status(phydev, tx_sa);
>> +		nxp_c45_tx_sa_stats_clear(phydev, tx_sa);
> 
> If the TXSA was already disabled (via mdo_upd_txsa), don't you need to
> clear the stats as well?

Yes, I will clear them.

> 
> [...]
>> +static int nxp_c45_mdo_get_tx_sc_stats(struct macsec_context *ctx)
>> +{
> [...]
>> +	nxp_c45_macsec_read(phydev, MACSEC_OOE1HS, &reg);
>> +	stat = (u64)reg << 32;
>> +	nxp_c45_macsec_read(phydev, MACSEC_OOE2HS, &reg);
>> +	stat |= reg;
>> +	if (ctx->secy->tx_sc.encrypt)
>> +		tx_sc_stats->OutOctetsEncrypted = stat;
>> +	else
>> +		tx_sc_stats->OutOctetsEncrypted = 0;
>> +
>> +	if (ctx->secy->protect_frames)
>> +		tx_sc_stats->OutOctetsProtected = stat;
>> +	else
>> +		tx_sc_stats->OutOctetsProtected = 0;
> 
> This doesn't look consistent with macsec_count_tx
> (drivers/net/macsec.c).

Actually, the OutOctetsProtected register is not read from the hardware.

-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-24  9:16 ` [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support Radu Pirea (NXP OSS)
  2023-08-25 12:52   ` Sabrina Dubroca
@ 2023-08-27  8:03   ` Simon Horman
  1 sibling, 0 replies; 34+ messages in thread
From: Simon Horman @ 2023-08-27  8:03 UTC (permalink / raw)
  To: Radu Pirea (NXP OSS)
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat, netdev, linux-kernel

On Thu, Aug 24, 2023 at 12:16:13PM +0300, Radu Pirea (NXP OSS) wrote:
> Add MACsec support.
> The MACsec block has four TX SCs and four RX SCs. The driver supports up
> to four SecY. Each SecY with one TX SC and one RX SC.
> The RX SCs can have two keys, key A and key B, written in hardware and
> enabled at the same time.
> The TX SCs can have two keys written in hardware, but only one can be
> active at a given time.
> On TX, the SC is selected using the MAC source address. Due of this
> selection mechanism, each offloaded netdev must have a unique MAC
> address.
> On RX, the SC is selected by SCI(found in SecTAG or calculated using MAC
> SA), or using RX SC 0 as implicit.
> 
> Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>

...

> diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
> index c945ed9bd14b..ee53e2fdb968 100644
> --- a/drivers/net/phy/Makefile
> +++ b/drivers/net/phy/Makefile
> @@ -83,6 +83,10 @@ obj-$(CONFIG_MICROSEMI_PHY)	+= mscc/
>  obj-$(CONFIG_MOTORCOMM_PHY)	+= motorcomm.o
>  obj-$(CONFIG_NATIONAL_PHY)	+= national.o
>  obj-$(CONFIG_NCN26000_PHY)	+= ncn26000.o
> +nxp-c45-tja11xx-objs		+= nxp-c45-tja11xx.o

Hi Radu,

The coincidence of "nxp-c45-tja11xx" on both sides of the "+=" operator
seems to cause a build failure (for x86_64 allmodconfig with gcc-13).

Circular drivers/net/phy/nxp-c45-tja11xx.o <- drivers/net/phy/nxp-c45-tja11xx.o dependency dropped.

> +ifdef CONFIG_MACSEC
> +nxp-c45-tja11xx-objs		+= nxp-c45-tja11xx-macsec.o
> +endif

>  obj-$(CONFIG_NXP_C45_TJA11XX_PHY)	+= nxp-c45-tja11xx.o
>  obj-$(CONFIG_NXP_CBTX_PHY)	+= nxp-cbtx.o
>  obj-$(CONFIG_NXP_TJA11XX_PHY)	+= nxp-tja11xx.o

...

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-08-24  9:16 ` [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag Radu Pirea (NXP OSS)
@ 2023-08-27  8:05   ` Simon Horman
  2023-08-28 10:17   ` Sabrina Dubroca
  1 sibling, 0 replies; 34+ messages in thread
From: Simon Horman @ 2023-08-27  8:05 UTC (permalink / raw)
  To: Radu Pirea (NXP OSS)
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sd, sebastian.tobuschat, netdev, linux-kernel

On Thu, Aug 24, 2023 at 12:16:15PM +0300, Radu Pirea (NXP OSS) wrote:
> Implement mdo_insert_tx_tag to insert the TLV header in the ethernet
> frame.
> 
> If extscs parameter is set to 1, then the TLV header will contain the
> TX SC that will be used to encrypt the frame, otherwise the TX SC will
> be selected using the MAC source address.
> 
> Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
> ---
>  drivers/net/phy/nxp-c45-tja11xx-macsec.c | 66 ++++++++++++++++++++++++
>  1 file changed, 66 insertions(+)
> 
> diff --git a/drivers/net/phy/nxp-c45-tja11xx-macsec.c b/drivers/net/phy/nxp-c45-tja11xx-macsec.c

...

> @@ -167,6 +171,18 @@
>  #define MACSEC_INPBTS			0x0638
>  #define MACSEC_IPSNFS			0x063C
>  
> +#define TJA11XX_TLV_TX_NEEDED_HEADROOM	(32)
> +#define TJA11XX_TLV_NEEDED_TAILROOM	(0)
> +
> +#define MACSEC_TLV_CP			BIT(0)
> +#define MACSEC_TLV_SC_ID_OFF		(2)
> +
> +#define ETH_P_TJA11XX_TLV		(0x4e58)
> +
> +bool extscs;

Hi Radu,

Sparse suggests that extscs should be static.

> +module_param(extscs, bool, 0);
> +MODULE_PARM_DESC(extscs, "Select the TX SC using TLV header information.");

...

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-08-24  9:16 ` [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag Radu Pirea (NXP OSS)
  2023-08-27  8:05   ` Simon Horman
@ 2023-08-28 10:17   ` Sabrina Dubroca
  2023-08-28 13:46     ` Radu Pirea (OSS)
  1 sibling, 1 reply; 34+ messages in thread
From: Sabrina Dubroca @ 2023-08-28 10:17 UTC (permalink / raw)
  To: Radu Pirea (NXP OSS)
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel

2023-08-24, 12:16:15 +0300, Radu Pirea (NXP OSS) wrote:
> Implement mdo_insert_tx_tag to insert the TLV header in the ethernet
> frame.
> 
> If extscs parameter is set to 1, then the TLV header will contain the
> TX SC that will be used to encrypt the frame, otherwise the TX SC will
> be selected using the MAC source address.

In which case would a user choose not to use the SCI? Using the MAC
address is probably fine in basic setups, but having to fiddle with a
module parameter (so unloading and reloading the module, which means
losing network connectivity) to make things work when the setup
evolves is really not convenient.

Is there a drawback to always using the SCI?

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-25 13:29     ` Andrew Lunn
  2023-08-25 13:44       ` Radu Pirea (OSS)
@ 2023-08-28 10:43       ` Sabrina Dubroca
  1 sibling, 0 replies; 34+ messages in thread
From: Sabrina Dubroca @ 2023-08-28 10:43 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: Radu Pirea (NXP OSS),
	hkallweit1, linux, davem, edumazet, kuba, pabeni, richardcochran,
	sebastian.tobuschat, netdev, linux-kernel

2023-08-25, 15:29:30 +0200, Andrew Lunn wrote:
> On Fri, Aug 25, 2023 at 02:52:57PM +0200, Sabrina Dubroca wrote:
> > 2023-08-24, 12:16:13 +0300, Radu Pirea (NXP OSS) wrote:
> > > +static int nxp_c45_macsec_write(struct phy_device *phydev, u16 reg, u32 val)
> > > +{
> > > +	WARN_ON_ONCE(reg % 4);
> > > +
> > > +	reg = reg / 2;
> > > +	phy_write_mmd(phydev, MDIO_MMD_VEND2,
> > > +		      VEND1_MACSEC_BASE + reg, val);
> > > +	phy_write_mmd(phydev, MDIO_MMD_VEND2,
> > > +		      VEND1_MACSEC_BASE + reg + 1, val >> 16);
> > 
> > Can these calls fail? ie, do you need to handle errors like in
> > nxp_c45_macsec_read (and then in callers of nxp_c45_macsec_write)?
> 
> Access to PHY devices can fail, but if it does, such failures are
> generally fatal and there is no real recovery, also the next read/
> write is also likely to fail. So we do recommend checking return codes
> and just return the error up the stack. That failure might get trapped
> up the stack, and turned into a phy_error() call which will disable
> the PHY.

Ok, thanks. A lot of the calls to nxp_c45_macsec_write come from the
core macsec code (via mdo_*), so at least this part of the stack isn't
going to catch them. Either these errors can be caught directly in the
driver, or we'll have to ignore them (once we return from the driver
to the macsec core, we can't know if the error was fatal so we have to
assume it's not).  And phy_error's doc says it can't be called under
phydev->lock, which we're holding in all those mdo_* functions (called
from macsec_offload()).
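
For reference, a minimal sketch of what checking the return codes could look 
like in the write helper quoted above (the register math is kept exactly as 
in the patch; only the error handling is added, relying on phy_write_mmd() 
returning 0 or a negative errno):

static int nxp_c45_macsec_write(struct phy_device *phydev, u16 reg, u32 val)
{
	int ret;

	WARN_ON_ONCE(reg % 4);

	reg = reg / 2;
	/* low 16 bits first, then the high 16 bits, in consecutive registers */
	ret = phy_write_mmd(phydev, MDIO_MMD_VEND2,
			    VEND1_MACSEC_BASE + reg, val);
	if (ret)
		return ret;

	return phy_write_mmd(phydev, MDIO_MMD_VEND2,
			     VEND1_MACSEC_BASE + reg + 1, val >> 16);
}

Callers (including the mdo_* hooks) could then propagate the error up the 
stack as suggested.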

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-08-28 10:17   ` Sabrina Dubroca
@ 2023-08-28 13:46     ` Radu Pirea (OSS)
  2023-08-30 11:35       ` Sabrina Dubroca
  0 siblings, 1 reply; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-08-28 13:46 UTC (permalink / raw)
  To: Sabrina Dubroca
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel



On 28.08.2023 13:17, Sabrina Dubroca wrote:
> 2023-08-24, 12:16:15 +0300, Radu Pirea (NXP OSS) wrote:
>> Implement mdo_insert_tx_tag to insert the TLV header in the ethernet
>> frame.
>>
>> If extscs parameter is set to 1, then the TLV header will contain the
>> TX SC that will be used to encrypt the frame, otherwise the TX SC will
>> be selected using the MAC source address.
> 
> In which case would a user choose not to use the SCI? Using the MAC
> address is probably fine in basic setups, but having to fiddle with a
> module parameter (so unloading and reloading the module, which means
> losing network connectivity) to make things work when the setup
> evolves is really not convenient.
> 
> Is there a drawback to always using the SCI?
> 

I see your concern. If the PHY driver is reloaded, then the offloaded 
MACsec configuration will vanish from the hardware. Actually, just a 
call to phy_disconnect is enough to break an offloaded MACsec iface, and 
that can be triggered by:
ip link set eth0 down && ip link set eth0 up

The only drawback is related to PTP frame encryption. Due to hardware 
limitations, PHY timestamping + MACsec will not work if the custom header 
is inserted. The only way to get this to work is by using the MAC SA 
selection and running PTP on the real netdev.


-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-08-28 13:46     ` Radu Pirea (OSS)
@ 2023-08-30 11:35       ` Sabrina Dubroca
  2023-09-01  9:09         ` Radu Pirea
  0 siblings, 1 reply; 34+ messages in thread
From: Sabrina Dubroca @ 2023-08-30 11:35 UTC (permalink / raw)
  To: Radu Pirea (OSS)
  Cc: andrew, hkallweit1, linux, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel

2023-08-28, 16:46:02 +0300, Radu Pirea (OSS) wrote:
> 
> 
> On 28.08.2023 13:17, Sabrina Dubroca wrote:
> > 2023-08-24, 12:16:15 +0300, Radu Pirea (NXP OSS) wrote:
> > > Implement mdo_insert_tx_tag to insert the TLV header in the ethernet
> > > frame.
> > > 
> > > If extscs parameter is set to 1, then the TLV header will contain the
> > > TX SC that will be used to encrypt the frame, otherwise the TX SC will
> > > be selected using the MAC source address.
> > 
> > In which case would a user choose not to use the SCI? Using the MAC
> > address is probably fine in basic setups, but having to fiddle with a
> > module parameter (so unloading and reloading the module, which means
> > losing network connectivity) to make things work when the setup
> > evolves is really not convenient.
> > 
> > Is there a drawback to always using the SCI?
> > 
> 
> I see your concern. If the PHY driver is reloaded, then the offloaded MACsec
> configuration will vanish from the hardware. Actually, just a call to
> phy_disconnect is enough to break an offloaded MACsec iface and can be
> achieved by:
> ip link set eth0 down && ip link set eth0 up

And it's not restored when the link goes back up? That's inconvenient :/
Do we end up with inconsistent state? ie driver and core believe
everything is still offloaded, but HW lost all state? do we leak
some resources allocated by the driver?

We could add a flush/restore in macsec_notify when the lower device
goes down/up, maybe limited to devices that request this (I don't know
if all devices would need it, or maybe all devices offloading to the
PHY but not to the MAC).

And what happens in this case?
    ip link add link eth0 type macsec offload phy
    ip link set eth0 down
    ip macsec add macsec0 rx sci ...
    ip macsec add macsec0 tx sa 0 ...
    # etc
    ip link set eth0 up

Will offload work with the current code?

> The only drawback is related to the PTP frames encryption. Due to hardware
> limitations, PHY timestamping + MACsec will not work if the custom header is
> inserted. The only way to get this work is by using the MAC SA selection and
> running PTP on the real netdev.

Could you add some documentation explaining that? Users need this
information to make the right choice for their use case. Maybe
directly in the description of the module parameter, something like:
"Select the TX SC using TLV header information. PTP frames encryption
cannot work when this feature is enabled."

If it's in the module parameter I guess it can't be too
verbose. Otherwise I don't know where else to put it.

And the parameter's name and/or description should probably include
macsec/MACsec if it's visible at the level of the whole module (ie if
macsec support isn't a separate module), just to give context as to
what the TXSC is (and what the encryption for the PTP frames refers
to).
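
As a sketch only (the parameter name and wording below are illustrative, not 
something proposed in the patch), the suggestions above plus the earlier note 
about making the variable static could end up looking roughly like:

static bool macsec_ext_scs;
module_param(macsec_ext_scs, bool, 0);
MODULE_PARM_DESC(macsec_ext_scs,
		 "Select the MACsec TX SC using TLV header information. PTP frame encryption cannot work when this feature is enabled.");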

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support
  2023-08-25 13:50         ` Andrew Lunn
  2023-08-25 14:12           ` Radu Pirea (OSS)
@ 2023-08-30 12:06           ` Russell King (Oracle)
  1 sibling, 0 replies; 34+ messages in thread
From: Russell King (Oracle) @ 2023-08-30 12:06 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: Radu Pirea (OSS),
	Sabrina Dubroca, hkallweit1, davem, edumazet, kuba, pabeni,
	richardcochran, sebastian.tobuschat, netdev, linux-kernel

On Fri, Aug 25, 2023 at 03:50:06PM +0200, Andrew Lunn wrote:
> > > > > +static bool nxp_c45_rx_sc_valid(struct nxp_c45_secy *phy_secy,
> > > > > +				struct macsec_rx_sc *rx_sc)
> > > > > +{
> > > > > +	u16 port =  (__force u64)rx_sc->sci >> (ETH_ALEN * 8);
> > > > 
> > > > u64 sci = be64_to_cpu((__force __be64)rx_sc->sci);
> > > 
> > > why is the __force needed? What happens with a normal cast?
> > > 
> > 
> > Sparse will print warnings if __force is missing.
> 
> What is the warning? I just want to make sure __force is the correct
> solution, not that something has the wrong type and we should be
> fixing a design issue.

Hi Andrew,

rx_sc->sci is sci_t, which is defined as:

typedef u64 __bitwise sci_t;

Sparse documentation (Documentation/dev-tools/sparse.rst) states that:

  "__bitwise" is a type attribute, so you have to do something like this::
...
  which makes PM_SUSPEND and PM_RESUME "bitwise" integers (the "__force"
  is there because sparse will complain about casting to/from a bitwise
  type, but in this case we really _do_ want to force the conversion).

So basically, sci is a bitwise type, which means sparse gives it
special properties that ensures it can only be operated on using
similarly typed integers.

So, those __force casts are needed to convert sci to something else.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-08-30 11:35       ` Sabrina Dubroca
@ 2023-09-01  9:09         ` Radu Pirea
  2023-09-01  9:27           ` Russell King (Oracle)
  2023-09-01 10:07           ` Sabrina Dubroca
  0 siblings, 2 replies; 34+ messages in thread
From: Radu Pirea @ 2023-09-01  9:09 UTC (permalink / raw)
  To: atenart, Radu-nicolae Pirea (OSS), sd
  Cc: andrew, linux, hkallweit1, davem, Sebastian Tobuschat,
	linux-kernel, pabeni, richardcochran, edumazet, kuba, netdev

On Wed, 2023-08-30 at 13:35 +0200, Sabrina Dubroca wrote:
...

> And it's not restored when the link goes back up? That's inconvenient
> :/
> Do we end up with inconsistent state? ie driver and core believe
> everything is still offloaded, but HW lost all state? do we leak
> some resources allocated by the driver?

Yes. We end up with an inconsistent state. The HW will lose all state when
the PHY is reset. No resources are leaked, everything is there, but the
configuration needs to be reapplied.

> 
> We could add a flush/restore in macsec_notify when the lower device
> goes down/up, maybe limited to devices that request this (I don't
> know
> if all devices would need it, or maybe all devices offloading to the
> PHY but not to the MAC).

Agreed.
We can do a flush very simply, but to restore the configuration maybe
we should save the key in the macsec_key structure. I am not sure if
the key can be extracted from the crypto_aead structure.

> 
> And what happens in this case?
>     ip link add link eth0 type macsec offload phy
>     ip link set eth0 down
>     ip macsec add macsec0 rx sci ...
>     ip macsec add macsec0 tx sa 0 ...
>     # etc
>     ip link set eth0 up
> 
> Will offload work with the current code?

(the interface was up before)
[root@alarm ~]# ip link add link end0 macsec0 type macsec encrypt on
offload phy 
[root@alarm ~]# ip link set end0 down
[root@alarm ~]# ip macsec add macsec0 rx port 1 address
00:01:be:be:ef:33
RTNETLINK answers: Operation not supported

But let's consider the next case:
    ip link add link eth0 type macsec offload phy
    ip link set eth0 down
    ip link set eth0 up
    ip macsec add macsec0 rx sci ...
    ip macsec add macsec0 tx sa 0 ...
    # etc

In this case, any HW configuration written by .mdo_add_secy will be
lost.

> 
> > The only drawback is related to the PTP frames encryption. Due to
> > hardware
> > limitations, PHY timestamping + MACsec will not work if the custom
> > header is
> > inserted. The only way to get this work is by using the MAC SA
> > selection and
> > running PTP on the real netdev.
> 
> Could you add some documentation explaining that? Users need this
> information to make the right choice for their use case. Maybe
> directly in the description of the module parameter, something like:
> "Select the TX SC using TLV header information. PTP frames encryption
> cannot work when this feature is enabled."
> 
> If it's in the module parameter I guess it can't be too
> verbose. Otherwise I don't know where else to put it.
> 
> And the parameter's name and/or description should probably include
> macsec/MACsec if it's visible at the level of the whole module (ie if
> macsec support isn't a separate module), just to give context at to
> what the TXSC is (and what the encryption for the PTP frames refers
> to).

I will improve the comment and change the name. Thank you.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01  9:09         ` Radu Pirea
@ 2023-09-01  9:27           ` Russell King (Oracle)
  2023-09-01 11:31             ` Radu Pirea (OSS)
  2023-09-01 10:07           ` Sabrina Dubroca
  1 sibling, 1 reply; 34+ messages in thread
From: Russell King (Oracle) @ 2023-09-01  9:27 UTC (permalink / raw)
  To: Radu Pirea
  Cc: atenart, Radu-nicolae Pirea (OSS),
	sd, andrew, hkallweit1, davem, Sebastian Tobuschat, linux-kernel,
	pabeni, richardcochran, edumazet, kuba, netdev

On Fri, Sep 01, 2023 at 09:09:06AM +0000, Radu Pirea wrote:
> On Wed, 2023-08-30 at 13:35 +0200, Sabrina Dubroca wrote:
> ...
> 
> > And it's not restored when the link goes back up? That's inconvenient
> > :/
> > Do we end up with inconsistent state? ie driver and core believe
> > everything is still offloaded, but HW lost all state? do we leak
> > some resources allocated by the driver?
> 
> Yes. We end up with inconsistent state. The HW will lost all state when
> the phy is reseted. No resource is leaked, everything is there, but the
> configuration needs to be reapplied.

If it's happening because the PHY is being re-attached from the network
driver, then wouldn't it be a good idea to synchronise the hardware
state with the software configuration in the ->config_init function?

Presumably the hardware state is also lost when resuming from suspend
as well? If so, that'll also fix that issue as well.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01  9:09         ` Radu Pirea
  2023-09-01  9:27           ` Russell King (Oracle)
@ 2023-09-01 10:07           ` Sabrina Dubroca
  2023-09-01 10:32             ` Russell King (Oracle)
  2023-09-01 11:58             ` Radu Pirea (OSS)
  1 sibling, 2 replies; 34+ messages in thread
From: Sabrina Dubroca @ 2023-09-01 10:07 UTC (permalink / raw)
  To: Radu Pirea
  Cc: atenart, Radu-nicolae Pirea (OSS),
	andrew, linux, hkallweit1, davem, Sebastian Tobuschat,
	linux-kernel, pabeni, richardcochran, edumazet, kuba, netdev

2023-09-01, 09:09:06 +0000, Radu Pirea wrote:
> On Wed, 2023-08-30 at 13:35 +0200, Sabrina Dubroca wrote:
> ...
> 
> > And it's not restored when the link goes back up? That's inconvenient
> > :/
> > Do we end up with inconsistent state? ie driver and core believe
> > everything is still offloaded, but HW lost all state? do we leak
> > some resources allocated by the driver?
> 
> Yes. We end up with inconsistent state. The HW will lost all state when
> the phy is reseted. No resource is leaked, everything is there, but the
> configuration needs to be reapplied.
> 
> > 
> > We could add a flush/restore in macsec_notify when the lower device
> > goes down/up, maybe limited to devices that request this (I don't
> > know
> > if all devices would need it, or maybe all devices offloading to the
> > PHY but not to the MAC).
> 
> Agreed.
> We can do a flush very simple, but to restore the configuration maybe
> we should to save the key in the macsec_key structure. I am not sure if
> the key can be extracted from crypto_aead structure.

Either that or in the driver. I have a small preference for driver,
because then cases that don't need this restore won't have to keep the
key in memory, reducing the likelihood of accidentally sharing it.
OTOH, if we centralize that code, it's easier to make sure everything
is cleared from kernel memory when we delete the SA.


> > And what happens in this case?
> >     ip link add link eth0 type macsec offload phy
> >     ip link set eth0 down
> >     ip macsec add macsec0 rx sci ...
> >     ip macsec add macsec0 tx sa 0 ...
> >     # etc
> >     ip link set eth0 up
> > 
> > Will offload work with the current code?
> 
> (the interface was up before)
> [root@alarm ~]# ip link add link end0 macsec0 type macsec encrypt on
> offload phy 
> [root@alarm ~]# ip link set end0 down
> [root@alarm ~]# ip macsec add macsec0 rx port 1 address
> 00:01:be:be:ef:33
> RTNETLINK answers: Operation not supported

Where does that EOPNOTSUPP come from? nxp_c45_mdo_add_rxsc from this
version of the code can't return that, and macsec_add_rxsc also
shouldn't at this point.

Ideally all implementations (HW or SW) should behave the same, but at
least that saves us from having to restore state in the HW, if we
couldn't create it at all.

> But let's consider the next case:
>     ip link add link eth0 type macsec offload phy
>     ip link set eth0 down
>     ip link set eth0 up
>     ip macsec add macsec0 rx sci ...
>     ip macsec add macsec0 tx sa 0 ...
>     # etc
> 
> In this case, any HW configuration written by .mdo_add_secy will be
> lost.

So we need a way to restore the config in HW, whether that's done
entirely in the driver or initiated by macsec itself.


Antoine, is any of this relevant to the mscc driver?

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01 10:07           ` Sabrina Dubroca
@ 2023-09-01 10:32             ` Russell King (Oracle)
  2023-09-01 13:56               ` Sabrina Dubroca
  2023-09-01 11:58             ` Radu Pirea (OSS)
  1 sibling, 1 reply; 34+ messages in thread
From: Russell King (Oracle) @ 2023-09-01 10:32 UTC (permalink / raw)
  To: Sabrina Dubroca
  Cc: Radu Pirea, atenart, Radu-nicolae Pirea (OSS),
	andrew, hkallweit1, davem, Sebastian Tobuschat, linux-kernel,
	pabeni, richardcochran, edumazet, kuba, netdev

On Fri, Sep 01, 2023 at 12:07:32PM +0200, Sabrina Dubroca wrote:
> 2023-09-01, 09:09:06 +0000, Radu Pirea wrote:
> > On Wed, 2023-08-30 at 13:35 +0200, Sabrina Dubroca wrote:
> > ...
> > 
> > > And it's not restored when the link goes back up? That's inconvenient
> > > :/
> > > Do we end up with inconsistent state? ie driver and core believe
> > > everything is still offloaded, but HW lost all state? do we leak
> > > some resources allocated by the driver?
> > 
> > Yes. We end up with inconsistent state. The HW will lost all state when
> > the phy is reseted. No resource is leaked, everything is there, but the
> > configuration needs to be reapplied.
> > 
> > > 
> > > We could add a flush/restore in macsec_notify when the lower device
> > > goes down/up, maybe limited to devices that request this (I don't
> > > know
> > > if all devices would need it, or maybe all devices offloading to the
> > > PHY but not to the MAC).
> > 
> > Agreed.
> > We can do a flush very simple, but to restore the configuration maybe
> > we should to save the key in the macsec_key structure. I am not sure if
> > the key can be extracted from crypto_aead structure.
> 
> Either that or in the driver. I have a small preference for driver,
> because then cases that don't need this restore won't have to keep the
> key in memory, reducing the likelihood of accidentally sharing it.
> OTOH, if we centralize that code, it's easier to make sure everything
> is cleared from kernel memory when we delete the SA.

Maybe consider doing it as a library function, so drivers that
need this don't have to reimplement the functionality in randomly
buggy ways?

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01  9:27           ` Russell King (Oracle)
@ 2023-09-01 11:31             ` Radu Pirea (OSS)
  2023-09-01 12:45               ` Russell King (Oracle)
  0 siblings, 1 reply; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-09-01 11:31 UTC (permalink / raw)
  To: Russell King (Oracle), Radu Pirea
  Cc: atenart, sd, andrew, hkallweit1, davem, Sebastian Tobuschat,
	linux-kernel, pabeni, richardcochran, edumazet, kuba, netdev

On 01.09.2023 12:27, Russell King (Oracle) wrote:
> On Fri, Sep 01, 2023 at 09:09:06AM +0000, Radu Pirea wrote:
>> On Wed, 2023-08-30 at 13:35 +0200, Sabrina Dubroca wrote:
>> ...
>>
>>> And it's not restored when the link goes back up? That's inconvenient
>>> :/
>>> Do we end up with inconsistent state? ie driver and core believe
>>> everything is still offloaded, but HW lost all state? do we leak
>>> some resources allocated by the driver?
>>
>> Yes. We end up with inconsistent state. The HW will lost all state when
>> the phy is reseted. No resource is leaked, everything is there, but the
>> configuration needs to be reapplied.
> 
> If it's happening because the PHY is being re-attached from the network
> driver, then wouldn't it be a good idea to synchronise the hardware
> state with the software configuration in the ->config_init function?

.config_init might be an option, but keeping the keys in the driver 
might not be a good idea.

> 
> Presumably the hardware state is also lost when resuming from suspend
> as well? If so, that'll also fix that issue as well.
soft_reset is called when resuming from suspend, so, in this case, the 
MACsec configuration will be lost.

-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01 10:07           ` Sabrina Dubroca
  2023-09-01 10:32             ` Russell King (Oracle)
@ 2023-09-01 11:58             ` Radu Pirea (OSS)
  2023-09-01 13:57               ` Sabrina Dubroca
  1 sibling, 1 reply; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-09-01 11:58 UTC (permalink / raw)
  To: Sabrina Dubroca, Radu Pirea
  Cc: atenart, andrew, linux, hkallweit1, davem, Sebastian Tobuschat,
	linux-kernel, pabeni, richardcochran, edumazet, kuba, netdev

On 01.09.2023 13:07, Sabrina Dubroca wrote:
...
> 
>>> And what happens in this case?
>>>      ip link add link eth0 type macsec offload phy
>>>      ip link set eth0 down
>>>      ip macsec add macsec0 rx sci ...
>>>      ip macsec add macsec0 tx sa 0 ...
>>>      # etc
>>>      ip link set eth0 up
>>>
>>> Will offload work with the current code?
>>
>> (the interface was up before)
>> [root@alarm ~]# ip link add link end0 macsec0 type macsec encrypt on
>> offload phy
>> [root@alarm ~]# ip link set end0 down
>> [root@alarm ~]# ip macsec add macsec0 rx port 1 address
>> 00:01:be:be:ef:33
>> RTNETLINK answers: Operation not supported
> 
> Where does that EOPNOTSUPP come from? nxp_c45_mdo_add_rxsc from this
> version of the code can't return that, and macsec_add_rxsc also
> shouldn't at this point.

This is the source of -EOPNOTSUPP
https://elixir.bootlin.com/linux/latest/source/drivers/net/macsec.c#L1928

-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01 11:31             ` Radu Pirea (OSS)
@ 2023-09-01 12:45               ` Russell King (Oracle)
  0 siblings, 0 replies; 34+ messages in thread
From: Russell King (Oracle) @ 2023-09-01 12:45 UTC (permalink / raw)
  To: Radu Pirea (OSS)
  Cc: Radu Pirea, atenart, sd, andrew, hkallweit1, davem,
	Sebastian Tobuschat, linux-kernel, pabeni, richardcochran,
	edumazet, kuba, netdev

On Fri, Sep 01, 2023 at 02:31:32PM +0300, Radu Pirea (OSS) wrote:
> On 01.09.2023 12:27, Russell King (Oracle) wrote:
> > On Fri, Sep 01, 2023 at 09:09:06AM +0000, Radu Pirea wrote:
> > > On Wed, 2023-08-30 at 13:35 +0200, Sabrina Dubroca wrote:
> > > ...
> > > 
> > > > And it's not restored when the link goes back up? That's inconvenient
> > > > :/
> > > > Do we end up with inconsistent state? ie driver and core believe
> > > > everything is still offloaded, but HW lost all state? do we leak
> > > > some resources allocated by the driver?
> > > 
> > > Yes. We end up with inconsistent state. The HW will lost all state when
> > > the phy is reseted. No resource is leaked, everything is there, but the
> > > configuration needs to be reapplied.
> > 
> > If it's happening because the PHY is being re-attached from the network
> > driver, then wouldn't it be a good idea to synchronise the hardware
> > state with the software configuration in the ->config_init function?
> 
> .config_init might be an option, but keeping the keys in the driver might
> not be a good idea.
> 
> > 
> > Presumably the hardware state is also lost when resuming from suspend
> > as well? If so, that'll also fix that issue as well.
> soft_reset is called when resuming from suspend, so, in this case, the
> MACsec configuration will be lost.

Depending on what loses power at suspend time, it could be that the PHY
is powered down, and thus would lose all configuration. This is
something that the MACsec core _has_ to expect may happen, and there
has to be some way to restore the configuration, including the
keys!

One can't "write configuration to hardware and then forget" when the
hardware may lose state, no matter what the configuration is.

Take for example hibernation... where the system may be effectively
powered off - maybe even if it's a piece of mains powered equipment,
it may be unplugged from the mains. When the system resumes, shouldn't
the configuration be completely restored, keys and all, so that it
continues to function as it was before hibernation?

The only possible alternative would be to have some kind of way for
the driver to tell the core that state was lost, so the core can
invalidate that state and inform userspace of that event, so userspace
gets the opportunity itself to restore the lost state.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01 10:32             ` Russell King (Oracle)
@ 2023-09-01 13:56               ` Sabrina Dubroca
  0 siblings, 0 replies; 34+ messages in thread
From: Sabrina Dubroca @ 2023-09-01 13:56 UTC (permalink / raw)
  To: Russell King (Oracle)
  Cc: Radu Pirea, atenart, Radu-nicolae Pirea (OSS),
	andrew, hkallweit1, davem, Sebastian Tobuschat, linux-kernel,
	pabeni, richardcochran, edumazet, kuba, netdev

2023-09-01, 11:32:19 +0100, Russell King (Oracle) wrote:
> On Fri, Sep 01, 2023 at 12:07:32PM +0200, Sabrina Dubroca wrote:
> > 2023-09-01, 09:09:06 +0000, Radu Pirea wrote:
> > > We can do a flush very simple, but to restore the configuration maybe
> > > we should to save the key in the macsec_key structure. I am not sure if
> > > the key can be extracted from crypto_aead structure.
> > 
> > Either that or in the driver. I have a small preference for driver,
> > because then cases that don't need this restore won't have to keep the
> > key in memory, reducing the likelihood of accidentally sharing it.
> > OTOH, if we centralize that code, it's easier to make sure everything
> > is cleared from kernel memory when we delete the SA.
> 
> Maybe consider about doing it as a library function, so drivers that
> need this don't have to reimplement the functionality in randomly
> buggy ways?

But then the driver would depend on the macsec module, right? It's not
a large module, but that seems a bit undesirable.

I think I'd rather add the key to macsec_key, and only copy it there
in case we're offloading (we currently don't allow enabling offloading
after installing some SAs/keys so that would be fine). Maybe add a
driver flag to request keeping the keys in memory (I don't know if all
drivers will require that -- seems like all PHY drivers would, but what
about the MAC ones?).

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01 11:58             ` Radu Pirea (OSS)
@ 2023-09-01 13:57               ` Sabrina Dubroca
  2023-09-01 14:22                 ` Radu Pirea (OSS)
  0 siblings, 1 reply; 34+ messages in thread
From: Sabrina Dubroca @ 2023-09-01 13:57 UTC (permalink / raw)
  To: Radu Pirea (OSS)
  Cc: Radu Pirea, atenart, andrew, linux, hkallweit1, davem,
	Sebastian Tobuschat, linux-kernel, pabeni, richardcochran,
	edumazet, kuba, netdev

2023-09-01, 14:58:12 +0300, Radu Pirea (OSS) wrote:
> On 01.09.2023 13:07, Sabrina Dubroca wrote:
> > > (the interface was up before)
> > > [root@alarm ~]# ip link add link end0 macsec0 type macsec encrypt on
> > > offload phy
> > > [root@alarm ~]# ip link set end0 down
> > > [root@alarm ~]# ip macsec add macsec0 rx port 1 address
> > > 00:01:be:be:ef:33
> > > RTNETLINK answers: Operation not supported
> > 
> > Where does that EOPNOTSUPP come from? nxp_c45_mdo_add_rxsc from this
> > version of the code can't return that, and macsec_add_rxsc also
> > shouldn't at this point.
> 
> This is the source of -EOPNOTSUPP
> https://elixir.bootlin.com/linux/latest/source/drivers/net/macsec.c#L1928

Could you check which part of macsec_get_ops is failing? Since
macsec_newlink with "offload phy" worked, macsec_check_offload
shouldn't fail, so why does macsec_get_ops return NULL?
real_dev->phydev was NULL'ed?

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01 13:57               ` Sabrina Dubroca
@ 2023-09-01 14:22                 ` Radu Pirea (OSS)
  2023-09-01 15:37                   ` Sabrina Dubroca
  0 siblings, 1 reply; 34+ messages in thread
From: Radu Pirea (OSS) @ 2023-09-01 14:22 UTC (permalink / raw)
  To: Sabrina Dubroca
  Cc: atenart, andrew, linux, hkallweit1, davem, Sebastian Tobuschat,
	linux-kernel, pabeni, richardcochran, edumazet, kuba, netdev



On 01.09.2023 16:57, Sabrina Dubroca wrote:
> 2023-09-01, 14:58:12 +0300, Radu Pirea (OSS) wrote:
>> On 01.09.2023 13:07, Sabrina Dubroca wrote:
>>>> (the interface was up before)
>>>> [root@alarm ~]# ip link add link end0 macsec0 type macsec encrypt on
>>>> offload phy
>>>> [root@alarm ~]# ip link set end0 down
>>>> [root@alarm ~]# ip macsec add macsec0 rx port 1 address
>>>> 00:01:be:be:ef:33
>>>> RTNETLINK answers: Operation not supported
>>>
>>> Where does that EOPNOTSUPP come from? nxp_c45_mdo_add_rxsc from this
>>> version of the code can't return that, and macsec_add_rxsc also
>>> shouldn't at this point.
>>
>> This is the source of -EOPNOTSUPP
>> https://elixir.bootlin.com/linux/latest/source/drivers/net/macsec.c#L1928
> 
> Could you check which part of macsec_get_ops is failing? Since
> macsec_newlink with "offload phy" worked, macsec_check_offload
> shouldn't fail, so why does macsec_get_ops return NULL?
> real_dev->phydev was NULL'ed?

The logical AND in this check returns false:
https://elixir.bootlin.com/linux/latest/source/drivers/net/macsec.c#L343

real_dev->phydev was nulled.
The call stack is the following:
fec_enet_close -> phy_disconnect -> phy_detach ->
https://elixir.bootlin.com/linux/latest/source/drivers/net/phy/phy_device.c#L1815

-- 
Radu P.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag
  2023-09-01 14:22                 ` Radu Pirea (OSS)
@ 2023-09-01 15:37                   ` Sabrina Dubroca
  0 siblings, 0 replies; 34+ messages in thread
From: Sabrina Dubroca @ 2023-09-01 15:37 UTC (permalink / raw)
  To: Radu Pirea (OSS)
  Cc: atenart, andrew, linux, hkallweit1, davem, Sebastian Tobuschat,
	linux-kernel, pabeni, richardcochran, edumazet, kuba, netdev

2023-09-01, 17:22:49 +0300, Radu Pirea (OSS) wrote:
> 
> 
> On 01.09.2023 16:57, Sabrina Dubroca wrote:
> > 2023-09-01, 14:58:12 +0300, Radu Pirea (OSS) wrote:
> > > On 01.09.2023 13:07, Sabrina Dubroca wrote:
> > > > > (the interface was up before)
> > > > > [root@alarm ~]# ip link add link end0 macsec0 type macsec encrypt on
> > > > > offload phy
> > > > > [root@alarm ~]# ip link set end0 down
> > > > > [root@alarm ~]# ip macsec add macsec0 rx port 1 address
> > > > > 00:01:be:be:ef:33
> > > > > RTNETLINK answers: Operation not supported
> > > > 
> > > > Where does that EOPNOTSUPP come from? nxp_c45_mdo_add_rxsc from this
> > > > version of the code can't return that, and macsec_add_rxsc also
> > > > shouldn't at this point.
> > > 
> > > This is the source of -EOPNOTSUPP
> > > https://elixir.bootlin.com/linux/latest/source/drivers/net/macsec.c#L1928
> > 
> > Could you check which part of macsec_get_ops is failing? Since
> > macsec_newlink with "offload phy" worked, macsec_check_offload
> > shouldn't fail, so why does macsec_get_ops return NULL?
> > real_dev->phydev was NULL'ed?
> 
> This check logical and returns false:
> https://elixir.bootlin.com/linux/latest/source/drivers/net/macsec.c#L343
> 
> real_dev->phydev was nulled.
> The call stack is next:
> fec_enet_close -> phy_disconnect -> phy_detach ->
> https://elixir.bootlin.com/linux/latest/source/drivers/net/phy/phy_device.c#L1815

Ok, thanks for looking this up. So we can't have a consistent behavior
between SW and PHY modes unfortunately.

-- 
Sabrina


^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2023-09-01 15:38 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-24  9:16 [RFC net-next v2 0/5] Add MACsec support for TJA11XX C45 PHYs Radu Pirea (NXP OSS)
2023-08-24  9:16 ` [RFC net-next v2 1/5] net: macsec: documentation for macsec_context and macsec_ops Radu Pirea (NXP OSS)
2023-08-24 13:26   ` Antoine Tenart
2023-08-24  9:16 ` [RFC net-next v2 2/5] net: macsec: introduce mdo_insert_tx_tag Radu Pirea (NXP OSS)
2023-08-24 14:54   ` Sabrina Dubroca
2023-08-25 10:01     ` Radu Pirea (OSS)
2023-08-24  9:16 ` [RFC net-next v2 3/5] net: phy: nxp-c45-tja11xx add MACsec support Radu Pirea (NXP OSS)
2023-08-25 12:52   ` Sabrina Dubroca
2023-08-25 13:29     ` Andrew Lunn
2023-08-25 13:44       ` Radu Pirea (OSS)
2023-08-25 13:50         ` Andrew Lunn
2023-08-25 14:12           ` Radu Pirea (OSS)
2023-08-30 12:06           ` Russell King (Oracle)
2023-08-28 10:43       ` Sabrina Dubroca
2023-08-27  8:03   ` Simon Horman
2023-08-24  9:16 ` [RFC net-next v2 4/5] net: phy: nxp-c45-tja11xx: add MACsec statistics Radu Pirea (NXP OSS)
2023-08-25 13:41   ` Sabrina Dubroca
2023-08-25 14:22     ` Radu Pirea (OSS)
2023-08-24  9:16 ` [RFC net-next v2 5/5] net: phy: nxp-c45-tja11xx: implement mdo_insert_tx_tag Radu Pirea (NXP OSS)
2023-08-27  8:05   ` Simon Horman
2023-08-28 10:17   ` Sabrina Dubroca
2023-08-28 13:46     ` Radu Pirea (OSS)
2023-08-30 11:35       ` Sabrina Dubroca
2023-09-01  9:09         ` Radu Pirea
2023-09-01  9:27           ` Russell King (Oracle)
2023-09-01 11:31             ` Radu Pirea (OSS)
2023-09-01 12:45               ` Russell King (Oracle)
2023-09-01 10:07           ` Sabrina Dubroca
2023-09-01 10:32             ` Russell King (Oracle)
2023-09-01 13:56               ` Sabrina Dubroca
2023-09-01 11:58             ` Radu Pirea (OSS)
2023-09-01 13:57               ` Sabrina Dubroca
2023-09-01 14:22                 ` Radu Pirea (OSS)
2023-09-01 15:37                   ` Sabrina Dubroca

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).