* [RFC v2 net-next 0/2] Add RTNL interface for SyncE
@ 2021-08-29  8:05 ` Maciej Machnikowski
  0 siblings, 0 replies; 43+ messages in thread
From: Maciej Machnikowski @ 2021-08-29  8:05 UTC (permalink / raw)
  To: maciej.machnikowski, netdev, intel-wired-lan
  Cc: richardcochran, abyagowi, anthony.l.nguyen, davem, kuba, linux-kselftest

Synchronous Ethernet networks use a physical layer clock to syntonize
the frequency across different network elements.

Multiple reference clock sources can be used: clocks recovered from
PHY ports on the RX side, or external sources such as a 1PPS GPS signal.

This patch series introduces a basic interface for reading the DPLL
state on a SyncE-capable device. This state indicates the source of
the syntonization signal and whether the DPLL circuit is tuned to the
incoming signal.
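
For illustration only (not part of this series), a minimal user space
query of the new message could look roughly like the sketch below. It
assumes the UAPI additions from this series are installed, uses "eth0"
purely as an example interface, and omits error handling:

#include <stdio.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct if_synce_state_msg msg;
	} req = {
		.nlh = {
			.nlmsg_len = NLMSG_LENGTH(sizeof(struct if_synce_state_msg)),
			.nlmsg_type = RTM_GETSYNCESTATE,
			.nlmsg_flags = NLM_F_REQUEST,
		},
		.msg = { .ifindex = if_nametoindex("eth0") },
	};
	char buf[4096];
	struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	send(fd, &req, req.nlh.nlmsg_len, 0);
	/* The kernel answers with the same fixed-size structure filled in */
	if (recv(fd, buf, sizeof(buf), 0) > 0 &&
	    nlh->nlmsg_type == RTM_GETSYNCESTATE) {
		struct if_synce_state_msg *st = NLMSG_DATA(nlh);

		printf("state %u src %u pin %u\n", st->state, st->src, st->pin);
	}
	close(fd);
	return 0;
}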

Next steps:
 - add an interface to enable recovered clocks and query information
   about them

v2:
- removed whitespace changes
- fixed issues reported by the kernel test robot

Maciej Machnikowski (2):
  rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  ice: add support for reading SyncE DPLL state

 drivers/net/ethernet/intel/ice/ice.h          |  5 ++
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   | 34 ++++++++
 drivers/net/ethernet/intel/ice/ice_common.c   | 62 +++++++++++++++
 drivers/net/ethernet/intel/ice/ice_common.h   |  4 +
 drivers/net/ethernet/intel/ice/ice_devids.h   |  3 +
 drivers/net/ethernet/intel/ice/ice_main.c     | 55 +++++++++++++
 drivers/net/ethernet/intel/ice/ice_ptp.c      | 35 +++++++++
 drivers/net/ethernet/intel/ice/ice_ptp_hw.c   | 44 +++++++++++
 drivers/net/ethernet/intel/ice/ice_ptp_hw.h   | 22 ++++++
 include/linux/netdevice.h                     |  6 ++
 include/uapi/linux/if_link.h                  | 43 +++++++++++
 include/uapi/linux/rtnetlink.h                | 11 ++-
 net/core/rtnetlink.c                          | 77 +++++++++++++++++++
 security/selinux/nlmsgtab.c                   |  3 +-
 14 files changed, 399 insertions(+), 5 deletions(-)

-- 
2.26.3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-29  8:05 ` [Intel-wired-lan] " Maciej Machnikowski
@ 2021-08-29  8:05   ` Maciej Machnikowski
  -1 siblings, 0 replies; 43+ messages in thread
From: Maciej Machnikowski @ 2021-08-29  8:05 UTC (permalink / raw)
  To: maciej.machnikowski, netdev, intel-wired-lan
  Cc: richardcochran, abyagowi, anthony.l.nguyen, davem, kuba, linux-kselftest

This patch adds the new RTM_GETSYNCESTATE message to query the status
of SyncE syntonization on the device.

The initial implementation returns:
 - the SyncE DPLL state
 - the source of the signal driving the SyncE DPLL (SyncE, GNSS, PTP or
   external)
 - the index of the pin currently driving the DPLL

Drivers report the SyncE state by implementing the ndo_get_synce_state
callback.

This patch is SyncE-oriented. Future implementations can extend the same
structure to report the state of other DPLLs.
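
As a rough sketch of the driver-side contract (a hypothetical "foo"
driver, not part of this series; the foo_priv fields are made up):

#include <linux/netdevice.h>

struct foo_priv {
	enum if_synce_state eec_state;
	enum if_synce_src eec_src;
	u8 eec_pin;
};

static int foo_get_synce_state(struct net_device *netdev,
			       enum if_synce_state *state,
			       enum if_synce_src *src, u8 *pin_idx)
{
	struct foo_priv *priv = netdev_priv(netdev);

	/* Defensively handle NULL output pointers, as the ice
	 * implementation in this series does.
	 */
	if (state)
		*state = priv->eec_state;
	if (src)
		*src = priv->eec_src;
	if (pin_idx)
		*pin_idx = priv->eec_pin;

	return 0;
}

static const struct net_device_ops foo_netdev_ops = {
	.ndo_get_synce_state	= foo_get_synce_state,
};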

Signed-off-by: Maciej Machnikowski <maciej.machnikowski@intel.com>
---
 include/linux/netdevice.h      |  6 +++
 include/uapi/linux/if_link.h   | 43 +++++++++++++++++++
 include/uapi/linux/rtnetlink.h | 11 +++--
 net/core/rtnetlink.c           | 77 ++++++++++++++++++++++++++++++++++
 security/selinux/nlmsgtab.c    |  3 +-
 5 files changed, 135 insertions(+), 5 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 6fd3a4d42668..a091a35706a7 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1344,6 +1344,8 @@ struct netdev_net_notifier {
  *	The caller must be under RCU read context.
  * int (*ndo_fill_forward_path)(struct net_device_path_ctx *ctx, struct net_device_path *path);
  *     Get the forwarding path to reach the real device from the HW destination address
+ * int (*ndo_get_synce_state)(struct net_device *dev, struct if_synce_state_msg *state)
+ *	Get state of physical layer frequency syntonization (SyncE)
  */
 struct net_device_ops {
 	int			(*ndo_init)(struct net_device *dev);
@@ -1563,6 +1565,10 @@ struct net_device_ops {
 	struct net_device *	(*ndo_get_peer_dev)(struct net_device *dev);
 	int                     (*ndo_fill_forward_path)(struct net_device_path_ctx *ctx,
                                                          struct net_device_path *path);
+	int			(*ndo_get_synce_state)(struct net_device *dev,
+						       enum if_synce_state *state,
+						       enum if_synce_src *src,
+						       u8 *pin_idx);
 };
 
 /**
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index eebd3894fe89..1fcb9c71dc7c 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -1273,4 +1273,47 @@ enum {
 
 #define IFLA_MCTP_MAX (__IFLA_MCTP_MAX - 1)
 
+/* SyncE section */
+
+enum if_synce_state {
+	IF_SYNCE_STATE_INVALID = 0,
+	IF_SYNCE_STATE_FREERUN,
+	IF_SYNCE_STATE_LOCKACQ,
+	IF_SYNCE_STATE_LOCKREC,
+	IF_SYNCE_STATE_LOCKED,
+	IF_SYNCE_STATE_HOLDOVER,
+	IF_SYNCE_STATE_OPEN_LOOP,
+	__IF_SYNCE_STATE_MAX,
+};
+
+#define IF_SYNCE_STATE_MAX (__IF_SYNCE_STATE_MAX - 1)
+
+enum if_synce_src {
+	IF_SYNCE_SRC_INVALID = 0,
+	IF_SYNCE_SRC_UNKNOWN,
+	IF_SYNCE_SRC_SYNCE,
+	IF_SYNCE_SRC_GNSS,
+	IF_SYNCE_SRC_PTP,
+	IF_SYNCE_SRC_EXT,
+	__IF_SYNCE_SRC_MAX,
+};
+
+#define IF_SYNCE_PIN_UNKNOWN	0xFF
+
+struct if_synce_state_msg {
+	__u32 ifindex;
+	__u8 state;
+	__u8 src;
+	__u8 pin;
+	__u8 pad;
+};
+
+enum {
+	IFLA_SYNCE_UNSPEC,
+	IFLA_SYNCE_STATE,
+	__IFLA_SYNCE_MAX,
+};
+
+#define IFLA_SYNCE_MAX (__IFLA_SYNCE_MAX - 1)
+
 #endif /* _UAPI_LINUX_IF_LINK_H */
diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
index 5888492a5257..cd68045c475b 100644
--- a/include/uapi/linux/rtnetlink.h
+++ b/include/uapi/linux/rtnetlink.h
@@ -185,6 +185,9 @@ enum {
 	RTM_GETNEXTHOPBUCKET,
 #define RTM_GETNEXTHOPBUCKET	RTM_GETNEXTHOPBUCKET
 
+	RTM_GETSYNCESTATE = 120,
+#define RTM_GETSYNCESTATE	RTM_GETSYNCESTATE
+
 	__RTM_MAX,
 #define RTM_MAX		(((__RTM_MAX + 3) & ~3) - 1)
 };
@@ -193,7 +196,7 @@ enum {
 #define RTM_NR_FAMILIES	(RTM_NR_MSGTYPES >> 2)
 #define RTM_FAM(cmd)	(((cmd) - RTM_BASE) >> 2)
 
-/* 
+/*
    Generic structure for encapsulation of optional route information.
    It is reminiscent of sockaddr, but with sa_family replaced
    with attribute type.
@@ -233,7 +236,7 @@ struct rtmsg {
 
 	unsigned char		rtm_table;	/* Routing table id */
 	unsigned char		rtm_protocol;	/* Routing protocol; see below	*/
-	unsigned char		rtm_scope;	/* See below */	
+	unsigned char		rtm_scope;	/* See below */
 	unsigned char		rtm_type;	/* See below	*/
 
 	unsigned		rtm_flags;
@@ -555,7 +558,7 @@ struct ifinfomsg {
 };
 
 /********************************************************************
- *		prefix information 
+ *		prefix information
  ****/
 
 struct prefixmsg {
@@ -569,7 +572,7 @@ struct prefixmsg {
 	unsigned char	prefix_pad3;
 };
 
-enum 
+enum
 {
 	PREFIX_UNSPEC,
 	PREFIX_ADDRESS,
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index 972c8cb303a5..8c9638421049 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -5468,6 +5468,81 @@ static int rtnl_stats_dump(struct sk_buff *skb, struct netlink_callback *cb)
 	return skb->len;
 }
 
+static int rtnl_fill_synce_state(struct sk_buff *msg, struct net_device *dev,
+				 u32 portid, u32 seq,
+				 struct netlink_callback *cb,
+				 int flags)
+{
+	const struct net_device_ops *ops = dev->netdev_ops;
+	struct if_synce_state_msg *state;
+	struct nlmsghdr *nlh;
+
+	ASSERT_RTNL();
+
+	nlh = nlmsg_put(msg, portid, seq, RTM_GETSYNCESTATE,
+			sizeof(*state), flags);
+	if (!nlh)
+		return -EMSGSIZE;
+
+	state = nlmsg_data(nlh);
+
+	if (ops->ndo_get_synce_state) {
+		enum if_synce_state sync_state;
+		enum if_synce_src src;
+		int err;
+		u8 pin;
+
+		err = ops->ndo_get_synce_state(dev, &sync_state, &src, &pin);
+		if (err)
+			return err;
+
+		memset(state, 0, sizeof(*state));
+
+		state->ifindex = dev->ifindex;
+		state->state = (u8)sync_state;
+		state->pin = pin;
+		state->src = (u8)src;
+
+		return 0;
+	}
+
+	return -EOPNOTSUPP;
+}
+
+static int rtnl_synce_state_get(struct sk_buff *skb, struct nlmsghdr *nlh,
+				struct netlink_ext_ack *extack)
+{
+	struct net *net = sock_net(skb->sk);
+	struct if_synce_state_msg *state;
+	struct net_device *dev = NULL;
+	struct sk_buff *nskb;
+	u32 filter_mask;
+	int err;
+
+	state = nlmsg_data(nlh);
+	if (state->ifindex > 0)
+		dev = __dev_get_by_index(net, state->ifindex);
+	else
+		return -EINVAL;
+
+	nskb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!nskb)
+		return -ENOBUFS;
+
+	if (!dev)
+		return -ENODEV;
+
+	err = rtnl_fill_synce_state(nskb, dev,
+				    NETLINK_CB(skb).portid, nlh->nlmsg_seq,
+				    NULL, filter_mask);
+	if (err < 0)
+		kfree_skb(nskb);
+	else
+		err = rtnl_unicast(nskb, net, NETLINK_CB(skb).portid);
+
+	return err;
+}
+
 /* Process one rtnetlink message. */
 
 static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
@@ -5693,4 +5768,6 @@ void __init rtnetlink_init(void)
 
 	rtnl_register(PF_UNSPEC, RTM_GETSTATS, rtnl_stats_get, rtnl_stats_dump,
 		      0);
+
+	rtnl_register(PF_UNSPEC, RTM_GETSYNCESTATE, rtnl_synce_state_get, NULL, 0);
 }
diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c
index d59276f48d4f..b4bea120f2af 100644
--- a/security/selinux/nlmsgtab.c
+++ b/security/selinux/nlmsgtab.c
@@ -91,6 +91,7 @@ static const struct nlmsg_perm nlmsg_route_perms[] =
 	{ RTM_NEWNEXTHOPBUCKET,	NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
 	{ RTM_DELNEXTHOPBUCKET,	NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
 	{ RTM_GETNEXTHOPBUCKET,	NETLINK_ROUTE_SOCKET__NLMSG_READ  },
+	{ RTM_GETSYNCESTATE,	NETLINK_ROUTE_SOCKET__NLMSG_READ  },
 };
 
 static const struct nlmsg_perm nlmsg_tcpdiag_perms[] =
@@ -174,7 +175,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
 		 * structures at the top of this file with the new mappings
 		 * before updating the BUILD_BUG_ON() macro!
 		 */
-		BUILD_BUG_ON(RTM_MAX != (RTM_NEWNEXTHOPBUCKET + 3));
+		BUILD_BUG_ON(RTM_MAX != (RTM_GETSYNCESTATE + 3));
 		err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms,
 				 sizeof(nlmsg_route_perms));
 		break;
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [RFC v2 net-next 2/2] ice: add support for reading SyncE DPLL state
  2021-08-29  8:05 ` [Intel-wired-lan] " Maciej Machnikowski
@ 2021-08-29  8:05   ` Maciej Machnikowski
  -1 siblings, 0 replies; 43+ messages in thread
From: Maciej Machnikowski @ 2021-08-29  8:05 UTC (permalink / raw)
  To: maciej.machnikowski, netdev, intel-wired-lan
  Cc: richardcochran, abyagowi, anthony.l.nguyen, davem, kuba, linux-kselftest

Implement SyncE DPLL monitoring for E810-T devices.
The PTP poll loop periodically checks the state of the DPLL and caches it
in the pf structure. State changes are logged in the system log.

The cached state can be read using the RTM_GETSYNCESTATE rtnetlink
message.

Different SyncE sources will be reported depending on the pin driving
the DPLL:
 - pins 0-1: can be driven by the PTP clock
 - pins 2-5: are used by SyncE recovered clocks
 - pins 6-7: can be used to connect external frequency sources
 - pin 8: is connected to the optional GNSS receiver

Signed-off-by: Maciej Machnikowski <maciej.machnikowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |  5 ++
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   | 34 ++++++++++
 drivers/net/ethernet/intel/ice/ice_common.c   | 62 +++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_common.h   |  4 ++
 drivers/net/ethernet/intel/ice/ice_devids.h   |  3 +
 drivers/net/ethernet/intel/ice/ice_main.c     | 55 ++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_ptp.c      | 35 +++++++++++
 drivers/net/ethernet/intel/ice/ice_ptp_hw.c   | 44 +++++++++++++
 drivers/net/ethernet/intel/ice/ice_ptp_hw.h   | 22 +++++++
 9 files changed, 264 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index eadcb9958346..c10ff638cddb 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -508,6 +508,11 @@ struct ice_pf {
 #define ICE_VF_AGG_NODE_ID_START	65
 #define ICE_MAX_VF_AGG_NODES		32
 	struct ice_agg_node vf_agg_node[ICE_MAX_VF_AGG_NODES];
+
+	enum if_synce_state synce_dpll_state;
+	u8 synce_dpll_pin;
+	enum if_synce_state ptp_dpll_state;
+	u8 ptp_dpll_pin;
 };
 
 struct ice_netdev_priv {
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 21b4c7cd6f05..b84da5e9d025 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1727,6 +1727,36 @@ struct ice_aqc_add_rdma_qset_data {
 	struct ice_aqc_add_tx_rdma_qset_entry rdma_qsets[];
 };
 
+/* Get CGU DPLL status (direct 0x0C66) */
+struct ice_aqc_get_cgu_dpll_status {
+	u8 dpll_num;
+	u8 ref_state;
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_LOS		BIT(0)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_SCM		BIT(1)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_CFM		BIT(2)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_GST		BIT(3)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_PFM		BIT(4)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_ESYNC	BIT(6)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_FAST_LOCK_EN	BIT(7)
+	__le16 dpll_state;
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_LOCK		BIT(0)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO		BIT(1)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO_READY	BIT(2)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_FLHIT		BIT(5)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_PSLHIT	BIT(7)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SHIFT	8
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SEL	\
+	ICE_M(0x1F, ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SHIFT)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_MODE_SHIFT	13
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_MODE \
+	ICE_M(0x7, ICE_AQC_GET_CGU_DPLL_STATUS_STATE_MODE_SHIFT)
+	__le32 phase_offset_h;
+	__le32 phase_offset_l;
+	u8 eec_mode;
+	u8 rsvd[1];
+	__le16 node_handle;
+};
+
 /* Configure Firmware Logging Command (indirect 0xFF09)
  * Logging Information Read Response (indirect 0xFF10)
  * Note: The 0xFF10 command has no input parameters.
@@ -1954,6 +1984,7 @@ struct ice_aq_desc {
 		struct ice_aqc_fw_logging fw_logging;
 		struct ice_aqc_get_clear_fw_log get_clear_fw_log;
 		struct ice_aqc_download_pkg download_pkg;
+		struct ice_aqc_get_cgu_dpll_status get_cgu_dpll_status;
 		struct ice_aqc_driver_shared_params drv_shared_params;
 		struct ice_aqc_set_mac_lb set_mac_lb;
 		struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
@@ -2108,6 +2139,9 @@ enum ice_adminq_opc {
 	ice_aqc_opc_update_pkg				= 0x0C42,
 	ice_aqc_opc_get_pkg_info_list			= 0x0C43,
 
+	/* 1588/SyncE commands/events */
+	ice_aqc_opc_get_cgu_dpll_status			= 0x0C66,
+
 	ice_aqc_opc_driver_shared_params		= 0x0C90,
 
 	/* Standalone Commands/Events */
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 2fb81e359cdf..e7474643a421 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -69,6 +69,31 @@ bool ice_is_e810(struct ice_hw *hw)
 	return hw->mac_type == ICE_MAC_E810;
 }
 
+/**
+ * ice_is_e810t
+ * @hw: pointer to the hardware structure
+ *
+ * returns true if the device is E810T based, false if not.
+ */
+bool ice_is_e810t(struct ice_hw *hw)
+{
+	switch (hw->device_id) {
+	case ICE_DEV_ID_E810C_SFP:
+		if (hw->subsystem_device_id == ICE_SUBDEV_ID_E810T ||
+		    hw->subsystem_device_id == ICE_SUBDEV_ID_E810T2)
+			return true;
+		break;
+	case ICE_DEV_ID_E810C_QSFP:
+		if (hw->subsystem_device_id == ICE_SUBDEV_ID_E810T2)
+			return true;
+		break;
+	default:
+		break;
+	}
+
+	return false;
+}
+
 /**
  * ice_clear_pf_cfg - Clear PF configuration
  * @hw: pointer to the hardware structure
@@ -4520,6 +4545,42 @@ ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
 	return ice_status_to_errno(status);
 }
 
+/**
+ * ice_aq_get_cgu_dpll_status
+ * @hw: pointer to the HW struct
+ * @dpll_num: DPLL index
+ * @ref_state: Reference clock state
+ * @dpll_state: DPLL state
+ * @phase_offset: Phase offset in ps
+ * @eec_mode: EEC_mode
+ *
+ * Get CGU DPLL status (0x0C66)
+ */
+enum ice_status
+ice_aq_get_cgu_dpll_status(struct ice_hw *hw, u8 dpll_num, u8 *ref_state,
+			   u16 *dpll_state, u64 *phase_offset, u8 *eec_mode)
+{
+	struct ice_aqc_get_cgu_dpll_status *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_dpll_status);
+	cmd = &desc.params.get_cgu_dpll_status;
+	cmd->dpll_num = dpll_num;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	if (!status) {
+		*ref_state = cmd->ref_state;
+		*dpll_state = le16_to_cpu(cmd->dpll_state);
+		*phase_offset = le32_to_cpu(cmd->phase_offset_h);
+		*phase_offset <<= 32;
+		*phase_offset += le32_to_cpu(cmd->phase_offset_l);
+		*eec_mode = cmd->eec_mode;
+	}
+
+	return status;
+}
+
 /**
  * ice_replay_pre_init - replay pre initialization
  * @hw: pointer to the HW struct
@@ -4974,3 +5035,4 @@ bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw)
 	}
 	return false;
 }
+
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index fb16070f02e2..ccd76c0cbf2c 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -100,6 +100,7 @@ enum ice_status
 ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
 			struct ice_sq_cd *cd);
 bool ice_is_e810(struct ice_hw *hw);
+bool ice_is_e810t(struct ice_hw *hw);
 enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
 enum ice_status
 ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
@@ -156,6 +157,9 @@ ice_cfg_vsi_rdma(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap,
 int
 ice_ena_vsi_rdma_qset(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 		      u16 *rdma_qset, u16 num_qsets, u32 *qset_teid);
+enum ice_status
+ice_aq_get_cgu_dpll_status(struct ice_hw *hw, u8 dpll_num, u8 *ref_state,
+			   u16 *dpll_state, u64 *phase_offset, u8 *eec_mode);
 int
 ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
 		      u16 *q_id);
diff --git a/drivers/net/ethernet/intel/ice/ice_devids.h b/drivers/net/ethernet/intel/ice/ice_devids.h
index 9d8194671f6a..e52dbeddb783 100644
--- a/drivers/net/ethernet/intel/ice/ice_devids.h
+++ b/drivers/net/ethernet/intel/ice/ice_devids.h
@@ -52,4 +52,7 @@
 /* Intel(R) Ethernet Connection E822-L 1GbE */
 #define ICE_DEV_ID_E822L_SGMII		0x189A
 
+#define ICE_SUBDEV_ID_E810T		0x000E
+#define ICE_SUBDEV_ID_E810T2		0x000F
+
 #endif /* _ICE_DEVIDS_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 60d55d043a94..a0f4c200394f 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -5973,6 +5973,60 @@ static void ice_napi_disable_all(struct ice_vsi *vsi)
 	}
 }
 
+/**
+ * ice_get_synce_state - get state of SyncE DPLL
+ * @netdev: network interface device structure
+ * @state: state of SyncE DPLL
+ * @src: source type driving SyncE DPLL
+ * @pin_idx: index of pin driving SyncE DPLL
+ */
+static int
+ice_get_synce_state(struct net_device *netdev, enum if_synce_state *state,
+		    enum if_synce_src *src, u8 *pin_idx)
+{
+	struct ice_netdev_priv *np = netdev_priv(netdev);
+	struct ice_vsi *vsi = np->vsi;
+	struct ice_pf *pf = vsi->back;
+
+	if (!ice_is_e810t(&pf->hw))
+		return -EOPNOTSUPP;
+
+	if (state)
+		*state = pf->synce_dpll_state;
+	if (pin_idx) {
+		*pin_idx = pf->synce_dpll_pin;
+	if (src)
+		switch (pf->synce_dpll_pin) {
+		case REF0P:
+		case REF0N:
+			*src = IF_SYNCE_SRC_PTP;
+			break;
+		case REF1P:
+		case REF1N:
+		case REF2P:
+		case REF2N:
+			*src = IF_SYNCE_SRC_SYNCE;
+			break;
+		case REF3P:
+		case REF3N:
+			*src = IF_SYNCE_SRC_EXT;
+			break;
+		case REF4P:
+			*src = IF_SYNCE_SRC_GNSS;
+			break;
+		default:
+			*src = IF_SYNCE_SRC_INVALID;
+			break;
+		}
+
+		/* Always report invalid source if state is not Locked */
+		if (pf->synce_dpll_state != IF_SYNCE_STATE_LOCKED)
+			*src = IF_SYNCE_SRC_INVALID;
+	}
+
+	return 0;
+}
+
 /**
  * ice_down - Shutdown the connection
  * @vsi: The VSI being stopped
@@ -7263,4 +7317,5 @@ static const struct net_device_ops ice_netdev_ops = {
 	.ndo_bpf = ice_xdp,
 	.ndo_xdp_xmit = ice_xdp_xmit,
 	.ndo_xsk_wakeup = ice_xsk_wakeup,
+	.ndo_get_synce_state = ice_get_synce_state,
 };
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
index 9e3ddb9b8b51..8a133ffb70c5 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -1370,6 +1370,36 @@ static void ice_ptp_tx_tstamp_cleanup(struct ice_ptp_tx *tx)
 	}
 }
 
+static void ice_handle_cgu_state(struct ice_pf *pf)
+{
+	enum if_synce_state cgu_state;
+	u8 pin;
+
+	cgu_state = ice_get_zl_state_e810t(&pf->hw, ICE_CGU_DPLL_SYNCE, &pin);
+	if (pf->synce_dpll_state != cgu_state) {
+		pf->synce_dpll_state = cgu_state;
+		pf->synce_dpll_pin = pin;
+
+		dev_warn(ice_pf_to_dev(pf),
+			 "<DPLL%i> state changed to: %d, pin %d",
+			 ICE_CGU_DPLL_SYNCE,
+			 pf->synce_dpll_state,
+			 pin);
+	}
+
+	cgu_state = ice_get_zl_state_e810t(&pf->hw, ICE_CGU_DPLL_PTP, &pin);
+	if (pf->ptp_dpll_state != cgu_state) {
+		pf->ptp_dpll_state = cgu_state;
+		pf->ptp_dpll_pin = pin;
+
+		dev_warn(ice_pf_to_dev(pf),
+			 "<DPLL%i> state changed to: %d, pin %d",
+			 ICE_CGU_DPLL_PTP,
+			 pf->ptp_dpll_state,
+			 pin);
+	}
+}
+
 static void ice_ptp_periodic_work(struct kthread_work *work)
 {
 	struct ice_ptp *ptp = container_of(work, struct ice_ptp, work.work);
@@ -1378,6 +1408,10 @@ static void ice_ptp_periodic_work(struct kthread_work *work)
 	if (!test_bit(ICE_FLAG_PTP, pf->flags))
 		return;
 
+	if (ice_is_e810t(&pf->hw) &&
+	    pf->hw.func_caps.ts_func_info.src_tmr_owned)
+		ice_handle_cgu_state(pf);
+
 	ice_ptp_update_cached_phctime(pf);
 
 	ice_ptp_tx_tstamp_cleanup(&pf->ptp.port.tx);
@@ -1556,3 +1590,4 @@ void ice_ptp_release(struct ice_pf *pf)
 
 	dev_info(ice_pf_to_dev(pf), "Removed PTP clock\n");
 }
+
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
index 3eca0e4eab0b..4bd7bc10c237 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
@@ -375,6 +375,50 @@ static int ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
 	return 0;
 }
 
+/**
+ * ice_get_zl_state_e810t - get the state of the DPLL
+ * @hw: pointer to the hw struct
+ * @dpll_idx: Index of internal DPLL unit
+ * @pin: pointer to a buffer for returning currently active pin
+ *
+ * This function will read the state of the DPLL(dpll_idx). If optional
+ * parameter pin is given it'll be used to retrieve currently active pin.
+ *
+ * Return: state of the DPLL
+ */
+enum if_synce_state
+ice_get_zl_state_e810t(struct ice_hw *hw, u8 dpll_idx, u8 *pin)
+{
+	enum ice_status status;
+	u64 phase_offset;
+	u16 dpll_state;
+	u8 ref_state;
+	u8 eec_mode;
+
+	if (dpll_idx >= ICE_CGU_DPLL_MAX)
+		return IF_SYNCE_STATE_INVALID;
+
+	status = ice_aq_get_cgu_dpll_status(hw, dpll_idx, &ref_state,
+					    &dpll_state, &phase_offset,
+					    &eec_mode);
+	if (status)
+		return IF_SYNCE_STATE_INVALID;
+
+	if (pin) {
+		/* current ref pin in dpll_state_refsel_status_X register */
+		*pin = (dpll_state &
+			ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SEL) >>
+		       ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SHIFT;
+	}
+
+	if (dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_LOCK)
+		return IF_SYNCE_STATE_LOCKED;
+	else if (dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO)
+		return IF_SYNCE_STATE_HOLDOVER;
+
+	return IF_SYNCE_STATE_FREERUN;
+}
+
 /* Device agnostic functions
  *
  * The following functions implement useful behavior to hide the differences
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
index 55a414e87018..a46f634db68c 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
@@ -30,6 +30,8 @@ int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx);
 
 /* E810 family functions */
 int ice_ptp_init_phy_e810(struct ice_hw *hw);
+enum if_synce_state
+ice_get_zl_state_e810t(struct ice_hw *hw, u8 dpll_idx, u8 *pin);
 
 #define PFTSYN_SEM_BYTES	4
 
@@ -76,4 +78,24 @@ int ice_ptp_init_phy_e810(struct ice_hw *hw);
 #define LOW_TX_MEMORY_BANK_START	0x03090000
 #define HIGH_TX_MEMORY_BANK_START	0x03090004
 
+enum ice_e810t_cgu_dpll {
+	ICE_CGU_DPLL_SYNCE,
+	ICE_CGU_DPLL_PTP,
+	ICE_CGU_DPLL_MAX
+};
+
+enum ice_e810t_cgu_pins {
+	REF0P,
+	REF0N,
+	REF1P,
+	REF1N,
+	REF2P,
+	REF2N,
+	REF3P,
+	REF3N,
+	REF4P,
+	REF4N,
+	NUM_E810T_CGU_PINS
+};
+
 #endif /* _ICE_PTP_HW_H_ */
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 2/2] ice: add support for reading SyncE DPLL state
  2021-08-29  8:05   ` [Intel-wired-lan] " Maciej Machnikowski
  (?)
@ 2021-08-29 11:11   ` kernel test robot
  -1 siblings, 0 replies; 43+ messages in thread
From: kernel test robot @ 2021-08-29 11:11 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 3579 bytes --]

Hi Maciej,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Maciej-Machnikowski/Add-RTNL-interface-for-SyncE/20210829-162237
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git c77225119daa0ca0a9c6c007945c0a87b3e4a2e8
config: microblaze-randconfig-r002-20210829 (attached as .config)
compiler: microblaze-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/74dd8e1aad3f2d707f68559964c98cbc436da95c
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Maciej-Machnikowski/Add-RTNL-interface-for-SyncE/20210829-162237
        git checkout 74dd8e1aad3f2d707f68559964c98cbc436da95c
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=microblaze 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/net/ethernet/intel/ice/ice_main.c: In function 'ice_get_synce_state':
>> drivers/net/ethernet/intel/ice/ice_main.c:5998:9: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
    5998 |         if (src)
         |         ^~
   drivers/net/ethernet/intel/ice/ice_main.c:6023:17: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
    6023 |                 if (pf->synce_dpll_state != IF_SYNCE_STATE_LOCKED)
         |                 ^~


vim +/if +5998 drivers/net/ethernet/intel/ice/ice_main.c

  5975	
  5976	/**
  5977	 * ice_get_synce_state - get state of SyncE DPLL
  5978	 * @netdev: network interface device structure
  5979	 * @state: state of SyncE DPLL
  5980	 * @src: source type driving SyncE DPLL
  5981	 * @pin_idx: index of pin driving SyncE DPLL
  5982	 */
  5983	static int
  5984	ice_get_synce_state(struct net_device *netdev, enum if_synce_state *state,
  5985			    enum if_synce_src *src, u8 *pin_idx)
  5986	{
  5987		struct ice_netdev_priv *np = netdev_priv(netdev);
  5988		struct ice_vsi *vsi = np->vsi;
  5989		struct ice_pf *pf = vsi->back;
  5990	
  5991		if (!ice_is_e810t(&pf->hw))
  5992			return -EOPNOTSUPP;
  5993	
  5994		if (state)
  5995			*state = pf->synce_dpll_state;
  5996		if (pin_idx) {
  5997			*pin_idx = pf->synce_dpll_pin;
> 5998		if (src)
  5999			switch (pf->synce_dpll_pin) {
  6000			case REF0P:
  6001			case REF0N:
  6002				*src = IF_SYNCE_SRC_PTP;
  6003				break;
  6004			case REF1P:
  6005			case REF1N:
  6006			case REF2P:
  6007			case REF2N:
  6008				*src = IF_SYNCE_SRC_SYNCE;
  6009				break;
  6010			case REF3P:
  6011			case REF3N:
  6012				*src = IF_SYNCE_SRC_EXT;
  6013				break;
  6014			case REF4P:
  6015				*src = IF_SYNCE_SRC_GNSS;
  6016				break;
  6017			default:
  6018				*src = IF_SYNCE_SRC_INVALID;
  6019				break;
  6020			}
  6021	
  6022			/* Always report invalid source if state is not Locked */
  6023			if (pf->synce_dpll_state != IF_SYNCE_STATE_LOCKED)
  6024				*src = IF_SYNCE_SRC_INVALID;
  6025		}
  6026	
  6027		return 0;
  6028	}
  6029	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 44538 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-29  8:05   ` [Intel-wired-lan] " Maciej Machnikowski
@ 2021-08-29 15:10     ` Richard Cochran
  -1 siblings, 0 replies; 43+ messages in thread
From: Richard Cochran @ 2021-08-29 15:10 UTC (permalink / raw)
  To: Maciej Machnikowski
  Cc: netdev, intel-wired-lan, abyagowi, anthony.l.nguyen, davem, kuba,
	linux-kselftest

On Sun, Aug 29, 2021 at 10:05:11AM +0200, Maciej Machnikowski wrote:
> This patch adds the new RTM_GETSYNCESTATE message to query the status
> of SyncE syntonization on the device.
> 
> Initial implementation returns:
>  - SyncE DPLL state
>  - Source of signal driving SyncE DPLL (SyncE, GNSS, PTP or External)
>  - Current index of Pin driving the DPLL
> 
> SyncE state read needs to be implemented as ndo_get_synce_state function.
> 
> This patch is SyncE-oriented. Future implementation can add additional
> functionality for reading different DPLL states using the same structure.

I would call this more "ice oriented" than SyncE oriented.  I'm not
sure there is even such a thing as "SyncE DPLL".  Does that term come
from 802.3?  To my understanding, that is just one way of implementing
it that works on super-Gigabit speed devices.

I have nothing against exposing the DPLL if you need to, however I'd
like to have an interface that supports plain Gigabit as well.  This
could be done in a generic way by offering Control Register 9 as
described in 802.3.
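
For reference, register 9 is the 1000BASE-T control register carrying the
manual master/slave bits, and it is already reachable from user space
through the MII ioctls.  A rough, illustrative sketch (not something this
series adds; "eth0" is just an example and error handling is omitted):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/mii.h>
#include <linux/sockios.h>

int main(void)
{
	struct ifreq ifr;
	struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ioctl(fd, SIOCGMIIPHY, &ifr);		/* fills mii->phy_id */

	mii->reg_num = MII_CTRL1000;		/* 802.3 register 9 */
	ioctl(fd, SIOCGMIIREG, &ifr);
	printf("manual master/slave config: %s, configured role: %s\n",
	       mii->val_out & CTL1000_ENABLE_MASTER ? "enabled" : "disabled",
	       mii->val_out & CTL1000_AS_MASTER ? "master" : "slave");
	return 0;
}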

Thanks,
Richard

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-29 15:10     ` [Intel-wired-lan] " Richard Cochran
@ 2021-08-29 16:42       ` Machnikowski, Maciej
  -1 siblings, 0 replies; 43+ messages in thread
From: Machnikowski, Maciej @ 2021-08-29 16:42 UTC (permalink / raw)
  To: Richard Cochran
  Cc: netdev, intel-wired-lan, abyagowi, Nguyen, Anthony L, davem,
	kuba, linux-kselftest

> -----Original Message-----
> From: Richard Cochran <richardcochran@gmail.com>
> Sent: Sunday, August 29, 2021 5:10 PM
> Subject: Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE
> message to get SyncE status
> 
> On Sun, Aug 29, 2021 at 10:05:11AM +0200, Maciej Machnikowski wrote:
> >
> > This patch is SyncE-oriented. Future implementation can add additional
> > functionality for reading different DPLL states using the same structure.
> 
> I would call this more "ice oriented" than SyncE oriented.  I'm not sure there is
> even such a thing as "SyncE DPLL".  Does that term come from 802.3?  To my
> understanding, that is one just way of implementing it that works on super-
> Gigabit speed devices.
> 
Hi,
This interface is ITU-T G.8264 SyncE-oriented. It is meant to monitor the state
of the Ethernet Equipment Clock (EEC).

The ITU-T G.8264 recommendation defines Synchronous Ethernet equipment
as a device equipped with a system clock (e.g., a synchronous Ethernet
equipment clock). SyncE interfaces are able to extract the received clock
and pass it to that system clock.

Please take a look at clause 10.2 (Operation modes) of G.8264 and at Figure A.1,
which depicts the EEC. This interface reports the status of the EEC.

If you prefer EEC over DPLL, I'm fine with the name change. I think it will be less confusing.

> I have nothing against exposing the DPLL if you need to, however I'd like to have
> an interface that support plain Gigabit as well.  This could be done in a generic
> way by offering Control Register 9 as described in 802.3.

That part of the Gigabit interface is a different piece of a SyncE device. It controls the
Master/Slave operation of auto-negotiation.
You would use slave mode if you want your EEC to tune to the frequency recovered
from the network, and master mode if you use an external source for your EEC and want to
send it as a reference to other devices. That decision can be made based on the EEC state
read by the interface proposed in this RFC.

This functionality belongs to a different interface, mentioned in the next steps.

Regards
Maciek

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-29 16:42       ` [Intel-wired-lan] " Machnikowski, Maciej
@ 2021-08-29 20:16         ` Andrew Lunn
  -1 siblings, 0 replies; 43+ messages in thread
From: Andrew Lunn @ 2021-08-29 20:16 UTC (permalink / raw)
  To: Machnikowski, Maciej
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, kuba, linux-kselftest

> > I have nothing against exposing the DPLL if you need to, however I'd like to have
> > an interface that support plain Gigabit as well.  This could be done in a generic
> > way by offering Control Register 9 as described in 802.3.

Are we talking about Clause 22, register 9, also known as MII_CTRL1000?

> This part of Gigabit interface is a different part of SyncE device. It controls Master/Slave
> operation of auto-negotiation.

This is controlled using ethtool -s

      ethtool -s devname [speed N] [duplex half|full] [port tp|aui|bnc|mii]
              ....
              [sopass xx:yy:zz:aa:bb:cc] [master-slave preferred-
              master|preferred-slave|forced-master|forced-slave]
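
For illustration, the same master/slave knob at the register level maps
roughly onto the phylib sketch below (the function names are invented;
this is illustrative only and not taken from any patch in this thread):

#include <linux/errno.h>
#include <linux/mii.h>
#include <linux/phy.h>

/* Manually force 1000BASE-T master or slave via Clause 22 register 9. */
static int example_force_ms(struct phy_device *phydev, bool master)
{
	u16 ctl = CTL1000_ENABLE_MASTER;

	if (master)
		ctl |= CTL1000_AS_MASTER;

	return phy_modify(phydev, MII_CTRL1000,
			  CTL1000_ENABLE_MASTER | CTL1000_AS_MASTER, ctl);
}

/* Read the resolved role back from register 10 (MII_STAT1000). */
static int example_resolved_master(struct phy_device *phydev)
{
	int stat = phy_read(phydev, MII_STAT1000);

	if (stat < 0)
		return stat;
	if (stat & LPA_1000MSFAIL)
		return -ENOLINK;		/* resolution failed */

	return !!(stat & LPA_1000MSRES);	/* 1: resolved to master */
}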

      Andrew

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-29 16:42       ` [Intel-wired-lan] " Machnikowski, Maciej
@ 2021-08-30 20:57         ` Richard Cochran
  -1 siblings, 0 replies; 43+ messages in thread
From: Richard Cochran @ 2021-08-30 20:57 UTC (permalink / raw)
  To: Machnikowski, Maciej
  Cc: netdev, intel-wired-lan, abyagowi, Nguyen, Anthony L, davem,
	kuba, linux-kselftest

On Sun, Aug 29, 2021 at 04:42:55PM +0000, Machnikowski, Maciej wrote:

> Please take a look at the 10.2 Operation modes of the G.8264 and at the Figure A.1
> which depicts the EEC. This interface is to report the status of the EEC.

Well, I read it, and it is still fairly high level with no mention at
all of "DPLL".  I hope that the new RTNL states will cover other
possible EEC implementations, too.

The "Reference source selection mechanism" is also quite vague.  Your
patch is more specific:

+enum if_eec_src {
+       IF_EEC_SRC_INVALID = 0,
+       IF_EEC_SRC_UNKNOWN,
+       IF_EEC_SRC_SYNCE,
+       IF_EEC_SRC_GNSS,
+       IF_EEC_SRC_PTP,
+       IF_EEC_SRC_EXT,
+       __IF_EEC_SRC_MAX,
+};

But I guess your list is reasonable.  It can always be expanded, right?


> If you prefer EEC over DPLL I'm fine with the name change. I think it will be less confusing.

Yes, thanks for doing that.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-30 20:57         ` [Intel-wired-lan] " Richard Cochran
@ 2021-08-30 23:29           ` Jakub Kicinski
  -1 siblings, 0 replies; 43+ messages in thread
From: Jakub Kicinski @ 2021-08-30 23:29 UTC (permalink / raw)
  To: Richard Cochran
  Cc: Machnikowski, Maciej, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, bsd

On Mon, 30 Aug 2021 13:57:58 -0700 Richard Cochran wrote:
> > Please take a look at the 10.2 Operation modes of the G.8264 and at the Figure A.1
> > which depicts the EEC. This interface is to report the status of the EEC.  
> 
> Well, I read it, and it is still fairly high level with no mention at
> all of "DPLL".  I hope that the new RTNL states will cover other
> possible EEC implementations, too.
> 
> The "Reference source selection mechanism" is also quite vague.  Your
> patch is more specific:
> 
> +enum if_eec_src {
> +       IF_EEC_SRC_INVALID = 0,
> +       IF_EEC_SRC_UNKNOWN,
> +       IF_EEC_SRC_SYNCE,
> +       IF_EEC_SRC_GNSS,

Hmm, IDK if this really belongs in RTNL. The OCP time card that
Jonathan works on also wants to report signal lock, and it locks
to GNSS. It doesn't have any networking functionality whatsoever.

Can we add a genetlink family for clock info/configuration? From 
what I understood discussing this with Jonathan it sounded like most
clocks today have a vendor-specific character device for configuration
and reading status.

I'm happy to write the plumbing if this seems like an okay idea 
but too much work for anyone to commit.

> +       IF_EEC_SRC_PTP,
> +       IF_EEC_SRC_EXT,
> +       __IF_EEC_SRC_MAX,
> +};
> 
> But I guess your list is reasonable.  It can always be expanded, right?
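
Purely as an illustration of the genetlink family suggested above (every
name below is invented and nothing here comes from the patch), the
kernel-side registration could look roughly like this:

#include <net/genetlink.h>

enum {
	CLOCK_A_UNSPEC,
	CLOCK_A_STATE,			/* u32: lock state */
	CLOCK_A_SRC,			/* u32: signal source */
	__CLOCK_A_MAX,
};
#define CLOCK_A_MAX (__CLOCK_A_MAX - 1)

static const struct nla_policy clock_nl_policy[CLOCK_A_MAX + 1] = {
	[CLOCK_A_STATE]	= { .type = NLA_U32 },
	[CLOCK_A_SRC]	= { .type = NLA_U32 },
};

/* A "get state" handler; a real one would ask the clock driver and
 * fill CLOCK_A_STATE / CLOCK_A_SRC into the reply.
 */
static int clock_nl_get_state(struct sk_buff *skb, struct genl_info *info)
{
	return -EOPNOTSUPP;		/* placeholder */
}

static const struct genl_ops clock_nl_ops[] = {
	{
		.cmd	= 1,		/* e.g. CLOCK_CMD_GET_STATE */
		.doit	= clock_nl_get_state,
	},
};

/* genl_register_family(&clock_nl_family) would be called at init time. */
static struct genl_family clock_nl_family __ro_after_init = {
	.name		= "clock",
	.version	= 1,
	.maxattr	= CLOCK_A_MAX,
	.policy		= clock_nl_policy,
	.ops		= clock_nl_ops,
	.n_ops		= ARRAY_SIZE(clock_nl_ops),
	.module		= THIS_MODULE,
};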

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-30 23:29           ` [Intel-wired-lan] " Jakub Kicinski
@ 2021-08-31 10:20             ` Machnikowski, Maciej
  -1 siblings, 0 replies; 43+ messages in thread
From: Machnikowski, Maciej @ 2021-08-31 10:20 UTC (permalink / raw)
  To: Jakub Kicinski, Richard Cochran
  Cc: netdev, intel-wired-lan, abyagowi, Nguyen, Anthony L, davem,
	linux-kselftest, bsd



> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Tuesday, August 31, 2021 1:29 AM
> To: Richard Cochran <richardcochran@gmail.com>
> Cc: Machnikowski, Maciej <maciej.machnikowski@intel.com>;
> netdev@vger.kernel.org; intel-wired-lan@lists.osuosl.org;
> abyagowi@fb.com; Nguyen, Anthony L <anthony.l.nguyen@intel.com>;
> davem@davemloft.net; linux-kselftest@vger.kernel.org; bsd@fb.com
> Subject: Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE
> message to get SyncE status
> 
> On Mon, 30 Aug 2021 13:57:58 -0700 Richard Cochran wrote:
> > > Please take a look at the 10.2 Operation modes of the G.8264 and at the
> Figure A.1
> > > which depicts the EEC. This interface is to report the status of the EEC.
> >
> > Well, I read it, and it is still fairly high level with no mention at
> > all of "DPLL".  I hope that the new RTNL states will cover other
> > possible EEC implementations, too.
> >
> > The "Reference source selection mechanism" is also quite vague.  Your
> > patch is more specific:
> >
> > +enum if_eec_src {
> > +       IF_EEC_SRC_INVALID = 0,
> > +       IF_EEC_SRC_UNKNOWN,
> > +       IF_EEC_SRC_SYNCE,
> > +       IF_EEC_SRC_GNSS,
> 
> Hmm, IDK if this really belongs in RTNL. The OCP time card that
> Jonathan works on also wants to report signal lock, and it locks
> to GNSS. It doesn't have any networking functionality whatsoever.
> 
> Can we add a genetlink family for clock info/configuration? From
> what I understood discussing this with Jonathan it sounded like most
> clocks today have a vendor-specific character device for configuration
> and reading status.
> 
> I'm happy to write the plumbing if this seems like an okay idea
> but too much work for anyone to commit.
> 

I agree that this is also useful for the Time Card, yet it's also useful here.
The PTP subsystem should implement similar logic for DPLL-driven
timers, which can lock their frequency to external sources.

The reasoning behind putting it here is to enable reporting a lock
to the GNSS receiver embedded on the NIC as the source of the
SyncE frequency. It helps distinguish the embedded GNSS receiver
from external sources. As a result, the upper layer can report a
GNSS lock based on this message alone, without the need to list the
embedded GNSS receiver in a config file. On the other hand, if
sync to an External source is reported, such software would need to read
the source of the external sync from a config file.

And the list is expandable: if we need to define more embedded
sync source types, we can always add more to it.

Regards
Maciek

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 10:20             ` [Intel-wired-lan] " Machnikowski, Maciej
@ 2021-08-31 13:33               ` Jakub Kicinski
  -1 siblings, 0 replies; 43+ messages in thread
From: Jakub Kicinski @ 2021-08-31 13:33 UTC (permalink / raw)
  To: Machnikowski, Maciej
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, Jonathan Lemon

On Tue, 31 Aug 2021 10:20:18 +0000 Machnikowski, Maciej wrote:
> > Hmm, IDK if this really belongs in RTNL. The OCP time card that
> > Jonathan works on also wants to report signal lock, and it locks
> > to GNSS. It doesn't have any networking functionality whatsoever.
> > 
> > Can we add a genetlink family for clock info/configuration? From
> > what I understood discussing this with Jonathan it sounded like most
> > clocks today have a vendor-specific character device for configuration
> > and reading status.
> > 
> > I'm happy to write the plumbing if this seems like an okay idea
> > but too much work for anyone to commit.
> >   
> 
> I agree that this also is useful for Time card, yet it's also useful here.
> PTP subsystem should implement a similar logic to this one for
> DPLL-driven timers which can lock its frequency to external sources.

Why would we have two APIs for doing the same thing? IIUC Richard does
not want this in the PTP ioctls which is fair, but we need to cater to
devices which do not have netdevs.

> The reasoning behind putting it here is to enable returning the lock
> to the GNSS receiver embedded on the NIC as a source for the
> SyncE frequency. It helps distinguishing the embedded GNSS
> from the external sources. As a result - the upper layer can report
> GNSS lock based only on this message without the need to put the
> embedded  GNSS receiver in the config file. On the other hand - if
> sync to External source is reported such SW would need to read
> the source of external sync from the config file.
> 
> And the list is expandable - if we need to define more embedded
> sync source types we can always add more to it.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 13:33               ` [Intel-wired-lan] " Jakub Kicinski
@ 2021-08-31 14:07                 ` Machnikowski, Maciej
  -1 siblings, 0 replies; 43+ messages in thread
From: Machnikowski, Maciej @ 2021-08-31 14:07 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, Jonathan Lemon

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Tuesday, August 31, 2021 3:33 PM
> Subject: Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE
> message to get SyncE status
> 
> On Tue, 31 Aug 2021 10:20:18 +0000 Machnikowski, Maciej wrote:
> > > Hmm, IDK if this really belongs in RTNL. The OCP time card that
> > > Jonathan works on also wants to report signal lock, and it locks to
> > > GNSS. It doesn't have any networking functionality whatsoever.
> > >
> > > Can we add a genetlink family for clock info/configuration? From
> > > what I understood discussing this with Jonathan it sounded like most
> > > clocks today have a vendor-specific character device for
> > > configuration and reading status.
> > >
> > > I'm happy to write the plumbing if this seems like an okay idea but
> > > too much work for anyone to commit.
> > >
> >
> > I agree that this also is useful for Time card, yet it's also useful here.
> > PTP subsystem should implement a similar logic to this one for
> > DPLL-driven timers which can lock its frequency to external sources.
> 
> Why would we have two APIs for doing the same thing? IIUC Richard does
> not want this in the PTP ioctls which is fair, but we need to cater to devices
> which do not have netdevs.

From a technical point of view, it can be explained by the fact that the DPLL
driving the SyncE logic can be separate from the one driving PTP.  Also,
SyncE is frequency-only: it doesn't care about the phase and
Time of Day that PTP also needs. The GNSS lock on the PTP side will be
multi-layered, as a full lock would mean that our PTP clock is not only
syntonized, but also has its time and phase set correctly.

PTP can reuse the "physical" part of this interface later on, but it also needs
to solve more SW-specific challenges, like reporting the PTP lock at the SW level.

I agree that having such an API for the PTP subsystem will be very useful,
but let's address SyncE in netdev first and build the PTP netlink on top of what
we learn here. We can always move the structures defined here to the layer
above without affecting any APIs.

> 
> > The reasoning behind putting it here is to enable returning the lock
> > to the GNSS receiver embedded on the NIC as a source for the SyncE
> > frequency. It helps distinguishing the embedded GNSS from the external
> > sources. As a result - the upper layer can report GNSS lock based only
> > on this message without the need to put the embedded  GNSS receiver in
> > the config file. On the other hand - if sync to External source is
> > reported such SW would need to read the source of external sync from
> > the config file.
> >
> > And the list is expandable - if we need to define more embedded sync
> > source types we can always add more to it.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 14:07                 ` [Intel-wired-lan] " Machnikowski, Maciej
@ 2021-08-31 14:18                   ` Jakub Kicinski
  -1 siblings, 0 replies; 43+ messages in thread
From: Jakub Kicinski @ 2021-08-31 14:18 UTC (permalink / raw)
  To: Machnikowski, Maciej
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, Jonathan Lemon

On Tue, 31 Aug 2021 14:07:32 +0000 Machnikowski, Maciej wrote:
> > > I agree that this also is useful for Time card, yet it's also useful here.
> > > PTP subsystem should implement a similar logic to this one for
> > > DPLL-driven timers which can lock its frequency to external sources.  
> > 
> > Why would we have two APIs for doing the same thing? IIUC Richard does
> > not want this in the PTP ioctls which is fair, but we need to cater to devices
> > which do not have netdevs.  
> 
> From technical point of view - it can be explained by the fact that the DPLL
> driving the SyncE logic can be separate from the one driving PTP.  Also
> SyncE is frequency-only oriented and doesn't care about phase and
> Time of Day that PTP also needs. The GNSS lock on the PTP side will be
> multi-layered, as the full lock would mean that our PTP clock is not only
> syntonized, but also has its time and phase set correctly.

Just because GNSS lock addresses more parameters (potentially) doesn't
mean the syntonization part shouldn't be addressed by the same API.

> A PTP can reuse the "physical" part of this interface later on, but it also needs
> to solve more SW-specific challenges, like reporting the PTP lock on a SW level.
> 
> I agree that having such API for PTP subsystem will be very useful,
> but let's address SyncE in netdev first and build the PTP netlink on top of what
> we learn here. We can always move the structures defined here to the layer
> above without affecting any APIs.

It's a reasonable SW design strategy to start simple. Unfortunately, 
it doesn't apply to stable uAPI design. You're adding an RTNL op, which
will have to be supported forever. If we add anything "later" it will
be a strict addition, and will have to be backward compatible. Which
I'm not sure how to do when the object we'd operate on would be
completely different (clock vs netdev).

As I said I can write the boilerplate code for you if you prefer, the
code implementing the command and the driver interface will be almost
identical.

Is there a reason why RTNL is better?

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 14:18                   ` [Intel-wired-lan] " Jakub Kicinski
@ 2021-08-31 15:19                     ` Machnikowski, Maciej
  -1 siblings, 0 replies; 43+ messages in thread
From: Machnikowski, Maciej @ 2021-08-31 15:19 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, Jonathan Lemon

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Tuesday, August 31, 2021 4:18 PM
> To: Machnikowski, Maciej <maciej.machnikowski@intel.com>
> Subject: Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE
> message to get SyncE status
> 
> On Tue, 31 Aug 2021 14:07:32 +0000 Machnikowski, Maciej wrote:
> > > > I agree that this also is useful for Time card, yet it's also useful here.
> > > > PTP subsystem should implement a similar logic to this one for
> > > > DPLL-driven timers which can lock its frequency to external sources.
> > >
> > > Why would we have two APIs for doing the same thing? IIUC Richard
> does
> > > not want this in the PTP ioctls which is fair, but we need to cater to
> devices
> > > which do not have netdevs.
> >
> > From technical point of view - it can be explained by the fact that the DPLL
> > driving the SyncE logic can be separate from the one driving PTP.  Also
> > SyncE is frequency-only oriented and doesn't care about phase and
> > Time of Day that PTP also needs. The GNSS lock on the PTP side will be
> > multi-layered, as the full lock would mean that our PTP clock is not only
> > syntonized, but also has its time and phase set correctly.
> 
> Just because GNSS lock addresses more parameters (potentially) doesn't
> mean the syntonization part shouldn't be addressed by the same API.

Fair enough.

> 
> > A PTP can reuse the "physical" part of this interface later on, but it also
> needs
> > to solve more SW-specific challenges, like reporting the PTP lock on a SW
> level.
> >
> > I agree that having such API for PTP subsystem will be very useful,
> > but let's address SyncE in netdev first and build the PTP netlink on top of
> what
> > we learn here. We can always move the structures defined here to the
> layer
> > above without affecting any APIs.
> 
> It's a reasonable SW design strategy to start simple. Unfortunately,
> it doesn't apply to stable uAPI design. You're adding a RTNL op, which
> will have to be supported for ever. If we add anything "later" it will
> be a strict addition, and will have to be backward compatible. Which
> I'm not sure how to do when the object we'd operate on would be
> completely different (clock vs netdev).

I agree - the point I'm trying to make here is that the existence of
a PTP-specific interface will not invalidate the need for a
SyncE-specific one as well. Even if we report lock states for the clock,
we will still need to report lock states for devices that don't use PTP
clocks but do support SyncE (that's also a reason why RTNL is still required).

The RTNL interface will also address devices that only need
frequency syntonization (especially in Radio Access Networks).

> 
> As I said I can write the boilerplate code for you if you prefer, the
> code implementing the command and the driver interface will be almost
> identical.

I think it's a great idea to start that in parallel to this patch. Then move
the common structures to the generic layer and use them in both the
SyncE-specific RTNL implementation and the PTP-specific part that will
be added. This won't affect the SyncE-specific APIs. The "worst" that can
happen is that a driver reports the same info for the PTP part and the
SyncE part, if that's the design someone follows.

Regards
Maciek

> 
> Is there a reason why RTNL is better?

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 15:19                     ` [Intel-wired-lan] " Machnikowski, Maciej
@ 2021-08-31 15:32                       ` Jakub Kicinski
  -1 siblings, 0 replies; 43+ messages in thread
From: Jakub Kicinski @ 2021-08-31 15:32 UTC (permalink / raw)
  To: Machnikowski, Maciej
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, Jonathan Lemon

On Tue, 31 Aug 2021 15:19:36 +0000 Machnikowski, Maciej wrote:
> > It's a reasonable SW design strategy to start simple. Unfortunately,
> > it doesn't apply to stable uAPI design. You're adding a RTNL op, which
> > will have to be supported for ever. If we add anything "later" it will
> > be a strict addition, and will have to be backward compatible. Which
> > I'm not sure how to do when the object we'd operate on would be
> > completely different (clock vs netdev).  
> 
> I agree - the point I'm trying to make here is that the existence of
> the PTP-specific interface will not invalidate the need of having 
> SyncE-specific one as well. Even if we report lock-states for the clock
> we will still need to report lock-states for devices that don't use PTP
> clocks, but support SyncE. (that's also a reason why RTNL is still required).
> 
> The RTNL interface will also address devices that only need the 
> frequency syntonization (especially in Radio Access Networks).
> 
> > 
> > As I said I can write the boilerplate code for you if you prefer, the
> > code implementing the command and the driver interface will be almost
> > identical.  
> 
> I think it's a great idea to start that in parallel to this patch. Then move
> the common structures to the generic layer and use them in both
> SyncE-specific RTNL implementation and PTP-specific part that will
> be added. This won't affect SyncE specific APIs. The "worst" that can
> happen is that the driver will put the same info for PTP part and
> SyncE part if that's the design someone follows.

I don't understand why we need the SyncE RTNL if we have a clock API for
controlling the freq source. Are you saying that there are
implementations out there which use SyncE to recover the Rx clock and use
it for Tx, but the PTP ticker is in a different clock domain?

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 15:32                       ` [Intel-wired-lan] " Jakub Kicinski
@ 2021-08-31 16:00                         ` Machnikowski, Maciej
  -1 siblings, 0 replies; 43+ messages in thread
From: Machnikowski, Maciej @ 2021-08-31 16:00 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, Jonathan Lemon

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Tuesday, August 31, 2021 5:32 PM
> To: Machnikowski, Maciej <maciej.machnikowski@intel.com>
> Subject: Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE
> message to get SyncE status
> 
> On Tue, 31 Aug 2021 15:19:36 +0000 Machnikowski, Maciej wrote:
> > > It's a reasonable SW design strategy to start simple. Unfortunately,
> > > it doesn't apply to stable uAPI design. You're adding a RTNL op, which
> > > will have to be supported for ever. If we add anything "later" it will
> > > be a strict addition, and will have to be backward compatible. Which
> > > I'm not sure how to do when the object we'd operate on would be
> > > completely different (clock vs netdev).
> >
> > I agree - the point I'm trying to make here is that the existence of
> > the PTP-specific interface will not invalidate the need of having
> > SyncE-specific one as well. Even if we report lock-states for the clock
> > we will still need to report lock-states for devices that don't use PTP
> > clocks, but support SyncE. (that's also a reason why RTNL is still required).
> >
> > The RTNL interface will also address devices that only need the
> > frequency syntonization (especially in Radio Access Networks).
> >
> > >
> > > As I said I can write the boilerplate code for you if you prefer, the
> > > code implementing the command and the driver interface will be almost
> > > identical.
> >
> > I think it's a great idea to start that in parallel to this patch. Then move
> > the common structures to the generic layer and use them in both
> > SyncE-specific RTNL implementation and PTP-specific part that will
> > be added. This won't affect SyncE specific APIs. The "worst" that can
> > happen is that the driver will put the same info for PTP part and
> > SyncE part if that's the design someone follows.
> 
> I don't understand why we need the SyncE RTNL if we have clock API for
> controlling the freq source. Are you saying that there are
> implementations out there which use SyncE to recover Rx clock and use
> it for Tx but the PTP ticker is in a different clock domain?

Exactly!
You can also have multiple PTP timers which may use different clock ticks
to support multiple time domains.

The PTP ticker can also be completely absent if all you need is a frequency
reference. There are many radio devices that expect only a 10 MHz reference.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-30 23:29           ` [Intel-wired-lan] " Jakub Kicinski
@ 2021-08-31 16:19             ` Richard Cochran
  -1 siblings, 0 replies; 43+ messages in thread
From: Richard Cochran @ 2021-08-31 16:19 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Machnikowski, Maciej, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, bsd

On Mon, Aug 30, 2021 at 04:29:09PM -0700, Jakub Kicinski wrote:
> Hmm, IDK if this really belongs in RTNL. The OCP time card that
> Jonathan works on also wants to report signal lock, and it locks
> to GNSS. It doesn't have any networking functionality whatsoever.
> 
> Can we add a genetlink family for clock info/configuration? From 
> what I understood discussing this with Jonathan it sounded like most
> clocks today have a vendor-specific character device for configuration
> and reading status.
> 
> I'm happy to write the plumbing if this seems like an okay idea 
> but too much work for anyone to commit.

This sounds nice.

As you said later on in this thread, any API we merge now will have to
last.  That is why I'm being so picky here.  We want new APIs to cover
current HW _and_ be reasonable for the future.

I don't see a DPLL as either a PTP Hardware Clock or a Network
Device.  It is a PLL.

The kernel can and should have a way to represent the relationship
between these three different kinds of IP block.  We already have a
way to get from a PHC to a netdev interface.
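
For context, the existing netdev/PHC mapping mentioned above is already
available to user space through the ETHTOOL_GET_TS_INFO ioctl; a rough
sketch of that query (error handling trimmed) looks like this:

#include <string.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Return the PHC index of a network interface, or -1 if it has none. */
static int get_phc_index(const char *ifname)
{
	struct ethtool_ts_info info = { .cmd = ETHTOOL_GET_TS_INFO };
	struct ifreq ifr;
	int fd, err;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&info;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	err = ioctl(fd, SIOCETHTOOL, &ifr);
	close(fd);

	return err ? -1 : info.phc_index;
}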

I understand that Maciej and team want to get support for their card
ASAP.  However, proper kernel/user API takes time.  For example, the
PHC stuff took one year and fourteen revisions.  But it was worth the
wait, because the API has help up pretty well all these years since
the v3.0 kernel.

There is no need to quickly merge some poorly designed interfaces.

Thanks,
Richard


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 16:19             ` [Intel-wired-lan] " Richard Cochran
@ 2021-08-31 22:09               ` Machnikowski, Maciej
  -1 siblings, 0 replies; 43+ messages in thread
From: Machnikowski, Maciej @ 2021-08-31 22:09 UTC (permalink / raw)
  To: Richard Cochran, Jakub Kicinski
  Cc: netdev, intel-wired-lan, abyagowi, Nguyen, Anthony L, davem,
	linux-kselftest, bsd, Jonathan Lemon

> -----Original Message-----
> From: Richard Cochran <richardcochran@gmail.com>
> Sent: Tuesday, August 31, 2021 6:19 PM
> To: Jakub Kicinski <kuba@kernel.org>
> Subject: Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE
> message to get SyncE status
> 
> On Mon, Aug 30, 2021 at 04:29:09PM -0700, Jakub Kicinski wrote:
> > Can we add a genetlink family for clock info/configuration? From
> > what I understood discussing this with Jonathan it sounded like most
> > clocks today have a vendor-specific character device for configuration
> > and reading status.
> >
> > I'm happy to write the plumbing if this seems like an okay idea
> > but too much work for anyone to commit.
> 
> This sounds nice.
> 
> As you said later on in this thread, any API we merge now will have to
> last.  That is why I'm being so picky here.  We want new APIs to cover
> current HW _and_ be reasonable for the future.
> 
> I don't see a DPLL as either a PTP Hardware Clock or a Network
> Device.  It is a PLL.
> 
> The kernel can and should have a way to represent the relationship
> between these three different kind of IP block.  We already have a
> way to get from PHC to netdev interface.

OK, I can strip down the RTNL EEC state interface to only report
the state without any extras, like the pin. The next step would be to
add control over the recovered clock to the netdev subsystem as well.

The EEC state read is needed for recovered/source clock validation,
and that's why I think it belongs in the RTNL part, as it gates the QL
for each port.

Those two interfaces will allow minimalistic ESMC support
(receive the packet, extract the SSM from it, check whether my clock
is recovered and in the locked state, and, if all is good, pass
the message along to the other related ports).
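
In rough terms, and assuming hypothetical helpers (recv_esmc(),
eec_locked(), clock_recovered_from() and forward_esmc() are stand-ins
stubbed out below so the sketch is self-contained), that loop would be
something like:

/* Minimal sketch of the ESMC forwarding loop described above. */
#include <stdbool.h>
#include <stdint.h>

struct esmc_pdu {
	uint8_t ql;	/* SSM quality level extracted from the PDU */
	int rx_port;	/* port the PDU arrived on */
};

/* Stubs standing in for whatever interfaces get merged. */
static bool recv_esmc(struct esmc_pdu *pdu) { (void)pdu; return false; }
static bool eec_locked(void) { return false; }		/* EEC/DPLL state read */
static bool clock_recovered_from(int port) { (void)port; return false; }
static void forward_esmc(const struct esmc_pdu *pdu, int port)
{
	(void)pdu; (void)port;
}

static void esmc_loop(const int *ports, int n_ports)
{
	struct esmc_pdu pdu;

	while (recv_esmc(&pdu)) {
		/* Only propagate a QL we can actually back: the clock must
		 * be recovered from the RX port and the EEC must be locked. */
		if (!clock_recovered_from(pdu.rx_port) || !eec_locked())
			continue;
		for (int i = 0; i < n_ports; i++)
			if (ports[i] != pdu.rx_port)
				forward_esmc(&pdu, ports[i]);
	}
}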

In parallel, let's work on a proper clock generator subsystem.
For starters, it should handle the following (a rough sketch of such
an interface follows below):

 - reference configuration
 - reference status
 - reference priorities
 - output settings

Optionally:
 - NCO mode (here we'll duplicate the functionality of PHC in some 
    deployments)
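
A very rough sketch of what the driver-facing side of such a subsystem
could look like, with every structure and callback name invented for
this example:

/* Hypothetical sketch only: driver ops for a clock generator
 * ("DPLL-like") subsystem covering the items above.  None of these
 * names exist in the kernel. */
#include <stdbool.h>

struct clkgen_ref_status {
	bool signal_present;	/* reference present and within spec */
	bool selected;		/* currently driving the output */
};

struct clkgen_ops {
	int (*ref_configure)(void *priv, int ref, unsigned long freq_hz);
	int (*ref_status)(void *priv, int ref, struct clkgen_ref_status *st);
	int (*ref_set_priority)(void *priv, int ref, int prio);
	int (*output_configure)(void *priv, int out, unsigned long freq_hz,
				bool enable);
	/* optional: NCO steering, which may overlap with the PHC's
	 * frequency adjustment path in some deployments */
	int (*nco_adjust)(void *priv, long scaled_ppm);
};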

Once we have that in place, we can simply:
- reroute the internals of the EEC state to the clock generator
  subsystem on more complex systems,
- keep the simple state read for those who use other, simpler
  implementations of the EEC,
- and support any hybrid of the two.

Once we get there, we'll know what else this RTNL call should return
and extend it if needed.

Regards
Maciek

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 16:19             ` [Intel-wired-lan] " Richard Cochran
@ 2021-09-01  1:58               ` Jakub Kicinski
  -1 siblings, 0 replies; 43+ messages in thread
From: Jakub Kicinski @ 2021-09-01  1:58 UTC (permalink / raw)
  To: Richard Cochran
  Cc: Machnikowski, Maciej, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, bsd

On Tue, 31 Aug 2021 09:19:27 -0700 Richard Cochran wrote:
> As you said later on in this thread, any API we merge now will have to
> last.  That is why I'm being so picky here.  We want new APIs to cover
> current HW _and_ be reasonable for the future.
> 
> I don't see a DPLL as either a PTP Hardware Clock or a Network
> Device.  It is a PLL.
> 
> The kernel can and should have a way to represent the relationship
> between these three different kind of IP block.  We already have a
> way to get from PHC to netdev interface.

Makes sense to me. I was wondering how to split things at a high level
into the areas you mentioned, but TBH the part I'm struggling with is
the delineation of what falls under PTP. A PLL by itself seems like an
awfully small unit to create a subsystem for, and PTP already has aux
stuff like pin control. Then there's the whole bunch of stuff that Jonathan
is adding via driver-specific sysfs interfaces [1]. I was hoping the
"new API" would cover his needs, but the PLL would be a tiny part of it.

IOW, after looking at the code, I'm not so sure how to reasonably divide
things.

[1]
https://lore.kernel.org/netdev/20210830235236.309993-1-jonathan.lemon@gmail.com/

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-08-31 22:09               ` [Intel-wired-lan] " Machnikowski, Maciej
@ 2021-09-01  2:02                 ` Jakub Kicinski
  -1 siblings, 0 replies; 43+ messages in thread
From: Jakub Kicinski @ 2021-09-01  2:02 UTC (permalink / raw)
  To: Machnikowski, Maciej
  Cc: Richard Cochran, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, bsd, Jonathan Lemon

On Tue, 31 Aug 2021 22:09:18 +0000 Machnikowski, Maciej wrote:
> OK I can strip down the RTNL EEC state interface to only report 
> the state without any extras, like pin. Next step would be to add 
> the control over recovered clock also to the netdev subsystem.
> 
> The EEC state read is needed for recovered/source clock validation
> and that's why I think it belongs to the RTNL part as it gates the QL
> for each port.

If you mean just reporting the state and having a SyncE on/off without
any option for other sources, that's fine by me.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-09-01  1:58               ` [Intel-wired-lan] " Jakub Kicinski
@ 2021-09-01  2:55                 ` Richard Cochran
  -1 siblings, 0 replies; 43+ messages in thread
From: Richard Cochran @ 2021-09-01  2:55 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Machnikowski, Maciej, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, bsd

On Tue, Aug 31, 2021 at 06:58:24PM -0700, Jakub Kicinski wrote:
> On Tue, 31 Aug 2021 09:19:27 -0700 Richard Cochran wrote:
> > As you said later on in this thread, any API we merge now will have to
> > last.  That is why I'm being so picky here.  We want new APIs to cover
> > current HW _and_ be reasonable for the future.
> > 
> > I don't see a DPLL as either a PTP Hardware Clock or a Network
> > Device.  It is a PLL.
> > 
> > The kernel can and should have a way to represent the relationship
> > between these three different kind of IP block.  We already have a
> > way to get from PHC to netdev interface.
> 
> Makes sense to me. I was wondering how to split things at high level
> into the areas you mentioned, but TBH the part I'm struggling with is
> the delineation of what falls under PTP. PLL by itself seems like an
> awfully small unit to create a subsystem for, and PTP already has aux
> stuff like PIN control.

These pins are a direct HW interface to the POSIX dynamic clock that
also generates time stamps on the PTP frames.  They can either
generate time stamps on external signals, or produce output signals
from the very same clock.  So the pins are rather tightly coupled to
the PTP clock itself.
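
As a concrete example of that coupling, the existing PTP chardev already
lets user space repurpose such a pin, e.g. switch it to external-timestamp
mode and read events from the very same clock. The device path and the
pin/channel indices below are just examples:

/* Example of today's PTP pin control: put pin 0 of /dev/ptp0 into
 * EXTTS mode and read timestamps from it. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ptp_clock.h>

int main(void)
{
	int fd = open("/dev/ptp0", O_RDWR);
	struct ptp_pin_desc pin;
	struct ptp_extts_request req;
	struct ptp_extts_event ev;

	if (fd < 0)
		return 1;

	memset(&pin, 0, sizeof(pin));
	pin.index = 0;
	pin.func = PTP_PF_EXTTS;	/* timestamp external signals ... */
	pin.chan = 0;			/* ... or PTP_PF_PEROUT to drive one */
	if (ioctl(fd, PTP_PIN_SETFUNC, &pin))
		return 1;

	memset(&req, 0, sizeof(req));
	req.index = 0;
	req.flags = PTP_ENABLE_FEATURE | PTP_RISING_EDGE;
	if (ioctl(fd, PTP_EXTTS_REQUEST, &req))
		return 1;

	if (read(fd, &ev, sizeof(ev)) == sizeof(ev))
		printf("extts %u at %lld.%09u\n", ev.index,
		       (long long)ev.t.sec, ev.t.nsec);
	close(fd);
	return 0;
}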

But the pins do NOT cover input clock sources into the IP cores.  This
kind of thing is already covered by the DTS for many SoCs (for a
static input clock choice, not changeable at run time).

> Then there's the whole bunch of stuff that Jonathan
> is adding via driver specific sysfs interfaces [1]. I was hoping the
> "new API" would cover his need but PLL would be a tiny part of it.
> 
> IOW after looking at the code I'm not so sure how to reasonably divide
> things.

Right, me neither.  It is a big topic, and we needn't over-engineer it
now, but I still think this DPLL is not part of the PTP clock.  There
has to be a better place for it.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
  2021-09-01  2:02                 ` [Intel-wired-lan] " Jakub Kicinski
@ 2021-09-01  2:56                   ` Richard Cochran
  -1 siblings, 0 replies; 43+ messages in thread
From: Richard Cochran @ 2021-09-01  2:56 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Machnikowski, Maciej, netdev, intel-wired-lan, abyagowi, Nguyen,
	Anthony L, davem, linux-kselftest, bsd, Jonathan Lemon

On Tue, Aug 31, 2021 at 07:02:35PM -0700, Jakub Kicinski wrote:
> On Tue, 31 Aug 2021 22:09:18 +0000 Machnikowski, Maciej wrote:
> > OK I can strip down the RTNL EEC state interface to only report 
> > the state without any extras, like pin. Next step would be to add 
> > the control over recovered clock also to the netdev subsystem.
> > 
> > The EEC state read is needed for recovered/source clock validation
> > and that's why I think it belongs to the RTNL part as it gates the QL
> > for each port.
> 
> If you mean just reporting state and have a syncE on/off without any
> option for other sources that's fine by me.

Yeah, that is what I also imagined for a generic SyncE interface.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2021-09-01  2:57 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-29  8:05 [RFC v2 net-next 0/2] Add RTNL interface for SyncE Maciej Machnikowski
2021-08-29  8:05 ` [Intel-wired-lan] " Maciej Machnikowski
2021-08-29  8:05 ` [RFC v2 net-next 1/2] rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status Maciej Machnikowski
2021-08-29  8:05   ` [Intel-wired-lan] " Maciej Machnikowski
2021-08-29 15:10   ` Richard Cochran
2021-08-29 15:10     ` [Intel-wired-lan] " Richard Cochran
2021-08-29 16:42     ` Machnikowski, Maciej
2021-08-29 16:42       ` [Intel-wired-lan] " Machnikowski, Maciej
2021-08-29 20:16       ` Andrew Lunn
2021-08-29 20:16         ` [Intel-wired-lan] " Andrew Lunn
2021-08-30 20:57       ` Richard Cochran
2021-08-30 20:57         ` [Intel-wired-lan] " Richard Cochran
2021-08-30 23:29         ` Jakub Kicinski
2021-08-30 23:29           ` [Intel-wired-lan] " Jakub Kicinski
2021-08-31 10:20           ` Machnikowski, Maciej
2021-08-31 10:20             ` [Intel-wired-lan] " Machnikowski, Maciej
2021-08-31 13:33             ` Jakub Kicinski
2021-08-31 13:33               ` [Intel-wired-lan] " Jakub Kicinski
2021-08-31 14:07               ` Machnikowski, Maciej
2021-08-31 14:07                 ` [Intel-wired-lan] " Machnikowski, Maciej
2021-08-31 14:18                 ` Jakub Kicinski
2021-08-31 14:18                   ` [Intel-wired-lan] " Jakub Kicinski
2021-08-31 15:19                   ` Machnikowski, Maciej
2021-08-31 15:19                     ` [Intel-wired-lan] " Machnikowski, Maciej
2021-08-31 15:32                     ` Jakub Kicinski
2021-08-31 15:32                       ` [Intel-wired-lan] " Jakub Kicinski
2021-08-31 16:00                       ` Machnikowski, Maciej
2021-08-31 16:00                         ` [Intel-wired-lan] " Machnikowski, Maciej
2021-08-31 16:19           ` Richard Cochran
2021-08-31 16:19             ` [Intel-wired-lan] " Richard Cochran
2021-08-31 22:09             ` Machnikowski, Maciej
2021-08-31 22:09               ` [Intel-wired-lan] " Machnikowski, Maciej
2021-09-01  2:02               ` Jakub Kicinski
2021-09-01  2:02                 ` [Intel-wired-lan] " Jakub Kicinski
2021-09-01  2:56                 ` Richard Cochran
2021-09-01  2:56                   ` [Intel-wired-lan] " Richard Cochran
2021-09-01  1:58             ` Jakub Kicinski
2021-09-01  1:58               ` [Intel-wired-lan] " Jakub Kicinski
2021-09-01  2:55               ` Richard Cochran
2021-09-01  2:55                 ` [Intel-wired-lan] " Richard Cochran
2021-08-29  8:05 ` [RFC v2 net-next 2/2] ice: add support for reading SyncE DPLL state Maciej Machnikowski
2021-08-29  8:05   ` [Intel-wired-lan] " Maciej Machnikowski
2021-08-29 11:11   ` kernel test robot
