From: "Machnikowski, Maciej" <maciej.machnikowski@intel.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"intel-wired-lan@lists.osuosl.org"
	<intel-wired-lan@lists.osuosl.org>,
	"richardcochran@gmail.com" <richardcochran@gmail.com>,
	"abyagowi@fb.com" <abyagowi@fb.com>,
	"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"linux-kselftest@vger.kernel.org"
	<linux-kselftest@vger.kernel.org>, "Andrew Lunn" <andrew@lunn.ch>,
	Michal Kubecek <mkubecek@suse.cz>,
	Saeed Mahameed <saeed@kernel.org>,
	Michael Chan <michael.chan@broadcom.com>
Subject: RE: [PATCH net-next 1/2] rtnetlink: Add new RTM_GETEECSTATE message to get SyncE status
Date: Thu, 9 Sep 2021 08:11:51 +0000	[thread overview]
Message-ID: <PH0PR11MB4951205B0D078D94ADBAFCACEAD59@PH0PR11MB4951.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20210908151852.7ad8a0f1@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Thursday, September 9, 2021 12:19 AM
> To: Machnikowski, Maciej <maciej.machnikowski@intel.com>
> Subject: Re: [PATCH net-next 1/2] rtnetlink: Add new RTM_GETEECSTATE
> message to get SyncE status
> 
> On Wed, 8 Sep 2021 17:30:24 +0000 Machnikowski, Maciej wrote:
> > Lane0
> > ------------- |\  Pin0        RefN ____
> > ------------- | |-----------------|     |      synced clk
> >               | |-----------------| EEC |------------------
> > ------------- |/ PinN         RefM|____ |
> > Lane N      MUX
> >
> > To get the full info a port needs to know the EEC state and which lane is
> used
> > as a source (or rather - my lane or any other).
> 
> EEC here is what the PHY documentation calls "Cleanup PLL" right?

It's a generic term for an internal or external PLL that takes its reference
frequency from a given port/lane and drives the TX side. In a specific
case it can be the internal cleanup PLL.

> > The lane -> Pin mapping is buried in the PHY/MAC, but the source of
> frequency
> > is in the EEC.
> 
> Not sure what "source of frequency" means here. There's a lot of
> frequencies here.

It's the source lane/port that drives the DPLL reference. In other words,
the DPLL will tune its frequency to match the frequency recovered from
a given PHY lane/port.

> > What's even more - the Pin->Ref mapping is board specific.
> 
> Breaking down the system into components we have:
> 
> Port
>   A.1 Rx lanes
>   A.2 Rx pins (outputs)
>   A.3 Rx clk divider
>   B.1 Tx lanes
>   B.2 Tx pins (inputs)
> 
> ECC
>   C.1 Inputs
>   C.2 Outputs
>   C.3 PLL state
> 
> In the most general case we want to be able to:
>  map recovered clocks to PHY output pins (A.1 <> A.2)
>  set freq div on the recovered clock (A.2 <> A.3)

Technically the pin (if it exists) comes last, so a better way would be
to map the lane to the divider (A.1 <> A.3) and then the divider to the pin
(A.3 <> A.2), but the idea is the same.

>  set the priorities of inputs on ECC (C.1)
>  read the ECC state (C.3)
>  control outputs of the ECC (C.2)

Yes!

>  select the clock source for port Tx (B.2 <> B.1)

And here we usually don't allow control over this. The DPLL is preconfigured to
output the frequency that's expected on the PHY/MAC clock input, and we
shouldn't mess with it at runtime.

> As you said, pin -> ref mapping is board specific, so the API should
> not assume knowledge of routing between Port and ECC. If it does just
> give the pins matching names.

I believe referring to a board user guide is enough to cover that one, just like
with PTP pins. There may be 1000 different ways of connecting those signals.

> We don't have to implement the entire design but the pieces we do create
> must be right for the larger context. With the current code the
> ECC/Cleanup PLL is not represented as a separate entity, and mapping of
> what source means is on the wrong "end" of the A.3 <> C.1 relationship.

That's why I initially started with the EEC state + pin index of the driving
source. I believe it's a cleaner solution: we can reuse the same pin indexes
in the recovered-clock part of the interface, and the user will be able to
see the state and the driving pin from the EEC_STATE (both belong to the
DPLL), then use the PHY pins subsystem to check whether the driving pin
index matches the index the port drives. That way we keep all the C pieces
in the C subsystem and the A pieces in the A subsystem. The only "mix" is in
the pin indexes, which would use numbering from C.
If it's an attribute, it can just as well be optional for deployments that
don't need/support it.

> > The viable solutions are:
> > - Limit to the proposed "I drive the clock" vs "Someone drives it" and
> assume the
> >    Driver returns all info
> > - return the EEC Ref index, figure out which pin is connected to it and then
> check
> >   which MAC/PHY lane that drives it.
> >
> > I assume option one is easy to implement and keep in the future even if
> we
> > finally move to option 2 once we define EEC/DPLL subsystem.
> >
> > In future #1 can take the lock information from the DPLL subsystem, but
> > will also enable simple deployments that won't expose the whole DPLL,
> > like a filter PLL embedded in a multiport PHY that will only work for
> > SyncE in which case this API will only touch a single component.
> 
> Imagine a system with two cascaded switch ASICs and a bunch of PHYs.
> How do you express that by pure extensions to the proposed API?
> Here either the cleanup PLLs would be cascaded (subordinate one needs
> to express that its "source" is another PLL) or single lane can be
> designated as a source for both PLLs (but then there is only one
> "source" bit and multiple "enum if_eec_state"s).

In that case - once we have pins - we'll see that the leader DPLL is synced
to the pin that comes from the PHY/MAC and will be able to find the
corresponding lane, and on the follower side we'll see that it's locked to
the pin that corresponds to the leader DPLL. The logic to "do something"
with that knowledge needs to live in a userspace app, as there may be
external connections needed that are unknown at the board level
(think of 2 separate adapters connected with an external cable).

> I think we can't avoid having a separate object for ECC/Cleanup PLL.
> You can add it as a subobject to devlink but new genetlink family seems
> much preferable given the devlink instances themselves have unclear
> semantics at this point. Or you can try to convince Richard that ECC
> belongs as part of PTP :)

> In fact I don't think you care about any of the PHY / port stuff
> currently. All you need is the ECC side of the API. IIUC you have
> relatively simple setup where there is only one pin per port, and
> you don't care about syncing the Tx clock.

I actually do. There are (almost) always fewer recovered-clock resources
(aka pins) than ports/lanes in the system. The TX clock will be
synchronized once the EEC reports the locked state, as the EEC is the part
that generates the clocks for the TX side of the PHY.


