* Crosschip bridge functionality
@ 2022-12-23 19:37 Colin Foster
  2022-12-23 20:05 ` Andrew Lunn
  0 siblings, 1 reply; 10+ messages in thread
From: Colin Foster @ 2022-12-23 19:37 UTC (permalink / raw)
  To: Andrew Lunn, Florian Fainelli, Vladimir Oltean,
	Alexandre Belloni, netdev

Hello,

I've been looking into what it would take to add the Distributed aspect
to the Felix driver, and I have some general questions about the theory
of operation and whether there are any limitations I don't foresee. It
might be a fair bit of work for me to even get hardware to test with, so
avoiding dead ends early would be really nice!

Also, it seems like all the existing Felix-like hardware is integrated
into an SoC, so there are really no other potential users at this time.

For a distributed setup, it looks like I'd just need to create
felix_crosschip_bridge_{join,leave} routines, using the mv88e6xxx driver
as a template. These routines would create internal VLANs where, assuming
they use a tagging protocol that the switch can offload (your
documentation specifically mentions Marvell-tagged frames for this
reason, seemingly), everything should be fully offloaded to the switches.
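
Concretely, I'm picturing something like the sketch below. The hook
signatures and the mv88e6xxx-style tree check are just my reading of the
current code, and the felix_crosschip_* bodies are pure placeholders, so
please correct me if the shape is already wrong:

static int felix_crosschip_bridge_join(struct dsa_switch *ds,
                                       int tree_index, int sw_index,
                                       int port, struct dsa_bridge bridge,
                                       struct netlink_ext_ack *extack)
{
        /* Only react to events in our own tree, as mv88e6xxx does. */
        if (tree_index != ds->dst->index)
                return 0;

        /* Program whatever VLAN / TCAM state lets this switch forward
         * traffic for 'bridge' towards the switch named by sw_index.
         */
        return 0;
}

static void felix_crosschip_bridge_leave(struct dsa_switch *ds,
                                         int tree_index, int sw_index,
                                         int port, struct dsa_bridge bridge)
{
        /* Undo whatever felix_crosschip_bridge_join() installed. */
}

/* ... and wired up in felix_switch_ops: */
        .crosschip_bridge_join  = felix_crosschip_bridge_join,
        .crosschip_bridge_leave = felix_crosschip_bridge_leave,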

What's the catch?

In the Marvell case, is there any gotcha where "under these scenarios,
the controlling CPU needs to process packets at line rate"?

Thanks for all the great documentation and guidance! (and patience)


Colin Foster


* Re: Crosschip bridge functionality
  2022-12-23 19:37 Crosschip bridge functionality Colin Foster
@ 2022-12-23 20:05 ` Andrew Lunn
  2022-12-23 20:54   ` Colin Foster
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Lunn @ 2022-12-23 20:05 UTC (permalink / raw)
  To: Colin Foster; +Cc: Florian Fainelli, Vladimir Oltean, Alexandre Belloni, netdev

On Fri, Dec 23, 2022 at 11:37:47AM -0800, Colin Foster wrote:
> Hello,
> 
> I've been looking into what it would take to add the Distributed aspect
> to the Felix driver, and I have some general questions about the theory
> of operation and if there are any limitations I don't foresee. It might
> be a fair bit of work for me to get hardware to even test, so avoiding
> dead ends early would be really nice!
> 
> Also it seems like all the existing Felix-like hardware is all
> integrated into a SOC, so there's really no other potential users at
> this time.
> 
> For a distributed setup, it looks like I'd just need to create
> felix_crosschip_bridge_{join,leave} routines, and use the mv88e6xxx as a
> template. These routines would create internal VLANs where, assuming
> they use a tagging protocol that the switch can offload (your
> documentation specifically mentions Marvell-tagged frames for this
> reason, seemingly) everything should be fully offloaded to the switches.
> 
> What's the catch?

I actually think you need silicon support for this. Earlier versions
of the Marvell switches are missing some functionality, which results
in VLANs leaking in distributed setups. I think the switches also
share information between themselves over the DSA ports, i.e. the
ports between switches.

I've no idea if you can replicate the Marvell DSA concept with VLANs.
The Marvell header has the D in DSA as a core concept. The SoC can
request that a frame be sent out a specific port of a specific switch.
And each switch has a routing table which indicates which egress port to
use to reach a specific switch. Frames received at the SoC indicate both
the ingress port and the ingress switch, etc.
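
If it helps make that concrete, the receive path in net/dsa/tag_dsa.c
does roughly the following for forwarded frames -- paraphrased from
memory, so check the real code for the exact header layout:

        /* The DSA header carries the source switch and source port,
         * which is how the master maps a frame back to the right user
         * netdev.
         */
        source_device = dsa_header[0] & 0x1f;        /* switch index in the tree */
        source_port = (dsa_header[1] >> 3) & 0x1f;   /* port on that switch */

        skb->dev = dsa_master_find_slave(dev, source_device, source_port);
        if (!skb->dev)
                return NULL;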

> In the Marvell case, is there any gotcha where "under these scenarios,
> the controlling CPU needs to process packets at line rate"?

None that I know of. But I'm sure Marvell put a reasonable amount of
thought into how to make a distributed switch. There is at least one
patent covering the concept. It could be that a VLAN-based
re-implementation could have such problems.

	Andrew


* Re: Crosschip bridge functionality
  2022-12-23 20:05 ` Andrew Lunn
@ 2022-12-23 20:54   ` Colin Foster
  2022-12-23 21:18     ` Andrew Lunn
  2022-12-24  0:59     ` Vladimir Oltean
  0 siblings, 2 replies; 10+ messages in thread
From: Colin Foster @ 2022-12-23 20:54 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: Florian Fainelli, Vladimir Oltean, Alexandre Belloni, netdev

On Fri, Dec 23, 2022 at 09:05:27PM +0100, Andrew Lunn wrote:
> On Fri, Dec 23, 2022 at 11:37:47AM -0800, Colin Foster wrote:
> > Hello,
> > 
> > I've been looking into what it would take to add the Distributed aspect
> > to the Felix driver, and I have some general questions about the theory
> > of operation and if there are any limitations I don't foresee. It might
> > be a fair bit of work for me to get hardware to even test, so avoiding
> > dead ends early would be really nice!
> > 
> > Also it seems like all the existing Felix-like hardware is all
> > integrated into a SOC, so there's really no other potential users at
> > this time.
> > 
> > For a distributed setup, it looks like I'd just need to create
> > felix_crosschip_bridge_{join,leave} routines, and use the mv88e6xxx as a
> > template. These routines would create internal VLANs where, assuming
> > they use a tagging protocol that the switch can offload (your
> > documentation specifically mentions Marvell-tagged frames for this
> > reason, seemingly) everything should be fully offloaded to the switches.
> > 
> > What's the catch?
> 
> I actually think you need silicon support for this. Earlier versions
> of the Marvell Switches are missing some functionality, which results
> in VLANs leaking in distributed setups. I think the switches also
> share information between themselves, over the DSA ports, i.e. the
> ports between switches.
> 
> I've no idea if you can replicate the Marvell DSA concept with VLANs.
> The Marvell header has D in DSA as a core concept. The SoC can request
> a frame is sent out a specific port of a specific switch. And each
> switch has a routing table which indicates what egress port to use to
> go towards a specific switch. Frames received at the SoC indicate both
> the ingress port and the ingress switch, etc.

"It might not work at all" is definitely a catch :-)

I haven't looked into the Marvell documentation about this, so maybe
that's where I should go next. It seems Ocelot chips support
double-tagging, which would lend itself to the SoC being able to
determine the ingress and egress port and switch... though that might
imply it could only work with DSA ports on the first chip, which would
be an understandable limitation.

> 
> > In the Marvell case, is there any gotcha where "under these scenarios,
> > the controlling CPU needs to process packets at line rate"?
> 
> None that i know of. But i'm sure Marvell put a reasonable amount of
> thought into how to make a distributed switch. There is at least one
> patent covering the concept. It could be that a VLAN based
> re-implemention could have such problems. 

I'm starting to understand why there's only one user of
crosschip_bridge_* functions. So this sounds to me like a "don't go down
this path - you're in for trouble" scenario.


Thanks for the info!

> 
> 	Andrew


* Re: Crosschip bridge functionality
  2022-12-23 20:54   ` Colin Foster
@ 2022-12-23 21:18     ` Andrew Lunn
  2022-12-23 22:36       ` Colin Foster
  2022-12-24  0:59     ` Vladimir Oltean
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Lunn @ 2022-12-23 21:18 UTC (permalink / raw)
  To: Colin Foster; +Cc: Florian Fainelli, Vladimir Oltean, Alexandre Belloni, netdev

> > > What's the catch?
> > 
> > I actually think you need silicon support for this. Earlier versions
> > of the Marvell Switches are missing some functionality, which results
> > in VLANs leaking in distributed setups. I think the switches also
> > share information between themselves, over the DSA ports, i.e. the
> > ports between switches.
> > 
> > I've no idea if you can replicate the Marvell DSA concept with VLANs.
> > The Marvell header has D in DSA as a core concept. The SoC can request
> > a frame is sent out a specific port of a specific switch. And each
> > switch has a routing table which indicates what egress port to use to
> > go towards a specific switch. Frames received at the SoC indicate both
> > the ingress port and the ingress switch, etc.
> 
> "It might not work at all" is definitely a catch :-)
> 
> I haven't looked into the Marvell documentation about this, so maybe
> that's where I should go next. It seems Ocelot chips support
> double-tagging, which would lend itself to the SoC being able to
> determine which port and switch for ingress and egress... though that
> might imply it could only work with DSA ports on the first chip, which
> would be an understandable limitation.
> 
> > 
> > > In the Marvell case, is there any gotcha where "under these scenarios,
> > > the controlling CPU needs to process packets at line rate"?
> > 
> > None that i know of. But i'm sure Marvell put a reasonable amount of
> > thought into how to make a distributed switch. There is at least one
> > patent covering the concept. It could be that a VLAN based
> > re-implemention could have such problems. 
> 
> I'm starting to understand why there's only one user of
> crosschip_bridge_* functions. So this sounds to me like a "don't go down
> this path - you're in for trouble" scenario.

What is your real use case here?

I know people have stacked switches before, and just operated them as
stacked switches. So you need to configure each switch independently.
What Marvell DSA does is make it transparent, so to some extent it
looks like one big switch, not a collection of switches.

      Andrew


* Re: Crosschip bridge functionality
  2022-12-23 21:18     ` Andrew Lunn
@ 2022-12-23 22:36       ` Colin Foster
  2022-12-23 23:03         ` Andrew Lunn
  0 siblings, 1 reply; 10+ messages in thread
From: Colin Foster @ 2022-12-23 22:36 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: Florian Fainelli, Vladimir Oltean, Alexandre Belloni, netdev

On Fri, Dec 23, 2022 at 10:18:54PM +0100, Andrew Lunn wrote:
> > > > What's the catch?
> > > 
> > > I actually think you need silicon support for this. Earlier versions
> > > of the Marvell Switches are missing some functionality, which results
> > > in VLANs leaking in distributed setups. I think the switches also
> > > share information between themselves, over the DSA ports, i.e. the
> > > ports between switches.
> > > 
> > > I've no idea if you can replicate the Marvell DSA concept with VLANs.
> > > The Marvell header has D in DSA as a core concept. The SoC can request
> > > a frame is sent out a specific port of a specific switch. And each
> > > switch has a routing table which indicates what egress port to use to
> > > go towards a specific switch. Frames received at the SoC indicate both
> > > the ingress port and the ingress switch, etc.
> > 
> > "It might not work at all" is definitely a catch :-)
> > 
> > I haven't looked into the Marvell documentation about this, so maybe
> > that's where I should go next. It seems Ocelot chips support
> > double-tagging, which would lend itself to the SoC being able to
> > determine which port and switch for ingress and egress... though that
> > might imply it could only work with DSA ports on the first chip, which
> > would be an understandable limitation.
> > 
> > > 
> > > > In the Marvell case, is there any gotcha where "under these scenarios,
> > > > the controlling CPU needs to process packets at line rate"?
> > > 
> > > None that i know of. But i'm sure Marvell put a reasonable amount of
> > > thought into how to make a distributed switch. There is at least one
> > > patent covering the concept. It could be that a VLAN based
> > > re-implemention could have such problems. 
> > 
> > I'm starting to understand why there's only one user of
> > crosschip_bridge_* functions. So this sounds to me like a "don't go down
> > this path - you're in for trouble" scenario.
> 
> What is your real use case here?

Fair question. We have a baseboard configuration with cards that offer
customization / expansion. An example might be a card that offers
additional fibre / copper ports, which would lend itself very nicely to
a DSA configuration... more cards == more ports.

We can see some interesting uses of VLANs for all sorts of things. I
haven't been the boots on the ground, so I don't know all the use cases.
My main hope is to offer as much configurability to the system
integrators as possible. Maybe sw2p2 is a tap of sw1p2, while sw2p3,
sw2p4, and sw1p3 are bridged, with the CPU doing IGMP snooping and
running RSTP.

> 
> I know people have stacked switches before, and just operated them as
> stacked switches. So you need to configure each switch independently.
> What Marvell DSA does is make it transparent, so to some extent it
> looks like one big switch, not a collection of switches.

That is definitely possible. It would require the people doing the
system integration to have a lot more knowledge than a simple "add this
port to that bridge". My goal is to make their lives as easy as can be.

It sounds like that all exists with Marvell hardware...


* Re: Crosschip bridge functionality
  2022-12-23 22:36       ` Colin Foster
@ 2022-12-23 23:03         ` Andrew Lunn
  2022-12-23 23:31           ` Colin Foster
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Lunn @ 2022-12-23 23:03 UTC (permalink / raw)
  To: Colin Foster; +Cc: Florian Fainelli, Vladimir Oltean, Alexandre Belloni, netdev

> Fair question. We have a baseboard configuration with cards that offer
> customization / expansion. An example might be a card that offers
> additional fibre / copper ports, which would lend itself very nicely to
> a DSA configuration... more cards == more ports.
> 
> We can see some interesting use of vlans for all sorts of things. I
> haven't been the boots on the ground, so I don't know all the use-cases.
> My main hope is to be able to offer as much configurability for the
> system integrators as possible. Maybe sw2p2 is a tap of sw1p2, while
> sw2p3, sw2p4, and sw1p3 are bridged, with the CPU doing IGMP snooping
> and running RSTP.
> 
> > 
> > I know people have stacked switches before, and just operated them as
> > stacked switches. So you need to configure each switch independently.
> > What Marvell DSA does is make it transparent, so to some extent it
> > looks like one big switch, not a collection of switches.
> 
> That is definitely possible. It might make the people doing any system
> integration have a lot more knowledge than a simple "add this port to
> that bridge". My goal is to make their lives as easy as can be.
> 
> It sounds like that all exists with Marvell hardware...

You might want to get hold of a Turris Mox system, with a few different
cards in it. That will give you a Marvell D-in-DSA system to play
with. And your system seems quite similar in some ways.

    Andrew


* Re: Crosschip bridge functionality
  2022-12-23 23:03         ` Andrew Lunn
@ 2022-12-23 23:31           ` Colin Foster
  0 siblings, 0 replies; 10+ messages in thread
From: Colin Foster @ 2022-12-23 23:31 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: Florian Fainelli, Vladimir Oltean, Alexandre Belloni, netdev

On Sat, Dec 24, 2022 at 12:03:03AM +0100, Andrew Lunn wrote:
> You might want get hold of a Turris Mox system, with a few different
> cards in it. That will give you a Marvell D in DSA system to play
> with. And your system seems quite similar in some ways.

Indeed I do. Thanks Andrew!



* Re: Crosschip bridge functionality
  2022-12-23 20:54   ` Colin Foster
  2022-12-23 21:18     ` Andrew Lunn
@ 2022-12-24  0:59     ` Vladimir Oltean
  2022-12-24 18:53       ` Colin Foster
  1 sibling, 1 reply; 10+ messages in thread
From: Vladimir Oltean @ 2022-12-24  0:59 UTC (permalink / raw)
  To: Colin Foster, Andrew Lunn; +Cc: Florian Fainelli, Alexandre Belloni, netdev

Hi Colin,

On Fri, Dec 23, 2022 at 12:54:29PM -0800, Colin Foster wrote:
> On Fri, Dec 23, 2022 at 09:05:27PM +0100, Andrew Lunn wrote:
> > On Fri, Dec 23, 2022 at 11:37:47AM -0800, Colin Foster wrote:
> > > Hello,
> > > 
> > > I've been looking into what it would take to add the Distributed aspect
> > > to the Felix driver, and I have some general questions about the theory
> > > of operation and if there are any limitations I don't foresee. It might
> > > be a fair bit of work for me to get hardware to even test, so avoiding
> > > dead ends early would be really nice!
> > > 
> > > Also it seems like all the existing Felix-like hardware is all
> > > integrated into a SOC, so there's really no other potential users at
> > > this time.
> > > 
> > > For a distributed setup, it looks like I'd just need to create
> > > felix_crosschip_bridge_{join,leave} routines, and use the mv88e6xxx as a
> > > template. These routines would create internal VLANs where, assuming
> > > they use a tagging protocol that the switch can offload (your
> > > documentation specifically mentions Marvell-tagged frames for this
> > > reason, seemingly) everything should be fully offloaded to the switches.
> > > 
> > > What's the catch?
> > 
> > I actually think you need silicon support for this. Earlier versions
> > of the Marvell Switches are missing some functionality, which results
> > in VLANs leaking in distributed setups. I think the switches also
> > share information between themselves, over the DSA ports, i.e. the
> > ports between switches.
> > 
> > I've no idea if you can replicate the Marvell DSA concept with VLANs.
> > The Marvell header has D in DSA as a core concept. The SoC can request
> > a frame is sent out a specific port of a specific switch. And each
> > switch has a routing table which indicates what egress port to use to
> > go towards a specific switch. Frames received at the SoC indicate both
> > the ingress port and the ingress switch, etc.
> 
> "It might not work at all" is definitely a catch :-)
> 
> I haven't looked into the Marvell documentation about this, so maybe
> that's where I should go next. It seems Ocelot chips support
> double-tagging, which would lend itself to the SoC being able to
> determine which port and switch for ingress and egress... though that
> might imply it could only work with DSA ports on the first chip, which
> would be an understandable limitation.
> 
> > > In the Marvell case, is there any gotcha where "under these scenarios,
> > > the controlling CPU needs to process packets at line rate"?
> > 
> > None that i know of. But i'm sure Marvell put a reasonable amount of
> > thought into how to make a distributed switch. There is at least one
> > patent covering the concept. It could be that a VLAN based
> > re-implemention could have such problems. 
> 
> I'm starting to understand why there's only one user of
> crosschip_bridge_* functions. So this sounds to me like a "don't go down
> this path - you're in for trouble" scenario.

Trying to build on top of what Andrew has already replied.

Back when I was new to DSA and completely unqualified to be a DSA reviewer/
maintainer (it's debatable whether now I am), I actually had some of the
same questions about what's possible in terms of software support, given
the Vitesse architectural limitations for cross-chip bridging a la Marvell,
in this email thread:
https://patchwork.kernel.org/project/linux-arm-kernel/patch/1561131532-14860-5-git-send-email-claudiu.manoil@nxp.com/

That being said, you need to broaden your detection criteria for cross-chip
bridging; sja1105 (and tag_8021q in general) supports this too, except
it's a bit hidden from the ds->ops->crosschip_bridge_join() operation.
It all relies on the concept of cross-chip notifier chain from switch.c.
dsa_tag_8021q_bridge_join() will emit a DSA_NOTIFIER_TAG_8021Q_VLAN_ADD
event, which the other tag_8021q capable switches in the system will see
and react to.
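
Schematically, and heavily simplified (the real function also moves the
port from its standalone VLAN into the bridging VLAN, and I'm writing
the signatures from memory, so treat this as a sketch rather than the
actual code):

int dsa_tag_8021q_bridge_join(struct dsa_switch *ds, int port,
                              struct dsa_bridge bridge)
{
        struct dsa_port *dp = dsa_to_port(ds, port);
        u16 bridge_vid = dsa_tag_8021q_bridge_vid(bridge.num);

        /* Emits DSA_NOTIFIER_TAG_8021Q_VLAN_ADD as a broadcast across
         * the tree; every other tag_8021q capable switch gets a callback
         * and installs bridge_vid on the ports it needs to (including
         * its cascade ports), with no explicit
         * ds->ops->crosschip_bridge_join() involvement.
         */
        return dsa_port_tag_8021q_vlan_add(dp, bridge_vid, true);
}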

Because felix and sja1105 each support a tagger based on tag_8021q for
different needs, there is an important difference in their implementations.
The comment in dsa_tag_8021q_bridge_join() - called by sja1105 but not
by felix - summarizes the essence of the difference.

If Felix were to gain support for tag_8021q cross-chip bridging*, the
driver would need to look at the switch's position within the PCB
topology. On the user ports, tag_8021q would have to be implemented
using the VCAP TCAM rules, to retain support for VLAN-aware bridging and
just push/pop the VLAN that serves as a makeshift tag. On the DSA
"cascade" ports, tag_8021q
would have to be implemented using the VLAN table, in order to make the
switch understand the tag that's already in the packet and route based
on it, rather than push yet another one. The proper combination of VCAP
rules and VLAN table entries needs much more consideration to cover all
scenarios (CPU RX over a daisy chain; CPU TX over a daisy chain;
autonomous forwarding over 2 switches; autonomous forwarding over 3
switches; autonomous forwarding between sja1105 and felix; forwarding
done by felix for traffic originated by one sja1105 and destined to
another sja1105; forwarding done by felix for traffic originated by a
sja1105 and destined to a felix user port with no other downstream switch).
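
If it helps to visualize the split, a driver-side skeleton could look
something like this -- to be clear, none of the felix_crosschip_*
helpers below exist, the names are made up purely to illustrate the
user-port vs. cascade-port distinction:

static int felix_crosschip_tag_8021q_vlan_add(struct dsa_switch *ds,
                                              int port, u16 vid)
{
        struct ocelot *ocelot = ds->priv;

        if (dsa_is_user_port(ds, port))
                /* User port: VCAP IS1/ES0 rules push/pop 'vid' as the
                 * makeshift tag, leaving the VLAN table free for the
                 * bridge's own VLANs (VLAN-aware bridging keeps working).
                 */
                return felix_crosschip_vcap_add(ocelot, port, vid);

        /* Cascade ("DSA") port: a plain VLAN table entry, so the switch
         * understands the tag already present in the packet and forwards
         * based on it, instead of pushing yet another one.
         */
        return felix_crosschip_vlan_table_add(ocelot, port, vid);
}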

You might find some of my thoughts on this topic interesting, in the
"Switch topology changes" chapter of this PDF:
https://lpc.events/event/11/contributions/949/attachments/823/1555/paper.pdf

With that development summary in mind, you'll probably be prepared to
use "git log" to better understand some of the stages that tag_8021q
cross-chip bridging has been through.

In principle, when comparing tag_8021q cross-chip bridging to something
proprietary like Marvell, I consider it to be somewhat analogous to
Russian/SSSR engineering: it's placed on the "good" side of the diminishing
returns curve, or i.o.w., it works stupidly well for how simplistic it is.
I could be interested to help if you come up with a sound proposal that
addresses your needs and is generic enough that pieces of it are useful
to others too.

*I seriously doubt that any hw manufacturer would be crazy enough to
use Vitesse switches for an application for which they are essentially
out of spec and out of their intended use. Yet that is more or less the
one-sentence description of what we, at NXP, are doing with them, so I
know what it's like and I don't necessarily discourage it ;) Generally
I'd say they take a bit of pushing quite well (while failing at some
arguably reasonable and basic use cases, like flow control on NPI port -
go figure).


* Re: Crosschip bridge functionality
  2022-12-24  0:59     ` Vladimir Oltean
@ 2022-12-24 18:53       ` Colin Foster
  2023-01-03 10:47         ` Vladimir Oltean
  0 siblings, 1 reply; 10+ messages in thread
From: Colin Foster @ 2022-12-24 18:53 UTC (permalink / raw)
  To: Vladimir Oltean; +Cc: Andrew Lunn, Florian Fainelli, Alexandre Belloni, netdev

On Sat, Dec 24, 2022 at 02:59:34AM +0200, Vladimir Oltean wrote:
> Hi Colin,
> 
> On Fri, Dec 23, 2022 at 12:54:29PM -0800, Colin Foster wrote:
> > On Fri, Dec 23, 2022 at 09:05:27PM +0100, Andrew Lunn wrote:
> > I'm starting to understand why there's only one user of
> > crosschip_bridge_* functions. So this sounds to me like a "don't go down
> > this path - you're in for trouble" scenario.
> 
> Trying to build on top of what Andrew has already replied.
> 
> Back when I was new to DSA and completely unqualified to be a DSA reviewer/
> maintainer (it's debatable whether now I am), I actually had some of the
> same questions about what's possible in terms of software support, given
> the Vitesse architectural limitations for cross-chip bridging a la Marvell,
> in this email thread:
> https://patchwork.kernel.org/project/linux-arm-kernel/patch/1561131532-14860-5-git-send-email-claudiu.manoil@nxp.com/

Thank you for this link. I'll look it over. As usual, I'll need some
time to absorb all this information :-)

> 
> That being said, you need to broaden your detection criteria for cross-chip
> bridging; sja1105 (and tag_8021q in general) supports this too, except
> it's a bit hidden from the ds->ops->crosschip_bridge_join() operation.
> It all relies on the concept of cross-chip notifier chain from switch.c.
> dsa_tag_8021q_bridge_join() will emit a DSA_NOTIFIER_TAG_8021Q_VLAN_ADD
> event, which the other tag_8021q capable switches in the system will see
> and react to.
> 
> Because felix and sja1105 each support a tagger based on tag_8021q for
> different needs, there is an important difference in their implementations.
> The comment in dsa_tag_8021q_bridge_join() - called by sja1105 but not
> by felix - summarizes the essence of the difference.

Hmm... So the Marvell and sja1105 both support "Distributed" but in
slightly different ways?

> 
> If Felix were to gain support for tag_8021q cross-chip bridging*, the
> driver would would need to look at the switch's position within the PCB topology.
> On the user ports, tag_8021q would have to be implemented using the VCAP
> TCAM rules, to retain support for VLAN-aware bridging and just push/pop the
> VLAN that serves as make-shift tag. On the DSA "cascade" ports, tag_8021q
> would have to be implemented using the VLAN table, in order to make the
> switch understand the tag that's already in the packet and route based
> on it, rather than push yet another one. The proper combination of VCAP
> rules and VLAN table entries needs much more consideration to cover all
> scenarios (CPU RX over a daisy chain; CPU TX over a daisy chain;
> autonomous forwarding over 2 switches; autonomous forwarding over 3
> switches; autonomous forwarding between sja1105 and felix; forwarding
> done by felix for traffic originated by one sja1105 and destined to
> another sja1105; forwarding done by felix for traffic originated by a
> sja1105 and destined to a felix user port with no other downstream switch).

^ This paragraph is what I need! Although I'm leaning very much toward
the "run away" solution (and buying some fun hardware in the process),
this is something I'll keep revisiting as I learn. If it isn't
fall-off-a-log easy for you, I probably don't stand a chance.

> 
> You might find some of my thoughts on this topic interesting, in the
> "Switch topology changes" chapter of this PDF:
> https://lpc.events/event/11/contributions/949/attachments/823/1555/paper.pdf

I'm well aware of this paper :-) I'll give it another re-read, as I
always find new things.

> 
> With that development summary in mind, you'll probably be prepared to
> use "git log" to better understand some of the stages that tag_8021q
> cross-chip bridging has been through.

Yes, a couple key terms and a little background can go a very long way!
Thanks.

> 
> In principle, when comparing tag_8021q cross-chip bridging to something
> proprietary like Marvell, I consider it to be somewhat analogous to
> Russian/SSSR engineering: it's placed on the "good" side of the diminishing
> returns curve, or i.o.w., it works stupidly well for how simplistic it is.
> I could be interested to help if you come up with a sound proposal that
> addresses your needs and is generic enough that pieces of it are useful
> to others too.

Great to know. I'm in a very early "theory only" stage of this. First
things first - I need to button up full switch functionality and add
another year to the Copyright notice.

> 
> *I seriously doubt that any hw manufacturer would be crazy enough to
> use Vitesse switches for an application for which they are essentially
> out of spec and out of their intended use. Yet that is more or less the
> one-sentence description of what we, at NXP, are doing with them, so I
> know what it's like and I don't necessarily discourage it ;) Generally
> I'd say they take a bit of pushing quite well (while failing at some
> arguably reasonable and basic use cases, like flow control on NPI port -
> go figure).


* Re: Crosschip bridge functionality
  2022-12-24 18:53       ` Colin Foster
@ 2023-01-03 10:47         ` Vladimir Oltean
  0 siblings, 0 replies; 10+ messages in thread
From: Vladimir Oltean @ 2023-01-03 10:47 UTC (permalink / raw)
  To: Colin Foster; +Cc: Andrew Lunn, Florian Fainelli, Alexandre Belloni, netdev

On Sat, Dec 24, 2022 at 10:53:10AM -0800, Colin Foster wrote:
> > That being said, you need to broaden your detection criteria for cross-chip
> > bridging; sja1105 (and tag_8021q in general) supports this too, except
> > it's a bit hidden from the ds->ops->crosschip_bridge_join() operation.
> > It all relies on the concept of cross-chip notifier chain from switch.c.
> > dsa_tag_8021q_bridge_join() will emit a DSA_NOTIFIER_TAG_8021Q_VLAN_ADD
> > event, which the other tag_8021q capable switches in the system will see
> > and react to.
> > 
> > Because felix and sja1105 each support a tagger based on tag_8021q for
> > different needs, there is an important difference in their implementations.
> > The comment in dsa_tag_8021q_bridge_join() - called by sja1105 but not
> > by felix - summarizes the essence of the difference.
> 
> Hmm... So the Marvell and sja1105 both support "Distributed" but in
> slightly different ways?

Yes, the SJA1105 and SJA1110 switches can also be instantiated multiple
times on the same board (under the same DSA master, forming a single tree),
and have some awareness of each other. The hardware awareness is limited
to PTP timestamping. Only leaf ports should take a MAC timestamp for a
PTP event message. Cascade ports must be configured to forward that
packet timestamp to the CPU and not generate a new one. Otherwise, the
forwarding plane is basically unaware of a multi-switch DSA tree.
The reasoning of the designers was that it doesn't even need to be,
since non-proprietary mechanisms such as VLANs can be used to restrict
the forwarding domain as desired. The cross-chip support in tag_8021q
follows that idea and programs some reserved VLANs to the ports that
need them.

