On Thu Oct 08 2020, Vladimir Oltean wrote:
> On Tue, Oct 06, 2020 at 12:13:04PM +0200, Kurt Kanzenbach wrote:
>> >> >> +static const struct hellcreek_platform_data de1soc_r1_pdata = {
>> >> >> +	.num_ports = 4,
>> >> >> +	.is_100_mbits = 1,
>> >> >> +	.qbv_support = 1,
>> >> >> +	.qbv_on_cpu_port = 1,
>> >> >
>> >> > Why does this matter?
>> >>
>> >> Because Qbv on the CPU port is a feature and not all switch variants
>> >> have that. It will matter as soon as TAPRIO is implemented.
>> >
>> > How do you plan to install a tc-taprio qdisc on the CPU port?
>>
>> That's an issue to be sorted out.
>
> Do you have a compelling use case for tc-taprio on the CPU port though?
> I've been waiting for someone to put one on the table.

Yes, we do. This feature is a must for switched endpoints. Imagine one
port is connected to a PLC with tight cycle times and the other port is
connected to the outside world carrying best-effort traffic. Under no
circumstances may the ingressing best-effort traffic interfere with the
incoming real-time traffic. Using strict priorities is not enough, as a
best-effort frame that is already in transmission can still block the
wire for a noticeable amount of time. That is why this feature exists in
the hardware and why Qbv is needed on the CPU port.

> If it's just "nice to have", I don't think that DSA will change just to
> accommodate that. The fact that the CPU port doesn't have a net device is
> already pretty much the established behavior.

Yes, I know that. Anyhow, we'll have to find a solution to that problem.

Thanks,
Kurt
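
For concreteness, here is a minimal, self-contained sketch of the
blocking-time argument made above. It assumes a 100 Mbit/s link
(matching is_100_mbits = 1 in the platform data) and a maximum-size
VLAN-tagged Ethernet frame; the 250 us PLC cycle mentioned in the final
comment is purely an illustrative assumption and is not taken from the
thread.

/*
 * Back-of-the-envelope calculation (illustration only, not driver code):
 * how long a single best-effort frame can occupy a 100 Mbit/s link.
 * Frame overhead values are standard Ethernet figures.
 */
#include <stdio.h>

#define LINK_SPEED_BPS		100000000ULL	/* 100 Mbit/s */
#define MAX_FRAME_BYTES		1522ULL		/* max. VLAN-tagged frame */
#define PREAMBLE_SFD_BYTES	8ULL
#define IFG_BYTES		12ULL

int main(void)
{
	unsigned long long wire_bytes =
		MAX_FRAME_BYTES + PREAMBLE_SFD_BYTES + IFG_BYTES;
	/* Time the frame blocks the wire, in nanoseconds */
	unsigned long long block_ns =
		wire_bytes * 8ULL * 1000000000ULL / LINK_SPEED_BPS;

	printf("worst-case blocking time: %llu ns (~%llu us)\n",
	       block_ns, block_ns / 1000ULL);
	/*
	 * Roughly 123 us - already half of an assumed 250 us PLC cycle.
	 * Strict priority cannot preempt a frame that is already on the
	 * wire, which is why a Qbv gate schedule (closing the best-effort
	 * gates ahead of the real-time window) is needed instead.
	 */
	return 0;
}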