* [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan)
@ 2021-02-06  3:37 dan
  2021-02-07  2:24 ` Brandeburg, Jesse
  0 siblings, 1 reply; 7+ messages in thread
From: dan @ 2021-02-06  3:37 UTC (permalink / raw)
  To: intel-wired-lan

When receiving 802.1ad traffic, the 710 puts it all on one queue by
default which limits the scalability.

The use case I care about is simply skipping both VLAN headers to get
the 5-tuple and select the RSS queue based on that.
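In software terms, the selection I'm after is roughly the following (a minimal sketch of Toeplitz-style RSS; the key, queue count, and indirection-table size here are arbitrary placeholders, not the 710's actual values):

```python
# Sketch of RSS queue selection: Toeplitz-hash the flow tuple, then use
# the hash to index an indirection table of queue numbers.
import struct

RSS_KEY = bytes(range(40))   # placeholder 40-byte key, not a real NIC key
NUM_QUEUES = 8
INDIRECTION_TABLE = [i % NUM_QUEUES for i in range(128)]

def toeplitz_hash(data: bytes, key: bytes = RSS_KEY) -> int:
    """Standard Toeplitz: for every set bit of the input, XOR in the
    32-bit window of the key starting at that bit position."""
    result = 0
    window = int.from_bytes(key[:4], "big")      # key bits [0, 32)
    key_bits = int.from_bytes(key, "big")
    total_bits = len(key) * 8
    for i in range(len(data) * 8):
        byte, bit = divmod(i, 8)
        if data[byte] & (0x80 >> bit):
            result ^= window
        # slide the 32-bit window one bit further into the key
        shift = total_bits - 33 - i
        next_bit = (key_bits >> shift) & 1 if shift >= 0 else 0
        window = ((window << 1) & 0xFFFFFFFF) | next_bit
    return result

def select_queue(src_ip: bytes, dst_ip: bytes,
                 src_port: int, dst_port: int) -> int:
    # Classic RSS input for TCP/UDP over IPv4: src IP, dst IP, src port,
    # dst port, each in network byte order.
    data = src_ip + dst_ip + struct.pack("!HH", src_port, dst_port)
    return INDIRECTION_TABLE[toeplitz_hash(data) % len(INDIRECTION_TABLE)]
```

With double-tagged frames the hardware parser apparently never reaches the inner IP header, so this hash is never computed and everything lands on the default queue.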

I've tried to find a way to do this but have failed.

Can the hardware do this? Would a DDP package be able to do this?

Pointers appreciated.



* [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan)
  2021-02-06  3:37 [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan) dan
@ 2021-02-07  2:24 ` Brandeburg, Jesse
       [not found]   ` <8c7d255047890290948cf51450b1f860e013b48c.camel@coverfire.com>
  0 siblings, 1 reply; 7+ messages in thread
From: Brandeburg, Jesse @ 2021-02-07  2:24 UTC (permalink / raw)
  To: intel-wired-lan


> On Feb 5, 2021, at 8:06 PM, dan@coverfire.com wrote:
> 
> When receiving 802.1ad traffic, the 710 puts it all on one queue by
> default which limits the scalability.
> 
> The use case I care about is simply skipping both VLAN headers to get
> the 5-tuple and select the RSS queue based on that.
> 
> I've tried to find a way to do this but have failed.
> 
> Can the hardware do this? Would a DDP package be able to do this?

Hi Dan, I am asking around to see what we can do, and will get back to you in the coming week.

--
Jesse Brandeburg



* [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan)
       [not found]   ` <8c7d255047890290948cf51450b1f860e013b48c.camel@coverfire.com>
@ 2021-02-09 15:03     ` Dan Siemon
  2021-02-09 20:02       ` Jesse Brandeburg
  0 siblings, 1 reply; 7+ messages in thread
From: Dan Siemon @ 2021-02-09 15:03 UTC (permalink / raw)
  To: intel-wired-lan

On Sat, 2021-02-06 at 22:59 -0500, Dan Siemon wrote:
> On Sun, 2021-02-07 at 02:24 +0000, Brandeburg, Jesse wrote:
> > Hi Dan, I am asking around to see what we can do, will get back to
> > you in the coming week.
> 
> Thanks. I was looking at some old Intel presentations that sort of
> hinted that the PPPoE DDP profile might support double VLANs. I've been
> experimenting with that today without luck so far. The profile loads
> fine (via ethtool) but I don't see any change in the traffic
> distribution.
> 
> The GTP DDP package documentation says:
> 
> "To enable RSS for GTPv1-U with the IPv4 payload we need to map packet
> classifier type 22 to the DPDK flow type. Flow types are defined in
> rte_eth_ctrl.h; the first 21 are in use in DPDK 17.11, so we can map to
> flow types 22 and up. After mapping to a flow type, we can start the
> port again and enable RSS for flow type 22."
> 
> I haven't been able to find anything that hints at how to do something
> like that outside of DPDK.

I loaded the PPP DDP profile via the DPDK tools. Looking at the list of
protocols supported via 'ddp get info' it looks like they don't do
anything with VLANs:

List of used protocols:
  12: IPV4
  13: IPV6
  15: GRENAT
  17: TCP
  18: UDP
  19: SCTP
  20: ICMP
  22: L2TPv2CTRL
  23: ICMPV6
  26: L2TPv2
  27: L2TPv2PAY
  28: PPPoL2TPv2
  29: PPPoE
  33: PAY2
  34: PAY3
  35: PAY4
  44: IPV4FRAG
  48: IPV6FRAG
  52: OIPV4
  53: OIPV6

I found the presentation linked below, which introduces DDP and talks
about C-tag and S-tag handling in the context of PPPoE.

https://www.slideshare.net/MichelleHolley1/enabling-new-protocol-processing-with-dpdk-using-dynamic-device-personalization
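For anyone else trying this: the way I've been loading profiles with the in-kernel driver is the method from Intel's DDP notes (region 100 is the DDP profile region; the package filename below is just an example, not a real profile name):

```shell
# Load a DDP profile package into region 100, the flash region the
# i40e firmware reserves for DDP profiles.
ethtool -f enp2s0f2 example-profile.pkgo 100

# Roll the profile back later by writing "-" to the same region.
ethtool -f enp2s0f2 - 100
```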

Given some of the complex parsing that the GTP and PPP DDP profiles do,
I suspect the hardware is capable of doing what I require.

For clarity, what I need is the ability to skip 0, 1, or 2 VLAN headers
(802.1Q or 802.1ad) and parse the IPv4/IPv6 flow to get the RSS hash and
spread the traffic across queues. Currently it only handles one VLAN.

Nested VLANs are very common in the service provider space.
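Concretely, "skip 0, 1, or 2 VLAN headers" is just this parsing step before the hash is computed (a software sketch using the standard 802.1Q/802.1ad ethertypes; the example frame is synthetic):

```python
# Walk past up to two stacked VLAN tags (802.1Q 0x8100 / 802.1ad 0x88a8)
# to find the L3 header, which is what RSS should hash over.
import struct

VLAN_ETHERTYPES = {0x8100, 0x88A8}  # 802.1Q C-tag, 802.1ad S-tag

def l3_offset(frame: bytes) -> tuple[int, int]:
    """Return (offset, ethertype) of the L3 payload, skipping up to
    two stacked VLAN tags."""
    offset = 12                      # skip dst MAC + src MAC
    (ethertype,) = struct.unpack_from("!H", frame, offset)
    offset += 2
    for _ in range(2):               # at most two stacked tags
        if ethertype not in VLAN_ETHERTYPES:
            break
        # each tag adds 2 bytes of TCI followed by the inner ethertype
        (ethertype,) = struct.unpack_from("!H", frame, offset + 2)
        offset += 4
    return offset, ethertype

# Example: QinQ frame -- S-tag (0x88a8) wrapping a C-tag (0x8100)
# wrapping IPv4 (0x0800); MACs and the IP header are dummy bytes.
frame = (bytes(12)
         + struct.pack("!HH", 0x88A8, 100)   # S-tag TPID + TCI
         + struct.pack("!HH", 0x8100, 200)   # C-tag TPID + TCI
         + struct.pack("!H", 0x0800)         # inner ethertype: IPv4
         + bytes(20))                        # dummy IPv4 header
```

For the double-tagged frame above, `l3_offset(frame)` lands 22 bytes in, right at the IP header; for an untagged frame it returns the usual offset 14.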



* [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan)
  2021-02-09 15:03     ` Dan Siemon
@ 2021-02-09 20:02       ` Jesse Brandeburg
  2021-02-09 20:59         ` Dan Siemon
  0 siblings, 1 reply; 7+ messages in thread
From: Jesse Brandeburg @ 2021-02-09 20:02 UTC (permalink / raw)
  To: intel-wired-lan

Dan Siemon wrote:

> On Sat, 2021-02-06 at 22:59 -0500, Dan Siemon wrote:
> > On Sun, 2021-02-07 at 02:24 +0000, Brandeburg, Jesse wrote:
> > > Hi Dan, I am asking around to see what we can do, will get back to
> > > you in the coming week.
> > 
> > Thanks. I was looking at some old Intel presentations that sort of
> > hinted that the PPPoE DDP profile might support double VLANs. I've been
> > experimenting with that today without luck so far. The profile loads
> > fine (via ethtool) but I don't see any change in the traffic
> > distribution.

Hi Dan, I've got some good and bad news, and I have a request. Thanks
for your work troubleshooting this and reporting the issue.

...

> Given some of the complex parsing that the GTP and PPP DDP profiles do,
> I suspect the hardware is capable of doing what I require.
> 
> For clarity, what I need is the ability to skip 0, 1, or 2 VLAN headers
> (802.1Q or 802.1ad) and parse the IPv4/IPv6 flow to get the RSS hash and
> spread the traffic across queues. Currently it only handles one VLAN.
> 
> Nested VLANs are very common in the service provider space.

The hardware *can* support what you're trying to do, and our organization
is aware of the issue, but I am going to file a separate internal
ticket to track your case. The good news is that the work is in progress;
the bad news is that we don't have an immediate fix for you. I suspect
the work of fleshing out the features and interactions won't be complete
until Q2 or Q3 of this year. The complexity (and delay) comes from
making sure all the options of stacking VLANs, working with SR-IOV, etc.
still work with the changes.

Please provide us with which driver/kernel/firmware you're running:
uname -a
ethtool -i ethx
lspci -vvv -s <your pci bus:dev.fn>

This will help me provide details to our engineering. I'd like us to be
able to provide you a short term workaround in code, but I'm
investigating if that is feasible.

Jesse


* [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan)
  2021-02-09 20:02       ` Jesse Brandeburg
@ 2021-02-09 20:59         ` Dan Siemon
  2021-02-12  1:49           ` Jesse Brandeburg
  0 siblings, 1 reply; 7+ messages in thread
From: Dan Siemon @ 2021-02-09 20:59 UTC (permalink / raw)
  To: intel-wired-lan

On Tue, 2021-02-09 at 12:02 -0800, Jesse Brandeburg wrote:
> Please provide us with which driver/kernel/firmware you're running,
> uname -a
> ethtool -i ethx
> lspci -vvv -s < your pci bus:dev.fn>

We are ok to update to the latest 710 firmware and we follow the kernel
releases closely.

As we haven't had problems related to firmware, we still have many 710s
in the field that are on 6.01 firmware. Below are dumps from a couple
of our test boxes where I have upgraded the firmware.

-----

root@lab-5000 ~# /sbin/ethtool -i enp2s0f2
driver: i40e
version: 5.9.9-200.fc33.x86_64
firmware-version: 7.20 0x80007a01 0.0.0
expansion-rom-version: 
bus-info: 0000:02:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
root@lab-5000 ~# lspci -vvv -s 0000:02:00.2
02:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710
for 10GbE SFP+ (rev 02)
	Subsystem: Intel Corporation Ethernet Converged Network
Adapter X710
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 17
	Region 0: Memory at dd000000 (64-bit, prefetchable) [size=8M]
	Region 3: Memory at de808000 (64-bit, prefetchable) [size=32K]
	Expansion ROM at df880000 [disabled] [size=512K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-
,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [70] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=3 offset=00000000
		PBA: BAR=3 offset=00001000
	Capabilities: [a0] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 2048 bytes, PhantFunc 0,
Latency L0s <512ns, L1 <64us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+
FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+
UnsupReq+
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop-
FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr-
UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x8, ASPM
L1, Exit Latency L1 <16us
			ClockPM- Surprise- LLActRep- BwNot-
ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled-
CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (ok), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt-
ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not
Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported,
EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB,
EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3-
LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt-
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt+
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO-
CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout-
AdvNonFatalErr+
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout-
AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+
ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres-
HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [140 v1] Device Serial Number 0c-1e-78-ff-ff-0b-
90-00
	Capabilities: [150 v1] Alternative Routing-ID Interpretation
(ARI)
		ARICap:	MFVC- ACS-, Next Function: 3
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap:	Migration-, Interrupt Message Number:
000
		IOVCtl:	Enable- Migration- Interrupt- MSE-
ARIHierarchy-
		IOVSta:	Migration-
		Initial VFs: 32, Total VFs: 32, Number of VFs: 0,
Function Dependency Link: 02
		VF offset: 334, stride: 1, Device ID: 154c
		Supported Page Size: 00000553, System Page Size:
00000001
		Region 0: Memory at 0000000000000000 (64-bit,
prefetchable)
		Region 3: Memory at 0000000000000000 (64-bit,
prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [1a0 v1] Transaction Processing Hints
		Device specific mode supported
		No steering table available
	Capabilities: [1b0 v1] Access Control Services
		ACSCap:	SrcValid- TransBlk- ReqRedir-
CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
		ACSCtl:	SrcValid- TransBlk- ReqRedir-
CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Kernel driver in use: i40e
	Kernel modules: i40e

------

[root@lab20k ~]# /sbin/ethtool -i enp23s0f0
driver: i40e
version: 5.10.13-200.fc33.x86_64
firmware-version: 8.10 0x8000940b 0.0.0
expansion-rom-version: 
bus-info: 0000:17:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
[root@lab20k ~]# lspci -vvv -s 0000:17:00.0
17:00.0 Ethernet controller: Intel Corporation Ethernet Controller
XL710 for 40GbE QSFP+ (rev 02)
	Subsystem: Intel Corporation Ethernet Converged Network
Adapter XL710-Q2
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR+ FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 40
	NUMA node: 0
	IOMMU group: 38
	Region 0: Memory at c4800000 (64-bit, prefetchable) [size=8M]
	Region 3: Memory at c5808000 (64-bit, prefetchable) [size=32K]
	Expansion ROM at c5e80000 [disabled] [size=512K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-
,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [70] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=3 offset=00000000
		PBA: BAR=3 offset=00001000
	Capabilities: [a0] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 2048 bytes, PhantFunc 0,
Latency L0s <512ns, L1 <64us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+
FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr- NonFatalErr- FatalErr-
UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop-
FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr-
UnsupReq+ AuxPwr- TransPend+
		LnkCap:	Port #0, Speed 8GT/s, Width x8, ASPM
L1, Exit Latency L1 <16us
			ClockPM- Surprise- LLActRep- BwNot-
ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled-
CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (ok), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt-
ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not
Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported,
EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink-
Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance-
SpeedDis-
			 Transmit Margin: Normal Operating Range,
EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB,
EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+
LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt-
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt+
UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO-
CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout-
AdvNonFatalErr+
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout-
AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+
ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres-
HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [140 v1] Device Serial Number 8b-71-5c-ff-ff-0b-
90-00
	Capabilities: [150 v1] Alternative Routing-ID Interpretation
(ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap:	Migration-, Interrupt Message Number:
000
		IOVCtl:	Enable- Migration- Interrupt- MSE-
ARIHierarchy+
		IOVSta:	Migration-
		Initial VFs: 64, Total VFs: 64, Number of VFs: 0,
Function Dependency Link: 00
		VF offset: 16, stride: 1, Device ID: 154c
		Supported Page Size: 00000553, System Page Size:
00000001
		Region 0: Memory at 00000000c5400000 (64-bit,
prefetchable)
		Region 3: Memory at 00000000c5910000 (64-bit,
prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [1a0 v1] Transaction Processing Hints
		Device specific mode supported
		No steering table available
	Capabilities: [1b0 v1] Access Control Services
		ACSCap:	SrcValid- TransBlk- ReqRedir-
CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
		ACSCtl:	SrcValid- TransBlk- ReqRedir-
CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [1d0 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Kernel driver in use: i40e
	Kernel modules: i40e





* [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan)
  2021-02-09 20:59         ` Dan Siemon
@ 2021-02-12  1:49           ` Jesse Brandeburg
  2021-11-23  2:04             ` Jesse Brandeburg
  0 siblings, 1 reply; 7+ messages in thread
From: Jesse Brandeburg @ 2021-02-12  1:49 UTC (permalink / raw)
  To: intel-wired-lan

Dan Siemon wrote:

> On Tue, 2021-02-09 at 12:02 -0800, Jesse Brandeburg wrote:
> > Please provide us with which driver/kernel/firmware you're running,
> > uname -a
> > ethtool -i ethx
> > lspci -vvv -s < your pci bus:dev.fn>
> 
> We are ok to update to the latest 710 firmware and we follow the kernel
> releases closely.
> 
> As we haven't had problems related to firmware, we still have many 710s
> in the field that are on 6.01 firmware. Below are dumps from a couple
> of our test boxes where I have upgraded the firmware.

Hi Dan, thanks for the detail. I think your firmware is new enough, but
I'm pretty sure our driver isn't doing enough configuration (it's
currently an unsupported feature in the Linux i40e driver) to get this
working. Based on what I know right now, the only firmware requirement
is something newer than 6.01.

I've filed an internal issue against i40e in our tracker, and sometime
(hopefully) soon we'll have the team looking into the details. I don't
have any timeline for you currently, sorry.

I agree this is an important use case. We appreciate your reporting the
issue to us. Based on what I found when doing some initial triage, it
doesn't seem like a simple fix in the code, so I can't offer you a
patch to fix the issue like I wish I could.

Please keep us posted if you find any other relevant details, and I'll
try to update this thread if we find any info or get a test patch up
and running.

-Jesse


* [Intel-wired-lan] 710/i40e, RSS and 802.1ad (double vlan)
  2021-02-12  1:49           ` Jesse Brandeburg
@ 2021-11-23  2:04             ` Jesse Brandeburg
  0 siblings, 0 replies; 7+ messages in thread
From: Jesse Brandeburg @ 2021-11-23  2:04 UTC (permalink / raw)
  To: intel-wired-lan

On 2/11/2021 5:49 PM, Jesse Brandeburg wrote:
> Dan Siemon wrote:
> 
>> On Tue, 2021-02-09 at 12:02 -0800, Jesse Brandeburg wrote:
>>> Please provide us with which driver/kernel/firmware you're running,
>>> uname -a
>>> ethtool -i ethx
>>> lspci -vvv -s < your pci bus:dev.fn>
>>
>> We are ok to update to the latest 710 firmware and we follow the kernel
>> releases closely.
>>
>> As we haven't had problems related to firmware, we still have many 710s
>> in the field that are on 6.01 firmware. Below are dumps from a couple
>> of our test boxes where I have upgraded the firmware.
> 
> Hi Dan, thanks for the detail. I think your firmware is new enough, but
> I'm pretty sure our driver isn't doing enough configuration (it's
> currently an unsupported feature in the Linux i40e driver) to get this
> working. Based on what I know right now, the only firmware requirement
> is something newer than 6.01.
> 
> I've filed an internal issue against i40e in our tracker, and sometime
> (hopefully) soon we'll have the team looking into the details. I don't
> have any timeline for you currently, sorry.
> 
> I agree this is an important use case. We appreciate your reporting the
> issue to us. Based on what I found when doing some initial triage, it
> doesn't seem like a simple fix in the code, so I can't offer you a
> patch to fix the issue like I wish I could.
> 
> Please keep us posted if you find any other relevant details, and I'll
> try to update this thread if we find any info or get a test patch up
> and running.

An update to this old thread:
The current i40e out-of-tree driver (from intel.com or e1000.sf.net) has
support for a limited double-VLAN use case for virtual machines (VFs),
but the interface used is customer-specific and not upstream (it's in
sysfs).

In the meantime, I guess your best bet is DPDK. An implementation for
the upstream kernel is not currently available, and I don't know when
it will become available.

Thanks for your patience on this one, I'm working with the team to make 
sure we don't create feature gaps like this between out-of-tree and 
upstream in the future. I appreciate that you brought this to our attention.

Thanks,
  Jesse

