From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jeff Kirsher
Date: Wed, 10 Oct 2018 12:16:07 -0700
Subject: [Intel-wired-lan] [PATCH v2 06/12] Documentation: ixgbe: Prepare documentation for RST conversion
In-Reply-To: <20181010191613.2770-1-jeffrey.t.kirsher@intel.com>
References: <20181010191613.2770-1-jeffrey.t.kirsher@intel.com>
Message-ID: <20181010191613.2770-7-jeffrey.t.kirsher@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: intel-wired-lan@osuosl.org
List-ID:

Before making the conversion to the RST (reStructuredText) format, there are
changes needed in the documentation so that there are no build errors. Also
fixed old/broken URLs to point to the correct or updated URLs.

Signed-off-by: Jeff Kirsher
---
 Documentation/networking/ixgbe.txt | 654 ++++++++++++++++++-----------
 1 file changed, 416 insertions(+), 238 deletions(-)

diff --git a/Documentation/networking/ixgbe.txt b/Documentation/networking/ixgbe.txt
index 687835415707..89e1635ea9ef 100644
--- a/Documentation/networking/ixgbe.txt
+++ b/Documentation/networking/ixgbe.txt
@@ -1,349 +1,527 @@
-Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Family of
-Adapters
+.. SPDX-License-Identifier: GPL-2.0+
+
+Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
 =============================================================================
 
 Intel 10 Gigabit Linux driver.
-Copyright(c) 1999 - 2013 Intel Corporation.
+Copyright(c) 1999-2018 Intel Corporation.
 
 Contents
 ========
 
 - Identifying Your Adapter
+- Command Line Parameters
 - Additional Configurations
-- Performance Tuning
 - Known Issues
 - Support
 
 Identifying Your Adapter
 ========================
+The driver is compatible with devices based on the following:
 
-The driver in this release is compatible with 82598, 82599 and X540-based
-Intel Network Connections.
-
-For more information on how to identify your adapter, go to the Adapter &
-Driver ID Guide at:
-
-    http://support.intel.com/support/network/sb/CS-012904.htm
+ * Intel(R) Ethernet Controller 82598
+ * Intel(R) Ethernet Controller 82599
+ * Intel(R) Ethernet Controller X520
+ * Intel(R) Ethernet Controller X540
+ * Intel(R) Ethernet Controller X550
+ * Intel(R) Ethernet Controller X552
+ * Intel(R) Ethernet Controller X553
+
+For information on how to identify your adapter, and for the latest Intel
+network drivers, refer to the Intel Support website:
+https://www.intel.com/support
 
 SFP+ Devices with Pluggable Optics
 ----------------------------------
 
 82599-BASED ADAPTERS
+~~~~~~~~~~~~~~~~~~~~
+NOTES:
+- If your 82599-based Intel(R) Network Adapter came with Intel optics or is an
+Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel optics
+and/or the direct attach cables listed below.
+- When 82599-based SFP+ devices are connected back to back, they should be set
+to the same Speed setting via ethtool, as shown in the example below. Results
+may vary if you mix speed settings.
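+
+For example, one illustrative way to pin both link partners to the same fixed
+speed is the following ethtool command, where ethX is a placeholder for your
+actual interface name (supported speed values vary by device)::
+
+  ethtool -s ethX speed 10000 autoneg off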
+
++---------------+---------------------------------------+------------------+
+| Supplier      | Type                                  | Part Numbers     |
++===============+=======================================+==================+
+| SR Modules                                                               |
++---------------+---------------------------------------+------------------+
+| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | FTLX8571D3BCV-IT |
++---------------+---------------------------------------+------------------+
+| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | AFBR-703SDZ-IN2  |
++---------------+---------------------------------------+------------------+
+| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | AFBR-703SDDZ-IN1 |
++---------------+---------------------------------------+------------------+
+| LR Modules                                                               |
++---------------+---------------------------------------+------------------+
+| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | FTLX1471D3BCV-IT |
++---------------+---------------------------------------+------------------+
+| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | AFCT-701SDZ-IN2  |
++---------------+---------------------------------------+------------------+
+| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | AFCT-701SDDZ-IN1 |
++---------------+---------------------------------------+------------------+
+
+The following is a list of 3rd party SFP+ modules that have received some
+testing. Not all modules are applicable to all devices.
+
++---------------+---------------------------------------+------------------+
+| Supplier      | Type                                  | Part Numbers     |
++===============+=======================================+==================+
+| Finisar       | SFP+ SR bailed, 10g single rate       | FTLX8571D3BCL    |
++---------------+---------------------------------------+------------------+
+| Avago         | SFP+ SR bailed, 10g single rate       | AFBR-700SDZ      |
++---------------+---------------------------------------+------------------+
+| Finisar       | SFP+ LR bailed, 10g single rate       | FTLX1471D3BCL    |
++---------------+---------------------------------------+------------------+
+| Finisar       | DUAL RATE 1G/10G SFP+ SR (No Bail)    | FTLX8571D3QCV-IT |
++---------------+---------------------------------------+------------------+
+| Avago         | DUAL RATE 1G/10G SFP+ SR (No Bail)    | AFBR-703SDZ-IN1  |
++---------------+---------------------------------------+------------------+
+| Finisar       | DUAL RATE 1G/10G SFP+ LR (No Bail)    | FTLX1471D3QCV-IT |
++---------------+---------------------------------------+------------------+
+| Avago         | DUAL RATE 1G/10G SFP+ LR (No Bail)    | AFCT-701SDZ-IN1  |
++---------------+---------------------------------------+------------------+
+| Finisar       | 1000BASE-T SFP                        | FCLF8522P2BTL    |
++---------------+---------------------------------------+------------------+
+| Avago         | 1000BASE-T                            | ABCU-5710RZ      |
++---------------+---------------------------------------+------------------+
+| HP            | 1000BASE-SX SFP                       | 453153-001       |
++---------------+---------------------------------------+------------------+

-NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
-is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
-optics and/or the direct attach cables listed below.
+82599-based adapters support all passive and active limiting direct attach
+cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
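+
+To verify which pluggable module the driver has detected, ethtool can dump the
+module EEPROM (an illustrative query; ethX stands in for your interface name,
+and not every device exposes this data)::
+
+  ethtool -m ethX
+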
-When 82599-based SFP+ devices are connected back to back, they should be set to
-the same Speed setting via ethtool. Results may vary if you mix speed settings.

-82598-based adapters support all passive direct attach cables that comply
-with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
-cables are not supported.
+Laser turns off for SFP+ when ifconfig ethX down
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+"ifconfig ethX down" turns off the laser for 82599-based SFP+ fiber adapters.
+"ifconfig ethX up" turns on the laser.
+Alternatively, you can use "ip link set [down/up] dev ethX" to turn the
+laser off and on.
+
+
+82599-based QSFP+ Adapters
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+NOTES:
+- If your 82599-based Intel(R) Network Adapter came with Intel optics, it only
+supports Intel optics.
+- 82599-based QSFP+ adapters only support 4x10 Gbps connections. 1x40 Gbps
+connections are not supported. QSFP+ link partners must be configured for
+4x10 Gbps.
+- 82599-based QSFP+ adapters do not support automatic link speed detection.
+The link speed must be configured to either 10 Gbps or 1 Gbps to match the
+link partner's speed capabilities. Incorrect speed configurations will result
+in failure to link.
+- Intel(R) Ethernet Converged Network Adapter X520-Q1 only supports the optics
+and direct attach cables listed below.
+
++---------------+---------------------------------------+------------------+
+| Supplier      | Type                                  | Part Numbers     |
++===============+=======================================+==================+
+| Intel         | DUAL RATE 1G/10G QSFP+ SRL (bailed)   | E10GQSFPSR       |
++---------------+---------------------------------------+------------------+
+
+82599-based QSFP+ adapters support all passive and active limiting QSFP+
+direct attach cables that comply with SFF-8436 v4.1 specifications.

-Supplier    Type                                             Part Numbers

 82598-BASED ADAPTERS
+~~~~~~~~~~~~~~~~~~~~
+NOTES:
+- Intel(R) Ethernet Network Adapters that support removable optical modules
+only support their original module type (for example, the Intel(R) 10 Gigabit
+SR Dual Port Express Module only supports SR optical modules). If you plug in
+a different type of module, the driver will not load.
+- Hot swapping/hot plugging optical modules is not supported.
+- Only single speed, 10 gigabit modules are supported.
+- LAN on Motherboard (LOM) connections may support DA, SR, or LR modules.
+Other module types are not supported. Please see your system documentation
+for details.
+
+The following is a list of SFP+ modules and direct attach cables that have
+received some testing. Not all modules are applicable to all devices.
+
++---------------+---------------------------------------+------------------+
+| Supplier      | Type                                  | Part Numbers     |
++===============+=======================================+==================+
+| Finisar       | SFP+ SR bailed, 10g single rate       | FTLX8571D3BCL    |
++---------------+---------------------------------------+------------------+
+| Avago         | SFP+ SR bailed, 10g single rate       | AFBR-700SDZ      |
++---------------+---------------------------------------+------------------+
+| Finisar       | SFP+ LR bailed, 10g single rate       | FTLX1471D3BCL    |
++---------------+---------------------------------------+------------------+
+
+82598-based adapters support all passive direct attach cables that comply with
+SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables
+are not supported.
+
+Third party optic modules and cables referred to above are listed only for the
+purpose of highlighting third party specifications and potential
+compatibility, and are not recommendations or endorsements or sponsorship of
+any third party's product by Intel.
+Intel is not endorsing or promoting
+products made by any third party and the third party reference is provided
+only to share information regarding certain optic modules and cables with the
+above specifications. There may be other manufacturers or suppliers, producing
+or supplying optic modules and cables with similar or matching descriptions.
+Customers must use their own discretion and diligence to purchase optic
+modules and cables from any third party of their choice. Customers are solely
+responsible for assessing the suitability of the product and/or devices and
+for the selection of the vendor for purchasing any product. THE OPTIC MODULES
+AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL
+ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED
+WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR
+SELECTION OF VENDOR BY CUSTOMERS.
+
+Command Line Parameters
+=======================

-SR Modules
-Intel  DUAL RATE 1G/10G SFP+ SR (bailed)    FTLX8571D3BCV-IT
-Intel  DUAL RATE 1G/10G SFP+ SR (bailed)    AFBR-703SDDZ-IN1
-Intel  DUAL RATE 1G/10G SFP+ SR (bailed)    AFBR-703SDZ-IN2
-LR Modules
-Intel  DUAL RATE 1G/10G SFP+ LR (bailed)    FTLX1471D3BCV-IT
-Intel  DUAL RATE 1G/10G SFP+ LR (bailed)    AFCT-701SDDZ-IN1
-Intel  DUAL RATE 1G/10G SFP+ LR (bailed)    AFCT-701SDZ-IN2
+max_vfs
+-------
+:Valid Range: 1-63

-The following is a list of 3rd party SFP+ modules and direct attach cables that
-have received some testing. Not all modules are applicable to all devices.
+This parameter adds support for SR-IOV. It causes the driver to spawn up to
+max_vfs worth of virtual functions.
+If the value is greater than 0 it will also force the VMDq parameter to be 1
+or more.

-Supplier   Type                                 Part Numbers
+NOTE: This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x
+and above, use sysfs to enable VFs. Also, for Red Hat distributions, this
+parameter is only used on version 6.6 and older. For version 6.7 and newer,
+use sysfs. For example::

-Finisar    SFP+ SR bailed, 10g single rate      FTLX8571D3BCL
-Avago      SFP+ SR bailed, 10g single rate      AFBR-700SDZ
-Finisar    SFP+ LR bailed, 10g single rate      FTLX1471D3BCL
+  # echo $num_vf_enabled > /sys/class/net/$dev/device/sriov_numvfs  # enable VFs
+  # echo 0 > /sys/class/net/$dev/device/sriov_numvfs                # disable VFs

-Finisar    DUAL RATE 1G/10G SFP+ SR (No Bail)   FTLX8571D3QCV-IT
-Avago      DUAL RATE 1G/10G SFP+ SR (No Bail)   AFBR-703SDZ-IN1
-Finisar    DUAL RATE 1G/10G SFP+ LR (No Bail)   FTLX1471D3QCV-IT
-Avago      DUAL RATE 1G/10G SFP+ LR (No Bail)   AFCT-701SDZ-IN1
-Finistar   1000BASE-T SFP                       FCLF8522P2BTL
-Avago      1000BASE-T SFP                       ABCU-5710RZ
+The parameters for the driver are referenced by position. Thus, if you have a
+dual port adapter, or more than one adapter in your system, and want N virtual
+functions per port, you must specify a number for each port with each
+parameter separated by a comma. For example::

-82599-based adapters support all passive and active limiting direct attach
-cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
+  modprobe ixgbe max_vfs=4

-Laser turns off for SFP+ when device is down
--------------------------------------------
-"ip link set down" turns off the laser for 82599-based SFP+ fiber adapters.
-"ip link set up" turns on the laser.
+This will spawn 4 VFs on the first port.
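+
+As an illustrative check (assuming the first port is eth0; adjust the name to
+your system), the result can be read back through the same sysfs interface::
+
+  # cat /sys/class/net/eth0/device/sriov_numvfs    # VFs currently enabled
+  # cat /sys/class/net/eth0/device/sriov_totalvfs  # maximum VFs supported
+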
+::

-82598-BASED ADAPTERS
+  modprobe ixgbe max_vfs=2,4

-NOTES for 82598-Based Adapters:
-- Intel(R) Network Adapters that support removable optical modules only support
-  their original module type (i.e., the Intel(R) 10 Gigabit SR Dual Port
-  Express Module only supports SR optical modules). If you plug in a different
-  type of module, the driver will not load.
-- Hot Swapping/hot plugging optical modules is not supported.
-- Only single speed, 10 gigabit modules are supported.
-- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
-  types are not supported. Please see your system documentation for details.
+This will spawn 2 VFs on the first port and 4 VFs on the second port.
+
+NOTE: Use caution when loading the driver with these parameters. Depending on
+your system configuration, number of slots, etc., it is impossible to predict
+in all cases where the positions would be on the command line.
+
+NOTE: Neither the device nor the driver control how VFs are mapped into config
+space. Bus layout will vary by operating system. On operating systems that
+support it, you can check sysfs to find the mapping.
+
+NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering
+and VLAN tag stripping/insertion will remain enabled. Please remove the old
+VLAN filter before the new VLAN filter is added. For example::
+
+  ip link set eth0 vf 0 vlan 100  # set VLAN 100 for VF 0
+  ip link set eth0 vf 0 vlan 0    # delete VLAN 100
+  ip link set eth0 vf 0 vlan 200  # set a new VLAN 200 for VF 0

-The following is a list of 3rd party SFP+ modules and direct attach cables that
-have received some testing. Not all modules are applicable to all devices.
+With kernel 3.6, the driver supports the simultaneous usage of max_vfs and DCB
+features, subject to the constraints described below. Prior to kernel 3.6, the
+driver did not support the simultaneous operation of max_vfs greater than 0
+and the DCB features (multiple traffic classes utilizing Priority Flow Control
+and Extended Transmission Selection).

-Supplier   Type                                 Part Numbers
+When DCB is enabled, network traffic is transmitted and received through
+multiple traffic classes (packet buffers in the NIC). The traffic is
+associated with a specific class based on priority, which has a value of 0
+through 7 used in the VLAN tag. When SR-IOV is not enabled, each traffic class
+is associated with a set of receive/transmit descriptor queue pairs. The
+number of queue pairs for a given traffic class depends on the hardware
+configuration. When SR-IOV is enabled, the descriptor queue pairs are grouped
+into pools. The Physical Function (PF) and each Virtual Function (VF) is
+allocated a pool of receive/transmit descriptor queue pairs. When multiple
+traffic classes are configured (for example, DCB is enabled), each pool
+contains a queue pair from each traffic class. When a single traffic class is
+configured in the hardware, the pools contain multiple queue pairs from the
+single traffic class.

-Finisar    SFP+ SR bailed, 10g single rate      FTLX8571D3BCL
-Avago      SFP+ SR bailed, 10g single rate      AFBR-700SDZ
-Finisar    SFP+ LR bailed, 10g single rate      FTLX1471D3BCL
+The number of VFs that can be allocated depends on the number of traffic
+classes that can be enabled.
+The configurable number of traffic classes for each enabled VF is as follows:
+
+- 0 - 15 VFs = Up to 8 traffic classes, depending on device support
+- 16 - 31 VFs = Up to 4 traffic classes
+- 32 - 63 VFs = 1 traffic class

-82598-based adapters support all passive direct attach cables that comply
-with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
-cables are not supported.
+When VFs are configured, the PF is allocated one pool as well. The PF supports
+the DCB features with the constraint that each traffic class will only use a
+single queue pair. When zero VFs are configured, the PF can support multiple
+queue pairs per traffic class.

+allow_unsupported_sfp
+---------------------
+:Valid Range: 0,1
+:Default Value: 0 (disabled)
+
+This parameter allows unsupported and untested SFP+ modules on 82599-based
+adapters, as long as the type of module is known to the driver.
+
+debug
+-----
+:Valid Range: 0-16 (0=none,...,16=all)
+:Default Value: 0
+
+This parameter adjusts the level of debug messages displayed in the system
+logs.
+
+
+Additional Features and Configurations
+======================================

 Flow Control
 ------------
 Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
-receiving and transmitting pause frames for ixgbe. When TX is enabled, PAUSE
-frames are generated when the receive packet buffer crosses a predefined
-threshold. When rx is enabled, the transmit unit will halt for the time delay
-specified when a PAUSE frame is received.
+receiving and transmitting pause frames for ixgbe. When transmit is enabled,
+pause frames are generated when the receive packet buffer crosses a predefined
+threshold. When receive is enabled, the transmit unit will halt for the time
+delay specified when a pause frame is received.
+
+NOTE: You must have a flow control capable link partner.
+
+Flow Control is enabled by default.
+
+Use ethtool to change the flow control settings. To enable or disable Rx or
+Tx Flow Control::
+
+  ethtool -A eth? rx <on|off> tx <on|off>
+
+Note: This command only enables or disables Flow Control if auto-negotiation
+is disabled. If auto-negotiation is enabled, this command changes the
+parameters used for auto-negotiation with the link partner.
+
+To enable or disable auto-negotiation::

-Flow Control is enabled by default. If you want to disable a flow control
-capable link partner, use ethtool:
+  ethtool -s eth? autoneg <on|off>

-     ethtool -A eth? autoneg off RX off TX off
+Note: Flow Control auto-negotiation is part of link auto-negotiation.
+Depending on your device, you may not be able to change the auto-negotiation
+setting.

-NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
-behavior is changed to off. Flow control in 1 gig mode on these devices can
-lead to Tx hangs.
+NOTE: For 82598 backplane cards entering 1 gigabit mode, flow control default
+behavior is changed to off. Flow control in 1 gigabit mode on these devices
+can lead to transmit hangs.
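+
+To display the pause parameters currently in effect, the following query can
+be used (illustrative only; eth? again stands for your interface name)::
+
+  ethtool -a eth?
+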
 Intel(R) Ethernet Flow Director
 -------------------------------
-Supports advanced filters that direct receive packets by their flows to
-different queues. Enables tight control on routing a flow in the platform.
-Matches flows and CPU cores for flow affinity. Supports multiple parameters
-for flexible flow classification and load balancing.
+The Intel Ethernet Flow Director performs the following tasks:

-Flow director is enabled only if the kernel is multiple TX queue capable.
+- Directs receive packets according to their flows to different queues.
+- Enables tight control on routing a flow in the platform.
+- Matches flows and CPU cores for flow affinity.
+- Supports multiple parameters for flexible flow classification and load
+  balancing (in SFP mode only).

-An included script (set_irq_affinity.sh) automates setting the IRQ to CPU
-affinity.
+NOTE: Intel Ethernet Flow Director masking works in the opposite manner from
+subnet masking. In the following command::

-You can verify that the driver is using Flow Director by looking at the counter
-in ethtool: fdir_miss and fdir_match.
+  #ethtool -N eth11 flow-type ip4 src-ip 172.4.1.2 m 255.0.0.0 dst-ip \
+  172.21.1.1 m 255.128.0.0 action 31

-Other ethtool Commands:
-To enable Flow Director
-	ethtool -K ethX ntuple on
-To add a filter
-	Use -U switch. e.g., ethtool -U ethX flow-type tcp4 src-ip 10.0.128.23
-	action 1
-To see the list of filters currently present:
-	ethtool -u ethX
+The src-ip value that is written to the filter will be 0.4.1.2, not 172.0.0.0
+as might be expected. Similarly, the dst-ip value written to the filter will
+be 0.21.1.1, not 172.0.0.0.

-Perfect Filter: Perfect filter is an interface to load the filter table that
-funnels all flow into queue_0 unless an alternative queue is specified using
-"action". In that case, any flow that matches the filter criteria will be
-directed to the appropriate queue.
+To enable or disable the Intel Ethernet Flow Director::

-If the queue is defined as -1, filter will drop matching packets.
+  # ethtool -K ethX ntuple <on|off>

-To account for filter matches and misses, there are two stats in ethtool:
-fdir_match and fdir_miss. In addition, rx_queue_N_packets shows the number of
-packets processed by the Nth queue.
+When disabling ntuple filters, all the user programmed filters are flushed
+from the driver cache and hardware. All needed filters must be re-added when
+ntuple is re-enabled.

-NOTE: Receive Packet Steering (RPS) and Receive Flow Steering (RFS) are not
-compatible with Flow Director. IF Flow Director is enabled, these will be
-disabled.
+To add a filter that directs packets to queue 2, use the -U or -N switch::

-The following three parameters impact Flow Director.
+  # ethtool -N ethX flow-type tcp4 src-ip 192.168.10.1 dst-ip \
+  192.168.10.2 src-port 2000 dst-port 2001 action 2 [loc 1]

-FdirMode
---------
-Valid Range: 0-2 (0=off, 1=ATR, 2=Perfect filter mode)
-Default Value: 1
+To see the list of filters currently present::

-  Flow Director filtering modes.
+  # ethtool <-u|-n> ethX

-FdirPballoc
------------
-Valid Range: 0-2 (0=64k, 1=128k, 2=256k)
-Default Value: 0
+Sideband Perfect Filters
+------------------------
+Sideband Perfect Filters are used to direct traffic that matches specified
+characteristics. They are enabled through ethtool's ntuple interface. To add a
+new filter use the following command::

-  Flow Director allocated packet buffer size.
+  ethtool -U <Device> flow-type <type> src-ip <ip> dst-ip <ip> src-port \
+  <port> dst-port <port> action <queue>

-AtrSampleRate
---------------
-Valid Range: 1-100
-Default Value: 20
+Where:
+  <Device> - the ethernet device to program
+  <type> - can be ip4, tcp4, udp4, or sctp4
+  <ip> - the IP address to match on
+  <port> - the port number to match on
+  <queue> - the queue to direct traffic towards (-1 discards the matched
+  traffic)

-  Software ATR Tx packet sample rate. For example, when set to 20, every 20th
-  packet, looks to see if the packet will create a new flow.
+Use the following command to delete a filter::

-Node
-----
-Valid Range: 0-n
-Default Value: 1 (off)
+  ethtool -U <Device> delete <N>

-  0 - n: where n is the number of NUMA nodes (i.e. 0 - 3) currently online in
-  your system
-  1: turns this option off
+Where <N> is the filter ID displayed when printing all the active filters, and
+may also have been specified using "loc <N>" when adding the filter.
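+
+For example, to remove the filter programmed above at location 1 (ethX is a
+placeholder for the device the filter was added on)::
+
+  ethtool -U ethX delete 1
+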
-  The Node parameter will allow you to pick which NUMA node you want to have
-  the adapter allocate memory on.
+The following example matches TCP traffic sent from 192.168.0.1, port 5300,
+directed to 192.168.0.5, port 80, and sends it to queue 7::

-max_vfs
--------
-Valid Range: 1-63
-Default Value: 0
+  ethtool -U enp130s0 flow-type tcp4 src-ip 192.168.0.1 dst-ip 192.168.0.5 \
+  src-port 5300 dst-port 80 action 7
+
+For each flow-type, the programmed filters must all have the same matching
+input set. For example, issuing the following two commands is acceptable::
+
+  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
+  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.5 src-port 55 action 10

-  If the value is greater than 0 it will also force the VMDq parameter to be 1
-  or more.
+Issuing the next two commands, however, is not acceptable, since the first
+specifies src-ip and the second specifies dst-ip::

-  This parameter adds support for SR-IOV. It causes the driver to spawn up to
-  max_vfs worth of virtual function.
+  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
+  ethtool -U enp130s0 flow-type ip4 dst-ip 192.168.0.5 src-port 55 action 10

+The second command will fail with an error. You may program multiple filters
+with the same fields, using different values, but, on one device, you may not
+program two TCP4 filters with different matching fields.

-Additional Configurations
-=========================
+Matching on a sub-portion of a field is not supported by the ixgbe driver,
+thus partial mask fields are not supported.

-  Jumbo Frames
-  ------------
-  The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
-  enabled by changing the MTU to a value larger than the default of 1500.
-  The maximum value for the MTU is 16110. Use the ip command to
-  increase the MTU size. For example:
+To create filters that direct traffic to a specific Virtual Function, use the
+"user-def" parameter. Specify the user-def as a 64-bit value, where the lower
+32 bits represent the queue number, while the next 8 bits represent which VF.
+Note that 0 is the PF, so the VF identifier is offset by 1. For example::

-     ip link set dev ethx mtu 9000
+  ... user-def 0x800000002 ...

-  The maximum MTU setting for Jumbo Frames is 9710. This value coincides
-  with the maximum Jumbo Frames size of 9728.
+specifies to direct traffic to Virtual Function 7 (8 minus 1) into queue 2 of
+that VF.

-  Generic Receive Offload, aka GRO
-  --------------------------------
-  The driver supports the in-kernel software implementation of GRO. GRO has
-  shown that by coalescing Rx traffic into larger chunks of data, CPU
-  utilization can be significantly reduced when under large Rx load. GRO is an
-  evolution of the previously-used LRO interface. GRO is able to coalesce
-  other protocols besides TCP. It's also safe to use with configurations that
-  are problematic for LRO, namely bridging and iSCSI.
+Note that these filters will not break internal routing rules, and will not
+route traffic that otherwise would not have been sent to the specified Virtual
+Function.

-  Data Center Bridging, aka DCB
-  -----------------------------
-  DCB is a configuration Quality of Service implementation in hardware.
-  It uses the VLAN priority tag (802.1p) to filter traffic. That means
-  that there are 8 different priorities that traffic can be filtered into.
-  It also enables priority flow control which can limit or eliminate the
-  number of dropped packets during network stress. Bandwidth can be
-  allocated to each of these priorities, which is enforced at the hardware
-  level.
+Jumbo Frames
+------------
+Jumbo Frames support is enabled by changing the Maximum Transmission Unit
+(MTU) to a value larger than the default value of 1500.

-  To enable DCB support in ixgbe, you must enable the DCB netlink layer to
-  allow the userspace tools (see below) to communicate with the driver.
-  This can be found in the kernel configuration here:
+Use the ifconfig command to increase the MTU size. For example, enter the
+following where <x> is the interface number::

-          -> Networking support
-                  -> Networking options
-                          -> Data Center Bridging support
+  ifconfig eth<x> mtu 9000 up

-  Once this is selected, DCB support must be selected for ixgbe. This can
-  be found here:
+Alternatively, you can use the ip command as follows::

-          -> Device Drivers
-                  -> Network device support (NETDEVICES [=y])
-                          -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
-                                  -> Intel(R) 10GbE PCI Express adapters support
-                                          -> Data Center Bridging (DCB) Support
+  ip link set mtu 9000 dev eth<x>
+  ip link set up dev eth<x>

-  After these options are selected, you must rebuild your kernel and your
-  modules.
+This setting is not saved across reboots. The setting change can be made
+permanent by adding 'MTU=9000' to the file::

-  In order to use DCB, userspace tools must be downloaded and installed.
-  The dcbd tools can be found at:
+  /etc/sysconfig/network-scripts/ifcfg-eth<x>   # for RHEL
+  /etc/sysconfig/network/<config_file>          # for SLES

-          http://e1000.sf.net
+NOTE: The maximum MTU setting for Jumbo Frames is 9710. This value coincides
+with the maximum Jumbo Frames size of 9728 bytes.

-  Ethtool
-  -------
-  The driver utilizes the ethtool interface for driver configuration and
-  diagnostics, as well as displaying statistical information. The latest
-  ethtool version is required for this functionality.
+NOTE: This driver will attempt to use multiple page sized buffers to receive
+each jumbo packet. This should help to avoid buffer starvation issues when
+allocating receive packets.

-  The latest release of ethtool can be found from
-  https://www.kernel.org/pub/software/network/ethtool/
+NOTE: For 82599-based network connections, if you are enabling jumbo frames in
+a virtual function (VF), jumbo frames must first be enabled in the physical
+function (PF). The VF MTU setting cannot be larger than the PF MTU.

-  FCoE
-  ----
-  This release of the ixgbe driver contains new code to enable users to use
-  Fiber Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
-  functionality that is supported by the 82598-based hardware. This code has
-  no default effect on the regular driver operation, and configuring DCB and
-  FCoE is outside the scope of this driver README. Refer to
-  http://www.open-fcoe.org/ for FCoE project information and contact
-  e1000-eedc at lists.sourceforge.net for DCB information.
+Generic Receive Offload, aka GRO
+--------------------------------
+The driver supports the in-kernel software implementation of GRO. GRO has
+shown that by coalescing Rx traffic into larger chunks of data, CPU
+utilization can be significantly reduced when under large Rx load. GRO is an
+evolution of the previously-used LRO interface. GRO is able to coalesce
+other protocols besides TCP. It's also safe to use with configurations that
+are problematic for LRO, namely bridging and iSCSI.
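+
+GRO can be inspected and toggled through ethtool, as in this illustrative
+sketch (ethX is a placeholder interface name)::
+
+  ethtool -k ethX | grep generic-receive-offload   # query current state
+  ethtool -K ethX gro on                           # enable GRO
+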
-  MAC and VLAN anti-spoofing feature
-  ----------------------------------
-  When a malicious driver attempts to send a spoofed packet, it is dropped by
-  the hardware and not transmitted. An interrupt is sent to the PF driver
-  notifying it of the spoof attempt.
+Data Center Bridging (DCB)
+--------------------------
+NOTE:
+The kernel assumes that TC0 is available, and will disable Priority Flow
+Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0
+is enabled when setting up DCB on your switch.

-  When a spoofed packet is detected the PF driver will send the following
-  message to the system log (displayed by the "dmesg" command):
+DCB is a configuration Quality of Service implementation in hardware. It uses
+the VLAN priority tag (802.1p) to filter traffic. That means that there are 8
+different priorities that traffic can be filtered into. It also enables
+priority flow control (802.1Qbb) which can limit or eliminate the number of
+dropped packets during network stress. Bandwidth can be allocated to each of
+these priorities, which is enforced at the hardware level (802.1Qaz).

-  Spoof event(s) detected on VF (n)
+Adapter firmware implements LLDP and DCBX protocol agents as per 802.1AB and
+802.1Qaz respectively. The firmware based DCBX agent runs in willing mode only
+and can accept settings from a DCBX capable peer. Software configuration of
+DCBX parameters via dcbtool/lldptool is not supported.

-  Where n=the VF that attempted to do the spoofing.
+The ixgbe driver implements the DCB netlink interface layer to allow
+user-space to communicate with the driver and query DCB configuration for the
+port.

+ethtool
+-------
+The driver utilizes the ethtool interface for driver configuration and
+diagnostics, as well as displaying statistical information. The latest ethtool
+version is required for this functionality. Download it at:
+https://www.kernel.org/pub/software/network/ethtool/

-Performance Tuning
-==================
+FCoE
+----
+The ixgbe driver supports Fibre Channel over Ethernet (FCoE) and Data Center
+Bridging (DCB). This code has no default effect on the regular driver
+operation. Configuring DCB and FCoE is outside the scope of this README. Refer
+to http://www.open-fcoe.org/ for FCoE project information and contact
+ixgbe-eedc at lists.sourceforge.net for DCB information.

-An excellent article on performance tuning can be found at:
+MAC and VLAN anti-spoofing feature
+----------------------------------
+When a malicious driver attempts to send a spoofed packet, it is dropped by
+the hardware and not transmitted.

-http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
+An interrupt is sent to the PF driver notifying it of the spoof attempt. When
+a spoofed packet is detected, the PF driver will send the following message to
+the system log (displayed by the "dmesg" command)::

+  ixgbe ethX: ixgbe_spoof_check: n spoofed packets detected

-Known Issues
-============
+where "X" is the PF interface number and "n" is the number of spoofed packets.

+NOTE: This feature can be disabled for a specific Virtual Function (VF)::
+
+  ip link set <pf dev> vf <vf index> spoofchk {off|on}
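+
+For example, to disable anti-spoof checking for VF 0 on a hypothetical PF
+named eth0 (both names are placeholders)::
+
+  ip link set eth0 vf 0 spoofchk off
+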
-Enabling SR-IOV in a 32-bit or 64-bit Microsoft* Windows* Server 2008/R2
-Guest OS using Intel (R) 82576-based GbE or Intel (R) 82599-based 10GbE
-controller under KVM
-------------------------------------------------------------------------
-KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
-includes traditional PCIe devices, as well as SR-IOV-capable devices using
-Intel 82576-based and 82599-based controllers.
-
-While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
-to a Linux-based VM running 2.6.32 or later kernel works fine, there is a
-known issue with Microsoft Windows Server 2008 VM that results in a "yellow
-bang" error. This problem is within the KVM VMM itself, not the Intel driver,
-or the SR-IOV logic of the VMM, but rather that KVM emulates an older CPU
-model for the guests, and this older CPU model does not support MSI-X
-interrupts, which is a requirement for Intel SR-IOV.
-
-If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
-with KVM and a Microsoft Windows Server 2008 guest try the following
-workaround. The workaround is to tell KVM to emulate a different model of CPU
-when using qemu to create the KVM guest:
-
-     "-cpu qemu64,model=13"
+Known Issues/Troubleshooting
+============================
+
+Enabling SR-IOV in a 64-bit Microsoft* Windows Server* 2012/R2 guest OS
+-----------------------------------------------------------------------
+Linux KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
+This includes traditional PCIe devices, as well as SR-IOV-capable devices
+based on the Intel Ethernet Controller XL710.

 Support
 =======
 For general information, go to the Intel support website at:
-
-    http://support.intel.com
+https://www.intel.com/support/

 or the Intel Wired Networking project hosted by Sourceforge at:
-
-    http://e1000.sourceforge.net
+https://sourceforge.net/projects/e1000

-If an issue is identified with the released source code on the supported
-kernel with a supported adapter, email the specific information related
-to the issue to e1000-devel at lists.sf.net
+If an issue is identified with the released source code on a supported kernel
+with a supported adapter, email the specific information related to the issue
+to e1000-devel at lists.sf.net.
-- 
2.17.1