From: Shahaf Shuler
Subject: [PATCH v2 1/2] ethdev: introduce Rx queue offloads API
Date: Sun, 10 Sep 2017 15:07:48 +0300
Message-ID: <725cf2c5c2f8c163081958320bc1dbeeeeb1d1ad.1505044395.git.shahafs@mellanox.com>
To: thomas@monjalon.net
Cc: dev@dpdk.org

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports its capabilities for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the
device configuration and the queue configuration.
To enable a per-queue offload, the offload can be set only on the
queue configuration.

Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the time being, in order to enable a
smooth transition of PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler
---
Notes: an illustrative usage sketch and a capability-reporting example
follow the diff.

 doc/guides/nics/features.rst  |  19 +++--
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  52 ++++++++++++-
 3 files changed, 204 insertions(+), 23 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..f2c8497c2 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses] user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses] user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,7 +206,7 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses] user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related] API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 .. _nic_features_vlan_offload:
 
@@ -509,8 +509,7 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
@@ -526,6 +525,7 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
   ``mbuf.vlan_tci_outer``.
@@ -540,7 +540,7 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
@@ -557,6 +557,7 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -574,6 +575,7 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
@@ -586,6 +588,7 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0597641ee..b3c10701e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -687,12 +687,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */ +static void +rte_eth_convert_rx_offloads(const uint64_t rx_offloads, + struct rte_eth_rxmode *rxmode) +{ + + if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT) + rxmode->header_split = 1; + else + rxmode->header_split = 0; + if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM) + rxmode->hw_ip_checksum = 1; + else + rxmode->hw_ip_checksum = 0; + if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) + rxmode->hw_vlan_filter = 1; + else + rxmode->hw_vlan_filter = 0; + if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP) + rxmode->hw_vlan_strip = 1; + else + rxmode->hw_vlan_strip = 0; + if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) + rxmode->hw_vlan_extend = 1; + else + rxmode->hw_vlan_extend = 0; + if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) + rxmode->jumbo_frame = 1; + else + rxmode->jumbo_frame = 0; + if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP) + rxmode->hw_strip_crc = 1; + else + rxmode->hw_strip_crc = 0; + if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) + rxmode->enable_scatter = 1; + else + rxmode->enable_scatter = 0; + if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) + rxmode->enable_lro = 1; + else + rxmode->enable_lro = 0; +} + int rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, const struct rte_eth_conf *dev_conf) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; + struct rte_eth_conf local_conf = *dev_conf; int diag; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL); @@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, return -EBUSY; } + /* + * Convert between the offloads API to enable PMDs to support + * only one of them. + */ + if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) { + rte_eth_convert_rx_offload_bitfield( + &dev_conf->rxmode, &local_conf.rxmode.offloads); + } else { + rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads, + &local_conf.rxmode); + } + /* Copy the dev_conf parameter into the dev structure */ - memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf)); + memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf)); /* * Check that the numbers of RX and TX queues are not greater @@ -767,7 +857,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, * If jumbo frames are enabled, check that the maximum RX packet * length is supported by the configured device. */ - if (dev_conf->rxmode.jumbo_frame == 1) { + if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) { RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u" @@ -1004,6 +1094,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id, uint32_t mbp_buf_size; struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; + struct rte_eth_rxconf local_conf; void **rxq; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL); @@ -1074,8 +1165,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id, if (rx_conf == NULL) rx_conf = &dev_info.default_rxconf; + local_conf = *rx_conf; + if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) { + /** + * Reflect port offloads to queue offloads in order for + * offloads to not be discarded. 
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -1979,7 +2080,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
 
@@ -2055,23 +2157,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2080,6 +2200,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supports it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2094,13 +2221,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0adf3274a..f424cba04 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	uint64_t offloads;
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set in the rx_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
 	__extension__
+	/**
+	 * The bitfield API below is obsolete. Applications should
+	 * enable per-port offloads using the offloads field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter : 1, /**< Enable scatter packets rx handler */
-		enable_lro : 1; /**< Enable LRO */
+		enable_lro : 1, /**< Enable LRO */
+		ignore_offload_bitfield : 1;
+		/**
+		 * When set, the offload bitfield should be ignored.
+		 * Instead, per-port Rx offloads should be set on the offloads
+		 * field above.
+		 * Per-queue offloads should be set on the rte_eth_rxq_conf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads API
+		 * is deprecated.
+		 */
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	uint64_t offloads;
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set in the rx_queue_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -706,6 +733,7 @@ struct rte_eth_rxconf {
 #define ETH_TXQ_FLAGS_NOXSUMS \
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
+
 /**
  * A structure used to configure a TX ring of an Ethernet port.
  */
@@ -907,6 +935,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP	0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM	0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP	0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +989,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1870,6 +1913,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *     each statically configurable offload hardware feature provided by
  *     Ethernet devices, such as IP checksum or VLAN tag stripping for
  *     example.
+ *     The Rx offload bitfield API is obsolete and will be deprecated.
+ *     Applications should set the ignore_offload_bitfield bit on the *rxmode*
+ *     structure and use the offloads field to set per-port offloads instead.
  *   - the Receive Side Scaling (RSS) configuration when using multiple RX
  *     queues per port.
  *
@@ -1923,6 +1969,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offload features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0
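
P.S.: the snippet below is not part of the patch. It is an illustrative
sketch of how an application is expected to use the new API once this patch
is applied: per-port offloads go into rxmode.offloads (with
ignore_offload_bitfield set) and are repeated in the queue configuration,
and per-queue offloads are added on top in rte_eth_rxconf.offloads. The
function name, port/queue ids, "nb_rxd", "mb_pool" and the particular
offloads chosen are placeholders; a real application should select offloads
according to the capabilities its PMD actually reports.

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: port_id, nb_rxd and mb_pool are placeholders. */
static int
setup_rx_with_offloads_api(uint8_t port_id, uint16_t nb_rxd,
			   struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	struct rte_eth_rxconf rxq_conf;
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));
	rte_eth_dev_info_get(port_id, &dev_info);

	/* Move to the new API: the legacy rxmode bitfield is ignored. */
	port_conf.rxmode.ignore_offload_bitfield = 1;

	/* Pick per-port offloads out of rx_offload_capa. */
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_CHECKSUM)
		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	/* Per-port offloads are repeated in the queue configuration... */
	rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = port_conf.rxmode.offloads;
	/* ...and per-queue offloads, out of rx_queue_offload_capa, may be
	 * added on top of them. */
	if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
		rxq_conf.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;

	return rte_eth_rx_queue_setup(port_id, 0, nb_rxd, rte_socket_id(),
				      &rxq_conf, mb_pool);
}

Copying port_conf.rxmode.offloads into rxq_conf.offloads follows the rule
from the commit message that a per-port offload has to be set on both the
device configuration and the queue configuration.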
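A PMD that adopts the new scheme reports two capability sets:
rx_offload_capa (per port) and the new rx_queue_offload_capa (per queue).
The sketch below is a hypothetical dev_infos_get callback, not taken from
any existing driver, and the split between the two sets is an arbitrary
example chosen for illustration.

#include <rte_ethdev.h>

/* Hypothetical dev_infos_get callback of a PMD using the new scheme. */
static void
example_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
{
	(void)dev; /* unused in this sketch */

	/* Offloads this example device can toggle per queue. */
	info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
				      DEV_RX_OFFLOAD_SCATTER;
	/* Per-port capabilities: in this example they include the per-queue
	 * ones plus offloads that can only be enabled for the whole port. */
	info->rx_offload_capa = info->rx_queue_offload_capa |
				DEV_RX_OFFLOAD_CHECKSUM |
				DEV_RX_OFFLOAD_JUMBO_FRAME |
				DEV_RX_OFFLOAD_CRC_STRIP;
}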