* [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC
  To: dev

Add support for the dedicated DPDK firmware variant which provides the
equal stride super-buffer Rx mode. This Rx mode uses the bucket mempool
manager, which supports allocation of contiguous blocks of mbufs.

It achieves a higher Rx packet rate than the traditional single-packet
Rx mode.

The Rx mode also supports the rte_flow MARK and FLAG actions.
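
For illustration only (not part of this series), below is a minimal
sketch of how an application could be set up for this mode: mbufs are
taken from a bucket mempool, and the DPDK firmware variant plus the new
Rx datapath are selected via devargs. The "bucket" ops name and the
devargs shown are assumptions based on this series; see the
documentation patch for the exact names.

#include <rte_mbuf.h>

/* Sketch only: not a tested configuration. */
static struct rte_mempool *
essb_pktmbuf_pool_create(const char *name, unsigned int n, int socket_id)
{
	/*
	 * The bucket mempool manager allocates mbufs in contiguous
	 * blocks, which the equal stride super-buffer datapath relies on.
	 */
	return rte_pktmbuf_pool_create_by_ops(name, n, 256, 0,
					      RTE_MBUF_DEFAULT_BUF_SIZE,
					      socket_id, "bucket");
}

The PMD would then be started with devargs along the lines of
-w 0000:01:00.0,rx_datapath=ef10_essb,fw_variant=dpdk (argument names
assumed from the kvargs changes in this series).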

The series should be applied on top of [1], [2], [3], [4] and [5].

[1] https://dpdk.org/ml/archives/dev/2018-April/098035.html
[2] https://dpdk.org/ml/archives/dev/2018-April/098047.html
[3] https://dpdk.org/ml/archives/dev/2018-April/095872.html
[4] https://dpdk.org/ml/archives/dev/2018-April/097354.html
[5] https://dpdk.org/ml/archives/dev/2018-April/097365.html

There are a number of known checkpatches.sh warnings: in the base
driver because of coding style differences, and in the PMD itself
because of the positive errno values used inside the driver.

Andrew Rybchenko (18):
  net/sfc/base: update autogenerated MCDI and TLV headers
  net/sfc/base: make RxQ type data a union
  net/sfc/base: detect equal stride super-buffer support
  net/sfc/base: support equal stride super-buffer Rx mode
  net/sfc/base: add equal stride super-buffer prefix layout
  net/sfc: factor out function to push Rx doorbell
  net/sfc: prepare EF10 Rx event parser to be reused
  net/sfc: move EF10 Rx event parser to shared header
  net/sfc: conditionally compile support for tunnel packets
  net/sfc: allow one Rx queue entry to carry many packet buffers
  net/sfc: allow to take mbuf pool into account when sizing
  net/sfc: support equal stride super-buffer Rx mode
  net/sfc: support callback to check if mempool is supported
  net/sfc: check mempool when equal stride super-buffer used
  net/sfc: support DPDK firmware variant
  net/sfc: add Rx descriptor wait timeout
  net/sfc: support flow marks in equal stride super-buffer Rx
  doc: advertise equal stride super-buffer Rx mode support in net/sfc

Roman Zhukov (5):
  net/sfc/base: get actions MARK and FLAG support
  net/sfc/base: support MARK and FLAG actions in filters
  net/sfc/base: get max supported value for action MARK
  net/sfc: make processing of flow rule actions more uniform
  net/sfc: support MARK and FLAG actions in flow API

 doc/guides/nics/sfc_efx.rst            |  42 ++-
 doc/guides/rel_notes/release_18_05.rst |   2 +
 drivers/net/sfc/Makefile               |   1 +
 drivers/net/sfc/base/ef10_ev.c         |  30 +-
 drivers/net/sfc/base/ef10_filter.c     |  31 +-
 drivers/net/sfc/base/ef10_impl.h       |  14 +-
 drivers/net/sfc/base/ef10_nic.c        |  27 +-
 drivers/net/sfc/base/ef10_rx.c         |  84 ++++-
 drivers/net/sfc/base/ef10_tlv_layout.h |  22 ++
 drivers/net/sfc/base/efx.h             |  44 ++-
 drivers/net/sfc/base/efx_check.h       |   7 +
 drivers/net/sfc/base/efx_filter.c      |  21 ++
 drivers/net/sfc/base/efx_impl.h        |  25 +-
 drivers/net/sfc/base/efx_regs_ef10.h   |  15 +
 drivers/net/sfc/base/efx_regs_mcdi.h   | 646 +++++++++++++++++++++++++++++++-
 drivers/net/sfc/base/efx_rx.c          |  70 +++-
 drivers/net/sfc/base/siena_nic.c       |   5 +
 drivers/net/sfc/efsys.h                |   2 +
 drivers/net/sfc/meson.build            |   1 +
 drivers/net/sfc/sfc.c                  |  35 ++
 drivers/net/sfc/sfc.h                  |   2 +
 drivers/net/sfc/sfc_dp.h               |   3 +-
 drivers/net/sfc/sfc_dp_rx.h            |  27 +-
 drivers/net/sfc/sfc_ef10.h             |  34 ++
 drivers/net/sfc/sfc_ef10_essb_rx.c     | 666 +++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_ef10_rx.c          | 185 +--------
 drivers/net/sfc/sfc_ef10_rx_ev.h       | 169 +++++++++
 drivers/net/sfc/sfc_ethdev.c           |  23 ++
 drivers/net/sfc/sfc_ev.c               |  34 ++
 drivers/net/sfc/sfc_flow.c             | 119 +++++-
 drivers/net/sfc/sfc_kvargs.c           |   1 +
 drivers/net/sfc/sfc_kvargs.h           |  10 +-
 drivers/net/sfc/sfc_rx.c               |  51 ++-
 drivers/net/sfc/sfc_rx.h               |   1 +
 drivers/net/sfc/sfc_tweak.h            |   8 +
 35 files changed, 2204 insertions(+), 253 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_ef10_essb_rx.c
 create mode 100644 drivers/net/sfc/sfc_ef10_rx_ev.h

-- 
2.7.4


* [PATCH 01/23] net/sfc/base: update autogenerated MCDI and TLV headers
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC
  To: dev

Equal stride super-buffer is the new name for the deprecated equal
stride packed stream mode, chosen to avoid confusion with the previous
packed stream mode.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/ef10_tlv_layout.h |  22 ++
 drivers/net/sfc/base/efx_regs_mcdi.h   | 646 ++++++++++++++++++++++++++++++++-
 2 files changed, 654 insertions(+), 14 deletions(-)

diff --git a/drivers/net/sfc/base/ef10_tlv_layout.h b/drivers/net/sfc/base/ef10_tlv_layout.h
index b19dc2a..56cffae 100644
--- a/drivers/net/sfc/base/ef10_tlv_layout.h
+++ b/drivers/net/sfc/base/ef10_tlv_layout.h
@@ -4,6 +4,14 @@
  * All rights reserved.
  */
 
+/*
+ * This is NOT the original source file. Do NOT edit it.
+ * To update the tlv layout, please edit the copy in
+ * the sfregistry repo and then, in that repo,
+ * "make tlv_headers" or "make export" to
+ * regenerate and export all types of headers.
+ */
+
 /* These structures define the layouts for the TLV items stored in static and
  * dynamic configuration partitions in NVRAM for EF10 (Huntington etc.).
  *
@@ -409,6 +417,7 @@ struct tlv_firmware_options {
                                              MC_CMD_FW_PACKED_STREAM_HASH_MODE_1
 #define TLV_FIRMWARE_VARIANT_RULES_ENGINE    MC_CMD_FW_RULES_ENGINE
 #define TLV_FIRMWARE_VARIANT_DPDK            MC_CMD_FW_DPDK
+#define TLV_FIRMWARE_VARIANT_L3XUDP          MC_CMD_FW_L3XUDP
 };
 
 /* Voltage settings
@@ -986,4 +995,17 @@ struct tlv_fastpd_mode {
 #define TLV_FASTPD_MODE_FAST_SUPPORTED 2  /* Supported packet types to the FastPD; everything else to the SoftPD  */
 };
 
+/* L3xUDP datapath firmware UDP port configuration
+ *
+ * Sets the list of UDP ports on which the encapsulation will be handled.
+ * The number of ports in the list is implied by the length of the TLV item.
+ */
+#define TLV_TAG_L3XUDP_PORTS            (0x102a0000)
+struct tlv_l3xudp_ports {
+  uint32_t tag;
+  uint32_t length;
+  uint16_t ports[];
+#define TLV_TAG_L3XUDP_PORTS_MAX_NUM_PORTS 16
+};
+
 #endif /* CI_MGMT_TLV_LAYOUT_H */
diff --git a/drivers/net/sfc/base/efx_regs_mcdi.h b/drivers/net/sfc/base/efx_regs_mcdi.h
index c939fdd..cf8a793 100644
--- a/drivers/net/sfc/base/efx_regs_mcdi.h
+++ b/drivers/net/sfc/base/efx_regs_mcdi.h
@@ -2740,6 +2740,8 @@
 #define	MC_CMD_DRV_ATTACH_IN_PREBOOT_WIDTH 1
 #define	MC_CMD_DRV_ATTACH_IN_SUBVARIANT_AWARE_LBN 2
 #define	MC_CMD_DRV_ATTACH_IN_SUBVARIANT_AWARE_WIDTH 1
+#define	MC_CMD_DRV_ATTACH_IN_WANT_VI_SPREADING_LBN 3
+#define	MC_CMD_DRV_ATTACH_IN_WANT_VI_SPREADING_WIDTH 1
 /* 1 to set new state, or 0 to just report the existing state */
 #define	MC_CMD_DRV_ATTACH_IN_UPDATE_OFST 4
 #define	MC_CMD_DRV_ATTACH_IN_UPDATE_LEN 4
@@ -2768,6 +2770,12 @@
  * bug69716)
  */
 #define	MC_CMD_FW_L3XUDP 0x7
+/* enum: Requests that the MC keep whatever datapath firmware is currently
+ * running. It's used for test purposes, where we want to be able to shmboot
+ * special test firmware variants. This option is only recognised in eftest
+ * (i.e. non-production) builds.
+ */
+#define	MC_CMD_FW_KEEP_CURRENT_EFTEST_ONLY 0xfffffffe
 /* enum: Only this option is allowed for non-admin functions */
 #define	MC_CMD_FW_DONT_CARE 0xffffffff
 
@@ -2797,6 +2805,11 @@
  * refers to the Sorrento external FPGA port.
  */
 #define	MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_NO_ACTIVE_PORT 0x3
+/* enum: If set, indicates that VI spreading is currently enabled. Will always
+ * indicate the current state, regardless of the value in the WANT_VI_SPREADING
+ * input.
+ */
+#define	MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_VI_SPREADING_ENABLED 0x4
 
 
 /***********************************/
@@ -3600,6 +3613,37 @@
 /*            Enum values, see field(s): */
 /*               100M */
 
+/* AN_TYPE structuredef: Auto-negotiation types defined in IEEE802.3 */
+#define	AN_TYPE_LEN 4
+#define	AN_TYPE_TYPE_OFST 0
+#define	AN_TYPE_TYPE_LEN 4
+/* enum: None, AN disabled or not supported */
+#define	MC_CMD_AN_NONE 0x0
+/* enum: Clause 28 - BASE-T */
+#define	MC_CMD_AN_CLAUSE28 0x1
+/* enum: Clause 37 - BASE-X */
+#define	MC_CMD_AN_CLAUSE37 0x2
+/* enum: Clause 73 - BASE-R startup protocol for backplane and copper cable
+ * assemblies. Includes Clause 72/Clause 92 link-training.
+ */
+#define	MC_CMD_AN_CLAUSE73 0x3
+#define	AN_TYPE_TYPE_LBN 0
+#define	AN_TYPE_TYPE_WIDTH 32
+
+/* FEC_TYPE structuredef: Forward error correction types defined in IEEE802.3
+ */
+#define	FEC_TYPE_LEN 4
+#define	FEC_TYPE_TYPE_OFST 0
+#define	FEC_TYPE_TYPE_LEN 4
+/* enum: No FEC */
+#define	MC_CMD_FEC_NONE 0x0
+/* enum: Clause 74 BASE-R FEC (a.k.a Firecode) */
+#define	MC_CMD_FEC_BASER 0x1
+/* enum: Clause 91/Clause 108 Reed-Solomon FEC */
+#define	MC_CMD_FEC_RS 0x2
+#define	FEC_TYPE_TYPE_LBN 0
+#define	FEC_TYPE_TYPE_WIDTH 32
+
 
 /***********************************/
 /* MC_CMD_GET_LINK
@@ -3616,10 +3660,14 @@
 
 /* MC_CMD_GET_LINK_OUT msgresponse */
 #define	MC_CMD_GET_LINK_OUT_LEN 28
-/* near-side advertised capabilities */
+/* Near-side advertised capabilities. Refer to
+ * MC_CMD_GET_PHY_CFG_OUT/SUPPORTED_CAP for bit definitions.
+ */
 #define	MC_CMD_GET_LINK_OUT_CAP_OFST 0
 #define	MC_CMD_GET_LINK_OUT_CAP_LEN 4
-/* link-partner advertised capabilities */
+/* Link-partner advertised capabilities. Refer to
+ * MC_CMD_GET_PHY_CFG_OUT/SUPPORTED_CAP for bit definitions.
+ */
 #define	MC_CMD_GET_LINK_OUT_LP_CAP_OFST 4
 #define	MC_CMD_GET_LINK_OUT_LP_CAP_LEN 4
 /* Autonegotiated speed in mbit/s. The link may still be down even if this
@@ -3662,6 +3710,97 @@
 #define	MC_CMD_MAC_FAULT_PENDING_RECONFIG_LBN 3
 #define	MC_CMD_MAC_FAULT_PENDING_RECONFIG_WIDTH 1
 
+/* MC_CMD_GET_LINK_OUT_V2 msgresponse: Extended link state information */
+#define	MC_CMD_GET_LINK_OUT_V2_LEN 44
+/* Near-side advertised capabilities. Refer to
+ * MC_CMD_GET_PHY_CFG_OUT/SUPPORTED_CAP for bit definitions.
+ */
+#define	MC_CMD_GET_LINK_OUT_V2_CAP_OFST 0
+#define	MC_CMD_GET_LINK_OUT_V2_CAP_LEN 4
+/* Link-partner advertised capabilities. Refer to
+ * MC_CMD_GET_PHY_CFG_OUT/SUPPORTED_CAP for bit definitions.
+ */
+#define	MC_CMD_GET_LINK_OUT_V2_LP_CAP_OFST 4
+#define	MC_CMD_GET_LINK_OUT_V2_LP_CAP_LEN 4
+/* Autonegotiated speed in mbit/s. The link may still be down even if this
+ * reads non-zero.
+ */
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_SPEED_OFST 8
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_SPEED_LEN 4
+/* Current loopback setting. */
+#define	MC_CMD_GET_LINK_OUT_V2_LOOPBACK_MODE_OFST 12
+#define	MC_CMD_GET_LINK_OUT_V2_LOOPBACK_MODE_LEN 4
+/*            Enum values, see field(s): */
+/*               MC_CMD_GET_LOOPBACK_MODES/MC_CMD_GET_LOOPBACK_MODES_OUT/100M */
+#define	MC_CMD_GET_LINK_OUT_V2_FLAGS_OFST 16
+#define	MC_CMD_GET_LINK_OUT_V2_FLAGS_LEN 4
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_UP_LBN 0
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_UP_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_FULL_DUPLEX_LBN 1
+#define	MC_CMD_GET_LINK_OUT_V2_FULL_DUPLEX_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_BPX_LINK_LBN 2
+#define	MC_CMD_GET_LINK_OUT_V2_BPX_LINK_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_PHY_LINK_LBN 3
+#define	MC_CMD_GET_LINK_OUT_V2_PHY_LINK_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_FAULT_RX_LBN 6
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_FAULT_RX_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_FAULT_TX_LBN 7
+#define	MC_CMD_GET_LINK_OUT_V2_LINK_FAULT_TX_WIDTH 1
+/* This returns the negotiated flow control value. */
+#define	MC_CMD_GET_LINK_OUT_V2_FCNTL_OFST 20
+#define	MC_CMD_GET_LINK_OUT_V2_FCNTL_LEN 4
+/*            Enum values, see field(s): */
+/*               MC_CMD_SET_MAC/MC_CMD_SET_MAC_IN/FCNTL */
+#define	MC_CMD_GET_LINK_OUT_V2_MAC_FAULT_OFST 24
+#define	MC_CMD_GET_LINK_OUT_V2_MAC_FAULT_LEN 4
+/*             MC_CMD_MAC_FAULT_XGMII_LOCAL_LBN 0 */
+/*             MC_CMD_MAC_FAULT_XGMII_LOCAL_WIDTH 1 */
+/*             MC_CMD_MAC_FAULT_XGMII_REMOTE_LBN 1 */
+/*             MC_CMD_MAC_FAULT_XGMII_REMOTE_WIDTH 1 */
+/*             MC_CMD_MAC_FAULT_SGMII_REMOTE_LBN 2 */
+/*             MC_CMD_MAC_FAULT_SGMII_REMOTE_WIDTH 1 */
+/*             MC_CMD_MAC_FAULT_PENDING_RECONFIG_LBN 3 */
+/*             MC_CMD_MAC_FAULT_PENDING_RECONFIG_WIDTH 1 */
+/* True local device capabilities (taking into account currently used PMD/MDI,
+ * e.g. plugged-in module). In general, subset of
+ * MC_CMD_GET_PHY_CFG_OUT/SUPPORTED_CAP, but may include extra _FEC_REQUEST
+ * bits, if the PMD requires FEC. 0 if unknown (e.g. module unplugged). Equal
+ * to SUPPORTED_CAP for non-pluggable PMDs. Refer to
+ * MC_CMD_GET_PHY_CFG_OUT/SUPPORTED_CAP for bit definitions.
+ */
+#define	MC_CMD_GET_LINK_OUT_V2_LD_CAP_OFST 28
+#define	MC_CMD_GET_LINK_OUT_V2_LD_CAP_LEN 4
+/* Auto-negotiation type used on the link */
+#define	MC_CMD_GET_LINK_OUT_V2_AN_TYPE_OFST 32
+#define	MC_CMD_GET_LINK_OUT_V2_AN_TYPE_LEN 4
+/*            Enum values, see field(s): */
+/*               AN_TYPE/TYPE */
+/* Forward error correction used on the link */
+#define	MC_CMD_GET_LINK_OUT_V2_FEC_TYPE_OFST 36
+#define	MC_CMD_GET_LINK_OUT_V2_FEC_TYPE_LEN 4
+/*            Enum values, see field(s): */
+/*               FEC_TYPE/TYPE */
+#define	MC_CMD_GET_LINK_OUT_V2_EXT_FLAGS_OFST 40
+#define	MC_CMD_GET_LINK_OUT_V2_EXT_FLAGS_LEN 4
+#define	MC_CMD_GET_LINK_OUT_V2_PMD_MDI_CONNECTED_LBN 0
+#define	MC_CMD_GET_LINK_OUT_V2_PMD_MDI_CONNECTED_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_PMD_READY_LBN 1
+#define	MC_CMD_GET_LINK_OUT_V2_PMD_READY_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_PMD_LINK_UP_LBN 2
+#define	MC_CMD_GET_LINK_OUT_V2_PMD_LINK_UP_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_PMA_LINK_UP_LBN 3
+#define	MC_CMD_GET_LINK_OUT_V2_PMA_LINK_UP_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_PCS_LOCK_LBN 4
+#define	MC_CMD_GET_LINK_OUT_V2_PCS_LOCK_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_ALIGN_LOCK_LBN 5
+#define	MC_CMD_GET_LINK_OUT_V2_ALIGN_LOCK_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_HI_BER_LBN 6
+#define	MC_CMD_GET_LINK_OUT_V2_HI_BER_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_FEC_LOCK_LBN 7
+#define	MC_CMD_GET_LINK_OUT_V2_FEC_LOCK_WIDTH 1
+#define	MC_CMD_GET_LINK_OUT_V2_AN_DONE_LBN 8
+#define	MC_CMD_GET_LINK_OUT_V2_AN_DONE_WIDTH 1
+
 
 /***********************************/
 /* MC_CMD_SET_LINK
@@ -3675,7 +3814,9 @@
 
 /* MC_CMD_SET_LINK_IN msgrequest */
 #define	MC_CMD_SET_LINK_IN_LEN 16
-/* ??? */
+/* Near-side advertised capabilities. Refer to
+ * MC_CMD_GET_PHY_CFG_OUT/SUPPORTED_CAP for bit definitions.
+ */
 #define	MC_CMD_SET_LINK_IN_CAP_OFST 0
 #define	MC_CMD_SET_LINK_IN_CAP_LEN 4
 /* Flags */
@@ -4232,6 +4373,37 @@
 /*            Other enum values, see field(s): */
 /*               MC_CMD_MAC_STATS_V2_OUT_NO_DMA/STATISTICS */
 
+/* MC_CMD_MAC_STATS_V4_OUT_DMA msgresponse */
+#define	MC_CMD_MAC_STATS_V4_OUT_DMA_LEN 0
+
+/* MC_CMD_MAC_STATS_V4_OUT_NO_DMA msgresponse */
+#define	MC_CMD_MAC_STATS_V4_OUT_NO_DMA_LEN (((MC_CMD_MAC_NSTATS_V4*64))>>3)
+#define	MC_CMD_MAC_STATS_V4_OUT_NO_DMA_STATISTICS_OFST 0
+#define	MC_CMD_MAC_STATS_V4_OUT_NO_DMA_STATISTICS_LEN 8
+#define	MC_CMD_MAC_STATS_V4_OUT_NO_DMA_STATISTICS_LO_OFST 0
+#define	MC_CMD_MAC_STATS_V4_OUT_NO_DMA_STATISTICS_HI_OFST 4
+#define	MC_CMD_MAC_STATS_V4_OUT_NO_DMA_STATISTICS_NUM MC_CMD_MAC_NSTATS_V4
+/* enum: Start of V4 stats buffer space */
+#define	MC_CMD_MAC_V4_DMABUF_START 0x79
+/* enum: RXDP counter: Number of packets truncated because scattering was
+ * disabled.
+ */
+#define	MC_CMD_MAC_RXDP_SCATTER_DISABLED_TRUNC 0x79
+/* enum: RXDP counter: Number of times the RXDP head of line blocked waiting
+ * for descriptors. Will be zero unless RXDP_HLB_IDLE capability is set.
+ */
+#define	MC_CMD_MAC_RXDP_HLB_IDLE 0x7a
+/* enum: RXDP counter: Number of times the RXDP timed out while head of line
+ * blocking. Will be zero unless RXDP_HLB_IDLE capability is set.
+ */
+#define	MC_CMD_MAC_RXDP_HLB_TIMEOUT 0x7b
+/* enum: This includes the space at offset 124 which is the final
+ * GENERATION_END in a MAC_STATS_V4 response and otherwise unused.
+ */
+#define	MC_CMD_MAC_NSTATS_V4 0x7d
+/*            Other enum values, see field(s): */
+/*               MC_CMD_MAC_STATS_V3_OUT_NO_DMA/STATISTICS */
+
 
 /***********************************/
 /* MC_CMD_SRIOV
@@ -7312,7 +7484,7 @@
 #define	MC_CMD_INIT_RXQ_EXT_IN_TARGET_EVQ_OFST 4
 #define	MC_CMD_INIT_RXQ_EXT_IN_TARGET_EVQ_LEN 4
 /* The value to put in the event data. Check hardware spec. for valid range.
- * This field is ignored if DMA_MODE == EQUAL_STRIDE_PACKED_STREAM or DMA_MODE
+ * This field is ignored if DMA_MODE == EQUAL_STRIDE_SUPER_BUFFER or DMA_MODE
  * == PACKED_STREAM.
  */
 #define	MC_CMD_INIT_RXQ_EXT_IN_LABEL_OFST 8
@@ -7351,6 +7523,8 @@
  * description see SF-119419-TC. This mode is only supported by "dpdk" datapath
  * firmware.
  */
+#define	MC_CMD_INIT_RXQ_EXT_IN_EQUAL_STRIDE_SUPER_BUFFER 0x2
+/* enum: Deprecated name for EQUAL_STRIDE_SUPER_BUFFER. */
 #define	MC_CMD_INIT_RXQ_EXT_IN_EQUAL_STRIDE_PACKED_STREAM 0x2
 #define	MC_CMD_INIT_RXQ_EXT_IN_FLAG_SNAPSHOT_MODE_LBN 14
 #define	MC_CMD_INIT_RXQ_EXT_IN_FLAG_SNAPSHOT_MODE_WIDTH 1
@@ -7392,7 +7566,7 @@
 #define	MC_CMD_INIT_RXQ_V3_IN_TARGET_EVQ_OFST 4
 #define	MC_CMD_INIT_RXQ_V3_IN_TARGET_EVQ_LEN 4
 /* The value to put in the event data. Check hardware spec. for valid range.
- * This field is ignored if DMA_MODE == EQUAL_STRIDE_PACKED_STREAM or DMA_MODE
+ * This field is ignored if DMA_MODE == EQUAL_STRIDE_SUPER_BUFFER or DMA_MODE
  * == PACKED_STREAM.
  */
 #define	MC_CMD_INIT_RXQ_V3_IN_LABEL_OFST 8
@@ -7431,6 +7605,8 @@
  * description see SF-119419-TC. This mode is only supported by "dpdk" datapath
  * firmware.
  */
+#define	MC_CMD_INIT_RXQ_V3_IN_EQUAL_STRIDE_SUPER_BUFFER 0x2
+/* enum: Deprecated name for EQUAL_STRIDE_SUPER_BUFFER. */
 #define	MC_CMD_INIT_RXQ_V3_IN_EQUAL_STRIDE_PACKED_STREAM 0x2
 #define	MC_CMD_INIT_RXQ_V3_IN_FLAG_SNAPSHOT_MODE_LBN 14
 #define	MC_CMD_INIT_RXQ_V3_IN_FLAG_SNAPSHOT_MODE_WIDTH 1
@@ -7461,21 +7637,21 @@
 #define	MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_LEN 4
 /* The number of packet buffers that will be contained within each
- * EQUAL_STRIDE_PACKED_STREAM format bucket supplied by the driver. This field
- * is ignored unless DMA_MODE == EQUAL_STRIDE_PACKED_STREAM.
+ * EQUAL_STRIDE_SUPER_BUFFER format bucket supplied by the driver. This field
+ * is ignored unless DMA_MODE == EQUAL_STRIDE_SUPER_BUFFER.
  */
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_PACKET_BUFFERS_PER_BUCKET_OFST 544
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_PACKET_BUFFERS_PER_BUCKET_LEN 4
 /* The length in bytes of the area in each packet buffer that can be written to
  * by the adapter. This is used to store the packet prefix and the packet
  * payload. This length does not include any end padding added by the driver.
- * This field is ignored unless DMA_MODE == EQUAL_STRIDE_PACKED_STREAM.
+ * This field is ignored unless DMA_MODE == EQUAL_STRIDE_SUPER_BUFFER.
  */
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_MAX_DMA_LEN_OFST 548
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_MAX_DMA_LEN_LEN 4
 /* The length in bytes of a single packet buffer within a
- * EQUAL_STRIDE_PACKED_STREAM format bucket. This field is ignored unless
- * DMA_MODE == EQUAL_STRIDE_PACKED_STREAM.
+ * EQUAL_STRIDE_SUPER_BUFFER format bucket. This field is ignored unless
+ * DMA_MODE == EQUAL_STRIDE_SUPER_BUFFER.
  */
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_PACKET_STRIDE_OFST 552
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_PACKET_STRIDE_LEN 4
@@ -7483,7 +7659,7 @@
  * there are no RX descriptors available. If the timeout is reached and there
  * are still no descriptors then the packet will be dropped. A timeout of 0
  * means the datapath will never be blocked. This field is ignored unless
- * DMA_MODE == EQUAL_STRIDE_PACKED_STREAM.
+ * DMA_MODE == EQUAL_STRIDE_SUPER_BUFFER.
  */
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_HEAD_OF_LINE_BLOCK_TIMEOUT_OFST 556
 #define	MC_CMD_INIT_RXQ_V3_IN_ES_HEAD_OF_LINE_BLOCK_TIMEOUT_LEN 4
@@ -8676,7 +8852,10 @@
  * support the DPDK rte_flow "MARK" action.
  */
 #define	MC_CMD_FILTER_OP_V3_IN_MATCH_ACTION_MARK 0x2
-/* the mark value for MATCH_ACTION_MARK */
+/* the mark value for MATCH_ACTION_MARK. Requesting a value larger than the
+ * maximum (obtained from MC_CMD_GET_CAPABILITIES_V5/FILTER_ACTION_MARK_MAX)
+ * will cause the filter insertion to fail with EINVAL.
+ */
 #define	MC_CMD_FILTER_OP_V3_IN_MATCH_MARK_VALUE_OFST 176
 #define	MC_CMD_FILTER_OP_V3_IN_MATCH_MARK_VALUE_LEN 4
 
@@ -10105,12 +10284,18 @@
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_FILTER_ACTION_FLAG_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_FILTER_ACTION_MARK_LBN 20
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_FILTER_ACTION_MARK_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V2_OUT_EQUAL_STRIDE_SUPER_BUFFER_LBN 21
+#define	MC_CMD_GET_CAPABILITIES_V2_OUT_EQUAL_STRIDE_SUPER_BUFFER_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_EQUAL_STRIDE_PACKED_STREAM_LBN 21
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_EQUAL_STRIDE_PACKED_STREAM_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_L3XUDP_SUPPORT_LBN 22
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_L3XUDP_SUPPORT_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_FW_SUBVARIANT_NO_TX_CSUM_LBN 23
 #define	MC_CMD_GET_CAPABILITIES_V2_OUT_FW_SUBVARIANT_NO_TX_CSUM_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V2_OUT_VI_SPREADING_LBN 24
+#define	MC_CMD_GET_CAPABILITIES_V2_OUT_VI_SPREADING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V2_OUT_RXDP_HLB_IDLE_LBN 25
+#define	MC_CMD_GET_CAPABILITIES_V2_OUT_RXDP_HLB_IDLE_WIDTH 1
 /* Number of FATSOv2 contexts per datapath supported by this NIC. Not present
  * on older firmware (check the length).
  */
@@ -10422,12 +10607,18 @@
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_FILTER_ACTION_FLAG_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_FILTER_ACTION_MARK_LBN 20
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_FILTER_ACTION_MARK_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V3_OUT_EQUAL_STRIDE_SUPER_BUFFER_LBN 21
+#define	MC_CMD_GET_CAPABILITIES_V3_OUT_EQUAL_STRIDE_SUPER_BUFFER_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_EQUAL_STRIDE_PACKED_STREAM_LBN 21
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_EQUAL_STRIDE_PACKED_STREAM_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_L3XUDP_SUPPORT_LBN 22
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_L3XUDP_SUPPORT_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_FW_SUBVARIANT_NO_TX_CSUM_LBN 23
 #define	MC_CMD_GET_CAPABILITIES_V3_OUT_FW_SUBVARIANT_NO_TX_CSUM_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V3_OUT_VI_SPREADING_LBN 24
+#define	MC_CMD_GET_CAPABILITIES_V3_OUT_VI_SPREADING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V3_OUT_RXDP_HLB_IDLE_LBN 25
+#define	MC_CMD_GET_CAPABILITIES_V3_OUT_RXDP_HLB_IDLE_WIDTH 1
 /* Number of FATSOv2 contexts per datapath supported by this NIC. Not present
  * on older firmware (check the length).
  */
@@ -10764,12 +10955,18 @@
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_FILTER_ACTION_FLAG_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_FILTER_ACTION_MARK_LBN 20
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_FILTER_ACTION_MARK_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V4_OUT_EQUAL_STRIDE_SUPER_BUFFER_LBN 21
+#define	MC_CMD_GET_CAPABILITIES_V4_OUT_EQUAL_STRIDE_SUPER_BUFFER_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_EQUAL_STRIDE_PACKED_STREAM_LBN 21
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_EQUAL_STRIDE_PACKED_STREAM_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_L3XUDP_SUPPORT_LBN 22
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_L3XUDP_SUPPORT_WIDTH 1
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_FW_SUBVARIANT_NO_TX_CSUM_LBN 23
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_FW_SUBVARIANT_NO_TX_CSUM_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V4_OUT_VI_SPREADING_LBN 24
+#define	MC_CMD_GET_CAPABILITIES_V4_OUT_VI_SPREADING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V4_OUT_RXDP_HLB_IDLE_LBN 25
+#define	MC_CMD_GET_CAPABILITIES_V4_OUT_RXDP_HLB_IDLE_WIDTH 1
 /* Number of FATSOv2 contexts per datapath supported by this NIC. Not present
  * on older firmware (check the length).
  */
@@ -10859,6 +11056,367 @@
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_MAC_STATS_NUM_STATS_OFST 76
 #define	MC_CMD_GET_CAPABILITIES_V4_OUT_MAC_STATS_NUM_STATS_LEN 2
 
+/* MC_CMD_GET_CAPABILITIES_V5_OUT msgresponse */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_LEN 84
+/* First word of flags. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FLAGS1_OFST 0
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FLAGS1_LEN 4
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VPORT_RECONFIGURE_LBN 3
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VPORT_RECONFIGURE_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_STRIPING_LBN 4
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_STRIPING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VADAPTOR_QUERY_LBN 5
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VADAPTOR_QUERY_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVB_PORT_VLAN_RESTRICT_LBN 6
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVB_PORT_VLAN_RESTRICT_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_DRV_ATTACH_PREBOOT_LBN 7
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_DRV_ATTACH_PREBOOT_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_FORCE_EVENT_MERGING_LBN 8
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_FORCE_EVENT_MERGING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_SET_MAC_ENHANCED_LBN 9
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_SET_MAC_ENHANCED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_UNKNOWN_UCAST_DST_FILTER_ALWAYS_MULTI_RECIPIENT_LBN 10
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_UNKNOWN_UCAST_DST_FILTER_ALWAYS_MULTI_RECIPIENT_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VADAPTOR_PERMIT_SET_MAC_WHEN_FILTERS_INSTALLED_LBN 11
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VADAPTOR_PERMIT_SET_MAC_WHEN_FILTERS_INSTALLED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_MAC_SECURITY_FILTERING_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_MAC_SECURITY_FILTERING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_ADDITIONAL_RSS_MODES_LBN 13
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_ADDITIONAL_RSS_MODES_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_QBB_LBN 14
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_QBB_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PACKED_STREAM_VAR_BUFFERS_LBN 15
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PACKED_STREAM_VAR_BUFFERS_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_RSS_LIMITED_LBN 16
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_RSS_LIMITED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PACKED_STREAM_LBN 17
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PACKED_STREAM_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_INCLUDE_FCS_LBN 18
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_INCLUDE_FCS_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_VLAN_INSERTION_LBN 19
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_VLAN_INSERTION_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_VLAN_STRIPPING_LBN 20
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_VLAN_STRIPPING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_LBN 21
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PREFIX_LEN_0_LBN 22
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PREFIX_LEN_0_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PREFIX_LEN_14_LBN 23
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_PREFIX_LEN_14_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_TIMESTAMP_LBN 24
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_TIMESTAMP_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_BATCHING_LBN 25
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_BATCHING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MCAST_FILTER_CHAINING_LBN 26
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MCAST_FILTER_CHAINING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_PM_AND_RXDP_COUNTERS_LBN 27
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_PM_AND_RXDP_COUNTERS_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_DISABLE_SCATTER_LBN 28
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_DISABLE_SCATTER_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_MCAST_UDP_LOOPBACK_LBN 29
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_MCAST_UDP_LOOPBACK_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVB_LBN 30
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVB_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VXLAN_NVGRE_LBN 31
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VXLAN_NVGRE_WIDTH 1
+/* RxDPCPU firmware id. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_DPCPU_FW_ID_OFST 4
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_DPCPU_FW_ID_LEN 2
+/* enum: Standard RXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP 0x0
+/* enum: Low latency RXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_LOW_LATENCY 0x1
+/* enum: Packed stream RXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_PACKED_STREAM 0x2
+/* enum: Rules engine RXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_RULES_ENGINE 0x5
+/* enum: DPDK RXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_DPDK 0x6
+/* enum: BIST RXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_BIST 0x10a
+/* enum: RXDP Test firmware image 1 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_TO_MC_CUT_THROUGH 0x101
+/* enum: RXDP Test firmware image 2 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_TO_MC_STORE_FORWARD 0x102
+/* enum: RXDP Test firmware image 3 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_TO_MC_STORE_FORWARD_FIRST 0x103
+/* enum: RXDP Test firmware image 4 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_EVERY_EVENT_BATCHABLE 0x104
+/* enum: RXDP Test firmware image 5 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_BACKPRESSURE 0x105
+/* enum: RXDP Test firmware image 6 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_PACKET_EDITS 0x106
+/* enum: RXDP Test firmware image 7 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_RX_HDR_SPLIT 0x107
+/* enum: RXDP Test firmware image 8 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_DISABLE_DL 0x108
+/* enum: RXDP Test firmware image 9 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_DOORBELL_DELAY 0x10b
+/* enum: RXDP Test firmware image 10 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_TEST_FW_SLOW 0x10c
+/* TxDPCPU firmware id. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_DPCPU_FW_ID_OFST 6
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_DPCPU_FW_ID_LEN 2
+/* enum: Standard TXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP 0x0
+/* enum: Low latency TXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_LOW_LATENCY 0x1
+/* enum: High packet rate TXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_HIGH_PACKET_RATE 0x3
+/* enum: Rules engine TXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_RULES_ENGINE 0x5
+/* enum: DPDK TXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_DPDK 0x6
+/* enum: BIST TXDP firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_BIST 0x12d
+/* enum: TXDP Test firmware image 1 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_TEST_FW_TSO_EDIT 0x101
+/* enum: TXDP Test firmware image 2 */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_TEST_FW_PACKET_EDITS 0x102
+/* enum: TXDP CSR bus test firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXDP_TEST_FW_CSR 0x103
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_VERSION_OFST 8
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_VERSION_LEN 2
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_VERSION_REV_LBN 0
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_VERSION_REV_WIDTH 12
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_VERSION_TYPE_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_VERSION_TYPE_WIDTH 4
+/* enum: reserved value - do not use (may indicate alternative interpretation
+ * of REV field in future)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_RESERVED 0x0
+/* enum: Trivial RX PD firmware for early Huntington development (Huntington
+ * development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_FIRST_PKT 0x1
+/* enum: RX PD firmware with approximately Siena-compatible behaviour
+ * (Huntington development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_SIENA_COMPAT 0x2
+/* enum: Full featured RX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_FULL_FEATURED 0x3
+/* enum: (deprecated original name for the FULL_FEATURED variant) */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_VSWITCH 0x3
+/* enum: siena_compat variant RX PD firmware using PM rather than MAC
+ * (Huntington development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_SIENA_COMPAT_PM 0x4
+/* enum: Low latency RX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_LOW_LATENCY 0x5
+/* enum: Packed stream RX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_PACKED_STREAM 0x6
+/* enum: RX PD firmware handling layer 2 only for high packet rate performance
+ * tests (Medford development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_LAYER2_PERF 0x7
+/* enum: Rules engine RX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_RULES_ENGINE 0x8
+/* enum: Custom firmware variant (see SF-119495-PD and bug69716) */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_L3XUDP 0x9
+/* enum: DPDK RX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_DPDK 0xa
+/* enum: RX PD firmware for GUE parsing prototype (Medford development only) */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_TESTFW_GUE_PROTOTYPE 0xe
+/* enum: RX PD firmware parsing but not filtering network overlay tunnel
+ * encapsulations (Medford development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXPD_FW_TYPE_TESTFW_ENCAP_PARSING_ONLY 0xf
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_VERSION_OFST 10
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_VERSION_LEN 2
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_VERSION_REV_LBN 0
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_VERSION_REV_WIDTH 12
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_VERSION_TYPE_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_VERSION_TYPE_WIDTH 4
+/* enum: reserved value - do not use (may indicate alternative interpretation
+ * of REV field in future)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_RESERVED 0x0
+/* enum: Trivial TX PD firmware for early Huntington development (Huntington
+ * development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_FIRST_PKT 0x1
+/* enum: TX PD firmware with approximately Siena-compatible behaviour
+ * (Huntington development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_SIENA_COMPAT 0x2
+/* enum: Full featured TX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_FULL_FEATURED 0x3
+/* enum: (deprecated original name for the FULL_FEATURED variant) */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_VSWITCH 0x3
+/* enum: siena_compat variant TX PD firmware using PM rather than MAC
+ * (Huntington development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_SIENA_COMPAT_PM 0x4
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_LOW_LATENCY 0x5 /* enum */
+/* enum: TX PD firmware handling layer 2 only for high packet rate performance
+ * tests (Medford development only)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_LAYER2_PERF 0x7
+/* enum: Rules engine TX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_RULES_ENGINE 0x8
+/* enum: Custom firmware variant (see SF-119495-PD and bug69716) */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_L3XUDP 0x9
+/* enum: DPDK TX PD production firmware */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_DPDK 0xa
+/* enum: RX PD firmware for GUE parsing prototype (Medford development only) */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TXPD_FW_TYPE_TESTFW_GUE_PROTOTYPE 0xe
+/* Hardware capabilities of NIC */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_HW_CAPABILITIES_OFST 12
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_HW_CAPABILITIES_LEN 4
+/* Licensed capabilities */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_LICENSE_CAPABILITIES_OFST 16
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_LICENSE_CAPABILITIES_LEN 4
+/* Second word of flags. Not present on older firmware (check the length). */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FLAGS2_OFST 20
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FLAGS2_LEN 4
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_V2_LBN 0
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_V2_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_V2_ENCAP_LBN 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_V2_ENCAP_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVQ_TIMER_CTRL_LBN 2
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVQ_TIMER_CTRL_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVENT_CUT_THROUGH_LBN 3
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EVENT_CUT_THROUGH_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_CUT_THROUGH_LBN 4
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_CUT_THROUGH_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_VFIFO_ULL_MODE_LBN 5
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_VFIFO_ULL_MODE_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MAC_STATS_40G_TX_SIZE_BINS_LBN 6
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MAC_STATS_40G_TX_SIZE_BINS_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_INIT_EVQ_V2_LBN 7
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_INIT_EVQ_V2_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_MAC_TIMESTAMPING_LBN 8
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_MAC_TIMESTAMPING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TIMESTAMP_LBN 9
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TIMESTAMP_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_SNIFF_LBN 10
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_SNIFF_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_SNIFF_LBN 11
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_SNIFF_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NVRAM_UPDATE_REPORT_VERIFY_RESULT_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NVRAM_UPDATE_REPORT_VERIFY_RESULT_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MCDI_BACKGROUND_LBN 13
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MCDI_BACKGROUND_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MCDI_DB_RETURN_LBN 14
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MCDI_DB_RETURN_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_CTPIO_LBN 15
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_CTPIO_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TSA_SUPPORT_LBN 16
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TSA_SUPPORT_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TSA_BOUND_LBN 17
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TSA_BOUND_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_SF_ADAPTER_AUTHENTICATION_LBN 18
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_SF_ADAPTER_AUTHENTICATION_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FILTER_ACTION_FLAG_LBN 19
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FILTER_ACTION_FLAG_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FILTER_ACTION_MARK_LBN 20
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FILTER_ACTION_MARK_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EQUAL_STRIDE_SUPER_BUFFER_LBN 21
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EQUAL_STRIDE_SUPER_BUFFER_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EQUAL_STRIDE_PACKED_STREAM_LBN 21
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_EQUAL_STRIDE_PACKED_STREAM_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_L3XUDP_SUPPORT_LBN 22
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_L3XUDP_SUPPORT_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FW_SUBVARIANT_NO_TX_CSUM_LBN 23
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FW_SUBVARIANT_NO_TX_CSUM_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VI_SPREADING_LBN 24
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VI_SPREADING_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_HLB_IDLE_LBN 25
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RXDP_HLB_IDLE_WIDTH 1
+/* Number of FATSOv2 contexts per datapath supported by this NIC. Not present
+ * on older firmware (check the length).
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_V2_N_CONTEXTS_OFST 24
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_TSO_V2_N_CONTEXTS_LEN 2
+/* One byte per PF containing the number of the external port assigned to this
+ * PF, indexed by PF number. Special values indicate that a PF is either not
+ * present or not assigned.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_PFS_TO_PORTS_ASSIGNMENT_OFST 26
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_PFS_TO_PORTS_ASSIGNMENT_LEN 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_PFS_TO_PORTS_ASSIGNMENT_NUM 16
+/* enum: The caller is not permitted to access information on this PF. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_ACCESS_NOT_PERMITTED 0xff
+/* enum: PF does not exist. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_PF_NOT_PRESENT 0xfe
+/* enum: PF does exist but is not assigned to any external port. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_PF_NOT_ASSIGNED 0xfd
+/* enum: This value indicates that PF is assigned, but it cannot be expressed
+ * in this field. It is intended for a possible future situation where a more
+ * complex scheme of PFs to ports mapping is being used. The future driver
+ * should look for a new field supporting the new scheme. The current/old
+ * driver should treat this value as PF_NOT_ASSIGNED.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_INCOMPATIBLE_ASSIGNMENT 0xfc
+/* One byte per PF containing the number of its VFs, indexed by PF number. A
+ * special value indicates that a PF is not present.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_VFS_PER_PF_OFST 42
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_VFS_PER_PF_LEN 1
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_VFS_PER_PF_NUM 16
+/* enum: The caller is not permitted to access information on this PF. */
+/*               MC_CMD_GET_CAPABILITIES_V5_OUT_ACCESS_NOT_PERMITTED 0xff */
+/* enum: PF does not exist. */
+/*               MC_CMD_GET_CAPABILITIES_V5_OUT_PF_NOT_PRESENT 0xfe */
+/* Number of VIs available for each external port */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_VIS_PER_PORT_OFST 58
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_VIS_PER_PORT_LEN 2
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_VIS_PER_PORT_NUM 4
+/* Size of RX descriptor cache expressed as binary logarithm The actual size
+ * equals (2 ^ RX_DESC_CACHE_SIZE)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_DESC_CACHE_SIZE_OFST 66
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_RX_DESC_CACHE_SIZE_LEN 1
+/* Size of TX descriptor cache expressed as binary logarithm The actual size
+ * equals (2 ^ TX_DESC_CACHE_SIZE)
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_DESC_CACHE_SIZE_OFST 67
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_TX_DESC_CACHE_SIZE_LEN 1
+/* Total number of available PIO buffers */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_PIO_BUFFS_OFST 68
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_NUM_PIO_BUFFS_LEN 2
+/* Size of a single PIO buffer */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_SIZE_PIO_BUFF_OFST 70
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_SIZE_PIO_BUFF_LEN 2
+/* On chips later than Medford the amount of address space assigned to each VI
+ * is configurable. This is a global setting that the driver must query to
+ * discover the VI to address mapping. Cut-through PIO (CTPIO) is not available
+ * with 8k VI windows.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VI_WINDOW_MODE_OFST 72
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VI_WINDOW_MODE_LEN 1
+/* enum: Each VI occupies 8k as on Huntington and Medford. PIO is at offset 4k.
+ * CTPIO is not mapped.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VI_WINDOW_MODE_8K 0x0
+/* enum: Each VI occupies 16k. PIO is at offset 4k. CTPIO is at offset 12k. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VI_WINDOW_MODE_16K 0x1
+/* enum: Each VI occupies 64k. PIO is at offset 4k. CTPIO is at offset 12k. */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VI_WINDOW_MODE_64K 0x2
+/* Number of vFIFOs per adapter that can be used for VFIFO Stuffing
+ * (SF-115995-SW) in the present configuration of firmware and port mode.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VFIFO_STUFFING_NUM_VFIFOS_OFST 73
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VFIFO_STUFFING_NUM_VFIFOS_LEN 1
+/* Number of buffers per adapter that can be used for VFIFO Stuffing
+ * (SF-115995-SW) in the present configuration of firmware and port mode.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VFIFO_STUFFING_NUM_CP_BUFFERS_OFST 74
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_VFIFO_STUFFING_NUM_CP_BUFFERS_LEN 2
+/* Entry count in the MAC stats array, including the final GENERATION_END
+ * entry. For MAC stats DMA, drivers should allocate a buffer large enough to
+ * hold at least this many 64-bit stats values, if they wish to receive all
+ * available stats. If the buffer is shorter than MAC_STATS_NUM_STATS * 8, the
+ * stats array returned will be truncated.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MAC_STATS_NUM_STATS_OFST 76
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_MAC_STATS_NUM_STATS_LEN 2
+/* Maximum supported value for MC_CMD_FILTER_OP_V3/MATCH_MARK_VALUE. This field
+ * will only be non-zero if MC_CMD_GET_CAPABILITIES/FILTER_ACTION_MARK is set.
+ */
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FILTER_ACTION_MARK_MAX_OFST 80
+#define	MC_CMD_GET_CAPABILITIES_V5_OUT_FILTER_ACTION_MARK_MAX_LEN 4
+
 
 /***********************************/
 /* MC_CMD_V2_EXTN
@@ -13203,12 +13761,12 @@
 #define	MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT_C0_STATUS_OFST 4
 #define	MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT_C0_STATUS_LEN 4
 /*            Enum values, see field(s): */
-/*               MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT/CM1 */
+/*               MC_CMD_KR_TUNE_LINK_TRAIN_CMD_IN/CM1 */
 /* C(+1) status */
 #define	MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT_CP1_STATUS_OFST 8
 #define	MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT_CP1_STATUS_LEN 4
 /*            Enum values, see field(s): */
-/*               MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT/CM1 */
+/*               MC_CMD_KR_TUNE_LINK_TRAIN_CMD_IN/CM1 */
 /* C(-1) value */
 #define	MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT_CM1_VALUE_OFST 12
 #define	MC_CMD_KR_TUNE_LINK_TRAIN_CMD_OUT_CM1_VALUE_LEN 4
@@ -17522,5 +18080,65 @@
  */
 #define	MC_CMD_SET_NIC_GLOBAL_IN_FW_SUBVARIANT_NO_TX_CSUM 0x1
 
+
+/***********************************/
+/* MC_CMD_LTSSM_TRACE_POLL
+ * Medford2 hardware has support for logging all LTSSM state transitions to a
+ * hardware buffer. When built with WITH_LTSSM_TRACE=1, the firmware will
+ * periodially dump the contents of this hardware buffer to an internal
+ * firmware buffer for later extraction.
+ */
+#define	MC_CMD_LTSSM_TRACE_POLL 0x12f
+#undef	MC_CMD_0x12f_PRIVILEGE_CTG
+
+#define	MC_CMD_0x12f_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_LTSSM_TRACE_POLL_IN msgrequest: Read transitions from the firmware
+ * internal buffer.
+ */
+#define	MC_CMD_LTSSM_TRACE_POLL_IN_LEN 4
+/* The maximum number of row that the caller can accept. The format of each row
+ * is defined in MC_CMD_LTSSM_TRACE_POLL_OUT.
+ */
+#define	MC_CMD_LTSSM_TRACE_POLL_IN_MAX_ROW_COUNT_OFST 0
+#define	MC_CMD_LTSSM_TRACE_POLL_IN_MAX_ROW_COUNT_LEN 4
+
+/* MC_CMD_LTSSM_TRACE_POLL_OUT msgresponse */
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_LENMIN 16
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_LENMAX 248
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_LEN(num) (8+8*(num))
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_FLAGS_OFST 0
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_FLAGS_LEN 4
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_HW_BUFFER_OVERFLOW_LBN 0
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_HW_BUFFER_OVERFLOW_WIDTH 1
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_FW_BUFFER_OVERFLOW_LBN 1
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_FW_BUFFER_OVERFLOW_WIDTH 1
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_CONTINUES_LBN 31
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_CONTINUES_WIDTH 1
+/* The number of rows present in this response. */
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROW_COUNT_OFST 4
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROW_COUNT_LEN 4
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROWS_OFST 8
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROWS_LEN 8
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROWS_LO_OFST 8
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROWS_HI_OFST 12
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROWS_MINNUM 0
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_ROWS_MAXNUM 30
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_LTSSM_STATE_LBN 0
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_LTSSM_STATE_WIDTH 6
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_RDLH_LINK_UP_LBN 6
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_RDLH_LINK_UP_WIDTH 1
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_WAKE_N_LBN 7
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_WAKE_N_WIDTH 1
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_TIMESTAMP_PS_LBN 8
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_TIMESTAMP_PS_WIDTH 24
+/* The time of the LTSSM transition. Times are reported as fractional
+ * microseconds since MC boot (wrapping at 2^32us). The fractional part is
+ * reported in picoseconds. 0 <= TIMESTAMP_PS < 1000000 timestamp in seconds =
+ * ((TIMESTAMP_US + TIMESTAMP_PS / 1000000) / 1000000)
+ */
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_TIMESTAMP_US_OFST 12
+#define	MC_CMD_LTSSM_TRACE_POLL_OUT_TIMESTAMP_US_LEN 4
+
 #endif /* _SIENA_MC_DRIVER_PCOL_H */
 /*! \cidoxg_end */
-- 
2.7.4


* [PATCH 02/23] net/sfc/base: make RxQ type data a union
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC
  To: dev

The RxQ type data is an internal interface. A single integer is
insufficient to carry the RxQ type-specific information needed for the
equal stride super-buffer Rx mode (packet buffers per bucket, maximum
DMA length, packet stride, head of line block timeout).
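
As an illustration of where the series is heading (not code from this
patch), the union can later carry an equal stride super-buffer member
with the parameters listed above. The type and field names below are
made up for the sketch; the real ones are added by a later patch.

#include <stdint.h>

/* Sketch only: illustrative names, not the actual definitions. */
typedef union efx_rxq_type_data_sketch_u {
	struct {
		uint32_t	eps_buf_size;	/* packed stream buffer size */
	} ertd_packed_stream;
	struct {
		uint32_t	bufs_per_desc;	/* packet buffers per bucket */
		uint32_t	max_dma_len;	/* maximum DMA length */
		uint32_t	buf_stride;	/* packet stride */
		uint32_t	hol_timeout;	/* head of line block timeout */
	} ertd_es_super_buffer;
} efx_rxq_type_data_sketch_t;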

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/ef10_impl.h |  4 +++-
 drivers/net/sfc/base/ef10_rx.c   |  4 ++--
 drivers/net/sfc/base/efx_impl.h  | 13 ++++++++++++-
 drivers/net/sfc/base/efx_rx.c    | 18 ++++++++++++------
 4 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/drivers/net/sfc/base/ef10_impl.h b/drivers/net/sfc/base/ef10_impl.h
index b4ad595..36229a7 100644
--- a/drivers/net/sfc/base/ef10_impl.h
+++ b/drivers/net/sfc/base/ef10_impl.h
@@ -967,13 +967,15 @@ extern		void
 ef10_rx_qenable(
 	__in		efx_rxq_t *erp);
 
+union efx_rxq_type_data_u;
+
 extern	__checkReturn	efx_rc_t
 ef10_rx_qcreate(
 	__in		efx_nic_t *enp,
 	__in		unsigned int index,
 	__in		unsigned int label,
 	__in		efx_rxq_type_t type,
-	__in		uint32_t type_data,
+	__in		const union efx_rxq_type_data_u *type_data,
 	__in		efsys_mem_t *esmp,
 	__in		size_t ndescs,
 	__in		uint32_t id,
diff --git a/drivers/net/sfc/base/ef10_rx.c b/drivers/net/sfc/base/ef10_rx.c
index 70e451f..32cca57 100644
--- a/drivers/net/sfc/base/ef10_rx.c
+++ b/drivers/net/sfc/base/ef10_rx.c
@@ -993,7 +993,7 @@ ef10_rx_qcreate(
 	__in		unsigned int index,
 	__in		unsigned int label,
 	__in		efx_rxq_type_t type,
-	__in		uint32_t type_data,
+	__in		const efx_rxq_type_data_t *type_data,
 	__in		efsys_mem_t *esmp,
 	__in		size_t ndescs,
 	__in		uint32_t id,
@@ -1032,7 +1032,7 @@ ef10_rx_qcreate(
 		break;
 #if EFSYS_OPT_RX_PACKED_STREAM
 	case EFX_RXQ_TYPE_PACKED_STREAM:
-		switch (type_data) {
+		switch (type_data->ertd_packed_stream.eps_buf_size) {
 		case EFX_RXQ_PACKED_STREAM_BUF_SIZE_1M:
 			ps_buf_size = MC_CMD_INIT_RXQ_EXT_IN_PS_BUFF_1M;
 			break;
diff --git a/drivers/net/sfc/base/efx_impl.h b/drivers/net/sfc/base/efx_impl.h
index b1d4f57..f130713 100644
--- a/drivers/net/sfc/base/efx_impl.h
+++ b/drivers/net/sfc/base/efx_impl.h
@@ -129,6 +129,16 @@ typedef struct efx_tx_ops_s {
 #endif
 } efx_tx_ops_t;
 
+typedef union efx_rxq_type_data_u {
+	/* Dummy member to have non-empty union if no options are enabled */
+	uint32_t	ertd_dummy;
+#if EFSYS_OPT_RX_PACKED_STREAM
+	struct {
+		uint32_t	eps_buf_size;
+	} ertd_packed_stream;
+#endif
+} efx_rxq_type_data_t;
+
 typedef struct efx_rx_ops_s {
 	efx_rc_t	(*erxo_init)(efx_nic_t *);
 	void		(*erxo_fini)(efx_nic_t *);
@@ -165,7 +175,8 @@ typedef struct efx_rx_ops_s {
 	efx_rc_t	(*erxo_qflush)(efx_rxq_t *);
 	void		(*erxo_qenable)(efx_rxq_t *);
 	efx_rc_t	(*erxo_qcreate)(efx_nic_t *enp, unsigned int,
-					unsigned int, efx_rxq_type_t, uint32_t,
+					unsigned int, efx_rxq_type_t,
+					const efx_rxq_type_data_t *,
 					efsys_mem_t *, size_t, uint32_t,
 					unsigned int,
 					efx_evq_t *, efx_rxq_t *);
diff --git a/drivers/net/sfc/base/efx_rx.c b/drivers/net/sfc/base/efx_rx.c
index d75957f..5f49b3a 100644
--- a/drivers/net/sfc/base/efx_rx.c
+++ b/drivers/net/sfc/base/efx_rx.c
@@ -107,7 +107,7 @@ siena_rx_qcreate(
 	__in		unsigned int index,
 	__in		unsigned int label,
 	__in		efx_rxq_type_t type,
-	__in		uint32_t type_data,
+	__in		const efx_rxq_type_data_t *type_data,
 	__in		efsys_mem_t *esmp,
 	__in		size_t ndescs,
 	__in		uint32_t id,
@@ -745,7 +745,7 @@ efx_rx_qcreate_internal(
 	__in		unsigned int index,
 	__in		unsigned int label,
 	__in		efx_rxq_type_t type,
-	__in		uint32_t type_data,
+	__in		const efx_rxq_type_data_t *type_data,
 	__in		efsys_mem_t *esmp,
 	__in		size_t ndescs,
 	__in		uint32_t id,
@@ -806,8 +806,8 @@ efx_rx_qcreate(
 	__in		efx_evq_t *eep,
 	__deref_out	efx_rxq_t **erpp)
 {
-	return efx_rx_qcreate_internal(enp, index, label, type, 0, esmp, ndescs,
-	    id, flags, eep, erpp);
+	return efx_rx_qcreate_internal(enp, index, label, type, NULL,
+	    esmp, ndescs, id, flags, eep, erpp);
 }
 
 #if EFSYS_OPT_RX_PACKED_STREAM
@@ -823,8 +823,14 @@ efx_rx_qcreate_packed_stream(
 	__in		efx_evq_t *eep,
 	__deref_out	efx_rxq_t **erpp)
 {
+	efx_rxq_type_data_t type_data;
+
+	memset(&type_data, 0, sizeof(type_data));
+
+	type_data.ertd_packed_stream.eps_buf_size = ps_buf_size;
+
 	return efx_rx_qcreate_internal(enp, index, label,
-	    EFX_RXQ_TYPE_PACKED_STREAM, ps_buf_size, esmp, ndescs,
+	    EFX_RXQ_TYPE_PACKED_STREAM, &type_data, esmp, ndescs,
 	    0 /* id unused on EF10 */, EFX_RXQ_FLAG_NONE, eep, erpp);
 }
 
@@ -1475,7 +1481,7 @@ siena_rx_qcreate(
 	__in		unsigned int index,
 	__in		unsigned int label,
 	__in		efx_rxq_type_t type,
-	__in		uint32_t type_data,
+	__in		const efx_rxq_type_data_t *type_data,
 	__in		efsys_mem_t *esmp,
 	__in		size_t ndescs,
 	__in		uint32_t id,
-- 
2.7.4


* [PATCH 03/23] net/sfc/base: detect equal stride super-buffer support
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC
  To: dev

Equal stride super-buffer Rx mode is supported on Medford2 by the
DPDK firmware variant.
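
As a sketch (not code from this patch), a caller in the PMD can then
gate the new Rx mode on the reported capability. The function name
below is made up; efx_nic_cfg_get() and the positive errno convention
are as used elsewhere in this driver.

static int
sfc_essb_rx_check_caps(efx_nic_t *enp)
{
	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);

	/* Positive errno is used inside the driver (see cover letter) */
	return (encp->enc_rx_es_super_buffer_supported == B_FALSE) ?
	    ENOTSUP : 0;
}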

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/ef10_nic.c  | 6 ++++++
 drivers/net/sfc/base/efx.h       | 1 +
 drivers/net/sfc/base/siena_nic.c | 1 +
 3 files changed, 8 insertions(+)

diff --git a/drivers/net/sfc/base/ef10_nic.c b/drivers/net/sfc/base/ef10_nic.c
index e1f1c2e..35b719a 100644
--- a/drivers/net/sfc/base/ef10_nic.c
+++ b/drivers/net/sfc/base/ef10_nic.c
@@ -1114,6 +1114,12 @@ ef10_get_datapath_caps(
 	else
 		encp->enc_rx_var_packed_stream_supported = B_FALSE;
 
+	/* Check if the firmware supports equal stride super-buffer mode */
+	if (CAP_FLAGS2(req, EQUAL_STRIDE_SUPER_BUFFER))
+		encp->enc_rx_es_super_buffer_supported = B_TRUE;
+	else
+		encp->enc_rx_es_super_buffer_supported = B_FALSE;
+
 	/* Check if the firmware supports FW subvariant w/o Tx checksumming */
 	if (CAP_FLAGS2(req, FW_SUBVARIANT_NO_TX_CSUM))
 		encp->enc_fw_subvariant_no_tx_csum_supported = B_TRUE;
diff --git a/drivers/net/sfc/base/efx.h b/drivers/net/sfc/base/efx.h
index 0b75f0f..dea8d60 100644
--- a/drivers/net/sfc/base/efx.h
+++ b/drivers/net/sfc/base/efx.h
@@ -1270,6 +1270,7 @@ typedef struct efx_nic_cfg_s {
 	boolean_t		enc_init_evq_v2_supported;
 	boolean_t		enc_rx_packed_stream_supported;
 	boolean_t		enc_rx_var_packed_stream_supported;
+	boolean_t		enc_rx_es_super_buffer_supported;
 	boolean_t		enc_fw_subvariant_no_tx_csum_supported;
 	boolean_t		enc_pm_and_rxdp_counters;
 	boolean_t		enc_mac_stats_40g_tx_size_bins;
diff --git a/drivers/net/sfc/base/siena_nic.c b/drivers/net/sfc/base/siena_nic.c
index c3a9495..15aa06b 100644
--- a/drivers/net/sfc/base/siena_nic.c
+++ b/drivers/net/sfc/base/siena_nic.c
@@ -161,6 +161,7 @@ siena_board_cfg(
 	encp->enc_allow_set_mac_with_installed_filters = B_TRUE;
 	encp->enc_rx_packed_stream_supported = B_FALSE;
 	encp->enc_rx_var_packed_stream_supported = B_FALSE;
+	encp->enc_rx_es_super_buffer_supported = B_FALSE;
 	encp->enc_fw_subvariant_no_tx_csum_supported = B_FALSE;
 
 	/* Siena supports two 10G ports, and 8 lanes of PCIe Gen2 */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 04/23] net/sfc/base: support equal stride super-buffer Rx mode
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (2 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 03/23] net/sfc/base: detect equal stride super-buffer support Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 05/23] net/sfc/base: add equal stride super-buffer prefix layout Andrew Rybchenko
                   ` (19 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

Equal stride super-buffer Rx mode is supported by the DPDK firmware
variant. One Rx descriptor provides many Rx buffers to the firmware;
the Rx buffers follow each other with a specified stride.
The mode also supports head-of-line blocking with a timeout to address
drops when no Rx descriptors are available, giving the driver extra
time to provide Rx descriptors before packets are dropped.
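
Just for illustration (not part of the patch), a caller could create
such a queue with the libefx helper added below; the buffer geometry
values are invented and chosen only to satisfy the 64-byte alignment
checks:

    efx_rxq_t *erp;
    efx_rc_t rc;

    rc = efx_rx_qcreate_es_super_buffer(enp, index, label,
        15,    /* n_bufs_per_desc - hypothetical */
        2048,  /* max_dma_len, must be 64-byte aligned */
        2048,  /* buf_stride, must be 64-byte aligned */
        0,     /* hol_block_timeout, ns */
        esmp, ndescs, EFX_RXQ_FLAG_NONE, eep, &erp);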

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/ef10_ev.c   | 30 +++++++++++----
 drivers/net/sfc/base/ef10_impl.h | 10 +++++
 drivers/net/sfc/base/ef10_rx.c   | 80 ++++++++++++++++++++++++++++++++++++----
 drivers/net/sfc/base/efx.h       | 34 ++++++++++++++++-
 drivers/net/sfc/base/efx_check.h |  7 ++++
 drivers/net/sfc/base/efx_impl.h  | 12 +++++-
 drivers/net/sfc/base/efx_rx.c    | 52 ++++++++++++++++++++++++++
 drivers/net/sfc/efsys.h          |  2 +
 8 files changed, 210 insertions(+), 17 deletions(-)

diff --git a/drivers/net/sfc/base/ef10_ev.c b/drivers/net/sfc/base/ef10_ev.c
index 6e00099..7f89a7b 100644
--- a/drivers/net/sfc/base/ef10_ev.c
+++ b/drivers/net/sfc/base/ef10_ev.c
@@ -749,7 +749,7 @@ ef10_ev_qstats_update(
 }
 #endif /* EFSYS_OPT_QSTATS */
 
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 
 static	__checkReturn	boolean_t
 ef10_ev_rx_packed_stream(
@@ -788,7 +788,18 @@ ef10_ev_rx_packed_stream(
 
 	if (new_buffer) {
 		flags |= EFX_PKT_PACKED_STREAM_NEW_BUFFER;
+#if EFSYS_OPT_RX_PACKED_STREAM
+		/*
+		 * If both packed stream and equal stride super-buffer
+		 * modes are compiled in, in theory credits should be
+		 * maintained for packed stream only, but right now
+		 * these modes are not distinguished in the event queue
+		 * Rx queue state and it is OK to increment the counter
+		 * regardless (it might be even cheaper than branching
+		 * since neighbouring structure members are updated as well).
+		 */
 		eersp->eers_rx_packed_stream_credits++;
+#endif
 		eersp->eers_rx_read_ptr++;
 	}
 	current_id = eersp->eers_rx_read_ptr & eersp->eers_rx_mask;
@@ -830,7 +841,7 @@ ef10_ev_rx_packed_stream(
 	return (should_abort);
 }
 
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
+#endif /* EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER */
 
 static	__checkReturn	boolean_t
 ef10_ev_rx(
@@ -864,7 +875,7 @@ ef10_ev_rx(
 	label = EFX_QWORD_FIELD(*eqp, ESF_DZ_RX_QLABEL);
 	eersp = &eep->ee_rxq_state[label];
 
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 	/*
 	 * Packed stream events are very different,
 	 * so handle them separately
@@ -1364,8 +1375,9 @@ ef10_ev_rxlabel_init(
 	__in		efx_rxq_type_t type)
 {
 	efx_evq_rxq_state_t *eersp;
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 	boolean_t packed_stream = (type == EFX_RXQ_TYPE_PACKED_STREAM);
+	boolean_t es_super_buffer = (type == EFX_RXQ_TYPE_ES_SUPER_BUFFER);
 #endif
 
 	_NOTE(ARGUNUSED(type))
@@ -1387,9 +1399,11 @@ ef10_ev_rxlabel_init(
 	eersp->eers_rx_read_ptr = 0;
 #endif
 	eersp->eers_rx_mask = erp->er_mask;
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 	eersp->eers_rx_stream_npackets = 0;
-	eersp->eers_rx_packed_stream = packed_stream;
+	eersp->eers_rx_packed_stream = packed_stream || es_super_buffer;
+#endif
+#if EFSYS_OPT_RX_PACKED_STREAM
 	if (packed_stream) {
 		eersp->eers_rx_packed_stream_credits = (eep->ee_mask + 1) /
 		    EFX_DIV_ROUND_UP(EFX_RX_PACKED_STREAM_MEM_PER_CREDIT,
@@ -1423,9 +1437,11 @@ ef10_ev_rxlabel_fini(
 
 	eersp->eers_rx_read_ptr = 0;
 	eersp->eers_rx_mask = 0;
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 	eersp->eers_rx_stream_npackets = 0;
 	eersp->eers_rx_packed_stream = B_FALSE;
+#endif
+#if EFSYS_OPT_RX_PACKED_STREAM
 	eersp->eers_rx_packed_stream_credits = 0;
 #endif
 }
diff --git a/drivers/net/sfc/base/ef10_impl.h b/drivers/net/sfc/base/ef10_impl.h
index 36229a7..4751faf 100644
--- a/drivers/net/sfc/base/ef10_impl.h
+++ b/drivers/net/sfc/base/ef10_impl.h
@@ -1216,6 +1216,16 @@ efx_mcdi_set_nic_global(
 
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
 
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+
+/*
+ * Maximum DMA length and buffer stride alignment.
+ * (see SF-119419-TC, 3.2)
+ */
+#define	EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT	64
+
+#endif
+
 #ifdef	__cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/base/ef10_rx.c b/drivers/net/sfc/base/ef10_rx.c
index 32cca57..a2591cc 100644
--- a/drivers/net/sfc/base/ef10_rx.c
+++ b/drivers/net/sfc/base/ef10_rx.c
@@ -21,12 +21,16 @@ efx_mcdi_init_rxq(
 	__in		efsys_mem_t *esmp,
 	__in		boolean_t disable_scatter,
 	__in		boolean_t want_inner_classes,
-	__in		uint32_t ps_bufsize)
+	__in		uint32_t ps_bufsize,
+	__in		uint32_t es_bufs_per_desc,
+	__in		uint32_t es_max_dma_len,
+	__in		uint32_t es_buf_stride,
+	__in		uint32_t hol_block_timeout)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
 	efx_mcdi_req_t req;
-	uint8_t payload[MAX(MC_CMD_INIT_RXQ_EXT_IN_LEN,
-			    MC_CMD_INIT_RXQ_EXT_OUT_LEN)];
+	uint8_t payload[MAX(MC_CMD_INIT_RXQ_V3_IN_LEN,
+			    MC_CMD_INIT_RXQ_V3_OUT_LEN)];
 	int npages = EFX_RXQ_NBUFS(ndescs);
 	int i;
 	efx_qword_t *dma_addr;
@@ -44,6 +48,8 @@ efx_mcdi_init_rxq(
 
 	if (ps_bufsize > 0)
 		dma_mode = MC_CMD_INIT_RXQ_EXT_IN_PACKED_STREAM;
+	else if (es_bufs_per_desc > 0)
+		dma_mode = MC_CMD_INIT_RXQ_V3_IN_EQUAL_STRIDE_SUPER_BUFFER;
 	else
 		dma_mode = MC_CMD_INIT_RXQ_EXT_IN_SINGLE_PACKET;
 
@@ -70,9 +76,9 @@ efx_mcdi_init_rxq(
 	(void) memset(payload, 0, sizeof (payload));
 	req.emr_cmd = MC_CMD_INIT_RXQ;
 	req.emr_in_buf = payload;
-	req.emr_in_length = MC_CMD_INIT_RXQ_EXT_IN_LEN;
+	req.emr_in_length = MC_CMD_INIT_RXQ_V3_IN_LEN;
 	req.emr_out_buf = payload;
-	req.emr_out_length = MC_CMD_INIT_RXQ_EXT_OUT_LEN;
+	req.emr_out_length = MC_CMD_INIT_RXQ_V3_OUT_LEN;
 
 	MCDI_IN_SET_DWORD(req, INIT_RXQ_EXT_IN_SIZE, ndescs);
 	MCDI_IN_SET_DWORD(req, INIT_RXQ_EXT_IN_TARGET_EVQ, target_evq);
@@ -92,6 +98,19 @@ efx_mcdi_init_rxq(
 	MCDI_IN_SET_DWORD(req, INIT_RXQ_EXT_IN_OWNER_ID, 0);
 	MCDI_IN_SET_DWORD(req, INIT_RXQ_EXT_IN_PORT_ID, EVB_PORT_ID_ASSIGNED);
 
+	if (es_bufs_per_desc > 0) {
+		MCDI_IN_SET_DWORD(req,
+		    INIT_RXQ_V3_IN_ES_PACKET_BUFFERS_PER_BUCKET,
+		    es_bufs_per_desc);
+		MCDI_IN_SET_DWORD(req,
+		    INIT_RXQ_V3_IN_ES_MAX_DMA_LEN, es_max_dma_len);
+		MCDI_IN_SET_DWORD(req,
+		    INIT_RXQ_V3_IN_ES_PACKET_STRIDE, es_buf_stride);
+		MCDI_IN_SET_DWORD(req,
+		    INIT_RXQ_V3_IN_ES_HEAD_OF_LINE_BLOCK_TIMEOUT,
+		    hol_block_timeout);
+	}
+
 	dma_addr = MCDI_IN2(req, efx_qword_t, INIT_RXQ_IN_DMA_ADDR);
 	addr = EFSYS_MEM_ADDR(esmp);
 
@@ -1006,6 +1025,10 @@ ef10_rx_qcreate(
 	boolean_t disable_scatter;
 	boolean_t want_inner_classes;
 	unsigned int ps_buf_size;
+	uint32_t es_bufs_per_desc = 0;
+	uint32_t es_max_dma_len = 0;
+	uint32_t es_buf_stride = 0;
+	uint32_t hol_block_timeout = 0;
 
 	_NOTE(ARGUNUSED(id, erp, type_data))
 
@@ -1054,6 +1077,19 @@ ef10_rx_qcreate(
 		}
 		break;
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+	case EFX_RXQ_TYPE_ES_SUPER_BUFFER:
+		ps_buf_size = 0;
+		es_bufs_per_desc =
+		    type_data->ertd_es_super_buffer.eessb_bufs_per_desc;
+		es_max_dma_len =
+		    type_data->ertd_es_super_buffer.eessb_max_dma_len;
+		es_buf_stride =
+		    type_data->ertd_es_super_buffer.eessb_buf_stride;
+		hol_block_timeout =
+		    type_data->ertd_es_super_buffer.eessb_hol_block_timeout;
+		break;
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 	default:
 		rc = ENOTSUP;
 		goto fail4;
@@ -1077,6 +1113,27 @@ ef10_rx_qcreate(
 	EFSYS_ASSERT(ps_buf_size == 0);
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
 
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+	if (es_bufs_per_desc > 0) {
+		if (encp->enc_rx_es_super_buffer_supported == B_FALSE) {
+			rc = ENOTSUP;
+			goto fail7;
+		}
+		if (!IS_P2ALIGNED(es_max_dma_len,
+			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
+			rc = EINVAL;
+			goto fail8;
+		}
+		if (!IS_P2ALIGNED(es_buf_stride,
+			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
+			rc = EINVAL;
+			goto fail9;
+		}
+	}
+#else /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+	EFSYS_ASSERT(es_bufs_per_desc == 0);
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+
 	/* Scatter can only be disabled if the firmware supports doing so */
 	if (flags & EFX_RXQ_FLAG_SCATTER)
 		disable_scatter = B_FALSE;
@@ -1090,8 +1147,9 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep->ee_index, label, index,
 		    esmp, disable_scatter, want_inner_classes,
-		    ps_buf_size)) != 0)
-		goto fail7;
+		    ps_buf_size, es_bufs_per_desc, es_max_dma_len,
+		    es_buf_stride, hol_block_timeout)) != 0)
+		goto fail10;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1102,8 +1160,16 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail10:
+	EFSYS_PROBE(fail10);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+fail9:
+	EFSYS_PROBE(fail9);
+fail8:
+	EFSYS_PROBE(fail8);
 fail7:
 	EFSYS_PROBE(fail7);
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 #if EFSYS_OPT_RX_PACKED_STREAM
 fail6:
 	EFSYS_PROBE(fail6);
diff --git a/drivers/net/sfc/base/efx.h b/drivers/net/sfc/base/efx.h
index dea8d60..b334cc5 100644
--- a/drivers/net/sfc/base/efx.h
+++ b/drivers/net/sfc/base/efx.h
@@ -1864,7 +1864,7 @@ typedef	__checkReturn	boolean_t
 	__in		uint32_t size,
 	__in		uint16_t flags);
 
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 
 /*
  * Packed stream mode is documented in SF-112241-TC.
@@ -1874,6 +1874,13 @@ typedef	__checkReturn	boolean_t
  * packets are put there in a continuous stream.
  * The main advantage of such an approach is that RX queue refilling
  * happens much less frequently.
+ *
+ * Equal stride packed stream mode is documented in SF-119419-TC.
+ * The general idea is to utilize advantages of the packed stream,
+ * but avoid indirection in packet representation.
+ * The main advantage of such an approach is that RX queue refilling
+ * happens much less frequently and packet buffers are independent
+ * from the upper layers' point of view.
  */
 
 typedef	__checkReturn	boolean_t
@@ -1974,7 +1981,7 @@ typedef __checkReturn	boolean_t
 typedef struct efx_ev_callbacks_s {
 	efx_initialized_ev_t		eec_initialized;
 	efx_rx_ev_t			eec_rx;
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 	efx_rx_ps_ev_t			eec_rx_ps;
 #endif
 	efx_tx_ev_t			eec_tx;
@@ -2281,6 +2288,7 @@ efx_pseudo_hdr_pkt_length_get(
 typedef enum efx_rxq_type_e {
 	EFX_RXQ_TYPE_DEFAULT,
 	EFX_RXQ_TYPE_PACKED_STREAM,
+	EFX_RXQ_TYPE_ES_SUPER_BUFFER,
 	EFX_RXQ_NTYPES
 } efx_rxq_type_t;
 
@@ -2334,6 +2342,28 @@ efx_rx_qcreate_packed_stream(
 
 #endif
 
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+
+/* Maximum head-of-line block timeout in nanoseconds */
+#define	EFX_RXQ_ES_SUPER_BUFFER_HOL_BLOCK_MAX	(400U * 1000 * 1000)
+
+extern	__checkReturn	efx_rc_t
+efx_rx_qcreate_es_super_buffer(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		unsigned int label,
+	__in		uint32_t n_bufs_per_desc,
+	__in		uint32_t max_dma_len,
+	__in		uint32_t buf_stride,
+	__in		uint32_t hol_block_timeout,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		unsigned int flags,
+	__in		efx_evq_t *eep,
+	__deref_out	efx_rxq_t **erpp);
+
+#endif
+
 typedef struct efx_buffer_s {
 	efsys_dma_addr_t	eb_addr;
 	size_t			eb_size;
diff --git a/drivers/net/sfc/base/efx_check.h b/drivers/net/sfc/base/efx_check.h
index 52b0c79..ef5eadc 100644
--- a/drivers/net/sfc/base/efx_check.h
+++ b/drivers/net/sfc/base/efx_check.h
@@ -343,6 +343,13 @@
 # endif
 #endif
 
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+/* Support equal stride super-buffer mode */
+# if !(EFSYS_OPT_MEDFORD2)
+#  error "ES_SUPER_BUFFER requires MEDFORD2"
+# endif
+#endif
+
 /* Support hardware assistance for tunnels */
 #if EFSYS_OPT_TUNNEL
 # if !(EFSYS_OPT_MEDFORD || EFSYS_OPT_MEDFORD2)
diff --git a/drivers/net/sfc/base/efx_impl.h b/drivers/net/sfc/base/efx_impl.h
index f130713..548834f 100644
--- a/drivers/net/sfc/base/efx_impl.h
+++ b/drivers/net/sfc/base/efx_impl.h
@@ -137,6 +137,14 @@ typedef union efx_rxq_type_data_u {
 		uint32_t	eps_buf_size;
 	} ertd_packed_stream;
 #endif
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+	struct {
+		uint32_t	eessb_bufs_per_desc;
+		uint32_t	eessb_max_dma_len;
+		uint32_t	eessb_buf_stride;
+		uint32_t	eessb_hol_block_timeout;
+	} ertd_es_super_buffer;
+#endif
 } efx_rxq_type_data_t;
 
 typedef struct efx_rx_ops_s {
@@ -735,9 +743,11 @@ typedef	boolean_t (*efx_ev_handler_t)(efx_evq_t *, efx_qword_t *,
 typedef struct efx_evq_rxq_state_s {
 	unsigned int			eers_rx_read_ptr;
 	unsigned int			eers_rx_mask;
-#if EFSYS_OPT_RX_PACKED_STREAM
+#if EFSYS_OPT_RX_PACKED_STREAM || EFSYS_OPT_RX_ES_SUPER_BUFFER
 	unsigned int			eers_rx_stream_npackets;
 	boolean_t			eers_rx_packed_stream;
+#endif
+#if EFSYS_OPT_RX_PACKED_STREAM
 	unsigned int			eers_rx_packed_stream_credits;
 #endif
 } efx_evq_rxq_state_t;
diff --git a/drivers/net/sfc/base/efx_rx.c b/drivers/net/sfc/base/efx_rx.c
index 5f49b3a..1068a5c 100644
--- a/drivers/net/sfc/base/efx_rx.c
+++ b/drivers/net/sfc/base/efx_rx.c
@@ -836,6 +836,58 @@ efx_rx_qcreate_packed_stream(
 
 #endif
 
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
+
+	__checkReturn	efx_rc_t
+efx_rx_qcreate_es_super_buffer(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		unsigned int label,
+	__in		uint32_t n_bufs_per_desc,
+	__in		uint32_t max_dma_len,
+	__in		uint32_t buf_stride,
+	__in		uint32_t hol_block_timeout,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		unsigned int flags,
+	__in		efx_evq_t *eep,
+	__deref_out	efx_rxq_t **erpp)
+{
+	efx_rc_t rc;
+	efx_rxq_type_data_t type_data;
+
+	if (hol_block_timeout > EFX_RXQ_ES_SUPER_BUFFER_HOL_BLOCK_MAX) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	memset(&type_data, 0, sizeof(type_data));
+
+	type_data.ertd_es_super_buffer.eessb_bufs_per_desc = n_bufs_per_desc;
+	type_data.ertd_es_super_buffer.eessb_max_dma_len = max_dma_len;
+	type_data.ertd_es_super_buffer.eessb_buf_stride = buf_stride;
+	type_data.ertd_es_super_buffer.eessb_hol_block_timeout =
+	    hol_block_timeout;
+
+	rc = efx_rx_qcreate_internal(enp, index, label,
+	    EFX_RXQ_TYPE_ES_SUPER_BUFFER, &type_data, esmp, ndescs,
+	    0 /* id unused on EF10 */, flags, eep, erpp);
+	if (rc != 0)
+		goto fail2;
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+#endif
+
+
 			void
 efx_rx_qdestroy(
 	__in		efx_rxq_t *erp)
diff --git a/drivers/net/sfc/efsys.h b/drivers/net/sfc/efsys.h
index 12f77dc..f71581c 100644
--- a/drivers/net/sfc/efsys.h
+++ b/drivers/net/sfc/efsys.h
@@ -198,6 +198,8 @@ prefetch_read_once(const volatile void *addr)
 
 #define EFSYS_OPT_RX_PACKED_STREAM 0
 
+#define EFSYS_OPT_RX_ES_SUPER_BUFFER 0
+
 #define EFSYS_OPT_TUNNEL 1
 
 #define EFSYS_OPT_FW_SUBVARIANT_AWARE 1
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 05/23] net/sfc/base: add equal stride super-buffer prefix layout
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (3 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 04/23] net/sfc/base: support equal stride super-buffer Rx mode Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 06/23] net/sfc: factor out function to push Rx doorbell Andrew Rybchenko
                   ` (18 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/efx_regs_ef10.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/net/sfc/base/efx_regs_ef10.h b/drivers/net/sfc/base/efx_regs_ef10.h
index 2cb96e8..968aaac 100644
--- a/drivers/net/sfc/base/efx_regs_ef10.h
+++ b/drivers/net/sfc/base/efx_regs_ef10.h
@@ -698,6 +698,21 @@ extern "C" {
 #define	ES_DZ_PS_RX_PREFIX_ORIG_LEN_LBN 48
 #define	ES_DZ_PS_RX_PREFIX_ORIG_LEN_WIDTH 16
 
+/* Equal stride super-buffer RX packet prefix (see SF-119419-TC) */
+#define	ES_EZ_ESSB_RX_PREFIX_LEN 8
+#define	ES_EZ_ESSB_RX_PREFIX_DATA_LEN_LBN 0
+#define	ES_EZ_ESSB_RX_PREFIX_DATA_LEN_WIDTH 16
+#define	ES_EZ_ESSB_RX_PREFIX_MARK_LBN 16
+#define	ES_EZ_ESSB_RX_PREFIX_MARK_WIDTH 8
+#define	ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN 28
+#define	ES_EZ_ESSB_RX_PREFIX_HASH_VALID_WIDTH 1
+#define	ES_EZ_ESSB_RX_PREFIX_MARK_VALID_LBN 29
+#define	ES_EZ_ESSB_RX_PREFIX_MARK_VALID_WIDTH 1
+#define	ES_EZ_ESSB_RX_PREFIX_MATCH_FLAG_LBN 30
+#define	ES_EZ_ESSB_RX_PREFIX_MATCH_FLAG_WIDTH 1
+#define	ES_EZ_ESSB_RX_PREFIX_HASH_LBN 32
+#define	ES_EZ_ESSB_RX_PREFIX_HASH_WIDTH 32
+
 /*
  * An extra flag for the packed stream mode,
  * signalling the start of a new buffer
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 06/23] net/sfc: factor out function to push Rx doorbell
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (4 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 05/23] net/sfc/base: add equal stride super-buffer prefix layout Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 07/23] net/sfc: prepare EF10 Rx event parser to be reused Andrew Rybchenko
                   ` (17 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

The function may be shared by different Rx datapath implementations.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
---
 drivers/net/sfc/sfc_ef10.h    | 31 +++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_ef10_rx.c | 33 +++------------------------------
 2 files changed, 34 insertions(+), 30 deletions(-)

diff --git a/drivers/net/sfc/sfc_ef10.h b/drivers/net/sfc/sfc_ef10.h
index ace6a1d..865359f 100644
--- a/drivers/net/sfc/sfc_ef10.h
+++ b/drivers/net/sfc/sfc_ef10.h
@@ -79,6 +79,37 @@ sfc_ef10_ev_present(const efx_qword_t ev)
 	       ~EFX_QWORD_FIELD(ev, EFX_DWORD_1);
 }
 
+
+/**
+ * Alignment requirement for value written to RX WPTR:
+ * the WPTR must be aligned to an 8 descriptor boundary.
+ */
+#define SFC_EF10_RX_WPTR_ALIGN	8u
+
+static inline void
+sfc_ef10_rx_qpush(volatile void *doorbell, unsigned int added,
+		  unsigned int ptr_mask)
+{
+	efx_dword_t dword;
+
+	/* Hardware has alignment restriction for WPTR */
+	RTE_BUILD_BUG_ON(SFC_RX_REFILL_BULK % SFC_EF10_RX_WPTR_ALIGN != 0);
+	SFC_ASSERT(RTE_ALIGN(added, SFC_EF10_RX_WPTR_ALIGN) == added);
+
+	EFX_POPULATE_DWORD_1(dword, ERF_DZ_RX_DESC_WPTR, added & ptr_mask);
+
+	/* DMA sync to device is not required */
+
+	/*
+	 * rte_write32() has rte_io_wmb() which guarantees that the STORE
+	 * operations (i.e. Rx and event descriptor updates) that precede
+	 * the rte_io_wmb() call are visible to NIC before the STORE
+	 * operations that follow it (i.e. doorbell write).
+	 */
+	rte_write32(dword.ed_u32[0], doorbell);
+}
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 7d6b64e..92e1ef0 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -30,12 +30,6 @@
 	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10, ERR, dpq, __VA_ARGS__)
 
 /**
- * Alignment requirement for value written to RX WPTR:
- * the WPTR must be aligned to an 8 descriptor boundary.
- */
-#define SFC_EF10_RX_WPTR_ALIGN	8
-
-/**
  * Maximum number of descriptors/buffers in the Rx ring.
  * It should guarantee that corresponding event queue never overfill.
  * EF10 native datapath uses event queue of the same size as Rx queue.
@@ -88,29 +82,6 @@ sfc_ef10_rxq_by_dp_rxq(struct sfc_dp_rxq *dp_rxq)
 }
 
 static void
-sfc_ef10_rx_qpush(struct sfc_ef10_rxq *rxq)
-{
-	efx_dword_t dword;
-
-	/* Hardware has alignment restriction for WPTR */
-	RTE_BUILD_BUG_ON(SFC_RX_REFILL_BULK % SFC_EF10_RX_WPTR_ALIGN != 0);
-	SFC_ASSERT(RTE_ALIGN(rxq->added, SFC_EF10_RX_WPTR_ALIGN) == rxq->added);
-
-	EFX_POPULATE_DWORD_1(dword, ERF_DZ_RX_DESC_WPTR,
-			     rxq->added & rxq->ptr_mask);
-
-	/* DMA sync to device is not required */
-
-	/*
-	 * rte_write32() has rte_io_wmb() which guarantees that the STORE
-	 * operations (i.e. Rx and event descriptor updates) that precede
-	 * the rte_io_wmb() call are visible to NIC before the STORE
-	 * operations that follow it (i.e. doorbell write).
-	 */
-	rte_write32(dword.ed_u32[0], rxq->doorbell);
-}
-
-static void
 sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
 {
 	const unsigned int ptr_mask = rxq->ptr_mask;
@@ -120,6 +91,8 @@ sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
 	void *objs[SFC_RX_REFILL_BULK];
 	unsigned int added = rxq->added;
 
+	RTE_BUILD_BUG_ON(SFC_RX_REFILL_BULK % SFC_EF10_RX_WPTR_ALIGN != 0);
+
 	free_space = rxq->max_fill_level - (added - rxq->completed);
 
 	if (free_space < rxq->refill_threshold)
@@ -178,7 +151,7 @@ sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
 
 	SFC_ASSERT(rxq->added != added);
 	rxq->added = added;
-	sfc_ef10_rx_qpush(rxq);
+	sfc_ef10_rx_qpush(rxq->doorbell, added, ptr_mask);
 }
 
 static void
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 07/23] net/sfc: prepare EF10 Rx event parser to be reused
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (5 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 06/23] net/sfc: factor out function to push Rx doorbell Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 08/23] net/sfc: move EF10 Rx event parser to shared header Andrew Rybchenko
                   ` (16 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

Equal stride super-buffer Rx mode will be handled by a dedicated
Rx datapath, and the mode has almost the same Rx event structure as
the single packet Rx mode.

Restructure the code to allow the common parts to be shared.
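
For instance (a sketch; rss_enabled is a hypothetical flag), a caller
with RSS disabled on the queue now simply masks the RSS hash flag out
instead of the parser peeking at queue state:

    /* Strip flags the queue does not want, e.g. RSS hash */
    uint64_t ol_mask = rss_enabled ? ~0ull : ~PKT_RX_RSS_HASH;

    sfc_ef10_rx_ev_to_offloads(rx_ev, m, ol_mask);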

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
---
 drivers/net/sfc/sfc_ef10_rx.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 92e1ef0..f8eb3c1 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -199,8 +199,8 @@ sfc_ef10_rx_prepared(struct sfc_ef10_rxq *rxq, struct rte_mbuf **rx_pkts,
 }
 
 static void
-sfc_ef10_rx_ev_to_offloads(struct sfc_ef10_rxq *rxq, const efx_qword_t rx_ev,
-			   struct rte_mbuf *m)
+sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
+			   uint64_t ol_mask)
 {
 	uint32_t tun_ptype = 0;
 	/* Which event bit is mapped to PKT_RX_IP_CKSUM_* */
@@ -330,12 +330,8 @@ sfc_ef10_rx_ev_to_offloads(struct sfc_ef10_rxq *rxq, const efx_qword_t rx_ev,
 		SFC_ASSERT(false);
 	}
 
-	/* Remove RSS hash offload flag if RSS is not enabled */
-	if (~rxq->flags & SFC_EF10_RXQ_RSS_HASH)
-		ol_flags &= ~PKT_RX_RSS_HASH;
-
 done:
-	m->ol_flags = ol_flags;
+	m->ol_flags = ol_flags & ol_mask;
 	m->packet_type = tun_ptype | l2_ptype | l3_ptype | l4_ptype;
 }
 
@@ -397,7 +393,10 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	m->rearm_data[0] = rxq->rearm_data;
 
 	/* Classify packet based on Rx event */
-	sfc_ef10_rx_ev_to_offloads(rxq, rx_ev, m);
+	/* Mask RSS hash offload flag if RSS is not enabled */
+	sfc_ef10_rx_ev_to_offloads(rx_ev, m,
+				   (rxq->flags & SFC_EF10_RXQ_RSS_HASH) ?
+				   ~0ull : ~PKT_RX_RSS_HASH);
 
 	/* data_off already moved past pseudo header */
 	pseudo_hdr = (uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 08/23] net/sfc: move EF10 Rx event parser to shared header
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (6 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 07/23] net/sfc: prepare EF10 Rx event parser to be reused Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 09/23] net/sfc: conditionally compile support for tunnel packets Andrew Rybchenko
                   ` (15 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

The equal stride super-buffer Rx datapath will use it as well.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_ef10_rx.c    | 138 +-------------------------------
 drivers/net/sfc/sfc_ef10_rx_ev.h | 164 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 165 insertions(+), 137 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_ef10_rx_ev.h

diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index f8eb3c1..7560891 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -25,6 +25,7 @@
 #include "sfc_dp_rx.h"
 #include "sfc_kvargs.h"
 #include "sfc_ef10.h"
+#include "sfc_ef10_rx_ev.h"
 
 #define sfc_ef10_rx_err(dpq, ...) \
 	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10, ERR, dpq, __VA_ARGS__)
@@ -198,143 +199,6 @@ sfc_ef10_rx_prepared(struct sfc_ef10_rxq *rxq, struct rte_mbuf **rx_pkts,
 	return n_rx_pkts;
 }
 
-static void
-sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
-			   uint64_t ol_mask)
-{
-	uint32_t tun_ptype = 0;
-	/* Which event bit is mapped to PKT_RX_IP_CKSUM_* */
-	int8_t ip_csum_err_bit;
-	/* Which event bit is mapped to PKT_RX_L4_CKSUM_* */
-	int8_t l4_csum_err_bit;
-	uint32_t l2_ptype = 0;
-	uint32_t l3_ptype = 0;
-	uint32_t l4_ptype = 0;
-	uint64_t ol_flags = 0;
-
-	if (unlikely(EFX_TEST_QWORD_BIT(rx_ev, ESF_DZ_RX_PARSE_INCOMPLETE_LBN)))
-		goto done;
-
-	switch (EFX_QWORD_FIELD(rx_ev, ESF_EZ_RX_ENCAP_HDR)) {
-	default:
-		/* Unexpected encapsulation tag class */
-		SFC_ASSERT(false);
-		/* FALLTHROUGH */
-	case ESE_EZ_ENCAP_HDR_NONE:
-		break;
-	case ESE_EZ_ENCAP_HDR_VXLAN:
-		/*
-		 * It is definitely UDP, but we have no information
-		 * about IPv4 vs IPv6 and VLAN tagging.
-		 */
-		tun_ptype = RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_L4_UDP;
-		break;
-	case ESE_EZ_ENCAP_HDR_GRE:
-		/*
-		 * We have no information about IPv4 vs IPv6 and VLAN tagging.
-		 */
-		tun_ptype = RTE_PTYPE_TUNNEL_NVGRE;
-		break;
-	}
-
-	if (tun_ptype == 0) {
-		ip_csum_err_bit = ESF_DZ_RX_IPCKSUM_ERR_LBN;
-		l4_csum_err_bit = ESF_DZ_RX_TCPUDP_CKSUM_ERR_LBN;
-	} else {
-		ip_csum_err_bit = ESF_EZ_RX_IP_INNER_CHKSUM_ERR_LBN;
-		l4_csum_err_bit = ESF_EZ_RX_TCP_UDP_INNER_CHKSUM_ERR_LBN;
-		if (unlikely(EFX_TEST_QWORD_BIT(rx_ev,
-						ESF_DZ_RX_IPCKSUM_ERR_LBN)))
-			ol_flags |= PKT_RX_EIP_CKSUM_BAD;
-	}
-
-	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_ETH_TAG_CLASS)) {
-	case ESE_DZ_ETH_TAG_CLASS_NONE:
-		l2_ptype = (tun_ptype == 0) ? RTE_PTYPE_L2_ETHER :
-			RTE_PTYPE_INNER_L2_ETHER;
-		break;
-	case ESE_DZ_ETH_TAG_CLASS_VLAN1:
-		l2_ptype = (tun_ptype == 0) ? RTE_PTYPE_L2_ETHER_VLAN :
-			RTE_PTYPE_INNER_L2_ETHER_VLAN;
-		break;
-	case ESE_DZ_ETH_TAG_CLASS_VLAN2:
-		l2_ptype = (tun_ptype == 0) ? RTE_PTYPE_L2_ETHER_QINQ :
-			RTE_PTYPE_INNER_L2_ETHER_QINQ;
-		break;
-	default:
-		/* Unexpected Eth tag class */
-		SFC_ASSERT(false);
-	}
-
-	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_L3_CLASS)) {
-	case ESE_DZ_L3_CLASS_IP4_FRAG:
-		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
-			RTE_PTYPE_INNER_L4_FRAG;
-		/* FALLTHROUGH */
-	case ESE_DZ_L3_CLASS_IP4:
-		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV4_EXT_UNKNOWN :
-			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH |
-			((EFX_TEST_QWORD_BIT(rx_ev, ip_csum_err_bit)) ?
-			 PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
-		break;
-	case ESE_DZ_L3_CLASS_IP6_FRAG:
-		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
-			RTE_PTYPE_INNER_L4_FRAG;
-		/* FALLTHROUGH */
-	case ESE_DZ_L3_CLASS_IP6:
-		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV6_EXT_UNKNOWN :
-			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH;
-		break;
-	case ESE_DZ_L3_CLASS_ARP:
-		/* Override Layer 2 packet type */
-		/* There is no ARP classification for inner packets */
-		if (tun_ptype == 0)
-			l2_ptype = RTE_PTYPE_L2_ETHER_ARP;
-		break;
-	default:
-		/* Unexpected Layer 3 class */
-		SFC_ASSERT(false);
-	}
-
-	/*
-	 * RX_L4_CLASS is 3 bits wide on Huntington and Medford, but is only
-	 * 2 bits wide on Medford2. Check it is safe to use the Medford2 field
-	 * and values for all EF10 controllers.
-	 */
-	RTE_BUILD_BUG_ON(ESF_FZ_RX_L4_CLASS_LBN != ESF_DE_RX_L4_CLASS_LBN);
-	switch (EFX_QWORD_FIELD(rx_ev, ESF_FZ_RX_L4_CLASS)) {
-	case ESE_FZ_L4_CLASS_TCP:
-		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_TCP != ESE_DE_L4_CLASS_TCP);
-		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_TCP :
-			RTE_PTYPE_INNER_L4_TCP;
-		ol_flags |=
-			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
-		break;
-	case ESE_FZ_L4_CLASS_UDP:
-		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UDP != ESE_DE_L4_CLASS_UDP);
-		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_UDP :
-			RTE_PTYPE_INNER_L4_UDP;
-		ol_flags |=
-			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
-		break;
-	case ESE_FZ_L4_CLASS_UNKNOWN:
-		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UNKNOWN !=
-				  ESE_DE_L4_CLASS_UNKNOWN);
-		break;
-	default:
-		/* Unexpected Layer 4 class */
-		SFC_ASSERT(false);
-	}
-
-done:
-	m->ol_flags = ol_flags & ol_mask;
-	m->packet_type = tun_ptype | l2_ptype | l3_ptype | l4_ptype;
-}
-
 static uint16_t
 sfc_ef10_rx_pseudo_hdr_get_len(const uint8_t *pseudo_hdr)
 {
diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h
new file mode 100644
index 0000000..774a789
--- /dev/null
+++ b/drivers/net/sfc/sfc_ef10_rx_ev.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_EF10_RX_EV_H
+#define _SFC_EF10_RX_EV_H
+
+#include <rte_mbuf.h>
+
+#include "efx_types.h"
+#include "efx_regs.h"
+#include "efx_regs_ef10.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline void
+sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
+			   uint64_t ol_mask)
+{
+	uint32_t tun_ptype = 0;
+	/* Which event bit is mapped to PKT_RX_IP_CKSUM_* */
+	int8_t ip_csum_err_bit;
+	/* Which event bit is mapped to PKT_RX_L4_CKSUM_* */
+	int8_t l4_csum_err_bit;
+	uint32_t l2_ptype = 0;
+	uint32_t l3_ptype = 0;
+	uint32_t l4_ptype = 0;
+	uint64_t ol_flags = 0;
+
+	if (unlikely(EFX_TEST_QWORD_BIT(rx_ev, ESF_DZ_RX_PARSE_INCOMPLETE_LBN)))
+		goto done;
+
+	switch (EFX_QWORD_FIELD(rx_ev, ESF_EZ_RX_ENCAP_HDR)) {
+	default:
+		/* Unexpected encapsulation tag class */
+		SFC_ASSERT(false);
+		/* FALLTHROUGH */
+	case ESE_EZ_ENCAP_HDR_NONE:
+		break;
+	case ESE_EZ_ENCAP_HDR_VXLAN:
+		/*
+		 * It is definitely UDP, but we have no information
+		 * about IPv4 vs IPv6 and VLAN tagging.
+		 */
+		tun_ptype = RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_L4_UDP;
+		break;
+	case ESE_EZ_ENCAP_HDR_GRE:
+		/*
+		 * We have no information about IPv4 vs IPv6 and VLAN tagging.
+		 */
+		tun_ptype = RTE_PTYPE_TUNNEL_NVGRE;
+		break;
+	}
+
+	if (tun_ptype == 0) {
+		ip_csum_err_bit = ESF_DZ_RX_IPCKSUM_ERR_LBN;
+		l4_csum_err_bit = ESF_DZ_RX_TCPUDP_CKSUM_ERR_LBN;
+	} else {
+		ip_csum_err_bit = ESF_EZ_RX_IP_INNER_CHKSUM_ERR_LBN;
+		l4_csum_err_bit = ESF_EZ_RX_TCP_UDP_INNER_CHKSUM_ERR_LBN;
+		if (unlikely(EFX_TEST_QWORD_BIT(rx_ev,
+						ESF_DZ_RX_IPCKSUM_ERR_LBN)))
+			ol_flags |= PKT_RX_EIP_CKSUM_BAD;
+	}
+
+	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_ETH_TAG_CLASS)) {
+	case ESE_DZ_ETH_TAG_CLASS_NONE:
+		l2_ptype = (tun_ptype == 0) ? RTE_PTYPE_L2_ETHER :
+			RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case ESE_DZ_ETH_TAG_CLASS_VLAN1:
+		l2_ptype = (tun_ptype == 0) ? RTE_PTYPE_L2_ETHER_VLAN :
+			RTE_PTYPE_INNER_L2_ETHER_VLAN;
+		break;
+	case ESE_DZ_ETH_TAG_CLASS_VLAN2:
+		l2_ptype = (tun_ptype == 0) ? RTE_PTYPE_L2_ETHER_QINQ :
+			RTE_PTYPE_INNER_L2_ETHER_QINQ;
+		break;
+	default:
+		/* Unexpected Eth tag class */
+		SFC_ASSERT(false);
+	}
+
+	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_L3_CLASS)) {
+	case ESE_DZ_L3_CLASS_IP4_FRAG:
+		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
+			RTE_PTYPE_INNER_L4_FRAG;
+		/* FALLTHROUGH */
+	case ESE_DZ_L3_CLASS_IP4:
+		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV4_EXT_UNKNOWN :
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
+		ol_flags |= PKT_RX_RSS_HASH |
+			((EFX_TEST_QWORD_BIT(rx_ev, ip_csum_err_bit)) ?
+			 PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+		break;
+	case ESE_DZ_L3_CLASS_IP6_FRAG:
+		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
+			RTE_PTYPE_INNER_L4_FRAG;
+		/* FALLTHROUGH */
+	case ESE_DZ_L3_CLASS_IP6:
+		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV6_EXT_UNKNOWN :
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
+		ol_flags |= PKT_RX_RSS_HASH;
+		break;
+	case ESE_DZ_L3_CLASS_ARP:
+		/* Override Layer 2 packet type */
+		/* There is no ARP classification for inner packets */
+		if (tun_ptype == 0)
+			l2_ptype = RTE_PTYPE_L2_ETHER_ARP;
+		break;
+	default:
+		/* Unexpected Layer 3 class */
+		SFC_ASSERT(false);
+	}
+
+	/*
+	 * RX_L4_CLASS is 3 bits wide on Huntington and Medford, but is only
+	 * 2 bits wide on Medford2. Check it is safe to use the Medford2 field
+	 * and values for all EF10 controllers.
+	 */
+	RTE_BUILD_BUG_ON(ESF_FZ_RX_L4_CLASS_LBN != ESF_DE_RX_L4_CLASS_LBN);
+	switch (EFX_QWORD_FIELD(rx_ev, ESF_FZ_RX_L4_CLASS)) {
+	case ESE_FZ_L4_CLASS_TCP:
+		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_TCP != ESE_DE_L4_CLASS_TCP);
+		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_TCP :
+			RTE_PTYPE_INNER_L4_TCP;
+		ol_flags |=
+			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
+			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+		break;
+	case ESE_FZ_L4_CLASS_UDP:
+		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UDP != ESE_DE_L4_CLASS_UDP);
+		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_UDP :
+			RTE_PTYPE_INNER_L4_UDP;
+		ol_flags |=
+			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
+			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+		break;
+	case ESE_FZ_L4_CLASS_UNKNOWN:
+		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UNKNOWN !=
+				  ESE_DE_L4_CLASS_UNKNOWN);
+		break;
+	default:
+		/* Unexpected Layer 4 class */
+		SFC_ASSERT(false);
+	}
+
+done:
+	m->ol_flags = ol_flags & ol_mask;
+	m->packet_type = tun_ptype | l2_ptype | l3_ptype | l4_ptype;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_EF10_RX_EV_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 09/23] net/sfc: conditionally compile support for tunnel packets
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (7 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 08/23] net/sfc: move EF10 Rx event parser to shared header Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 10/23] net/sfc: allow one Rx queue entry carry many packet buffers Andrew Rybchenko
                   ` (14 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

The equal stride super-buffer Rx datapath does not support tunnels, so
the code to parse tunnel packet types and inner checksum offload is not
required, and it is important to be able to compile it out at build
time to avoid extra CPU load.

Cutting out tunnel support relies on compiler optimizations to drop the
extra checks and branches when tun_ptype is always 0.
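
The pattern is simply a compile-time constant checked in the shared
header, so a datapath without tunnel traffic defines it as 0 before the
include and lets the compiler remove the dead branches (sketch):

    /* Tunnels are not supported by this datapath */
    #define SFC_EF10_RX_EV_ENCAP_SUPPORT 0
    #include "sfc_ef10_rx_ev.h"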

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_ef10_rx.c    | 2 ++
 drivers/net/sfc/sfc_ef10_rx_ev.h | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 7560891..5ec82db 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -25,6 +25,8 @@
 #include "sfc_dp_rx.h"
 #include "sfc_kvargs.h"
 #include "sfc_ef10.h"
+
+#define SFC_EF10_RX_EV_ENCAP_SUPPORT	1
 #include "sfc_ef10_rx_ev.h"
 
 #define sfc_ef10_rx_err(dpq, ...) \
diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h
index 774a789..9054fb9 100644
--- a/drivers/net/sfc/sfc_ef10_rx_ev.h
+++ b/drivers/net/sfc/sfc_ef10_rx_ev.h
@@ -37,6 +37,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	if (unlikely(EFX_TEST_QWORD_BIT(rx_ev, ESF_DZ_RX_PARSE_INCOMPLETE_LBN)))
 		goto done;
 
+#if SFC_EF10_RX_EV_ENCAP_SUPPORT
 	switch (EFX_QWORD_FIELD(rx_ev, ESF_EZ_RX_ENCAP_HDR)) {
 	default:
 		/* Unexpected encapsulation tag class */
@@ -58,6 +59,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 		tun_ptype = RTE_PTYPE_TUNNEL_NVGRE;
 		break;
 	}
+#endif
 
 	if (tun_ptype == 0) {
 		ip_csum_err_bit = ESF_DZ_RX_IPCKSUM_ERR_LBN;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 10/23] net/sfc: allow one Rx queue entry carry many packet buffers
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (8 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 09/23] net/sfc: conditionally compile support for tunnel packets Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 11/23] net/sfc: allow to take mbuf pool into account when sizing Andrew Rybchenko
                   ` (13 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

One HW Rx descriptor has many packet buffers in the case of the equal
stride super-buffer Rx mode. Each packet buffer is still treated as a
separate SW Rx descriptor. rxq_entries is the size of the HW Rx ring,
whereas nb_rx_desc is the number of SW Rx descriptors.
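
For example, assuming a hypothetical configuration of 32 packet buffers
per HW Rx descriptor, nb_rx_desc = 2048 SW Rx descriptors corresponds
to only 2048 / 32 = 64 HW ring entries, which is why the dropped
assertion rxq_entries >= nb_rx_desc no longer holds.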

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_rx.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index a4aae1b..e24a6ef 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -988,7 +988,6 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 		goto fail_size_up_rings;
 	SFC_ASSERT(rxq_entries >= EFX_RXQ_MINNDESCS);
 	SFC_ASSERT(rxq_entries <= EFX_RXQ_MAXNDESCS);
-	SFC_ASSERT(rxq_entries >= nb_rx_desc);
 	SFC_ASSERT(rxq_max_fill_level <= nb_rx_desc);
 
 	rc = sfc_rx_qcheck_conf(sa, rxq_max_fill_level, rx_conf);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 11/23] net/sfc: allow to take mbuf pool into account when sizing
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (9 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 10/23] net/sfc: allow one Rx queue entry carry many packet buffers Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 12/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

The new argument will be used by the equal stride super-buffer
Rx datapath.
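
A minimal sketch of a callback using the new argument, assuming the
mempool info API from the bucket mempool series; the function name
sfc_xyz_rx_qsize_up_rings and the sizing logic are made up:

    static int
    sfc_xyz_rx_qsize_up_rings(uint16_t nb_rx_desc, struct rte_mempool *mb_pool,
                              unsigned int *rxq_entries,
                              unsigned int *evq_entries,
                              unsigned int *rxq_max_fill_level)
    {
            struct rte_mempool_info mp_info;    /* from <rte_mempool.h> */

            /* Reject mempools which cannot report a contiguous block size */
            if (rte_mempool_ops_get_info(mb_pool, &mp_info) < 0 ||
                mp_info.contig_block_size == 0)
                    return EINVAL;  /* callback returns 0 or positive errno */

            /* Real sizing derived from nb_rx_desc and the block size is elided */
            *rxq_entries = *evq_entries = *rxq_max_fill_level = nb_rx_desc;
            return 0;
    }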

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_dp_rx.h   | 4 +++-
 drivers/net/sfc/sfc_ef10_rx.c | 1 +
 drivers/net/sfc/sfc_rx.c      | 5 +++--
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index cc9e7c4..ecb486f 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -91,9 +91,10 @@ typedef void (sfc_dp_rx_get_dev_info_t)(struct rte_eth_dev_info *dev_info);
 
 /**
  * Get size of receive and event queue rings by the number of Rx
- * descriptors.
+ * descriptors and mempool configuration.
  *
  * @param nb_rx_desc		Number of Rx descriptors
+ * @param mb_pool		mbuf pool with Rx buffers
  * @param rxq_entries		Location for number of Rx ring entries
  * @param evq_entries		Location for number of event ring entries
  * @param rxq_max_fill_level	Location for maximum Rx ring fill level
@@ -101,6 +102,7 @@ typedef void (sfc_dp_rx_get_dev_info_t)(struct rte_eth_dev_info *dev_info);
  * @return 0 or positive errno.
  */
 typedef int (sfc_dp_rx_qsize_up_rings_t)(uint16_t nb_rx_desc,
+					 struct rte_mempool *mb_pool,
 					 unsigned int *rxq_entries,
 					 unsigned int *evq_entries,
 					 unsigned int *rxq_max_fill_level);
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 5ec82db..1f0d6a0 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -480,6 +480,7 @@ sfc_ef10_rx_get_dev_info(struct rte_eth_dev_info *dev_info)
 static sfc_dp_rx_qsize_up_rings_t sfc_ef10_rx_qsize_up_rings;
 static int
 sfc_ef10_rx_qsize_up_rings(uint16_t nb_rx_desc,
+			   __rte_unused struct rte_mempool *mb_pool,
 			   unsigned int *rxq_entries,
 			   unsigned int *evq_entries,
 			   unsigned int *rxq_max_fill_level)
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index e24a6ef..7345074 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -384,6 +384,7 @@ sfc_rxq_by_dp_rxq(const struct sfc_dp_rxq *dp_rxq)
 static sfc_dp_rx_qsize_up_rings_t sfc_efx_rx_qsize_up_rings;
 static int
 sfc_efx_rx_qsize_up_rings(uint16_t nb_rx_desc,
+			  __rte_unused struct rte_mempool *mb_pool,
 			  unsigned int *rxq_entries,
 			  unsigned int *evq_entries,
 			  unsigned int *rxq_max_fill_level)
@@ -982,8 +983,8 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	struct sfc_rxq *rxq;
 	struct sfc_dp_rx_qcreate_info info;
 
-	rc = sa->dp_rx->qsize_up_rings(nb_rx_desc, &rxq_entries, &evq_entries,
-				       &rxq_max_fill_level);
+	rc = sa->dp_rx->qsize_up_rings(nb_rx_desc, mb_pool, &rxq_entries,
+				       &evq_entries, &rxq_max_fill_level);
 	if (rc != 0)
 		goto fail_size_up_rings;
 	SFC_ASSERT(rxq_entries >= EFX_RXQ_MINNDESCS);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 12/23] net/sfc: support equal stride super-buffer Rx mode
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (10 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 11/23] net/sfc: allow to take mbuf pool into account when sizing Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 13/23] net/sfc: support callback to check if mempool is supported Andrew Rybchenko
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

One HW Rx descriptor represents many contiguous packet buffers which
follow each other. The number of buffers, the stride and the maximum
DMA length are configurable per Rx queue at setup time based on the
provided mempool. The mempool must support contiguous block allocation
and the get-info API to retrieve the number of objects in the block.
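
For example (illustrative only; pool name and sizes are made up), an
application could request the bucket mempool handler explicitly when
creating the Rx mbuf pool:

    struct rte_mempool *mp;

    /* "bucket" ops provide contiguous block dequeue and the info callback */
    mp = rte_pktmbuf_pool_create_by_ops("rx_essb_pool", 8192, 0, 0,
                                        RTE_MBUF_DEFAULT_BUF_SIZE,
                                        rte_socket_id(), "bucket");
    if (mp == NULL)
            rte_exit(EXIT_FAILURE, "cannot create Rx mempool\n");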

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 doc/guides/nics/sfc_efx.rst        |  20 +-
 drivers/net/sfc/Makefile           |   1 +
 drivers/net/sfc/efsys.h            |   2 +-
 drivers/net/sfc/meson.build        |   1 +
 drivers/net/sfc/sfc_dp.h           |   3 +-
 drivers/net/sfc/sfc_dp_rx.h        |   8 +
 drivers/net/sfc/sfc_ef10.h         |   3 +
 drivers/net/sfc/sfc_ef10_essb_rx.c | 643 +++++++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_ef10_rx.c      |   2 +-
 drivers/net/sfc/sfc_ef10_rx_ev.h   |   5 +-
 drivers/net/sfc/sfc_ethdev.c       |   6 +
 drivers/net/sfc/sfc_ev.c           |  34 ++
 drivers/net/sfc/sfc_kvargs.h       |   4 +-
 drivers/net/sfc/sfc_rx.c           |  45 ++-
 drivers/net/sfc/sfc_rx.h           |   1 +
 15 files changed, 767 insertions(+), 11 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_ef10_essb_rx.c

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index abaed67..bbc6e61 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -121,6 +121,21 @@ required in the receive buffer.
 It should be taken into account when mbuf pool for receive is created.
 
 
+Equal stride super-buffer mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When the receive queue uses equal stride super-buffer DMA mode, one HW Rx
+descriptor carries many Rx buffers which contiguously follow each other
+with some stride (equal to the total size of rte_mbuf as a mempool object).
+Each Rx buffer is an independent rte_mbuf.
+However, a dedicated mempool manager must be used when the mempool for the
+Rx queue is created. The manager must support dequeue of a contiguous
+block of objects and provide the mempool info API to get the block size.
+
+Another limitation of the equal stride super-buffer mode, imposed by the
+firmware, is that it allows for a single RSS context.
+
+
 Tunnels support
 ---------------
 
@@ -291,7 +306,7 @@ whitelist option like "-w 02:00.0,arg1=value1,...".
 Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
 boolean parameters value.
 
-- ``rx_datapath`` [auto|efx|ef10] (default **auto**)
+- ``rx_datapath`` [auto|efx|ef10|ef10_esps] (default **auto**)
 
   Choose receive datapath implementation.
   **auto** allows the driver itself to make a choice based on firmware
@@ -300,6 +315,9 @@ boolean parameters value.
   **ef10** chooses EF10 (SFN7xxx, SFN8xxx, X2xxx) native datapath which is
   more efficient than libefx-based and provides richer packet type
   classification, but lacks Rx scatter support.
+  **ef10_esps** chooses SFNX2xxx equal stride packed stream datapath
+  which may be used with the DPDK firmware variant only
+  (see notes about its limitations above).
 
 - ``tx_datapath`` [auto|efx|ef10|ef10_simple] (default **auto**)
 
diff --git a/drivers/net/sfc/Makefile b/drivers/net/sfc/Makefile
index f3e0b4b..3bb41a0 100644
--- a/drivers/net/sfc/Makefile
+++ b/drivers/net/sfc/Makefile
@@ -81,6 +81,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_SFC_EFX_PMD) += sfc_filter.c
 SRCS-$(CONFIG_RTE_LIBRTE_SFC_EFX_PMD) += sfc_flow.c
 SRCS-$(CONFIG_RTE_LIBRTE_SFC_EFX_PMD) += sfc_dp.c
 SRCS-$(CONFIG_RTE_LIBRTE_SFC_EFX_PMD) += sfc_ef10_rx.c
+SRCS-$(CONFIG_RTE_LIBRTE_SFC_EFX_PMD) += sfc_ef10_essb_rx.c
 SRCS-$(CONFIG_RTE_LIBRTE_SFC_EFX_PMD) += sfc_ef10_tx.c
 
 VPATH += $(SRCDIR)/base
diff --git a/drivers/net/sfc/efsys.h b/drivers/net/sfc/efsys.h
index f71581c..b9d2df5 100644
--- a/drivers/net/sfc/efsys.h
+++ b/drivers/net/sfc/efsys.h
@@ -198,7 +198,7 @@ prefetch_read_once(const volatile void *addr)
 
 #define EFSYS_OPT_RX_PACKED_STREAM 0
 
-#define EFSYS_OPT_RX_ES_SUPER_BUFFER 0
+#define EFSYS_OPT_RX_ES_SUPER_BUFFER 1
 
 #define EFSYS_OPT_TUNNEL 1
 
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 0de2e17..3aa14c7 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -54,6 +54,7 @@ sources = files(
 	'sfc_flow.c',
 	'sfc_dp.c',
 	'sfc_ef10_rx.c',
+	'sfc_ef10_essb_rx.c',
 	'sfc_ef10_tx.c'
 )
 
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 26e7195..3da65ab 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -79,7 +79,8 @@ struct sfc_dp {
 	enum sfc_dp_type		type;
 	/* Mask of required hardware/firmware capabilities */
 	unsigned int			hw_fw_caps;
-#define SFC_DP_HW_FW_CAP_EF10		0x1
+#define SFC_DP_HW_FW_CAP_EF10				0x1
+#define SFC_DP_HW_FW_CAP_RX_ES_SUPER_BUFFER		0x2
 };
 
 /** List of datapath variants */
diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index ecb486f..db075dd 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -150,6 +150,12 @@ typedef void (sfc_dp_rx_qstop_t)(struct sfc_dp_rxq *dp_rxq,
 typedef bool (sfc_dp_rx_qrx_ev_t)(struct sfc_dp_rxq *dp_rxq, unsigned int id);
 
 /**
+ * Packed stream receive event handler used during queue flush only.
+ */
+typedef bool (sfc_dp_rx_qrx_ps_ev_t)(struct sfc_dp_rxq *dp_rxq,
+				     unsigned int id);
+
+/**
  * Receive queue purge function called after queue flush.
  *
  * Should be used to free unused receive buffers.
@@ -182,6 +188,7 @@ struct sfc_dp_rx {
 	sfc_dp_rx_qstart_t			*qstart;
 	sfc_dp_rx_qstop_t			*qstop;
 	sfc_dp_rx_qrx_ev_t			*qrx_ev;
+	sfc_dp_rx_qrx_ps_ev_t			*qrx_ps_ev;
 	sfc_dp_rx_qpurge_t			*qpurge;
 	sfc_dp_rx_supported_ptypes_get_t	*supported_ptypes_get;
 	sfc_dp_rx_qdesc_npending_t		*qdesc_npending;
@@ -207,6 +214,7 @@ sfc_dp_find_rx_by_caps(struct sfc_dp_list *head, unsigned int avail_caps)
 
 extern struct sfc_dp_rx sfc_efx_rx;
 extern struct sfc_dp_rx sfc_ef10_rx;
+extern struct sfc_dp_rx sfc_ef10_essb_rx;
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/sfc_ef10.h b/drivers/net/sfc/sfc_ef10.h
index 865359f..a73e0bd 100644
--- a/drivers/net/sfc/sfc_ef10.h
+++ b/drivers/net/sfc/sfc_ef10.h
@@ -110,6 +110,9 @@ sfc_ef10_rx_qpush(volatile void *doorbell, unsigned int added,
 }
 
 
+const uint32_t * sfc_ef10_supported_ptypes_get(uint32_t tunnel_encaps);
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
new file mode 100644
index 0000000..1df61ff
--- /dev/null
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -0,0 +1,643 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2017-2018 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+/* EF10 equal stride packed stream receive native datapath implementation */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_mbuf_ptype.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+
+#include "efx.h"
+#include "efx_types.h"
+#include "efx_regs.h"
+#include "efx_regs_ef10.h"
+
+#include "sfc_tweak.h"
+#include "sfc_dp_rx.h"
+#include "sfc_kvargs.h"
+#include "sfc_ef10.h"
+
+/* Tunnels are not supported */
+#define SFC_EF10_RX_EV_ENCAP_SUPPORT	0
+#include "sfc_ef10_rx_ev.h"
+
+#define sfc_ef10_essb_rx_err(dpq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10_ESSB, ERR, dpq, __VA_ARGS__)
+
+#define sfc_ef10_essb_rx_info(dpq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10_ESSB, INFO, dpq, __VA_ARGS__)
+
+/*
+ * Fake length for RXQ descriptors in equal stride super-buffer mode
+ * to make hardware happy.
+ */
+#define SFC_EF10_ESSB_RX_FAKE_BUF_SIZE	32
+
+/**
+ * Maximum number of descriptors/buffers in the Rx ring.
+ * It should guarantee that corresponding event queue never overfill.
+ */
+#define SFC_EF10_ESSB_RXQ_LIMIT(_nevs) \
+	((_nevs) - 1 /* head must not step on tail */ - \
+	 (SFC_EF10_EV_PER_CACHE_LINE - 1) /* max unused EvQ entries */ - \
+	 1 /* Rx error */ - 1 /* flush */)
+
+struct sfc_ef10_essb_rx_sw_desc {
+	struct rte_mbuf			*first_mbuf;
+};
+
+struct sfc_ef10_essb_rxq {
+	/* Used on data path */
+	unsigned int			flags;
+#define SFC_EF10_ESSB_RXQ_STARTED	0x1
+#define SFC_EF10_ESSB_RXQ_NOT_RUNNING	0x2
+#define SFC_EF10_ESSB_RXQ_EXCEPTION	0x4
+	unsigned int			rxq_ptr_mask;
+	unsigned int			block_size;
+	unsigned int			buf_stride;
+	unsigned int			bufs_ptr;
+	unsigned int			completed;
+	unsigned int			pending_id;
+	unsigned int			bufs_pending;
+	unsigned int			left_in_completed;
+	unsigned int			left_in_pending;
+	unsigned int			evq_read_ptr;
+	unsigned int			evq_ptr_mask;
+	efx_qword_t			*evq_hw_ring;
+	struct sfc_ef10_essb_rx_sw_desc	*sw_ring;
+	uint16_t			port_id;
+
+	/* Used on refill */
+	unsigned int			added;
+	unsigned int			max_fill_level;
+	unsigned int			refill_threshold;
+	struct rte_mempool		*refill_mb_pool;
+	efx_qword_t			*rxq_hw_ring;
+	volatile void			*doorbell;
+
+	/* Datapath receive queue anchor */
+	struct sfc_dp_rxq		dp;
+};
+
+static inline struct sfc_ef10_essb_rxq *
+sfc_ef10_essb_rxq_by_dp_rxq(struct sfc_dp_rxq *dp_rxq)
+{
+	return container_of(dp_rxq, struct sfc_ef10_essb_rxq, dp);
+}
+
+static struct rte_mbuf *
+sfc_ef10_essb_next_mbuf(const struct sfc_ef10_essb_rxq *rxq,
+			struct rte_mbuf *mbuf)
+{
+	return (struct rte_mbuf *)((uintptr_t)mbuf + rxq->buf_stride);
+}
+
+static struct rte_mbuf *
+sfc_ef10_essb_mbuf_by_index(const struct sfc_ef10_essb_rxq *rxq,
+			    struct rte_mbuf *mbuf, unsigned int idx)
+{
+	return (struct rte_mbuf *)((uintptr_t)mbuf + idx * rxq->buf_stride);
+}
+
+static struct rte_mbuf *
+sfc_ef10_essb_maybe_next_completed(struct sfc_ef10_essb_rxq *rxq)
+{
+	const struct sfc_ef10_essb_rx_sw_desc *rxd;
+
+	if (rxq->left_in_completed != 0) {
+		rxd = &rxq->sw_ring[rxq->completed & rxq->rxq_ptr_mask];
+		return sfc_ef10_essb_mbuf_by_index(rxq, rxd->first_mbuf,
+				rxq->block_size - rxq->left_in_completed);
+	} else {
+		rxq->completed++;
+		rxd = &rxq->sw_ring[rxq->completed & rxq->rxq_ptr_mask];
+		rxq->left_in_completed = rxq->block_size;
+		return rxd->first_mbuf;
+	}
+}
+
+static void
+sfc_ef10_essb_rx_qrefill(struct sfc_ef10_essb_rxq *rxq)
+{
+	const unsigned int rxq_ptr_mask = rxq->rxq_ptr_mask;
+	unsigned int free_space;
+	unsigned int bulks;
+	void *mbuf_blocks[SFC_EF10_RX_WPTR_ALIGN];
+	unsigned int added = rxq->added;
+
+	free_space = rxq->max_fill_level - (added - rxq->completed);
+
+	if (free_space < rxq->refill_threshold)
+		return;
+
+	bulks = free_space / RTE_DIM(mbuf_blocks);
+	/* refill_threshold guarantees that bulks is positive */
+	SFC_ASSERT(bulks > 0);
+
+	do {
+		unsigned int id;
+		unsigned int i;
+
+		if (unlikely(rte_mempool_get_contig_blocks(rxq->refill_mb_pool,
+				mbuf_blocks, RTE_DIM(mbuf_blocks)) < 0)) {
+			struct rte_eth_dev_data *dev_data =
+				rte_eth_devices[rxq->port_id].data;
+
+			/*
+			 * It is hardly a safe way to increment counter
+			 * from different contexts, but all PMDs do it.
+			 */
+			dev_data->rx_mbuf_alloc_failed += RTE_DIM(mbuf_blocks);
+			/* Return if we have posted nothing yet */
+			if (added == rxq->added)
+				return;
+			/* Push posted */
+			break;
+		}
+
+		for (i = 0, id = added & rxq_ptr_mask;
+		     i < RTE_DIM(mbuf_blocks);
+		     ++i, ++id) {
+			struct rte_mbuf *m = mbuf_blocks[i];
+			struct sfc_ef10_essb_rx_sw_desc *rxd;
+
+			SFC_ASSERT((id & ~rxq_ptr_mask) == 0);
+			rxd = &rxq->sw_ring[id];
+			rxd->first_mbuf = m;
+
+			/* RX_KER_BYTE_CNT is ignored by firmware */
+			EFX_POPULATE_QWORD_2(rxq->rxq_hw_ring[id],
+					     ESF_DZ_RX_KER_BYTE_CNT,
+					     SFC_EF10_ESSB_RX_FAKE_BUF_SIZE,
+					     ESF_DZ_RX_KER_BUF_ADDR,
+					     rte_mbuf_data_iova_default(m));
+		}
+
+		added += RTE_DIM(mbuf_blocks);
+
+	} while (--bulks > 0);
+
+	SFC_ASSERT(rxq->added != added);
+	rxq->added = added;
+	sfc_ef10_rx_qpush(rxq->doorbell, added, rxq_ptr_mask);
+}
+
+static bool
+sfc_ef10_essb_rx_event_get(struct sfc_ef10_essb_rxq *rxq, efx_qword_t *rx_ev)
+{
+	*rx_ev = rxq->evq_hw_ring[rxq->evq_read_ptr & rxq->evq_ptr_mask];
+
+	if (!sfc_ef10_ev_present(*rx_ev))
+		return false;
+
+	if (unlikely(EFX_QWORD_FIELD(*rx_ev, FSF_AZ_EV_CODE) !=
+		     FSE_AZ_EV_CODE_RX_EV)) {
+		/*
+		 * Do not move read_ptr to keep the event for exception
+		 * handling
+		 */
+		rxq->flags |= SFC_EF10_ESSB_RXQ_EXCEPTION;
+		sfc_ef10_essb_rx_err(&rxq->dp.dpq,
+				     "RxQ exception at EvQ read ptr %#x",
+				     rxq->evq_read_ptr);
+		return false;
+	}
+
+	rxq->evq_read_ptr++;
+	return true;
+}
+
+static void
+sfc_ef10_essb_rx_process_ev(struct sfc_ef10_essb_rxq *rxq, efx_qword_t rx_ev)
+{
+	unsigned int ready;
+
+	ready = (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_DSC_PTR_LBITS) -
+		 rxq->bufs_ptr) &
+		EFX_MASK32(ESF_DZ_RX_DSC_PTR_LBITS);
+
+	rxq->bufs_ptr += ready;
+	rxq->bufs_pending += ready;
+
+	SFC_ASSERT(ready > 0);
+	do {
+		const struct sfc_ef10_essb_rx_sw_desc *rxd;
+		struct rte_mbuf *m;
+		unsigned int todo_bufs;
+		struct rte_mbuf *m0;
+
+		rxd = &rxq->sw_ring[rxq->pending_id];
+		m = sfc_ef10_essb_mbuf_by_index(rxq, rxd->first_mbuf,
+			rxq->block_size - rxq->left_in_pending);
+
+		if (ready < rxq->left_in_pending) {
+			todo_bufs = ready;
+			ready = 0;
+			rxq->left_in_pending -= todo_bufs;
+		} else {
+			todo_bufs = rxq->left_in_pending;
+			ready -= todo_bufs;
+			rxq->left_in_pending = rxq->block_size;
+			if (rxq->pending_id != rxq->rxq_ptr_mask)
+				rxq->pending_id++;
+			else
+				rxq->pending_id = 0;
+		}
+
+		SFC_ASSERT(todo_bufs > 0);
+		--todo_bufs;
+
+		sfc_ef10_rx_ev_to_offloads(rx_ev, m, ~0ull);
+
+		/* Prefetch pseudo-header */
+		rte_prefetch0((uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM);
+
+		m0 = m;
+		while (todo_bufs-- > 0) {
+			m = sfc_ef10_essb_next_mbuf(rxq, m);
+			m->ol_flags = m0->ol_flags;
+			m->packet_type = m0->packet_type;
+			/* Prefetch pseudo-header */
+			rte_prefetch0((uint8_t *)m->buf_addr +
+				      RTE_PKTMBUF_HEADROOM);
+		}
+	} while (ready > 0);
+}
+
+static unsigned int
+sfc_ef10_essb_rx_get_pending(struct sfc_ef10_essb_rxq *rxq,
+			     struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	unsigned int n_rx_pkts = 0;
+	unsigned int todo_bufs;
+	struct rte_mbuf *m;
+
+	while ((todo_bufs = RTE_MIN(nb_pkts - n_rx_pkts,
+				    rxq->bufs_pending)) > 0) {
+		m = sfc_ef10_essb_maybe_next_completed(rxq);
+
+		todo_bufs = RTE_MIN(todo_bufs, rxq->left_in_completed);
+
+		rxq->bufs_pending -= todo_bufs;
+		rxq->left_in_completed -= todo_bufs;
+
+		SFC_ASSERT(todo_bufs > 0);
+		todo_bufs--;
+
+		do {
+			const efx_qword_t *qwordp;
+			uint16_t pkt_len;
+
+			rx_pkts[n_rx_pkts++] = m;
+
+			/* Parse pseudo-header */
+			qwordp = (const efx_qword_t *)
+				((uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM);
+			pkt_len =
+				EFX_QWORD_FIELD(*qwordp,
+						ES_EZ_ESSB_RX_PREFIX_DATA_LEN);
+
+			m->data_off = RTE_PKTMBUF_HEADROOM +
+				ES_EZ_ESSB_RX_PREFIX_LEN;
+			m->port = rxq->port_id;
+
+			rte_pktmbuf_pkt_len(m) = pkt_len;
+			rte_pktmbuf_data_len(m) = pkt_len;
+
+			m->ol_flags |=
+				(PKT_RX_RSS_HASH *
+				 !!EFX_TEST_QWORD_BIT(*qwordp,
+					ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN));
+
+			/* EFX_QWORD_FIELD converts little-endian to CPU */
+			m->hash.rss =
+				EFX_QWORD_FIELD(*qwordp,
+						ES_EZ_ESSB_RX_PREFIX_HASH);
+
+			m = sfc_ef10_essb_next_mbuf(rxq, m);
+		} while (todo_bufs-- > 0);
+	}
+
+	return n_rx_pkts;
+}
+
+
+static uint16_t
+sfc_ef10_essb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct sfc_ef10_essb_rxq *rxq = sfc_ef10_essb_rxq_by_dp_rxq(rx_queue);
+	const unsigned int evq_old_read_ptr = rxq->evq_read_ptr;
+	uint16_t n_rx_pkts;
+	efx_qword_t rx_ev;
+
+	if (unlikely(rxq->flags & (SFC_EF10_ESSB_RXQ_NOT_RUNNING |
+				   SFC_EF10_ESSB_RXQ_EXCEPTION)))
+		return 0;
+
+	n_rx_pkts = sfc_ef10_essb_rx_get_pending(rxq, rx_pkts, nb_pkts);
+
+	while (n_rx_pkts != nb_pkts &&
+	       sfc_ef10_essb_rx_event_get(rxq, &rx_ev)) {
+		/*
+		 * DROP_EVENT is internal to the NIC; software should
+		 * never see it and, therefore, may ignore it.
+		 */
+
+		sfc_ef10_essb_rx_process_ev(rxq, rx_ev);
+		n_rx_pkts += sfc_ef10_essb_rx_get_pending(rxq,
+							  rx_pkts + n_rx_pkts,
+							  nb_pkts - n_rx_pkts);
+	}
+
+	sfc_ef10_ev_qclear(rxq->evq_hw_ring, rxq->evq_ptr_mask,
+			   evq_old_read_ptr, rxq->evq_read_ptr);
+
+	/* It is not a problem if we refill in the case of exception */
+	sfc_ef10_essb_rx_qrefill(rxq);
+
+	return n_rx_pkts;
+}
+
+static sfc_dp_rx_qdesc_npending_t sfc_ef10_essb_rx_qdesc_npending;
+static unsigned int
+sfc_ef10_essb_rx_qdesc_npending(__rte_unused struct sfc_dp_rxq *dp_rxq)
+{
+	/*
+	 * Correct implementation requires EvQ polling and events
+	 * processing.
+	 */
+	return -ENOTSUP;
+}
+
+static sfc_dp_rx_get_dev_info_t sfc_ef10_essb_rx_get_dev_info;
+static void
+sfc_ef10_essb_rx_get_dev_info(struct rte_eth_dev_info *dev_info)
+{
+	/*
+	 * Number of descriptors just defines maximum number of pushed
+	 * descriptors (fill level).
+	 */
+	dev_info->rx_desc_lim.nb_min = SFC_RX_REFILL_BULK;
+	dev_info->rx_desc_lim.nb_align = SFC_RX_REFILL_BULK;
+}
+
+static sfc_dp_rx_qsize_up_rings_t sfc_ef10_essb_rx_qsize_up_rings;
+static int
+sfc_ef10_essb_rx_qsize_up_rings(uint16_t nb_rx_desc,
+				struct rte_mempool *mb_pool,
+				unsigned int *rxq_entries,
+				unsigned int *evq_entries,
+				unsigned int *rxq_max_fill_level)
+{
+	int rc;
+	struct rte_mempool_info mp_info;
+	unsigned int nb_hw_rx_desc;
+	unsigned int max_events;
+
+	rc = rte_mempool_ops_get_info(mb_pool, &mp_info);
+	if (rc != 0)
+		return -rc;
+	if (mp_info.contig_block_size == 0)
+		return EINVAL;
+
+	/*
+	 * Calculate required number of hardware Rx descriptors each
+	 * carrying contig block size Rx buffers.
+	 * It cannot be less than Rx write pointer alignment plus 1
+	 * in order to avoid cases when the ring is guaranteed to be
+	 * empty.
+	 */
+	nb_hw_rx_desc = RTE_MAX(SFC_DIV_ROUND_UP(nb_rx_desc,
+						 mp_info.contig_block_size),
+				SFC_EF10_RX_WPTR_ALIGN + 1);
+	if (nb_hw_rx_desc <= EFX_RXQ_MINNDESCS) {
+		*rxq_entries = EFX_RXQ_MINNDESCS;
+	} else {
+		*rxq_entries = rte_align32pow2(nb_hw_rx_desc);
+		if (*rxq_entries > EFX_RXQ_MAXNDESCS)
+			return EINVAL;
+	}
+
+	max_events = RTE_ALIGN_FLOOR(nb_hw_rx_desc, SFC_EF10_RX_WPTR_ALIGN) *
+		mp_info.contig_block_size +
+		(SFC_EF10_EV_PER_CACHE_LINE - 1) /* max unused EvQ entries */ +
+		1 /* Rx error */ + 1 /* flush */ + 1 /* head-tail space */;
+
+	*evq_entries = rte_align32pow2(max_events);
+	*evq_entries = RTE_MAX(*evq_entries, (unsigned int)EFX_EVQ_MINNEVS);
+	*evq_entries = RTE_MIN(*evq_entries, (unsigned int)EFX_EVQ_MAXNEVS);
+
+	/*
+	 * Even the maximum event queue size may be insufficient to handle
+	 * so many Rx descriptors. If so, we should limit Rx queue fill level.
+	 */
+	*rxq_max_fill_level = RTE_MIN(nb_rx_desc,
+				      SFC_EF10_ESSB_RXQ_LIMIT(*evq_entries));
+	return 0;
+}
+
+static sfc_dp_rx_qcreate_t sfc_ef10_essb_rx_qcreate;
+static int
+sfc_ef10_essb_rx_qcreate(uint16_t port_id, uint16_t queue_id,
+			 const struct rte_pci_addr *pci_addr, int socket_id,
+			 const struct sfc_dp_rx_qcreate_info *info,
+			 struct sfc_dp_rxq **dp_rxqp)
+{
+	struct rte_mempool * const mp = info->refill_mb_pool;
+	struct rte_mempool_info mp_info;
+	struct sfc_ef10_essb_rxq *rxq;
+	int rc;
+
+	rc = rte_mempool_ops_get_info(mp, &mp_info);
+	if (rc != 0) {
+		/* Positive errno is used in the driver */
+		rc = -rc;
+		goto fail_get_contig_block_size;
+	}
+
+	/* Check if the mempool provides block dequeue */
+	rc = EINVAL;
+	if (mp_info.contig_block_size == 0)
+		goto fail_no_block_dequeue;
+
+	rc = ENOMEM;
+	rxq = rte_zmalloc_socket("sfc-ef10-rxq", sizeof(*rxq),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL)
+		goto fail_rxq_alloc;
+
+	sfc_dp_queue_init(&rxq->dp.dpq, port_id, queue_id, pci_addr);
+
+	rc = ENOMEM;
+	rxq->sw_ring = rte_calloc_socket("sfc-ef10-rxq-sw_ring",
+					 info->rxq_entries,
+					 sizeof(*rxq->sw_ring),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq->sw_ring == NULL)
+		goto fail_desc_alloc;
+
+	rxq->block_size = mp_info.contig_block_size;
+	rxq->buf_stride = mp->header_size + mp->elt_size + mp->trailer_size;
+	rxq->rxq_ptr_mask = info->rxq_entries - 1;
+	rxq->evq_ptr_mask = info->evq_entries - 1;
+	rxq->evq_hw_ring = info->evq_hw_ring;
+	rxq->port_id = port_id;
+
+	rxq->max_fill_level = info->max_fill_level / mp_info.contig_block_size;
+	rxq->refill_threshold =
+		RTE_MAX(info->refill_threshold / mp_info.contig_block_size,
+			SFC_EF10_RX_WPTR_ALIGN);
+	rxq->refill_mb_pool = mp;
+	rxq->rxq_hw_ring = info->rxq_hw_ring;
+
+	rxq->doorbell = (volatile uint8_t *)info->mem_bar +
+			ER_DZ_RX_DESC_UPD_REG_OFST +
+			(info->hw_index << info->vi_window_shift);
+
+	sfc_ef10_essb_rx_info(&rxq->dp.dpq,
+			      "block size is %u, buf stride is %u",
+			      rxq->block_size, rxq->buf_stride);
+	sfc_ef10_essb_rx_info(&rxq->dp.dpq,
+			      "max fill level is %u descs (%u bufs), "
+			      "refill threshold %u descs (%u bufs)",
+			      rxq->max_fill_level,
+			      rxq->max_fill_level * rxq->block_size,
+			      rxq->refill_threshold,
+			      rxq->refill_threshold * rxq->block_size);
+
+	*dp_rxqp = &rxq->dp;
+	return 0;
+
+fail_desc_alloc:
+	rte_free(rxq);
+
+fail_rxq_alloc:
+fail_no_block_dequeue:
+fail_get_contig_block_size:
+	return rc;
+}
+
+static sfc_dp_rx_qdestroy_t sfc_ef10_essb_rx_qdestroy;
+static void
+sfc_ef10_essb_rx_qdestroy(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef10_essb_rxq *rxq = sfc_ef10_essb_rxq_by_dp_rxq(dp_rxq);
+
+	rte_free(rxq->sw_ring);
+	rte_free(rxq);
+}
+
+static sfc_dp_rx_qstart_t sfc_ef10_essb_rx_qstart;
+static int
+sfc_ef10_essb_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr)
+{
+	struct sfc_ef10_essb_rxq *rxq = sfc_ef10_essb_rxq_by_dp_rxq(dp_rxq);
+
+	rxq->evq_read_ptr = evq_read_ptr;
+
+	/* Initialize before refill */
+	rxq->completed = rxq->pending_id = rxq->added = 0;
+	rxq->left_in_completed = rxq->left_in_pending = rxq->block_size;
+	rxq->bufs_ptr = UINT_MAX;
+	rxq->bufs_pending = 0;
+
+	sfc_ef10_essb_rx_qrefill(rxq);
+
+	rxq->flags |= SFC_EF10_ESSB_RXQ_STARTED;
+	rxq->flags &=
+		~(SFC_EF10_ESSB_RXQ_NOT_RUNNING | SFC_EF10_ESSB_RXQ_EXCEPTION);
+
+	return 0;
+}
+
+static sfc_dp_rx_qstop_t sfc_ef10_essb_rx_qstop;
+static void
+sfc_ef10_essb_rx_qstop(struct sfc_dp_rxq *dp_rxq, unsigned int *evq_read_ptr)
+{
+	struct sfc_ef10_essb_rxq *rxq = sfc_ef10_essb_rxq_by_dp_rxq(dp_rxq);
+
+	rxq->flags |= SFC_EF10_ESSB_RXQ_NOT_RUNNING;
+
+	*evq_read_ptr = rxq->evq_read_ptr;
+}
+
+static sfc_dp_rx_qrx_ev_t sfc_ef10_essb_rx_qrx_ev;
+static bool
+sfc_ef10_essb_rx_qrx_ev(struct sfc_dp_rxq *dp_rxq, __rte_unused unsigned int id)
+{
+	__rte_unused struct sfc_ef10_essb_rxq *rxq;
+
+	rxq = sfc_ef10_essb_rxq_by_dp_rxq(dp_rxq);
+	SFC_ASSERT(rxq->flags & SFC_EF10_ESSB_RXQ_NOT_RUNNING);
+
+	/*
+	 * It is safe to ignore Rx event since we free all mbufs on
+	 * queue purge anyway.
+	 */
+
+	return false;
+}
+
+static sfc_dp_rx_qpurge_t sfc_ef10_essb_rx_qpurge;
+static void
+sfc_ef10_essb_rx_qpurge(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef10_essb_rxq *rxq = sfc_ef10_essb_rxq_by_dp_rxq(dp_rxq);
+	unsigned int i, j;
+	const struct sfc_ef10_essb_rx_sw_desc *rxd;
+	struct rte_mbuf *m;
+
+	if (rxq->completed != rxq->added && rxq->left_in_completed > 0) {
+		rxd = &rxq->sw_ring[rxq->completed & rxq->rxq_ptr_mask];
+		m = sfc_ef10_essb_mbuf_by_index(rxq, rxd->first_mbuf,
+				rxq->block_size - rxq->left_in_completed);
+		do {
+			rxq->left_in_completed--;
+			rte_mempool_put(rxq->refill_mb_pool, m);
+			m = sfc_ef10_essb_next_mbuf(rxq, m);
+		} while (rxq->left_in_completed > 0);
+		rxq->completed++;
+	}
+
+	for (i = rxq->completed; i != rxq->added; ++i) {
+		rxd = &rxq->sw_ring[i & rxq->rxq_ptr_mask];
+		m = rxd->first_mbuf;
+		for (j = 0; j < rxq->block_size; ++j) {
+			rte_mempool_put(rxq->refill_mb_pool, m);
+			m = sfc_ef10_essb_next_mbuf(rxq, m);
+		}
+	}
+
+	rxq->flags &= ~SFC_EF10_ESSB_RXQ_STARTED;
+}
+
+struct sfc_dp_rx sfc_ef10_essb_rx = {
+	.dp = {
+		.name		= SFC_KVARG_DATAPATH_EF10_ESSB,
+		.type		= SFC_DP_RX,
+		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10 |
+				  SFC_DP_HW_FW_CAP_RX_ES_SUPER_BUFFER,
+	},
+	.features		= 0,
+	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
+	.qsize_up_rings		= sfc_ef10_essb_rx_qsize_up_rings,
+	.qcreate		= sfc_ef10_essb_rx_qcreate,
+	.qdestroy		= sfc_ef10_essb_rx_qdestroy,
+	.qstart			= sfc_ef10_essb_rx_qstart,
+	.qstop			= sfc_ef10_essb_rx_qstop,
+	.qrx_ev			= sfc_ef10_essb_rx_qrx_ev,
+	.qpurge			= sfc_ef10_essb_rx_qpurge,
+	.supported_ptypes_get	= sfc_ef10_supported_ptypes_get,
+	.qdesc_npending		= sfc_ef10_essb_rx_qdesc_npending,
+	.pkt_burst		= sfc_ef10_essb_recv_pkts,
+};
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 1f0d6a0..42b35b9 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -386,7 +386,7 @@ sfc_ef10_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	return n_rx_pkts;
 }
 
-static const uint32_t *
+const uint32_t *
 sfc_ef10_supported_ptypes_get(uint32_t tunnel_encaps)
 {
 	static const uint32_t ef10_native_ptypes[] = {
diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h
index 9054fb9..615bd29 100644
--- a/drivers/net/sfc/sfc_ef10_rx_ev.h
+++ b/drivers/net/sfc/sfc_ef10_rx_ev.h
@@ -34,7 +34,10 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	uint32_t l4_ptype = 0;
 	uint64_t ol_flags = 0;
 
-	if (unlikely(EFX_TEST_QWORD_BIT(rx_ev, ESF_DZ_RX_PARSE_INCOMPLETE_LBN)))
+	if (unlikely(rx_ev.eq_u64[0] &
+		rte_cpu_to_le_64((1ull << ESF_DZ_RX_ECC_ERR_LBN) |
+				 (1ull << ESF_DZ_RX_ECRC_ERR_LBN) |
+				 (1ull << ESF_DZ_RX_PARSE_INCOMPLETE_LBN))))
 		goto done;
 
 #if SFC_EF10_RX_EV_ENCAP_SUPPORT
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 35a8301..700e154 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1707,6 +1707,7 @@ static int
 sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 {
 	struct sfc_adapter *sa = dev->data->dev_private;
+	const efx_nic_cfg_t *encp;
 	unsigned int avail_caps = 0;
 	const char *rx_name = NULL;
 	const char *tx_name = NULL;
@@ -1722,6 +1723,10 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 		break;
 	}
 
+	encp = efx_nic_cfg_get(sa->nic);
+	if (encp->enc_rx_es_super_buffer_supported)
+		avail_caps |= SFC_DP_HW_FW_CAP_RX_ES_SUPER_BUFFER;
+
 	rc = sfc_kvargs_process(sa, SFC_KVARG_RX_DATAPATH,
 				sfc_kvarg_string_handler, &rx_name);
 	if (rc != 0)
@@ -1911,6 +1916,7 @@ sfc_register_dp(void)
 	/* Register once */
 	if (TAILQ_EMPTY(&sfc_dp_head)) {
 		/* Prefer EF10 datapath */
+		sfc_dp_register(&sfc_dp_head, &sfc_ef10_essb_rx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_ef10_rx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_efx_rx.dp);
 
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 8a5030b..f93d30e 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -163,6 +163,35 @@ sfc_ev_dp_rx(void *arg, __rte_unused uint32_t label, uint32_t id,
 }
 
 static boolean_t
+sfc_ev_nop_rx_ps(void *arg, uint32_t label, uint32_t id,
+		 uint32_t pkt_count, uint16_t flags)
+{
+	struct sfc_evq *evq = arg;
+
+	sfc_err(evq->sa,
+		"EVQ %u unexpected packed stream Rx event label=%u id=%#x pkt_count=%u flags=%#x",
+		evq->evq_index, label, id, pkt_count, flags);
+	return B_TRUE;
+}
+
+/* It is not actually used on datapath, but required on RxQ flush */
+static boolean_t
+sfc_ev_dp_rx_ps(void *arg, __rte_unused uint32_t label, uint32_t id,
+		__rte_unused uint32_t pkt_count, __rte_unused uint16_t flags)
+{
+	struct sfc_evq *evq = arg;
+	struct sfc_dp_rxq *dp_rxq;
+
+	dp_rxq = evq->dp_rxq;
+	SFC_ASSERT(dp_rxq != NULL);
+
+	if (evq->sa->dp_rx->qrx_ps_ev != NULL)
+		return evq->sa->dp_rx->qrx_ps_ev(dp_rxq, id);
+	else
+		return B_FALSE;
+}
+
+static boolean_t
 sfc_ev_nop_tx(void *arg, uint32_t label, uint32_t id)
 {
 	struct sfc_evq *evq = arg;
@@ -394,6 +423,7 @@ sfc_ev_link_change(void *arg, efx_link_mode_t link_mode)
 static const efx_ev_callbacks_t sfc_ev_callbacks = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_nop_rx,
+	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_nop_rxq_flush_done,
@@ -409,6 +439,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_efx_rx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_efx_rx,
+	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_rxq_flush_done,
@@ -424,6 +455,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_efx_rx = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_dp_rx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_dp_rx,
+	.eec_rx_ps		= sfc_ev_dp_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_rxq_flush_done,
@@ -439,6 +471,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_dp_rx = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_efx_tx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_nop_rx,
+	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_tx,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_nop_rxq_flush_done,
@@ -454,6 +487,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_efx_tx = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_dp_tx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_nop_rx,
+	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_dp_tx,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_nop_rxq_flush_done,
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index 1e578e7..057002e 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -33,11 +33,13 @@ extern "C" {
 #define SFC_KVARG_DATAPATH_EFX		"efx"
 #define SFC_KVARG_DATAPATH_EF10		"ef10"
 #define SFC_KVARG_DATAPATH_EF10_SIMPLE	"ef10_simple"
+#define SFC_KVARG_DATAPATH_EF10_ESSB	"ef10_essb"
 
 #define SFC_KVARG_RX_DATAPATH		"rx_datapath"
 #define SFC_KVARG_VALUES_RX_DATAPATH \
 	"[" SFC_KVARG_DATAPATH_EFX "|" \
-	    SFC_KVARG_DATAPATH_EF10 "]"
+	    SFC_KVARG_DATAPATH_EF10 "|" \
+	    SFC_KVARG_DATAPATH_EF10_ESSB "]"
 
 #define SFC_KVARG_TX_DATAPATH		"tx_datapath"
 #define SFC_KVARG_VALUES_TX_DATAPATH \
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 7345074..653724f 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -680,10 +680,37 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	if (rc != 0)
 		goto fail_ev_qstart;
 
-	rc = efx_rx_qcreate(sa->nic, rxq->hw_index, 0, rxq_info->type,
-			    &rxq->mem, rxq_info->entries,
-			    0 /* not used on EF10 */, rxq_info->type_flags,
-			    evq->common, &rxq->common);
+	switch (rxq_info->type) {
+	case EFX_RXQ_TYPE_DEFAULT:
+		rc = efx_rx_qcreate(sa->nic, rxq->hw_index, 0, rxq_info->type,
+			&rxq->mem, rxq_info->entries, 0 /* not used on EF10 */,
+			rxq_info->type_flags, evq->common, &rxq->common);
+		break;
+	case EFX_RXQ_TYPE_ES_SUPER_BUFFER: {
+		struct rte_mempool *mp = rxq->refill_mb_pool;
+		struct rte_mempool_info mp_info;
+
+		rc = rte_mempool_ops_get_info(mp, &mp_info);
+		if (rc != 0) {
+			/* Positive errno is used in the driver */
+			rc = -rc;
+			goto fail_mp_get_info;
+		}
+		if (mp_info.contig_block_size <= 0) {
+			rc = EINVAL;
+			goto fail_bad_contig_block_size;
+		}
+		rc = efx_rx_qcreate_es_super_buffer(sa->nic, rxq->hw_index, 0,
+			mp_info.contig_block_size, rxq->buf_size,
+			mp->header_size + mp->elt_size + mp->trailer_size,
+			0 /* hol_block_timeout */,
+			&rxq->mem, rxq_info->entries, rxq_info->type_flags,
+			evq->common, &rxq->common);
+		break;
+	}
+	default:
+		rc = ENOTSUP;
+	}
 	if (rc != 0)
 		goto fail_rx_qcreate;
 
@@ -714,6 +741,8 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	sfc_rx_qflush(sa, sw_index);
 
 fail_rx_qcreate:
+fail_bad_contig_block_size:
+fail_mp_get_info:
 	sfc_ev_qstop(evq);
 
 fail_ev_qstart:
@@ -1020,7 +1049,12 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 
 	SFC_ASSERT(rxq_entries <= rxq_info->max_entries);
 	rxq_info->entries = rxq_entries;
-	rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
+
+	if (sa->dp_rx->dp.hw_fw_caps & SFC_DP_HW_FW_CAP_RX_ES_SUPER_BUFFER)
+		rxq_info->type = EFX_RXQ_TYPE_ES_SUPER_BUFFER;
+	else
+		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
+
 	rxq_info->type_flags =
 		(rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
@@ -1047,6 +1081,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	rxq->refill_threshold =
 		RTE_MAX(rx_conf->rx_free_thresh, SFC_RX_REFILL_BULK);
 	rxq->refill_mb_pool = mb_pool;
+	rxq->buf_size = buf_size;
 
 	rc = sfc_dma_alloc(sa, "rxq", sw_index, EFX_RXQ_SIZE(rxq_info->entries),
 			   socket_id, &rxq->mem);
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index d9e7b0b..3fba7d8 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -60,6 +60,7 @@ struct sfc_rxq {
 	unsigned int		hw_index;
 	unsigned int		refill_threshold;
 	struct rte_mempool	*refill_mb_pool;
+	uint16_t		buf_size;
 	struct sfc_dp_rxq	*dp;
 	unsigned int		state;
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 13/23] net/sfc: support callback to check if mempool is supported
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (11 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 12/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 14/23] net/sfc: check mempool when equal stride super-buffer used Andrew Rybchenko
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

The callback is a dummy for now since no Rx datapath provides its own
callback, so all pools are reported as supported.
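
For illustration only (not part of this patch; the function name and the
accepted ops names below are made up, and <string.h>/<errno.h> are assumed),
a datapath-specific callback matching the typedef added below could look
like this minimal sketch:

	/* Hypothetical callback: prefer the default ring-based ops */
	static int
	sfc_example_rx_pool_ops_supported(const char *pool)
	{
		if (strcmp(pool, "ring_mp_mc") == 0)
			return 0;	/* best mempool ops choice */
		if (strcmp(pool, "ring_sp_sc") == 0)
			return 1;	/* mempool ops are supported */
		return -ENOTSUP;	/* mempool ops not supported */
	}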

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_dp_rx.h  | 13 +++++++++++++
 drivers/net/sfc/sfc_ethdev.c | 16 ++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index db075dd..cb745e6 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -90,6 +90,18 @@ struct sfc_dp_rx_qcreate_info {
 typedef void (sfc_dp_rx_get_dev_info_t)(struct rte_eth_dev_info *dev_info);
 
 /**
+ * Test if an Rx datapath supports specific mempool ops.
+ *
+ * @param pool			The name of the pool operations to test.
+ *
+ * @return Check status.
+ * @retval	0		Best mempool ops choice.
+ * @retval	1		Mempool ops are supported.
+ * @retval	-ENOTSUP	Mempool ops not supported.
+ */
+typedef int (sfc_dp_rx_pool_ops_supported_t)(const char *pool);
+
+/**
  * Get size of receive and event queue rings by the number of Rx
  * descriptors and mempool configuration.
  *
@@ -182,6 +194,7 @@ struct sfc_dp_rx {
 #define SFC_DP_RX_FEAT_MULTI_PROCESS		0x2
 #define SFC_DP_RX_FEAT_TUNNELS			0x4
 	sfc_dp_rx_get_dev_info_t		*get_dev_info;
+	sfc_dp_rx_pool_ops_supported_t		*pool_ops_supported;
 	sfc_dp_rx_qsize_up_rings_t		*qsize_up_rings;
 	sfc_dp_rx_qcreate_t			*qcreate;
 	sfc_dp_rx_qdestroy_t			*qdestroy;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 700e154..c3f37bc 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1630,6 +1630,21 @@ sfc_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type filter_type,
 	return -rc;
 }
 
+static int
+sfc_pool_ops_supported(struct rte_eth_dev *dev, const char *pool)
+{
+	struct sfc_adapter *sa = dev->data->dev_private;
+
+	/*
+	 * If Rx datapath does not provide callback to check mempool,
+	 * all pools are supported.
+	 */
+	if (sa->dp_rx->pool_ops_supported == NULL)
+		return 1;
+
+	return sa->dp_rx->pool_ops_supported(pool);
+}
+
 static const struct eth_dev_ops sfc_eth_dev_ops = {
 	.dev_configure			= sfc_dev_configure,
 	.dev_start			= sfc_dev_start,
@@ -1678,6 +1693,7 @@ static const struct eth_dev_ops sfc_eth_dev_ops = {
 	.fw_version_get			= sfc_fw_version_get,
 	.xstats_get_by_id		= sfc_xstats_get_by_id,
 	.xstats_get_names_by_id		= sfc_xstats_get_names_by_id,
+	.pool_ops_supported		= sfc_pool_ops_supported,
 };
 
 /**
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 14/23] net/sfc: check mempool when equal stride super-buffer used
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (12 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 13/23] net/sfc: support callback to check if mempool is supported Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 15/23] net/sfc: support DPDK firmware variant Andrew Rybchenko
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

Equal stride super-buffer Rx mode requires a mempool with a contiguous
object block allocation mechanism. The bucket mempool is the only one
which provides it.
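
For example (a minimal sketch; the pool name and sizes are arbitrary, and it
is assumed that rte_pktmbuf_pool_create_by_ops() and the bucket mempool
driver are available in the build), an application could create a suitable
mbuf pool as follows:

	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create_by_ops("rx_pool", 8192, 256, 0,
					    RTE_MBUF_DEFAULT_BUF_SIZE,
					    rte_socket_id(), "bucket");
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "cannot create bucket mbuf pool\n");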

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_ef10_essb_rx.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 1df61ff..8dd4396 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -391,6 +391,18 @@ sfc_ef10_essb_rx_get_dev_info(struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim.nb_align = SFC_RX_REFILL_BULK;
 }
 
+static sfc_dp_rx_pool_ops_supported_t sfc_ef10_essb_rx_pool_ops_supported;
+static int
+sfc_ef10_essb_rx_pool_ops_supported(const char *pool)
+{
+	SFC_ASSERT(pool != NULL);
+
+	if (strcmp(pool, "bucket") == 0)
+		return 0;
+
+	return -ENOTSUP;
+}
+
 static sfc_dp_rx_qsize_up_rings_t sfc_ef10_essb_rx_qsize_up_rings;
 static int
 sfc_ef10_essb_rx_qsize_up_rings(uint16_t nb_rx_desc,
@@ -630,6 +642,7 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 	},
 	.features		= 0,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
+	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
 	.qsize_up_rings		= sfc_ef10_essb_rx_qsize_up_rings,
 	.qcreate		= sfc_ef10_essb_rx_qcreate,
 	.qdestroy		= sfc_ef10_essb_rx_qdestroy,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 15/23] net/sfc: support DPDK firmware variant
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (13 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 14/23] net/sfc: check mempool when equal stride super-buffer used Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:36 ` [PATCH 16/23] net/sfc: add Rx descriptor wait timeout Andrew Rybchenko
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

The DPDK firmware variant supports equal stride super-buffer Rx mode which
provides a higher packet rate and packet marks but requires a dedicated
mempool manager with contiguous object block allocation (e.g. bucket).

The firmware also supports a subvariant without checksumming on Tx which
makes it possible to reach higher packet rates on transmit if checksumming
is not required.
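
For example (the PCI address below is illustrative only), the firmware
variant and the matching Rx datapath may be requested via device arguments:

	-w 02:00.0,fw_variant=dpdk,rx_datapath=ef10_essb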

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 doc/guides/nics/sfc_efx.rst  | 6 +++++-
 drivers/net/sfc/sfc.c        | 4 ++++
 drivers/net/sfc/sfc_kvargs.h | 4 +++-
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index bbc6e61..19b1087 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -354,7 +354,7 @@ boolean parameters value.
   value will select a fixed update period of **1000** milliseconds
 
 - ``fw_variant`` [dont-care|full-feature|ultra-low-latency|
-  capture-packed-stream] (default **dont-care**)
+  capture-packed-stream|dpdk] (default **dont-care**)
 
   Choose the preferred firmware variant to use. In order for the selected
   option to have an effect, the **sfboot** utility must be configured with the
@@ -367,6 +367,10 @@ boolean parameters value.
   **ultra-low-latency** chooses firmware with fewer features but lower latency.
   **capture-packed-stream** chooses firmware for SolarCapture packed stream
   mode.
+  **dpdk** chooses DPDK firmware with equal stride super-buffer Rx mode
+  for a higher Rx packet rate and packet mark support, plus a firmware
+  subvariant without checksumming on transmit for a higher Tx packet rate
+  when checksumming is not required.
 
 
 Dynamic Logging Parameters
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 37248bc..5458f39 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -829,6 +829,8 @@ sfc_kvarg_fv_variant_handler(__rte_unused const char *key,
 		*value = EFX_FW_VARIANT_LOW_LATENCY;
 	else if (strcasecmp(value_str, SFC_KVARG_FW_VARIANT_PACKED_STREAM) == 0)
 		*value = EFX_FW_VARIANT_PACKED_STREAM;
+	else if (strcasecmp(value_str, SFC_KVARG_FW_VARIANT_DPDK) == 0)
+		*value = EFX_FW_VARIANT_DPDK;
 	else
 		return -EINVAL;
 
@@ -886,6 +888,8 @@ sfc_fw_variant2str(efx_fw_variant_t efv)
 		return SFC_KVARG_FW_VARIANT_LOW_LATENCY;
 	case EFX_RXDP_PACKED_STREAM_FW_ID:
 		return SFC_KVARG_FW_VARIANT_PACKED_STREAM;
+	case EFX_RXDP_DPDK_FW_ID:
+		return SFC_KVARG_FW_VARIANT_DPDK;
 	default:
 		return "unknown";
 	}
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index 057002e..9f21cfd 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -53,11 +53,13 @@ extern "C" {
 #define SFC_KVARG_FW_VARIANT_FULL_FEATURED	"full-feature"
 #define SFC_KVARG_FW_VARIANT_LOW_LATENCY	"ultra-low-latency"
 #define SFC_KVARG_FW_VARIANT_PACKED_STREAM	"capture-packed-stream"
+#define SFC_KVARG_FW_VARIANT_DPDK		"dpdk"
 #define SFC_KVARG_VALUES_FW_VARIANT \
 	"[" SFC_KVARG_FW_VARIANT_DONT_CARE "|" \
 	    SFC_KVARG_FW_VARIANT_FULL_FEATURED "|" \
 	    SFC_KVARG_FW_VARIANT_LOW_LATENCY "|" \
-	    SFC_KVARG_FW_VARIANT_PACKED_STREAM "]"
+	    SFC_KVARG_FW_VARIANT_PACKED_STREAM "|" \
+	    SFC_KVARG_FW_VARIANT_DPDK "]"
 
 struct sfc_adapter;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 16/23] net/sfc: add Rx descriptor wait timeout
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (14 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 15/23] net/sfc: support DPDK firmware variant Andrew Rybchenko
@ 2018-04-19 11:36 ` Andrew Rybchenko
  2018-04-19 11:37 ` [PATCH 17/23] net/sfc: support flow marks in equal stride super-buffer Rx Andrew Rybchenko
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:36 UTC (permalink / raw)
  To: dev

Add a device argument to customize the Rx descriptor wait timeout, which
is supported by the DPDK firmware variant in equal stride super-buffer
Rx mode only.
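
For example (illustrative PCI address and a 100 us value), the timeout may
be set together with the DPDK firmware variant and the ef10_essb Rx datapath:

	-w 02:00.0,fw_variant=dpdk,rx_datapath=ef10_essb,rxd_wait_timeout_ns=100000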

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 doc/guides/nics/sfc_efx.rst  | 12 ++++++++++++
 drivers/net/sfc/sfc.c        | 31 +++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc.h        |  2 ++
 drivers/net/sfc/sfc_ethdev.c |  1 +
 drivers/net/sfc/sfc_kvargs.c |  1 +
 drivers/net/sfc/sfc_kvargs.h |  2 ++
 drivers/net/sfc/sfc_rx.c     |  2 +-
 drivers/net/sfc/sfc_tweak.h  |  8 ++++++++
 8 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 19b1087..c9354e3 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -372,6 +372,18 @@ boolean parameters value.
   without checksumming on transmit for higher Tx packet rate if
   checksumming is not required.
 
+- ``rxd_wait_timeout_ns`` [long] (default **200 us**)
+
+  Adjust the head-of-line block timeout in nanoseconds to wait for
+  Rx descriptors.
+  The accepted range is 0 to 400 ms.
+  Flow control should be enabled to make it work.
+  The value of **0** disables it and packets are dropped immediately.
+  When a packet is dropped because of no Rx descriptors,
+  ``rx_nodesc_drop_cnt`` counter grows.
+  The feature is supported only by the DPDK firmware variant when equal
+  stride super-buffer Rx mode is used.
+
 
 Dynamic Logging Parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 5458f39..d1abf62 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -21,6 +21,7 @@
 #include "sfc_rx.h"
 #include "sfc_tx.h"
 #include "sfc_kvargs.h"
+#include "sfc_tweak.h"
 
 
 int
@@ -896,6 +897,32 @@ sfc_fw_variant2str(efx_fw_variant_t efv)
 }
 
 static int
+sfc_kvarg_rxd_wait_timeout_ns(struct sfc_adapter *sa)
+{
+	int rc;
+	long value;
+
+	value = SFC_RXD_WAIT_TIMEOUT_NS_DEF;
+
+	rc = sfc_kvargs_process(sa, SFC_KVARG_RXD_WAIT_TIMEOUT_NS,
+				sfc_kvarg_long_handler, &value);
+	if (rc != 0)
+		return rc;
+
+	if (value < 0 ||
+	    (unsigned long)value > EFX_RXQ_ES_SUPER_BUFFER_HOL_BLOCK_MAX) {
+		sfc_err(sa, "wrong '" SFC_KVARG_RXD_WAIT_TIMEOUT_NS "' "
+			    "was set (%ld);", value);
+		sfc_err(sa, "it must not be less than 0 or greater than %u",
+			    EFX_RXQ_ES_SUPER_BUFFER_HOL_BLOCK_MAX);
+		return EINVAL;
+	}
+
+	sa->rxd_wait_timeout_ns = value;
+	return 0;
+}
+
+static int
 sfc_nic_probe(struct sfc_adapter *sa)
 {
 	efx_nic_t *enp = sa->nic;
@@ -912,6 +939,10 @@ sfc_nic_probe(struct sfc_adapter *sa)
 		return rc;
 	}
 
+	rc = sfc_kvarg_rxd_wait_timeout_ns(sa);
+	if (rc != 0)
+		return rc;
+
 	rc = efx_nic_probe(enp, preferred_efv);
 	if (rc == EACCES) {
 		/* Unprivileged functions cannot set FW variant */
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 3a5f6dc..51be440 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -238,6 +238,8 @@ struct sfc_adapter {
 
 	boolean_t			tso;
 
+	uint32_t			rxd_wait_timeout_ns;
+
 	struct sfc_rss			rss;
 
 	/*
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c3f37bc..e42d553 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2109,6 +2109,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_sfc_efx,
 	SFC_KVARG_TX_DATAPATH "=" SFC_KVARG_VALUES_TX_DATAPATH " "
 	SFC_KVARG_PERF_PROFILE "=" SFC_KVARG_VALUES_PERF_PROFILE " "
 	SFC_KVARG_FW_VARIANT "=" SFC_KVARG_VALUES_FW_VARIANT " "
+	SFC_KVARG_RXD_WAIT_TIMEOUT_NS "=<long> "
 	SFC_KVARG_STATS_UPDATE_PERIOD_MS "=<long>");
 
 RTE_INIT(sfc_driver_register_logtype);
diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index 53fa939..7a89c76 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -27,6 +27,7 @@ sfc_kvargs_parse(struct sfc_adapter *sa)
 		SFC_KVARG_RX_DATAPATH,
 		SFC_KVARG_TX_DATAPATH,
 		SFC_KVARG_FW_VARIANT,
+		SFC_KVARG_RXD_WAIT_TIMEOUT_NS,
 		NULL,
 	};
 
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index 9f21cfd..4506667 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -61,6 +61,8 @@ extern "C" {
 	    SFC_KVARG_FW_VARIANT_PACKED_STREAM "|" \
 	    SFC_KVARG_FW_VARIANT_DPDK "]"
 
+#define SFC_KVARG_RXD_WAIT_TIMEOUT_NS	"rxd_wait_timeout_ns"
+
 struct sfc_adapter;
 
 int sfc_kvargs_parse(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 653724f..57ed34f 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -703,7 +703,7 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 		rc = efx_rx_qcreate_es_super_buffer(sa->nic, rxq->hw_index, 0,
 			mp_info.contig_block_size, rxq->buf_size,
 			mp->header_size + mp->elt_size + mp->trailer_size,
-			0 /* hol_block_timeout */,
+			sa->rxd_wait_timeout_ns,
 			&rxq->mem, rxq_info->entries, rxq_info->type_flags,
 			evq->common, &rxq->common);
 		break;
diff --git a/drivers/net/sfc/sfc_tweak.h b/drivers/net/sfc/sfc_tweak.h
index b402685..4d543f6 100644
--- a/drivers/net/sfc/sfc_tweak.h
+++ b/drivers/net/sfc/sfc_tweak.h
@@ -34,4 +34,12 @@
 /** Number of mbufs to be freed in bulk in a single call */
 #define SFC_TX_REAP_BULK_SIZE		32
 
+/**
+ * Default head-of-line block timeout to wait for an Rx descriptor before
+ * a packet is dropped because no descriptors are available.
+ *
+ * Used by the DPDK FW variant with equal stride super-buffer Rx mode only.
+ */
+#define SFC_RXD_WAIT_TIMEOUT_NS_DEF	(200U * 1000)
+
 #endif /* _SFC_TWEAK_H_ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 17/23] net/sfc: support flow marks in equal stride super-buffer Rx
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (15 preceding siblings ...)
  2018-04-19 11:36 ` [PATCH 16/23] net/sfc: add Rx descriptor wait timeout Andrew Rybchenko
@ 2018-04-19 11:37 ` Andrew Rybchenko
  2018-04-19 11:37 ` [PATCH 18/23] net/sfc/base: get actions MARK and FLAG support Andrew Rybchenko
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:37 UTC (permalink / raw)
  To: dev

Equal stride super-buffer Rx mode allows packets to be marked in HW
using filters. Process the data on the datapath and advertise the
corresponding features so that flow API support can implement it.
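
As a usage illustration only (a minimal sketch, not part of this patch;
port_id, the match pattern, the mark id and the queue index are arbitrary,
and handle_mark() is a hypothetical helper), an application could request
a mark and read it back from received mbufs:

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = 42 };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, &attr, pattern, actions, &err);

	/* On the Rx path, for an mbuf m matching the rule: */
	if (m->ol_flags & PKT_RX_FDIR_ID)
		handle_mark(m->hash.fdir.hi);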

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_dp_rx.h        |  2 ++
 drivers/net/sfc/sfc_ef10_essb_rx.c | 14 ++++++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index cb745e6..83faad1 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -193,6 +193,8 @@ struct sfc_dp_rx {
 #define SFC_DP_RX_FEAT_SCATTER			0x1
 #define SFC_DP_RX_FEAT_MULTI_PROCESS		0x2
 #define SFC_DP_RX_FEAT_TUNNELS			0x4
+#define SFC_DP_RX_FEAT_FLOW_FLAG		0x8
+#define SFC_DP_RX_FEAT_FLOW_MARK		0x10
 	sfc_dp_rx_get_dev_info_t		*get_dev_info;
 	sfc_dp_rx_pool_ops_supported_t		*pool_ops_supported;
 	sfc_dp_rx_qsize_up_rings_t		*qsize_up_rings;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 8dd4396..f051f3c 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -316,12 +316,21 @@ sfc_ef10_essb_rx_get_pending(struct sfc_ef10_essb_rxq *rxq,
 			m->ol_flags |=
 				(PKT_RX_RSS_HASH *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
-					ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN));
+					ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN)) |
+				(PKT_RX_FDIR_ID *
+				 !!EFX_TEST_QWORD_BIT(*qwordp,
+					ES_EZ_ESSB_RX_PREFIX_MARK_VALID_LBN)) |
+				(PKT_RX_FDIR *
+				 !!EFX_TEST_QWORD_BIT(*qwordp,
+					ES_EZ_ESSB_RX_PREFIX_MATCH_FLAG_LBN));
 
 			/* EFX_QWORD_FIELD converts little-endian to CPU */
 			m->hash.rss =
 				EFX_QWORD_FIELD(*qwordp,
 						ES_EZ_ESSB_RX_PREFIX_HASH);
+			m->hash.fdir.hi =
+				EFX_QWORD_FIELD(*qwordp,
+						ES_EZ_ESSB_RX_PREFIX_MARK);
 
 			m = sfc_ef10_essb_next_mbuf(rxq, m);
 		} while (todo_bufs-- > 0);
@@ -640,7 +649,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10 |
 				  SFC_DP_HW_FW_CAP_RX_ES_SUPER_BUFFER,
 	},
-	.features		= 0,
+	.features		= SFC_DP_RX_FEAT_FLOW_FLAG |
+				  SFC_DP_RX_FEAT_FLOW_MARK,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
 	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
 	.qsize_up_rings		= sfc_ef10_essb_rx_qsize_up_rings,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 18/23] net/sfc/base: get actions MARK and FLAG support
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (16 preceding siblings ...)
  2018-04-19 11:37 ` [PATCH 17/23] net/sfc: support flow marks in equal stride super-buffer Rx Andrew Rybchenko
@ 2018-04-19 11:37 ` Andrew Rybchenko
  2018-04-19 11:37 ` [PATCH 19/23] net/sfc/base: support MARK and FLAG actions in filters Andrew Rybchenko
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:37 UTC (permalink / raw)
  To: dev; +Cc: Roman Zhukov

From: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>

Filter actions MARK and FLAG are supported on Medford2 by the DPDK
firmware variant.

Signed-off-by: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/ef10_nic.c  | 10 ++++++++++
 drivers/net/sfc/base/efx.h       |  3 +++
 drivers/net/sfc/base/siena_nic.c |  3 +++
 3 files changed, 16 insertions(+)

diff --git a/drivers/net/sfc/base/ef10_nic.c b/drivers/net/sfc/base/ef10_nic.c
index 35b719a..b28226d 100644
--- a/drivers/net/sfc/base/ef10_nic.c
+++ b/drivers/net/sfc/base/ef10_nic.c
@@ -1294,6 +1294,16 @@ ef10_get_datapath_caps(
 		 */
 		encp->enc_rx_scale_l4_hash_supported = B_TRUE;
 	}
+	/* Check if the firmware supports "FLAG" and "MARK" filter actions */
+	if (CAP_FLAGS2(req, FILTER_ACTION_FLAG))
+		encp->enc_filter_action_flag_supported = B_TRUE;
+	else
+		encp->enc_filter_action_flag_supported = B_FALSE;
+
+	if (CAP_FLAGS2(req, FILTER_ACTION_MARK))
+		encp->enc_filter_action_mark_supported = B_TRUE;
+	else
+		encp->enc_filter_action_mark_supported = B_FALSE;
 
 #undef CAP_FLAGS1
 #undef CAP_FLAGS2
diff --git a/drivers/net/sfc/base/efx.h b/drivers/net/sfc/base/efx.h
index b334cc5..cd0e6f8 100644
--- a/drivers/net/sfc/base/efx.h
+++ b/drivers/net/sfc/base/efx.h
@@ -1293,6 +1293,9 @@ typedef struct efx_nic_cfg_s {
 	/* Firmware support for extended MAC_STATS buffer */
 	uint32_t		enc_mac_stats_nstats;
 	boolean_t		enc_fec_counters;
+	/* Firmware support for "FLAG" and "MARK" filter actions */
+	boolean_t		enc_filter_action_flag_supported;
+	boolean_t		enc_filter_action_mark_supported;
 } efx_nic_cfg_t;
 
 #define	EFX_PCI_FUNCTION_IS_PF(_encp)	((_encp)->enc_vf == 0xffff)
diff --git a/drivers/net/sfc/base/siena_nic.c b/drivers/net/sfc/base/siena_nic.c
index 15aa06b..b703369 100644
--- a/drivers/net/sfc/base/siena_nic.c
+++ b/drivers/net/sfc/base/siena_nic.c
@@ -172,6 +172,9 @@ siena_board_cfg(
 
 	encp->enc_mac_stats_nstats = MC_CMD_MAC_NSTATS;
 
+	encp->enc_filter_action_flag_supported = B_FALSE;
+	encp->enc_filter_action_mark_supported = B_FALSE;
+
 	return (0);
 
 fail2:
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 19/23] net/sfc/base: support MARK and FLAG actions in filters
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (17 preceding siblings ...)
  2018-04-19 11:37 ` [PATCH 18/23] net/sfc/base: get actions MARK and FLAG support Andrew Rybchenko
@ 2018-04-19 11:37 ` Andrew Rybchenko
  2018-04-19 11:37 ` [PATCH 20/23] net/sfc/base: get max supported value for action MARK Andrew Rybchenko
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:37 UTC (permalink / raw)
  To: dev; +Cc: Roman Zhukov

From: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>

This patch adds support for DPDK rte_flow "MARK" and "FLAG" filter
actions to filters on EF10 family NICs.
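
For illustration only (a sketch; initialization of the filter match
criteria via efx_filter_spec_init_rx() and friends is omitted), a client
of the base driver would request the MARK action along these lines:

	efx_filter_spec_t spec;
	efx_rc_t rc;

	/* ... match criteria set up elsewhere ... */
	spec.efs_flags |= EFX_FILTER_FLAG_ACTION_MARK;
	spec.efs_mark = 42;

	rc = efx_filter_insert(enp, &spec);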

Signed-off-by: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/ef10_filter.c | 31 +++++++++++++++++++++++++++----
 drivers/net/sfc/base/efx.h         |  5 +++++
 drivers/net/sfc/base/efx_filter.c  | 21 +++++++++++++++++++++
 3 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/drivers/net/sfc/base/ef10_filter.c b/drivers/net/sfc/base/ef10_filter.c
index bf4992e..ae87285 100644
--- a/drivers/net/sfc/base/ef10_filter.c
+++ b/drivers/net/sfc/base/ef10_filter.c
@@ -172,7 +172,7 @@ efx_mcdi_filter_op_add(
 	__inout		ef10_filter_handle_t *handle)
 {
 	efx_mcdi_req_t req;
-	uint8_t payload[MAX(MC_CMD_FILTER_OP_EXT_IN_LEN,
+	uint8_t payload[MAX(MC_CMD_FILTER_OP_V3_IN_LEN,
 			    MC_CMD_FILTER_OP_EXT_OUT_LEN)];
 	efx_filter_match_flags_t match_flags;
 	efx_rc_t rc;
@@ -180,7 +180,7 @@ efx_mcdi_filter_op_add(
 	memset(payload, 0, sizeof (payload));
 	req.emr_cmd = MC_CMD_FILTER_OP;
 	req.emr_in_buf = payload;
-	req.emr_in_length = MC_CMD_FILTER_OP_EXT_IN_LEN;
+	req.emr_in_length = MC_CMD_FILTER_OP_V3_IN_LEN;
 	req.emr_out_buf = payload;
 	req.emr_out_length = MC_CMD_FILTER_OP_EXT_OUT_LEN;
 
@@ -316,16 +316,37 @@ efx_mcdi_filter_op_add(
 		    spec->efs_ifrm_loc_mac, EFX_MAC_ADDR_LEN);
 	}
 
+	/*
+	 * Set the "MARK" or "FLAG" action for all packets matching this filter
+	 * if necessary (only useful with equal stride packed stream Rx mode
+	 * which provides the information in the pseudo-header).
+	 * These actions require MC_CMD_FILTER_OP_V3_IN msgrequest.
+	 */
+	if ((spec->efs_flags & EFX_FILTER_FLAG_ACTION_MARK) &&
+	    (spec->efs_flags & EFX_FILTER_FLAG_ACTION_FLAG)) {
+		rc = EINVAL;
+		goto fail3;
+	}
+	if (spec->efs_flags & EFX_FILTER_FLAG_ACTION_MARK) {
+		MCDI_IN_SET_DWORD(req, FILTER_OP_V3_IN_MATCH_ACTION,
+		    MC_CMD_FILTER_OP_V3_IN_MATCH_ACTION_MARK);
+		MCDI_IN_SET_DWORD(req, FILTER_OP_V3_IN_MATCH_MARK_VALUE,
+		    spec->efs_mark);
+	} else if (spec->efs_flags & EFX_FILTER_FLAG_ACTION_FLAG) {
+		MCDI_IN_SET_DWORD(req, FILTER_OP_V3_IN_MATCH_ACTION,
+		    MC_CMD_FILTER_OP_V3_IN_MATCH_ACTION_FLAG);
+	}
+
 	efx_mcdi_execute(enp, &req);
 
 	if (req.emr_rc != 0) {
 		rc = req.emr_rc;
-		goto fail3;
+		goto fail4;
 	}
 
 	if (req.emr_out_length_used < MC_CMD_FILTER_OP_EXT_OUT_LEN) {
 		rc = EMSGSIZE;
-		goto fail4;
+		goto fail5;
 	}
 
 	handle->efh_lo = MCDI_OUT_DWORD(req, FILTER_OP_EXT_OUT_HANDLE_LO);
@@ -333,6 +354,8 @@ efx_mcdi_filter_op_add(
 
 	return (0);
 
+fail5:
+	EFSYS_PROBE(fail5);
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/net/sfc/base/efx.h b/drivers/net/sfc/base/efx.h
index cd0e6f8..f5ec568 100644
--- a/drivers/net/sfc/base/efx.h
+++ b/drivers/net/sfc/base/efx.h
@@ -2622,6 +2622,10 @@ efx_tx_qdestroy(
 #define	EFX_FILTER_FLAG_RX		0x08
 /* Filter is for TX */
 #define	EFX_FILTER_FLAG_TX		0x10
+/* Set match flag on the received packet */
+#define	EFX_FILTER_FLAG_ACTION_FLAG	0x20
+/* Set match mark on the received packet */
+#define	EFX_FILTER_FLAG_ACTION_MARK	0x40
 
 typedef uint8_t efx_filter_flags_t;
 
@@ -2707,6 +2711,7 @@ typedef struct efx_filter_spec_s {
 	efx_oword_t			efs_loc_host;
 	uint8_t				efs_vni_or_vsid[EFX_VNI_OR_VSID_LEN];
 	uint8_t				efs_ifrm_loc_mac[EFX_MAC_ADDR_LEN];
+	uint32_t			efs_mark;
 } efx_filter_spec_t;
 
 
diff --git a/drivers/net/sfc/base/efx_filter.c b/drivers/net/sfc/base/efx_filter.c
index 97c972c..412298a 100644
--- a/drivers/net/sfc/base/efx_filter.c
+++ b/drivers/net/sfc/base/efx_filter.c
@@ -74,12 +74,33 @@ efx_filter_insert(
 	__inout		efx_filter_spec_t *spec)
 {
 	const efx_filter_ops_t *efop = enp->en_efop;
+	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
+	efx_rc_t rc;
 
 	EFSYS_ASSERT3U(enp->en_mod_flags, &, EFX_MOD_FILTER);
 	EFSYS_ASSERT3P(spec, !=, NULL);
 	EFSYS_ASSERT3U(spec->efs_flags, &, EFX_FILTER_FLAG_RX);
 
+	if ((spec->efs_flags & EFX_FILTER_FLAG_ACTION_MARK) &&
+	    !encp->enc_filter_action_mark_supported) {
+		rc = ENOTSUP;
+		goto fail1;
+	}
+
+	if ((spec->efs_flags & EFX_FILTER_FLAG_ACTION_FLAG) &&
+	    !encp->enc_filter_action_flag_supported) {
+		rc = ENOTSUP;
+		goto fail2;
+	}
+
 	return (efop->efo_add(enp, spec, B_FALSE));
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
 }
 
 	__checkReturn	efx_rc_t
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 20/23] net/sfc/base: get max supported value for action MARK
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (18 preceding siblings ...)
  2018-04-19 11:37 ` [PATCH 19/23] net/sfc/base: support MARK and FLAG actions in filters Andrew Rybchenko
@ 2018-04-19 11:37 ` Andrew Rybchenko
  2018-04-19 11:37 ` [PATCH 21/23] net/sfc: make processing of flow rule actions more uniform Andrew Rybchenko
                   ` (3 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:37 UTC (permalink / raw)
  To: dev; +Cc: Roman Zhukov

From: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>

The mark value for MATCH_ACTION_MARK has a maximum supported value.
Requesting a value larger than the maximum causes the filter
insertion to fail with EINVAL. This patch allows the driver to
check the value during filter validation.

Signed-off-by: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/base/ef10_nic.c  | 11 +++++++++--
 drivers/net/sfc/base/efx.h       |  1 +
 drivers/net/sfc/base/siena_nic.c |  1 +
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/sfc/base/ef10_nic.c b/drivers/net/sfc/base/ef10_nic.c
index b28226d..44286db 100644
--- a/drivers/net/sfc/base/ef10_nic.c
+++ b/drivers/net/sfc/base/ef10_nic.c
@@ -996,7 +996,7 @@ ef10_get_datapath_caps(
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
 	efx_mcdi_req_t req;
 	uint8_t payload[MAX(MC_CMD_GET_CAPABILITIES_IN_LEN,
-			    MC_CMD_GET_CAPABILITIES_V4_OUT_LEN)];
+			    MC_CMD_GET_CAPABILITIES_V5_OUT_LEN)];
 	efx_rc_t rc;
 
 	if ((rc = ef10_mcdi_get_pf_count(enp, &encp->enc_hw_pf_count)) != 0)
@@ -1008,7 +1008,7 @@ ef10_get_datapath_caps(
 	req.emr_in_buf = payload;
 	req.emr_in_length = MC_CMD_GET_CAPABILITIES_IN_LEN;
 	req.emr_out_buf = payload;
-	req.emr_out_length = MC_CMD_GET_CAPABILITIES_V4_OUT_LEN;
+	req.emr_out_length = MC_CMD_GET_CAPABILITIES_V5_OUT_LEN;
 
 	efx_mcdi_execute_quiet(enp, &req);
 
@@ -1305,6 +1305,13 @@ ef10_get_datapath_caps(
 	else
 		encp->enc_filter_action_mark_supported = B_FALSE;
 
+	/* Get maximum supported value for "MARK" filter action */
+	if (req.emr_out_length_used >= MC_CMD_GET_CAPABILITIES_V5_OUT_LEN)
+		encp->enc_filter_action_mark_max = MCDI_OUT_DWORD(req,
+		    GET_CAPABILITIES_V5_OUT_FILTER_ACTION_MARK_MAX);
+	else
+		encp->enc_filter_action_mark_max = 0;
+
 #undef CAP_FLAGS1
 #undef CAP_FLAGS2
 
diff --git a/drivers/net/sfc/base/efx.h b/drivers/net/sfc/base/efx.h
index f5ec568..332c6d0 100644
--- a/drivers/net/sfc/base/efx.h
+++ b/drivers/net/sfc/base/efx.h
@@ -1296,6 +1296,7 @@ typedef struct efx_nic_cfg_s {
 	/* Firmware support for "FLAG" and "MARK" filter actions */
 	boolean_t		enc_filter_action_flag_supported;
 	boolean_t		enc_filter_action_mark_supported;
+	uint32_t		enc_filter_action_mark_max;
 } efx_nic_cfg_t;
 
 #define	EFX_PCI_FUNCTION_IS_PF(_encp)	((_encp)->enc_vf == 0xffff)
diff --git a/drivers/net/sfc/base/siena_nic.c b/drivers/net/sfc/base/siena_nic.c
index b703369..31eef80 100644
--- a/drivers/net/sfc/base/siena_nic.c
+++ b/drivers/net/sfc/base/siena_nic.c
@@ -174,6 +174,7 @@ siena_board_cfg(
 
 	encp->enc_filter_action_flag_supported = B_FALSE;
 	encp->enc_filter_action_mark_supported = B_FALSE;
+	encp->enc_filter_action_mark_max = 0;
 
 	return (0);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 21/23] net/sfc: make processing of flow rule actions more uniform
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (19 preceding siblings ...)
  2018-04-19 11:37 ` [PATCH 20/23] net/sfc/base: get max supported value for action MARK Andrew Rybchenko
@ 2018-04-19 11:37 ` Andrew Rybchenko
  2018-04-19 11:37 ` [PATCH 22/23] net/sfc: support MARK and FLAG actions in flow API Andrew Rybchenko
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:37 UTC (permalink / raw)
  To: dev; +Cc: Roman Zhukov

From: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>

Prepare the function that parses flow rule actions to support
non-fate-deciding actions.

Signed-off-by: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_flow.c | 57 ++++++++++++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 20 deletions(-)

diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 55226f1..bec29ae 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -1498,7 +1498,10 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
 		       struct rte_flow_error *error)
 {
 	int rc;
-	boolean_t is_specified = B_FALSE;
+	uint32_t actions_set = 0;
+	const uint32_t fate_actions_mask = (1UL << RTE_FLOW_ACTION_TYPE_QUEUE) |
+					   (1UL << RTE_FLOW_ACTION_TYPE_RSS) |
+					   (1UL << RTE_FLOW_ACTION_TYPE_DROP);
 
 	if (actions == NULL) {
 		rte_flow_error_set(error, EINVAL,
@@ -1507,21 +1510,22 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
 		return -rte_errno;
 	}
 
+#define SFC_BUILD_SET_OVERFLOW(_action, _set) \
+	RTE_BUILD_BUG_ON(_action >= sizeof(_set) * CHAR_BIT)
+
 	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
-		/* This one may appear anywhere multiple times. */
-		if (actions->type == RTE_FLOW_ACTION_TYPE_VOID)
-			continue;
-		/* Fate-deciding actions may appear exactly once. */
-		if (is_specified) {
-			rte_flow_error_set
-				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
-				 actions,
-				 "Cannot combine several fate-deciding actions,"
-				 "choose between QUEUE, RSS or DROP");
-			return -rte_errno;
-		}
 		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_VOID,
+					       actions_set);
+			break;
+
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_QUEUE,
+					       actions_set);
+			if ((actions_set & fate_actions_mask) != 0)
+				goto fail_fate_actions;
+
 			rc = sfc_flow_parse_queue(sa, actions->conf, flow);
 			if (rc != 0) {
 				rte_flow_error_set(error, EINVAL,
@@ -1529,11 +1533,14 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
 					"Bad QUEUE action");
 				return -rte_errno;
 			}
-
-			is_specified = B_TRUE;
 			break;
 
 		case RTE_FLOW_ACTION_TYPE_RSS:
+			SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_RSS,
+					       actions_set);
+			if ((actions_set & fate_actions_mask) != 0)
+				goto fail_fate_actions;
+
 			rc = sfc_flow_parse_rss(sa, actions->conf, flow);
 			if (rc != 0) {
 				rte_flow_error_set(error, rc,
@@ -1541,15 +1548,16 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
 					"Bad RSS action");
 				return -rte_errno;
 			}
-
-			is_specified = B_TRUE;
 			break;
 
 		case RTE_FLOW_ACTION_TYPE_DROP:
+			SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_DROP,
+					       actions_set);
+			if ((actions_set & fate_actions_mask) != 0)
+				goto fail_fate_actions;
+
 			flow->spec.template.efs_dmaq_id =
 				EFX_FILTER_SPEC_RX_DMAQ_ID_DROP;
-
-			is_specified = B_TRUE;
 			break;
 
 		default:
@@ -1558,15 +1566,24 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
 					   "Action is not supported");
 			return -rte_errno;
 		}
+
+		actions_set |= (1UL << actions->type);
 	}
+#undef SFC_BUILD_SET_OVERFLOW
 
 	/* When fate is unknown, drop traffic. */
-	if (!is_specified) {
+	if ((actions_set & fate_actions_mask) == 0) {
 		flow->spec.template.efs_dmaq_id =
 			EFX_FILTER_SPEC_RX_DMAQ_ID_DROP;
 	}
 
 	return 0;
+
+fail_fate_actions:
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, actions,
+			   "Cannot combine several fate-deciding actions, "
+			   "choose between QUEUE, RSS or DROP");
+	return -rte_errno;
 }
 
 /**
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 22/23] net/sfc: support MARK and FLAG actions in flow API
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (20 preceding siblings ...)
  2018-04-19 11:37 ` [PATCH 21/23] net/sfc: make processing of flow rule actions more uniform Andrew Rybchenko
@ 2018-04-19 11:37 ` Andrew Rybchenko
  2018-04-19 11:37 ` [PATCH 23/23] doc: advertise equal stride super-buffer Rx mode support in net/sfc Andrew Rybchenko
  2018-04-26 22:47 ` [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Ferruh Yigit
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:37 UTC (permalink / raw)
  To: dev; +Cc: Roman Zhukov

From: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>

Signed-off-by: Roman Zhukov <Roman.Zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst |  4 +++
 drivers/net/sfc/sfc_flow.c  | 64 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index c9354e3..bbf698e 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -201,6 +201,10 @@ Supported actions:
 
 - DROP
 
+- FLAG (supported only with ef10_essb Rx datapath)
+
+- MARK (supported only with ef10_essb Rx datapath)
+
 Validating flow rules depends on the firmware variant.
 
 Ethernet destination individual/group match
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index bec29ae..afd688d 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -23,6 +23,7 @@
 #include "sfc_filter.h"
 #include "sfc_flow.h"
 #include "sfc_log.h"
+#include "sfc_dp_rx.h"
 
 /*
  * For now, flow API is implemented in such a manner that each
@@ -1492,16 +1493,35 @@ sfc_flow_filter_remove(struct sfc_adapter *sa,
 }
 
 static int
+sfc_flow_parse_mark(struct sfc_adapter *sa,
+		    const struct rte_flow_action_mark *mark,
+		    struct rte_flow *flow)
+{
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+
+	if (mark == NULL || mark->id > encp->enc_filter_action_mark_max)
+		return EINVAL;
+
+	flow->spec.template.efs_flags |= EFX_FILTER_FLAG_ACTION_MARK;
+	flow->spec.template.efs_mark = mark->id;
+
+	return 0;
+}
+
+static int
 sfc_flow_parse_actions(struct sfc_adapter *sa,
 		       const struct rte_flow_action actions[],
 		       struct rte_flow *flow,
 		       struct rte_flow_error *error)
 {
 	int rc;
+	const unsigned int dp_rx_features = sa->dp_rx->features;
 	uint32_t actions_set = 0;
 	const uint32_t fate_actions_mask = (1UL << RTE_FLOW_ACTION_TYPE_QUEUE) |
 					   (1UL << RTE_FLOW_ACTION_TYPE_RSS) |
 					   (1UL << RTE_FLOW_ACTION_TYPE_DROP);
+	const uint32_t mark_actions_mask = (1UL << RTE_FLOW_ACTION_TYPE_MARK) |
+					   (1UL << RTE_FLOW_ACTION_TYPE_FLAG);
 
 	if (actions == NULL) {
 		rte_flow_error_set(error, EINVAL,
@@ -1560,6 +1580,45 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
 				EFX_FILTER_SPEC_RX_DMAQ_ID_DROP;
 			break;
 
+		case RTE_FLOW_ACTION_TYPE_FLAG:
+			SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_FLAG,
+					       actions_set);
+			if ((actions_set & mark_actions_mask) != 0)
+				goto fail_actions_overlap;
+
+			if ((dp_rx_features & SFC_DP_RX_FEAT_FLOW_FLAG) == 0) {
+				rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					"FLAG action is not supported on the current Rx datapath");
+				return -rte_errno;
+			}
+
+			flow->spec.template.efs_flags |=
+				EFX_FILTER_FLAG_ACTION_FLAG;
+			break;
+
+		case RTE_FLOW_ACTION_TYPE_MARK:
+			SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_MARK,
+					       actions_set);
+			if ((actions_set & mark_actions_mask) != 0)
+				goto fail_actions_overlap;
+
+			if ((dp_rx_features & SFC_DP_RX_FEAT_FLOW_MARK) == 0) {
+				rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					"MARK action is not supported on the current Rx datapath");
+				return -rte_errno;
+			}
+
+			rc = sfc_flow_parse_mark(sa, actions->conf, flow);
+			if (rc != 0) {
+				rte_flow_error_set(error, rc,
+					RTE_FLOW_ERROR_TYPE_ACTION, actions,
+					"Bad MARK action");
+				return -rte_errno;
+			}
+			break;
+
 		default:
 			rte_flow_error_set(error, ENOTSUP,
 					   RTE_FLOW_ERROR_TYPE_ACTION, actions,
@@ -1584,6 +1643,11 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
 			   "Cannot combine several fate-deciding actions, "
 			   "choose between QUEUE, RSS or DROP");
 	return -rte_errno;
+
+fail_actions_overlap:
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, actions,
+			   "Overlapping actions are not supported");
+	return -rte_errno;
 }
 
 /**
-- 
2.7.4
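
A usage sketch from the application side (not part of the patch; the port,
queue and mark id values are arbitrary): assuming the port is configured and
started on the ef10_essb Rx datapath, the rule below marks all ingress
Ethernet traffic on the given port with id 42 and steers it to Rx queue 0.

#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
mark_all_ingress(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* match any Ethernet */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = 42 };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

Packets hitting such a rule are expected to carry the mark in
mbuf->hash.fdir.hi together with the PKT_RX_FDIR_ID flag (PKT_RX_FDIR for
the FLAG action), subject to the enc_filter_action_mark_max limit checked
in sfc_flow_parse_mark() above.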


* [PATCH 23/23] doc: advertise equal stride super-buffer Rx mode support in net/sfc
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (21 preceding siblings ...)
  2018-04-19 11:37 ` [PATCH 22/23] net/sfc: support MARK and FLAG actions in flow API Andrew Rybchenko
@ 2018-04-19 11:37 ` Andrew Rybchenko
  2018-04-26 22:47 ` [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Ferruh Yigit
  23 siblings, 0 replies; 25+ messages in thread
From: Andrew Rybchenko @ 2018-04-19 11:37 UTC (permalink / raw)
  To: dev

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/release_18_05.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index b8f526b..e99c2a6 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -62,6 +62,8 @@ New Features
   * Added support for Solarflare XtremeScale X2xxx family adapters.
   * Added support for NVGRE, VXLAN and GENEVE filters in flow API.
   * Added support for DROP action in flow API.
+  * Added support for equal stride super-buffer Rx mode (X2xxx only).
+  * Added support for MARK and FLAG actions in flow API (X2xxx only).
 
 * **Added Ethernet poll mode driver for AMD XGBE devices.**
 
-- 
2.7.4


* Re: [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode
  2018-04-19 11:36 [PATCH 00/23] net/sfc: support equal stride super-buffer Rx mode Andrew Rybchenko
                   ` (22 preceding siblings ...)
  2018-04-19 11:37 ` [PATCH 23/23] doc: advertise equal stride super-buffer Rx mode support in net/sfc Andrew Rybchenko
@ 2018-04-26 22:47 ` Ferruh Yigit
  23 siblings, 0 replies; 25+ messages in thread
From: Ferruh Yigit @ 2018-04-26 22:47 UTC (permalink / raw)
  To: Andrew Rybchenko, dev

On 4/19/2018 12:36 PM, Andrew Rybchenko wrote:
> Add support for dedicated DPDK firmware variant which has equal stride
> super-buffer Rx mode. The Rx mode uses bucket mempool manager which
> supports allocation of contiguos block of mbufs.
> 
> It allows to achieve higher packet rate on Rx than traditional single
> packet Rx mode.
> 
> Also the Rx mode supports rte_flow MARK and FLAG actions.
> 
> It should be applied on top of [1], [2], [3], [4], [5].
> 
> [1] https://dpdk.org/ml/archives/dev/2018-April/098035.html
> [2] https://dpdk.org/ml/archives/dev/2018-April/098047.html
> [3] https://dpdk.org/ml/archives/dev/2018-April/095872.html
> [4] https://dpdk.org/ml/archives/dev/2018-April/097354.html
> [5] https://dpdk.org/ml/archives/dev/2018-April/097365.html
> 
> There are a number of known checkpatches.sh warnings in base driver due
> to coding style difference and in the PMD itself due to postive errno
> used inside the driver.
> 
> Andrew Rybchenko (18):
>   net/sfc/base: update autogenerated MCDI and TLV headers
>   net/sfc/base: make RxQ type data an union
>   net/sfc/base: detect equal stride super-buffer support
>   net/sfc/base: support equal stride super-buffer Rx mode
>   net/sfc/base: add equal stride super-buffer prefix layout
>   net/sfc: factor out function to push Rx doorbell
>   net/sfc: prepare EF10 Rx event parser to be reused
>   net/sfc: move EF10 Rx event parser to shared header
>   net/sfc: conditionally compile support for tunnel packets
>   net/sfc: allow one Rx queue entry carry many packet buffers
>   net/sfc: allow to take mbuf pool into account when sizing
>   net/sfc: support equal stride super-buffer Rx mode
>   net/sfc: support callback to check if mempool is supported
>   net/sfc: check mempool when equal stride super-buffer used
>   net/sfc: support DPDK firmware variant
>   net/sfc: add Rx descriptor wait timeout
>   net/sfc: support flow marks in equal stride super-buffer Rx
>   doc: advertise equal stride super-buffer Rx mode support in net/sfc
> 
> Roman Zhukov (5):
>   net/sfc/base: get actions MARK and FLAG support
>   net/sfc/base: support MARK and FLAG actions in filters
>   net/sfc/base: get max supported value for action MARK
>   net/sfc: make processing of flow rule actions more uniform
>   net/sfc: support MARK and FLAG actions in flow API

Series applied to dpdk-next-net/master, thanks.

