* [PATCH v4 net-next 0/5] Add tx push buf len param to ethtool
@ 2023-03-09 13:13 Shay Agroskin
  2023-03-09 13:13 ` [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Shay Agroskin @ 2023-03-09 13:13 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, netdev
  Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
	Kiyanovski, Arthur, Dagan, Noam, Arinzon, David, Itzko, Shahar,
	Abboud, Osama

Changes since v3:
- Removed the RFC tag and added Jakub's signoff to the first patch

Changes since v2:
- Added a check that the driver advertises support for TX push buffer
  instead of defaulting the response to 0.
- Moved cosmetic changes to their own commits
- Removed usage of gotos, which goes against the Linux coding style
- Made the ENA driver reject an attempt to configure the TX push buffer when
  it's not supported (no LLQ is used)

Changes since v1:
- Added the new ethtool param to generic netlink specs
- Dropped dynamic advertisement of tx push buff support in ENA.
  The driver will advertise it for all platforms

This patchset adds a new sub-configuration to ethtool get/set queue
params (ethtool -g) called 'tx-push-buf-len'.

This configuration specifies the maximum number of bytes of a
transmitted packet a driver can push directly to the underlying
device ('push' mode). Pushing some of the bytes to the device has the
advantages of:

- Allowing a smart device to take fast actions based on the packet's
  header
- Reducing latency for small packets that can be copied completely into
  the device

This new param is practically similar to the tx-copybreak value that can
be set using ethtool's tunable interface, but it conceptually serves a
different purpose. While tx-copybreak is used to reduce the overhead of
DMA mapping and makes no sense to use when less than the whole segment
gets copied, tx-push-buf-len improves performance by letting the device
analyze the packet's data (usually its headers) before the DMA operation
for the rest of the packet is performed.

The configuration can be queried and set using the commands:

    $ ethtool -g [interface]

    # ethtool -G [interface] tx-push-buf-len [number of bytes]
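
For comparison, tx-copybreak is configured through ethtool's tunable
interface rather than through the ring parameters. The interface name and
byte values below are only illustrative, and the first command assumes an
ethtool build that exposes the tx-copybreak tunable:

    # ethtool --set-tunable eth0 tx-copybreak 256

    # ethtool -G eth0 tx-push-buf-len 224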

This patchset also adds support for the new configuration in the ENA
driver, for which this parameter ensures efficient resource management on
the device side.

David Arinzon (1):
  net: ena: Add an option to configure large LLQ headers

Shay Agroskin (4):
  ethtool: Add support for configuring tx_push_buf_len
  net: ena: Make few cosmetic preparations to support large LLQ
  net: ena: Recalculate TX state variables every device reset
  net: ena: Add support to changing tx_push_buf_len

 Documentation/netlink/specs/ethtool.yaml      |   8 +
 Documentation/networking/ethtool-netlink.rst  |  43 +--
 drivers/net/ethernet/amazon/ena/ena_eth_com.h |   4 +
 drivers/net/ethernet/amazon/ena/ena_ethtool.c |  57 +++-
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 259 +++++++++++-------
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |  15 +-
 include/linux/ethtool.h                       |  14 +-
 include/uapi/linux/ethtool_netlink.h          |   2 +
 net/ethtool/netlink.h                         |   2 +-
 net/ethtool/rings.c                           |  33 ++-
 10 files changed, 313 insertions(+), 124 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-09 13:13 [PATCH v4 net-next 0/5] Add tx push buf len param to ethtool Shay Agroskin
@ 2023-03-09 13:13 ` Shay Agroskin
  2023-03-09 16:25   ` Simon Horman
  2023-03-09 17:15   ` Gal Pressman
  2023-03-09 13:13 ` [PATCH v4 net-next 2/5] net: ena: Make few cosmetic preparations to support large LLQ Shay Agroskin
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 19+ messages in thread
From: Shay Agroskin @ 2023-03-09 13:13 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, netdev
  Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
	Kiyanovski, Arthur, Dagan, Noam, Arinzon, David, Itzko, Shahar,
	Abboud, Osama

This attribute, which is part of ethtool's ring param configuration,
allows the user to specify the maximum number of bytes of the packet's
payload that can be written directly to the device.

Example usage:
    # ethtool -G [interface] tx-push-buf-len [number of bytes]
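
As a rough sketch of the driver-facing side ('struct my_priv',
my_get_ringparam() and MY_TX_PUSH_BUF_MAX are hypothetical placeholders;
the ethtool core fields and flag come from this patch, and patch 5/5 shows
the real ENA hookup):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    #define MY_TX_PUSH_BUF_MAX 256

    struct my_priv {
            u32 tx_push_buf_len;
    };

    static void my_get_ringparam(struct net_device *dev,
                                 struct ethtool_ringparam *ring,
                                 struct kernel_ethtool_ringparam *kring,
                                 struct netlink_ext_ack *extack)
    {
            struct my_priv *priv = netdev_priv(dev);

            /* report the advertised maximum and the current setting */
            kring->tx_push_buf_max_len = MY_TX_PUSH_BUF_MAX;
            kring->tx_push_buf_len = priv->tx_push_buf_len;
    }

    static const struct ethtool_ops my_ethtool_ops = {
            /* without this bit the core rejects ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN */
            .supported_ring_params = ETHTOOL_RING_USE_TX_PUSH_BUF_LEN,
            .get_ringparam         = my_get_ringparam,
            /* .set_ringparam receives the requested kring->tx_push_buf_len */
    };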

Co-developed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
 Documentation/netlink/specs/ethtool.yaml     |  8 ++++
 Documentation/networking/ethtool-netlink.rst | 43 ++++++++++++--------
 include/linux/ethtool.h                      | 14 +++++--
 include/uapi/linux/ethtool_netlink.h         |  2 +
 net/ethtool/netlink.h                        |  2 +-
 net/ethtool/rings.c                          | 33 ++++++++++++++-
 6 files changed, 79 insertions(+), 23 deletions(-)

diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
index 08b776908d15..244864c96feb 100644
--- a/Documentation/netlink/specs/ethtool.yaml
+++ b/Documentation/netlink/specs/ethtool.yaml
@@ -174,6 +174,12 @@ attribute-sets:
       -
         name: rx-push
         type: u8
+      -
+        name: tx-push-buf-len
+        type: u32
+      -
+        name: tx-push-buf-len-max
+        type: u32
 
   -
     name: mm-stat
@@ -324,6 +330,8 @@ operations:
             - cqe-size
             - tx-push
             - rx-push
+            - tx-push-buf-len
+            - tx-push-buf-len-max
       dump: *ring-get-op
     -
       name: rings-set
diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst
index e1bc6186d7ea..1aa09e7e8dcc 100644
--- a/Documentation/networking/ethtool-netlink.rst
+++ b/Documentation/networking/ethtool-netlink.rst
@@ -860,22 +860,24 @@ Request contents:
 
 Kernel response contents:
 
-  ====================================  ======  ===========================
-  ``ETHTOOL_A_RINGS_HEADER``            nested  reply header
-  ``ETHTOOL_A_RINGS_RX_MAX``            u32     max size of RX ring
-  ``ETHTOOL_A_RINGS_RX_MINI_MAX``       u32     max size of RX mini ring
-  ``ETHTOOL_A_RINGS_RX_JUMBO_MAX``      u32     max size of RX jumbo ring
-  ``ETHTOOL_A_RINGS_TX_MAX``            u32     max size of TX ring
-  ``ETHTOOL_A_RINGS_RX``                u32     size of RX ring
-  ``ETHTOOL_A_RINGS_RX_MINI``           u32     size of RX mini ring
-  ``ETHTOOL_A_RINGS_RX_JUMBO``          u32     size of RX jumbo ring
-  ``ETHTOOL_A_RINGS_TX``                u32     size of TX ring
-  ``ETHTOOL_A_RINGS_RX_BUF_LEN``        u32     size of buffers on the ring
-  ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT``    u8      TCP header / data split
-  ``ETHTOOL_A_RINGS_CQE_SIZE``          u32     Size of TX/RX CQE
-  ``ETHTOOL_A_RINGS_TX_PUSH``           u8      flag of TX Push mode
-  ``ETHTOOL_A_RINGS_RX_PUSH``           u8      flag of RX Push mode
-  ====================================  ======  ===========================
+  =======================================   ======  ===========================
+  ``ETHTOOL_A_RINGS_HEADER``                nested  reply header
+  ``ETHTOOL_A_RINGS_RX_MAX``                u32     max size of RX ring
+  ``ETHTOOL_A_RINGS_RX_MINI_MAX``           u32     max size of RX mini ring
+  ``ETHTOOL_A_RINGS_RX_JUMBO_MAX``          u32     max size of RX jumbo ring
+  ``ETHTOOL_A_RINGS_TX_MAX``                u32     max size of TX ring
+  ``ETHTOOL_A_RINGS_RX``                    u32     size of RX ring
+  ``ETHTOOL_A_RINGS_RX_MINI``               u32     size of RX mini ring
+  ``ETHTOOL_A_RINGS_RX_JUMBO``              u32     size of RX jumbo ring
+  ``ETHTOOL_A_RINGS_TX``                    u32     size of TX ring
+  ``ETHTOOL_A_RINGS_RX_BUF_LEN``            u32     size of buffers on the ring
+  ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT``        u8      TCP header / data split
+  ``ETHTOOL_A_RINGS_CQE_SIZE``              u32     Size of TX/RX CQE
+  ``ETHTOOL_A_RINGS_TX_PUSH``               u8      flag of TX Push mode
+  ``ETHTOOL_A_RINGS_RX_PUSH``               u8      flag of RX Push mode
+  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN``       u32     size of TX push buffer
+  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX``   u32     max size of TX push buffer
+  =======================================   ======  ===========================
 
 ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` indicates whether the device is usable with
 page-flipping TCP zero-copy receive (``getsockopt(TCP_ZEROCOPY_RECEIVE)``).
@@ -891,6 +893,14 @@ through MMIO writes, thus reducing the latency. However, enabling this feature
 may increase the CPU cost. Drivers may enforce additional per-packet
 eligibility checks (e.g. on packet size).
 
+``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` specifies the maximum number of bytes of a
+transmitted packet a driver can push directly to the underlying device
+('push' mode). Pushing some of the payload bytes to the device has the
+advantages of reducing latency for small packets by avoiding DMA mapping (same
+as ``ETHTOOL_A_RINGS_TX_PUSH`` parameter) as well as allowing the underlying
+device to process packet headers ahead of fetching its payload.
+This can help the device to make fast actions based on the packet's headers.
+
 RINGS_SET
 =========
 
@@ -908,6 +918,7 @@ Request contents:
   ``ETHTOOL_A_RINGS_CQE_SIZE``          u32     Size of TX/RX CQE
   ``ETHTOOL_A_RINGS_TX_PUSH``           u8      flag of TX Push mode
   ``ETHTOOL_A_RINGS_RX_PUSH``           u8      flag of RX Push mode
+  ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN``   u32     size of TX push buffer
   ====================================  ======  ===========================
 
 Kernel checks that requested ring sizes do not exceed limits reported by
diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
index 2792185dda22..798d35890118 100644
--- a/include/linux/ethtool.h
+++ b/include/linux/ethtool.h
@@ -75,6 +75,8 @@ enum {
  * @tx_push: The flag of tx push mode
  * @rx_push: The flag of rx push mode
  * @cqe_size: Size of TX/RX completion queue event
+ * @tx_push_buf_len: Size of TX push buffer
+ * @tx_push_buf_max_len: Maximum allowed size of TX push buffer
  */
 struct kernel_ethtool_ringparam {
 	u32	rx_buf_len;
@@ -82,6 +84,8 @@ struct kernel_ethtool_ringparam {
 	u8	tx_push;
 	u8	rx_push;
 	u32	cqe_size;
+	u32	tx_push_buf_len;
+	u32	tx_push_buf_max_len;
 };
 
 /**
@@ -90,12 +94,14 @@ struct kernel_ethtool_ringparam {
  * @ETHTOOL_RING_USE_CQE_SIZE: capture for setting cqe_size
  * @ETHTOOL_RING_USE_TX_PUSH: capture for setting tx_push
  * @ETHTOOL_RING_USE_RX_PUSH: capture for setting rx_push
+ * @ETHTOOL_RING_USE_TX_PUSH_BUF_LEN: capture for setting tx_push_buf_len
  */
 enum ethtool_supported_ring_param {
-	ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0),
-	ETHTOOL_RING_USE_CQE_SIZE   = BIT(1),
-	ETHTOOL_RING_USE_TX_PUSH    = BIT(2),
-	ETHTOOL_RING_USE_RX_PUSH    = BIT(3),
+	ETHTOOL_RING_USE_RX_BUF_LEN		= BIT(0),
+	ETHTOOL_RING_USE_CQE_SIZE		= BIT(1),
+	ETHTOOL_RING_USE_TX_PUSH		= BIT(2),
+	ETHTOOL_RING_USE_RX_PUSH		= BIT(3),
+	ETHTOOL_RING_USE_TX_PUSH_BUF_LEN	= BIT(4),
 };
 
 #define __ETH_RSS_HASH_BIT(bit)	((u32)1 << (bit))
diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h
index d39ce21381c5..1ebf8d455f07 100644
--- a/include/uapi/linux/ethtool_netlink.h
+++ b/include/uapi/linux/ethtool_netlink.h
@@ -357,6 +357,8 @@ enum {
 	ETHTOOL_A_RINGS_CQE_SIZE,			/* u32 */
 	ETHTOOL_A_RINGS_TX_PUSH,			/* u8 */
 	ETHTOOL_A_RINGS_RX_PUSH,			/* u8 */
+	ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN,		/* u32 */
+	ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX,		/* u32 */
 
 	/* add new constants above here */
 	__ETHTOOL_A_RINGS_CNT,
diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h
index f7b189ed96b2..79424b34b553 100644
--- a/net/ethtool/netlink.h
+++ b/net/ethtool/netlink.h
@@ -413,7 +413,7 @@ extern const struct nla_policy ethnl_features_set_policy[ETHTOOL_A_FEATURES_WANT
 extern const struct nla_policy ethnl_privflags_get_policy[ETHTOOL_A_PRIVFLAGS_HEADER + 1];
 extern const struct nla_policy ethnl_privflags_set_policy[ETHTOOL_A_PRIVFLAGS_FLAGS + 1];
 extern const struct nla_policy ethnl_rings_get_policy[ETHTOOL_A_RINGS_HEADER + 1];
-extern const struct nla_policy ethnl_rings_set_policy[ETHTOOL_A_RINGS_RX_PUSH + 1];
+extern const struct nla_policy ethnl_rings_set_policy[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX + 1];
 extern const struct nla_policy ethnl_channels_get_policy[ETHTOOL_A_CHANNELS_HEADER + 1];
 extern const struct nla_policy ethnl_channels_set_policy[ETHTOOL_A_CHANNELS_COMBINED_COUNT + 1];
 extern const struct nla_policy ethnl_coalesce_get_policy[ETHTOOL_A_COALESCE_HEADER + 1];
diff --git a/net/ethtool/rings.c b/net/ethtool/rings.c
index f358cd57d094..574a6f2e57af 100644
--- a/net/ethtool/rings.c
+++ b/net/ethtool/rings.c
@@ -11,6 +11,7 @@ struct rings_reply_data {
 	struct ethnl_reply_data		base;
 	struct ethtool_ringparam	ringparam;
 	struct kernel_ethtool_ringparam	kernel_ringparam;
+	u32				supported_ring_params;
 };
 
 #define RINGS_REPDATA(__reply_base) \
@@ -32,6 +33,8 @@ static int rings_prepare_data(const struct ethnl_req_info *req_base,
 
 	if (!dev->ethtool_ops->get_ringparam)
 		return -EOPNOTSUPP;
+
+	data->supported_ring_params = dev->ethtool_ops->supported_ring_params;
 	ret = ethnl_ops_begin(dev);
 	if (ret < 0)
 		return ret;
@@ -57,7 +60,9 @@ static int rings_reply_size(const struct ethnl_req_info *req_base,
 	       nla_total_size(sizeof(u8))  +	/* _RINGS_TCP_DATA_SPLIT */
 	       nla_total_size(sizeof(u32)  +	/* _RINGS_CQE_SIZE */
 	       nla_total_size(sizeof(u8))  +	/* _RINGS_TX_PUSH */
-	       nla_total_size(sizeof(u8)));	/* _RINGS_RX_PUSH */
+	       nla_total_size(sizeof(u8))) +	/* _RINGS_RX_PUSH */
+	       nla_total_size(sizeof(u32)) +	/* _RINGS_TX_PUSH_BUF_LEN */
+	       nla_total_size(sizeof(u32));	/* _RINGS_TX_PUSH_BUF_LEN_MAX */
 }
 
 static int rings_fill_reply(struct sk_buff *skb,
@@ -67,6 +72,7 @@ static int rings_fill_reply(struct sk_buff *skb,
 	const struct rings_reply_data *data = RINGS_REPDATA(reply_base);
 	const struct kernel_ethtool_ringparam *kr = &data->kernel_ringparam;
 	const struct ethtool_ringparam *ringparam = &data->ringparam;
+	u32 supported_ring_params = data->supported_ring_params;
 
 	WARN_ON(kr->tcp_data_split > ETHTOOL_TCP_DATA_SPLIT_ENABLED);
 
@@ -98,7 +104,12 @@ static int rings_fill_reply(struct sk_buff *skb,
 	    (kr->cqe_size &&
 	     (nla_put_u32(skb, ETHTOOL_A_RINGS_CQE_SIZE, kr->cqe_size))) ||
 	    nla_put_u8(skb, ETHTOOL_A_RINGS_TX_PUSH, !!kr->tx_push) ||
-	    nla_put_u8(skb, ETHTOOL_A_RINGS_RX_PUSH, !!kr->rx_push))
+	    nla_put_u8(skb, ETHTOOL_A_RINGS_RX_PUSH, !!kr->rx_push) ||
+	    ((supported_ring_params & ETHTOOL_RING_USE_TX_PUSH_BUF_LEN) &&
+	     (nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX,
+			  kr->tx_push_buf_max_len) ||
+	      nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN,
+			  kr->tx_push_buf_len))))
 		return -EMSGSIZE;
 
 	return 0;
@@ -117,6 +128,7 @@ const struct nla_policy ethnl_rings_set_policy[] = {
 	[ETHTOOL_A_RINGS_CQE_SIZE]		= NLA_POLICY_MIN(NLA_U32, 1),
 	[ETHTOOL_A_RINGS_TX_PUSH]		= NLA_POLICY_MAX(NLA_U8, 1),
 	[ETHTOOL_A_RINGS_RX_PUSH]		= NLA_POLICY_MAX(NLA_U8, 1),
+	[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN]	= { .type = NLA_U32 },
 };
 
 static int
@@ -158,6 +170,14 @@ ethnl_set_rings_validate(struct ethnl_req_info *req_info,
 		return -EOPNOTSUPP;
 	}
 
+	if (tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN] &&
+	    !(ops->supported_ring_params & ETHTOOL_RING_USE_TX_PUSH_BUF_LEN)) {
+		NL_SET_ERR_MSG_ATTR(info->extack,
+				    tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN],
+				    "setting tx push buf len is not supported");
+		return -EOPNOTSUPP;
+	}
+
 	return ops->get_ringparam && ops->set_ringparam ? 1 : -EOPNOTSUPP;
 }
 
@@ -189,6 +209,8 @@ ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info)
 			tb[ETHTOOL_A_RINGS_TX_PUSH], &mod);
 	ethnl_update_u8(&kernel_ringparam.rx_push,
 			tb[ETHTOOL_A_RINGS_RX_PUSH], &mod);
+	ethnl_update_u32(&kernel_ringparam.tx_push_buf_len,
+			 tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN], &mod);
 	if (!mod)
 		return 0;
 
@@ -209,6 +231,13 @@ ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info)
 		return -EINVAL;
 	}
 
+	if (kernel_ringparam.tx_push_buf_len > kernel_ringparam.tx_push_buf_max_len) {
+		NL_SET_ERR_MSG_ATTR(info->extack, tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN],
+				    "Requested TX push buffer exceeds maximum");
+
+		return -EINVAL;
+	}
+
 	ret = dev->ethtool_ops->set_ringparam(dev, &ringparam,
 					      &kernel_ringparam, info->extack);
 	return ret < 0 ? ret : 1;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 net-next 2/5] net: ena: Make few cosmetic preparations to support large LLQ
  2023-03-09 13:13 [PATCH v4 net-next 0/5] Add tx push buf len param to ethtool Shay Agroskin
  2023-03-09 13:13 ` [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
@ 2023-03-09 13:13 ` Shay Agroskin
  2023-03-09 16:28   ` Simon Horman
  2023-03-09 13:13 ` [PATCH v4 net-next 3/5] net: ena: Add an option to configure large LLQ headers Shay Agroskin
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Shay Agroskin @ 2023-03-09 13:13 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, netdev
  Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
	Kiyanovski, Arthur, Dagan, Noam, Arinzon, David, Itzko, Shahar,
	Abboud, Osama

Move the ena_calc_io_queue_size() implementation closer to the beginning
of the file so that it can later be called from ena_device_init() without
adding a function declaration.

Also add an empty line at some spots to separate logical blocks in
functions.

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 133 ++++++++++---------
 1 file changed, 67 insertions(+), 66 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index d3999db7c6a2..372e33831323 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -3364,6 +3364,71 @@ static const struct net_device_ops ena_netdev_ops = {
 	.ndo_xdp_xmit		= ena_xdp_xmit,
 };
 
+static void ena_calc_io_queue_size(struct ena_adapter *adapter,
+				   struct ena_com_dev_get_features_ctx *get_feat_ctx)
+{
+	struct ena_admin_feature_llq_desc *llq = &get_feat_ctx->llq;
+	struct ena_com_dev *ena_dev = adapter->ena_dev;
+	u32 tx_queue_size = ENA_DEFAULT_RING_SIZE;
+	u32 rx_queue_size = ENA_DEFAULT_RING_SIZE;
+	u32 max_tx_queue_size;
+	u32 max_rx_queue_size;
+
+	if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+			&get_feat_ctx->max_queue_ext.max_queue_ext;
+		max_rx_queue_size = min_t(u32, max_queue_ext->max_rx_cq_depth,
+					  max_queue_ext->max_rx_sq_depth);
+		max_tx_queue_size = max_queue_ext->max_tx_cq_depth;
+
+		if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  llq->max_llq_depth);
+		else
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  max_queue_ext->max_tx_sq_depth);
+
+		adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+						 max_queue_ext->max_per_packet_tx_descs);
+		adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+						 max_queue_ext->max_per_packet_rx_descs);
+	} else {
+		struct ena_admin_queue_feature_desc *max_queues =
+			&get_feat_ctx->max_queues;
+		max_rx_queue_size = min_t(u32, max_queues->max_cq_depth,
+					  max_queues->max_sq_depth);
+		max_tx_queue_size = max_queues->max_cq_depth;
+
+		if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  llq->max_llq_depth);
+		else
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  max_queues->max_sq_depth);
+
+		adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+						 max_queues->max_packet_tx_descs);
+		adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+						 max_queues->max_packet_rx_descs);
+	}
+
+	max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
+	max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
+
+	tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
+				  max_tx_queue_size);
+	rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
+				  max_rx_queue_size);
+
+	tx_queue_size = rounddown_pow_of_two(tx_queue_size);
+	rx_queue_size = rounddown_pow_of_two(rx_queue_size);
+
+	adapter->max_tx_ring_size  = max_tx_queue_size;
+	adapter->max_rx_ring_size = max_rx_queue_size;
+	adapter->requested_tx_ring_size = tx_queue_size;
+	adapter->requested_rx_ring_size = rx_queue_size;
+}
+
 static int ena_device_validate_params(struct ena_adapter *adapter,
 				      struct ena_com_dev_get_features_ctx *get_feat_ctx)
 {
@@ -4164,72 +4229,6 @@ static void ena_release_bars(struct ena_com_dev *ena_dev, struct pci_dev *pdev)
 	pci_release_selected_regions(pdev, release_bars);
 }
 
-
-static void ena_calc_io_queue_size(struct ena_adapter *adapter,
-				   struct ena_com_dev_get_features_ctx *get_feat_ctx)
-{
-	struct ena_admin_feature_llq_desc *llq = &get_feat_ctx->llq;
-	struct ena_com_dev *ena_dev = adapter->ena_dev;
-	u32 tx_queue_size = ENA_DEFAULT_RING_SIZE;
-	u32 rx_queue_size = ENA_DEFAULT_RING_SIZE;
-	u32 max_tx_queue_size;
-	u32 max_rx_queue_size;
-
-	if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
-		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
-			&get_feat_ctx->max_queue_ext.max_queue_ext;
-		max_rx_queue_size = min_t(u32, max_queue_ext->max_rx_cq_depth,
-					  max_queue_ext->max_rx_sq_depth);
-		max_tx_queue_size = max_queue_ext->max_tx_cq_depth;
-
-		if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
-			max_tx_queue_size = min_t(u32, max_tx_queue_size,
-						  llq->max_llq_depth);
-		else
-			max_tx_queue_size = min_t(u32, max_tx_queue_size,
-						  max_queue_ext->max_tx_sq_depth);
-
-		adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
-						 max_queue_ext->max_per_packet_tx_descs);
-		adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
-						 max_queue_ext->max_per_packet_rx_descs);
-	} else {
-		struct ena_admin_queue_feature_desc *max_queues =
-			&get_feat_ctx->max_queues;
-		max_rx_queue_size = min_t(u32, max_queues->max_cq_depth,
-					  max_queues->max_sq_depth);
-		max_tx_queue_size = max_queues->max_cq_depth;
-
-		if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
-			max_tx_queue_size = min_t(u32, max_tx_queue_size,
-						  llq->max_llq_depth);
-		else
-			max_tx_queue_size = min_t(u32, max_tx_queue_size,
-						  max_queues->max_sq_depth);
-
-		adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
-						 max_queues->max_packet_tx_descs);
-		adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
-						 max_queues->max_packet_rx_descs);
-	}
-
-	max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
-	max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
-
-	tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
-				  max_tx_queue_size);
-	rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
-				  max_rx_queue_size);
-
-	tx_queue_size = rounddown_pow_of_two(tx_queue_size);
-	rx_queue_size = rounddown_pow_of_two(rx_queue_size);
-
-	adapter->max_tx_ring_size  = max_tx_queue_size;
-	adapter->max_rx_ring_size = max_rx_queue_size;
-	adapter->requested_tx_ring_size = tx_queue_size;
-	adapter->requested_rx_ring_size = rx_queue_size;
-}
-
 /* ena_probe - Device Initialization Routine
  * @pdev: PCI device information struct
  * @ent: entry in ena_pci_tbl
@@ -4366,6 +4365,7 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 			"Failed to query interrupt moderation feature\n");
 		goto err_device_destroy;
 	}
+
 	ena_init_io_rings(adapter,
 			  0,
 			  adapter->xdp_num_queues +
@@ -4486,6 +4486,7 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown)
 	rtnl_lock(); /* lock released inside the below if-else block */
 	adapter->reset_reason = ENA_REGS_RESET_SHUTDOWN;
 	ena_destroy_device(adapter, true);
+
 	if (shutdown) {
 		netif_device_detach(netdev);
 		dev_close(netdev);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 net-next 3/5] net: ena: Add an option to configure large LLQ headers
  2023-03-09 13:13 [PATCH v4 net-next 0/5] Add tx push buf len param to ethtool Shay Agroskin
  2023-03-09 13:13 ` [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
  2023-03-09 13:13 ` [PATCH v4 net-next 2/5] net: ena: Make few cosmetic preparations to support large LLQ Shay Agroskin
@ 2023-03-09 13:13 ` Shay Agroskin
  2023-03-09 16:28   ` Simon Horman
  2023-03-09 13:13 ` [PATCH v4 net-next 4/5] net: ena: Recalculate TX state variables every device reset Shay Agroskin
  2023-03-09 13:13 ` [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
  4 siblings, 1 reply; 19+ messages in thread
From: Shay Agroskin @ 2023-03-09 13:13 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, netdev
  Cc: David Arinzon, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
	Kiyanovski, Arthur, Dagan, Noam, Agroskin, Shay, Itzko, Shahar,
	Abboud, Osama

From: David Arinzon <darinzon@amazon.com>

Allow configuring the device with large LLQ headers. The Low Latency
Queue (LLQ) allows the driver to write the first N bytes of the packet,
along with the rest of the TX descriptors, directly into the device (N can
be either 96 or 224 when the large LLQ headers configuration is used).

Having L4 TCP/UDP headers contained in the first 96 bytes of the packet
is required to get maximum performance from the device.
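
For reference, the 96 and 224 byte figures follow from the LLQ entry sizes
minus the room reserved for two TX descriptors per entry (see the
ENA_LLQ_HEADER and ENA_LLQ_LARGE_HEADER definitions in patch 5/5):
assuming 16-byte TX descriptors, 128 - 2 * 16 = 96 and 256 - 2 * 16 = 224.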

Signed-off-by: David Arinzon <darinzon@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 90 +++++++++++++++-----
 drivers/net/ethernet/amazon/ena/ena_netdev.h |  8 ++
 2 files changed, 76 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 372e33831323..b97ecae439b2 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -3374,6 +3374,15 @@ static void ena_calc_io_queue_size(struct ena_adapter *adapter,
 	u32 max_tx_queue_size;
 	u32 max_rx_queue_size;
 
+	/* If this function is called after driver load, the ring sizes have already
+	 * been configured. Take it into account when recalculating ring size.
+	 */
+	if (adapter->tx_ring->ring_size)
+		tx_queue_size = adapter->tx_ring->ring_size;
+
+	if (adapter->rx_ring->ring_size)
+		rx_queue_size = adapter->rx_ring->ring_size;
+
 	if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
 		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
 			&get_feat_ctx->max_queue_ext.max_queue_ext;
@@ -3415,6 +3424,24 @@ static void ena_calc_io_queue_size(struct ena_adapter *adapter,
 	max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
 	max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
 
+	/* When forcing large headers, we multiply the entry size by 2, and therefore divide
+	 * the queue size by 2, leaving the amount of memory used by the queues unchanged.
+	 */
+	if (adapter->large_llq_header_enabled) {
+		if ((llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
+		    ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+			max_tx_queue_size /= 2;
+			dev_info(&adapter->pdev->dev,
+				 "Forcing large headers and decreasing maximum TX queue size to %d\n",
+				 max_tx_queue_size);
+		} else {
+			dev_err(&adapter->pdev->dev,
+				"Forcing large headers failed: LLQ is disabled or device does not support large headers\n");
+
+			adapter->large_llq_header_enabled = false;
+		}
+	}
+
 	tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
 				  max_tx_queue_size);
 	rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
@@ -3452,13 +3479,30 @@ static int ena_device_validate_params(struct ena_adapter *adapter,
 	return 0;
 }
 
-static void set_default_llq_configurations(struct ena_llq_configurations *llq_config)
+static void set_default_llq_configurations(struct ena_adapter *adapter,
+					   struct ena_llq_configurations *llq_config,
+					   struct ena_admin_feature_llq_desc *llq)
 {
+	struct ena_com_dev *ena_dev = adapter->ena_dev;
+
 	llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER;
 	llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY;
 	llq_config->llq_num_decs_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2;
-	llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
-	llq_config->llq_ring_entry_size_value = 128;
+
+	adapter->large_llq_header_supported =
+		!!(ena_dev->supported_features & BIT(ENA_ADMIN_LLQ));
+	adapter->large_llq_header_supported &=
+		!!(llq->entry_size_ctrl_supported &
+			ENA_ADMIN_LIST_ENTRY_SIZE_256B);
+
+	if ((llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
+	    adapter->large_llq_header_enabled) {
+		llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_256B;
+		llq_config->llq_ring_entry_size_value = 256;
+	} else {
+		llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
+		llq_config->llq_ring_entry_size_value = 128;
+	}
 }
 
 static int ena_set_queues_placement_policy(struct pci_dev *pdev,
@@ -3477,6 +3521,13 @@ static int ena_set_queues_placement_policy(struct pci_dev *pdev,
 		return 0;
 	}
 
+	if (!ena_dev->mem_bar) {
+		netdev_err(ena_dev->net_device,
+			   "LLQ is advertised as supported but device doesn't expose mem bar\n");
+		ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
+		return 0;
+	}
+
 	rc = ena_com_config_dev_mode(ena_dev, llq, llq_default_configurations);
 	if (unlikely(rc)) {
 		dev_err(&pdev->dev,
@@ -3492,15 +3543,8 @@ static int ena_map_llq_mem_bar(struct pci_dev *pdev, struct ena_com_dev *ena_dev
 {
 	bool has_mem_bar = !!(bars & BIT(ENA_MEM_BAR));
 
-	if (!has_mem_bar) {
-		if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
-			dev_err(&pdev->dev,
-				"ENA device does not expose LLQ bar. Fallback to host mode policy.\n");
-			ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
-		}
-
+	if (!has_mem_bar)
 		return 0;
-	}
 
 	ena_dev->mem_bar = devm_ioremap_wc(&pdev->dev,
 					   pci_resource_start(pdev, ENA_MEM_BAR),
@@ -3512,10 +3556,11 @@ static int ena_map_llq_mem_bar(struct pci_dev *pdev, struct ena_com_dev *ena_dev
 	return 0;
 }
 
-static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
+static int ena_device_init(struct ena_adapter *adapter, struct pci_dev *pdev,
 			   struct ena_com_dev_get_features_ctx *get_feat_ctx,
 			   bool *wd_state)
 {
+	struct ena_com_dev *ena_dev = adapter->ena_dev;
 	struct ena_llq_configurations llq_config;
 	struct device *dev = &pdev->dev;
 	bool readless_supported;
@@ -3600,7 +3645,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
 
 	*wd_state = !!(aenq_groups & BIT(ENA_ADMIN_KEEP_ALIVE));
 
-	set_default_llq_configurations(&llq_config);
+	set_default_llq_configurations(adapter, &llq_config, &get_feat_ctx->llq);
 
 	rc = ena_set_queues_placement_policy(pdev, ena_dev, &get_feat_ctx->llq,
 					     &llq_config);
@@ -3609,6 +3654,8 @@ static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
 		goto err_admin_init;
 	}
 
+	ena_calc_io_queue_size(adapter, get_feat_ctx);
+
 	return 0;
 
 err_admin_init:
@@ -3707,7 +3754,7 @@ static int ena_restore_device(struct ena_adapter *adapter)
 	int rc;
 
 	set_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
-	rc = ena_device_init(ena_dev, adapter->pdev, &get_feat_ctx, &wd_state);
+	rc = ena_device_init(adapter, adapter->pdev, &get_feat_ctx, &wd_state);
 	if (rc) {
 		dev_err(&pdev->dev, "Can not initialize device\n");
 		goto err;
@@ -4311,18 +4358,18 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	pci_set_drvdata(pdev, adapter);
 
-	rc = ena_device_init(ena_dev, pdev, &get_feat_ctx, &wd_state);
+	rc = ena_map_llq_mem_bar(pdev, ena_dev, bars);
 	if (rc) {
-		dev_err(&pdev->dev, "ENA device init failed\n");
-		if (rc == -ETIME)
-			rc = -EPROBE_DEFER;
+		dev_err(&pdev->dev, "ENA LLQ bar mapping failed\n");
 		goto err_netdev_destroy;
 	}
 
-	rc = ena_map_llq_mem_bar(pdev, ena_dev, bars);
+	rc = ena_device_init(adapter, pdev, &get_feat_ctx, &wd_state);
 	if (rc) {
-		dev_err(&pdev->dev, "ENA llq bar mapping failed\n");
-		goto err_device_destroy;
+		dev_err(&pdev->dev, "ENA device init failed\n");
+		if (rc == -ETIME)
+			rc = -EPROBE_DEFER;
+		goto err_netdev_destroy;
 	}
 
 	/* Initial TX and RX interrupt delay. Assumes 1 usec granularity.
@@ -4332,7 +4379,6 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	ena_dev->intr_moder_rx_interval = ENA_INTR_INITIAL_RX_INTERVAL_USECS;
 	ena_dev->intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
 	max_num_io_queues = ena_calc_max_io_queue_num(pdev, ena_dev, &get_feat_ctx);
-	ena_calc_io_queue_size(adapter, &get_feat_ctx);
 	if (unlikely(!max_num_io_queues)) {
 		rc = -EFAULT;
 		goto err_device_destroy;
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 2cb141079474..3e8c4a66c7d8 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -334,6 +334,14 @@ struct ena_adapter {
 
 	u32 msg_enable;
 
+	/* large_llq_header_enabled is used for two purposes:
+	 * 1. Indicates that large LLQ has been requested.
+	 * 2. Indicates whether large LLQ is set or not after device
+	 *    initialization / configuration.
+	 */
+	bool large_llq_header_enabled;
+	bool large_llq_header_supported;
+
 	u16 max_tx_sgl_size;
 	u16 max_rx_sgl_size;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 net-next 4/5] net: ena: Recalculate TX state variables every device reset
  2023-03-09 13:13 [PATCH v4 net-next 0/5] Add tx push buf len param to ethtool Shay Agroskin
                   ` (2 preceding siblings ...)
  2023-03-09 13:13 ` [PATCH v4 net-next 3/5] net: ena: Add an option to configure large LLQ headers Shay Agroskin
@ 2023-03-09 13:13 ` Shay Agroskin
  2023-03-09 13:13 ` [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
  4 siblings, 0 replies; 19+ messages in thread
From: Shay Agroskin @ 2023-03-09 13:13 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, netdev
  Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
	Kiyanovski, Arthur, Dagan, Noam, Arinzon, David, Itzko, Shahar,
	Abboud, Osama, Simon Horman

With the ability to modify the LLQ entry size, the size of the packet's
payload that can be written directly to the device changes.
This patch makes the driver recalculate this information on every device
negotiation (also called device reset).

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index b97ecae439b2..537951ca4d61 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -3750,8 +3750,9 @@ static int ena_restore_device(struct ena_adapter *adapter)
 	struct ena_com_dev_get_features_ctx get_feat_ctx;
 	struct ena_com_dev *ena_dev = adapter->ena_dev;
 	struct pci_dev *pdev = adapter->pdev;
+	struct ena_ring *txr;
+	int rc, count, i;
 	bool wd_state;
-	int rc;
 
 	set_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
 	rc = ena_device_init(adapter, adapter->pdev, &get_feat_ctx, &wd_state);
@@ -3761,6 +3762,13 @@ static int ena_restore_device(struct ena_adapter *adapter)
 	}
 	adapter->wd_state = wd_state;
 
+	count =  adapter->xdp_num_queues + adapter->num_io_queues;
+	for (i = 0 ; i < count; i++) {
+		txr = &adapter->tx_ring[i];
+		txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type;
+		txr->tx_max_header_size = ena_dev->tx_max_header_size;
+	}
+
 	rc = ena_device_validate_params(adapter, &get_feat_ctx);
 	if (rc) {
 		dev_err(&pdev->dev, "Validation of device parameters failed\n");
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len
  2023-03-09 13:13 [PATCH v4 net-next 0/5] Add tx push buf len param to ethtool Shay Agroskin
                   ` (3 preceding siblings ...)
  2023-03-09 13:13 ` [PATCH v4 net-next 4/5] net: ena: Recalculate TX state variables every device reset Shay Agroskin
@ 2023-03-09 13:13 ` Shay Agroskin
  2023-03-09 16:28   ` Simon Horman
  2023-03-10  6:54   ` Jakub Kicinski
  4 siblings, 2 replies; 19+ messages in thread
From: Shay Agroskin @ 2023-03-09 13:13 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, netdev
  Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
	Kiyanovski, Arthur, Dagan, Noam, Arinzon, David, Itzko, Shahar,
	Abboud, Osama

The ENA driver allows for two distinct values for the number of bytes
of the packet's payload that can be written directly to the device.

For a value of 224, the driver turns on Large LLQ Header mode, in which
the first 224 bytes of the packet's payload are written to the LLQ.
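
As an illustrative example (with eth0 as a placeholder interface name),
the mode can be toggled from userspace with:

    # ethtool -G eth0 tx-push-buf-len 224    (switch to large LLQ headers)
    # ethtool -G eth0 tx-push-buf-len 96     (back to the default entry size)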

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_eth_com.h |  4 ++
 drivers/net/ethernet/amazon/ena/ena_ethtool.c | 57 +++++++++++++++++--
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 26 +++++++--
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |  7 ++-
 4 files changed, 82 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.h b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
index 689313ee25a8..8bec31fa816c 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
@@ -10,6 +10,10 @@
 
 /* head update threshold in units of (queue size / ENA_COMP_HEAD_THRESH) */
 #define ENA_COMP_HEAD_THRESH 4
+/* we allow 2 DMA descriptors per LLQ entry */
+#define ENA_LLQ_ENTRY_DESC_CHUNK_SIZE	(2 * sizeof(struct ena_eth_io_tx_desc))
+#define ENA_LLQ_HEADER		(128 - ENA_LLQ_ENTRY_DESC_CHUNK_SIZE)
+#define ENA_LLQ_LARGE_HEADER	(256 - ENA_LLQ_ENTRY_DESC_CHUNK_SIZE)
 
 struct ena_com_tx_ctx {
 	struct ena_com_tx_meta ena_meta;
diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 8da79eedc057..f71ca72d641b 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -476,6 +476,19 @@ static void ena_get_ringparam(struct net_device *netdev,
 
 	ring->tx_max_pending = adapter->max_tx_ring_size;
 	ring->rx_max_pending = adapter->max_rx_ring_size;
+	if (adapter->ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+		bool large_llq_supported = adapter->large_llq_header_supported;
+
+		kernel_ring->tx_push_buf_len = adapter->ena_dev->tx_max_header_size;
+		if (large_llq_supported)
+			kernel_ring->tx_push_buf_max_len = ENA_LLQ_LARGE_HEADER;
+		else
+			kernel_ring->tx_push_buf_max_len = ENA_LLQ_HEADER;
+	} else {
+		kernel_ring->tx_push_buf_max_len = 0;
+		kernel_ring->tx_push_buf_len = 0;
+	}
+
 	ring->tx_pending = adapter->tx_ring[0].ring_size;
 	ring->rx_pending = adapter->rx_ring[0].ring_size;
 }
@@ -486,7 +499,8 @@ static int ena_set_ringparam(struct net_device *netdev,
 			     struct netlink_ext_ack *extack)
 {
 	struct ena_adapter *adapter = netdev_priv(netdev);
-	u32 new_tx_size, new_rx_size;
+	u32 new_tx_size, new_rx_size, new_tx_push_buf_len;
+	bool changed = false;
 
 	new_tx_size = ring->tx_pending < ENA_MIN_RING_SIZE ?
 			ENA_MIN_RING_SIZE : ring->tx_pending;
@@ -496,11 +510,45 @@ static int ena_set_ringparam(struct net_device *netdev,
 			ENA_MIN_RING_SIZE : ring->rx_pending;
 	new_rx_size = rounddown_pow_of_two(new_rx_size);
 
-	if (new_tx_size == adapter->requested_tx_ring_size &&
-	    new_rx_size == adapter->requested_rx_ring_size)
+	changed |= new_tx_size != adapter->requested_tx_ring_size ||
+		   new_rx_size != adapter->requested_rx_ring_size;
+
+	/* This value is ignored if LLQ is not supported */
+	new_tx_push_buf_len = adapter->ena_dev->tx_max_header_size;
+
+	/* Validate that the push buffer is supported on the underlying device */
+	if (kernel_ring->tx_push_buf_len) {
+		enum ena_admin_placement_policy_type placement;
+
+		new_tx_push_buf_len = kernel_ring->tx_push_buf_len;
+
+		placement = adapter->ena_dev->tx_mem_queue_type;
+		if (placement == ENA_ADMIN_PLACEMENT_POLICY_HOST)
+			return -EOPNOTSUPP;
+
+		if (new_tx_push_buf_len != ENA_LLQ_HEADER &&
+		    new_tx_push_buf_len != ENA_LLQ_LARGE_HEADER) {
+			bool large_llq_sup = adapter->large_llq_header_supported;
+			char large_llq_size_str[40];
+
+			snprintf(large_llq_size_str, 40, ", %lu", ENA_LLQ_LARGE_HEADER);
+
+			NL_SET_ERR_MSG_FMT_MOD(extack,
+					       "Supported tx push buff values: [%lu%s]",
+					       ENA_LLQ_HEADER,
+					       large_llq_sup ? large_llq_size_str : "");
+
+			return -EINVAL;
+		}
+
+		changed |= new_tx_push_buf_len != adapter->ena_dev->tx_max_header_size;
+	}
+
+	if (!changed)
 		return 0;
 
-	return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
+	return ena_update_queue_params(adapter, new_tx_size, new_rx_size,
+				       new_tx_push_buf_len);
 }
 
 static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
@@ -900,6 +948,7 @@ static int ena_set_tunable(struct net_device *netdev,
 static const struct ethtool_ops ena_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
 				     ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
+	.supported_ring_params	= ETHTOOL_RING_USE_TX_PUSH_BUF_LEN,
 	.get_link_ksettings	= ena_get_link_ksettings,
 	.get_drvinfo		= ena_get_drvinfo,
 	.get_msglevel		= ena_get_msglevel,
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 537951ca4d61..776a35c27c53 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -2809,11 +2809,13 @@ static int ena_close(struct net_device *netdev)
 	return 0;
 }
 
-int ena_update_queue_sizes(struct ena_adapter *adapter,
-			   u32 new_tx_size,
-			   u32 new_rx_size)
+int ena_update_queue_params(struct ena_adapter *adapter,
+			    u32 new_tx_size,
+			    u32 new_rx_size,
+			    u32 new_llq_header_len)
 {
-	bool dev_was_up;
+	bool dev_was_up, large_llq_changed = false;
+	int rc = 0;
 
 	dev_was_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
 	ena_close(adapter->netdev);
@@ -2823,7 +2825,21 @@ int ena_update_queue_sizes(struct ena_adapter *adapter,
 			  0,
 			  adapter->xdp_num_queues +
 			  adapter->num_io_queues);
-	return dev_was_up ? ena_up(adapter) : 0;
+
+	large_llq_changed = adapter->ena_dev->tx_mem_queue_type ==
+			    ENA_ADMIN_PLACEMENT_POLICY_DEV;
+	large_llq_changed &=
+		new_llq_header_len != adapter->ena_dev->tx_max_header_size;
+
+	/* a check that the configuration is valid is done by caller */
+	if (large_llq_changed) {
+		adapter->large_llq_header_enabled = !adapter->large_llq_header_enabled;
+
+		ena_destroy_device(adapter, false);
+		rc = ena_restore_device(adapter);
+	}
+
+	return dev_was_up && !rc ? ena_up(adapter) : rc;
 }
 
 int ena_set_rx_copybreak(struct ena_adapter *adapter, u32 rx_copybreak)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 3e8c4a66c7d8..5a0d4ee76172 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -396,9 +396,10 @@ void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
 
 int ena_update_hw_stats(struct ena_adapter *adapter);
 
-int ena_update_queue_sizes(struct ena_adapter *adapter,
-			   u32 new_tx_size,
-			   u32 new_rx_size);
+int ena_update_queue_params(struct ena_adapter *adapter,
+			    u32 new_tx_size,
+			    u32 new_rx_size,
+			    u32 new_llq_header_len);
 
 int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count);
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-09 13:13 ` [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
@ 2023-03-09 16:25   ` Simon Horman
  2023-03-09 17:15   ` Gal Pressman
  1 sibling, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-09 16:25 UTC (permalink / raw)
  To: Shay Agroskin
  Cc: David Miller, Jakub Kicinski, netdev, Woodhouse, David,
	Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson,
	Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi,
	Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On Thu, Mar 09, 2023 at 03:13:15PM +0200, Shay Agroskin wrote:
> This attribute, which is part of ethtool's ring param configuration,
> allows the user to specify the maximum number of bytes of the packet's
> payload that can be written directly to the device.
> 
> Example usage:
>     # ethtool -G [interface] tx-push-buf-len [number of bytes]
> 
> Co-developed-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 2/5] net: ena: Make few cosmetic preparations to support large LLQ
  2023-03-09 13:13 ` [PATCH v4 net-next 2/5] net: ena: Make few cosmetic preparations to support large LLQ Shay Agroskin
@ 2023-03-09 16:28   ` Simon Horman
  0 siblings, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-09 16:28 UTC (permalink / raw)
  To: Shay Agroskin
  Cc: David Miller, Jakub Kicinski, netdev, Woodhouse, David,
	Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson,
	Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi,
	Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On Thu, Mar 09, 2023 at 03:13:16PM +0200, Shay Agroskin wrote:
> Move the ena_calc_io_queue_size() implementation closer to the beginning
> of the file so that it can later be called from ena_device_init() without
> adding a function declaration.
> 
> Also add an empty line at some spots to separate logical blocks in
> functions.
> 
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 3/5] net: ena: Add an option to configure large LLQ headers
  2023-03-09 13:13 ` [PATCH v4 net-next 3/5] net: ena: Add an option to configure large LLQ headers Shay Agroskin
@ 2023-03-09 16:28   ` Simon Horman
  0 siblings, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-09 16:28 UTC (permalink / raw)
  To: Shay Agroskin
  Cc: David Miller, Jakub Kicinski, netdev, David Arinzon, Woodhouse,
	David, Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara,
	Wilson, Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel,
	Saidi, Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan,
	Noam, Itzko, Shahar, Abboud, Osama

On Thu, Mar 09, 2023 at 03:13:17PM +0200, Shay Agroskin wrote:
> From: David Arinzon <darinzon@amazon.com>
> 
> Allow configuring the device with large LLQ headers. The Low Latency
> Queue (LLQ) allows the driver to write the first N bytes of the packet,
> along with the rest of the TX descriptors, directly into the device (N can
> be either 96 or 224 when the large LLQ headers configuration is used).
> 
> Having L4 TCP/UDP headers contained in the first 96 bytes of the packet
> is required to get maximum performance from the device.
> 
> Signed-off-by: David Arinzon <darinzon@amazon.com>
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len
  2023-03-09 13:13 ` [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
@ 2023-03-09 16:28   ` Simon Horman
  2023-03-10  6:54   ` Jakub Kicinski
  1 sibling, 0 replies; 19+ messages in thread
From: Simon Horman @ 2023-03-09 16:28 UTC (permalink / raw)
  To: Shay Agroskin
  Cc: David Miller, Jakub Kicinski, netdev, Woodhouse, David,
	Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson,
	Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi,
	Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On Thu, Mar 09, 2023 at 03:13:19PM +0200, Shay Agroskin wrote:
> The ENA driver allows for two distinct values for the number of bytes
> of the packet's payload that can be written directly to the device.
> 
> For a value of 224, the driver turns on Large LLQ Header mode, in which
> the first 224 bytes of the packet's payload are written to the LLQ.
> 
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-09 13:13 ` [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
  2023-03-09 16:25   ` Simon Horman
@ 2023-03-09 17:15   ` Gal Pressman
  2023-03-10  6:53     ` Jakub Kicinski
  2023-03-14 15:46     ` Shay Agroskin
  1 sibling, 2 replies; 19+ messages in thread
From: Gal Pressman @ 2023-03-09 17:15 UTC (permalink / raw)
  To: Shay Agroskin, David Miller, Jakub Kicinski, netdev
  Cc: Woodhouse, David, Machulsky, Zorik, Matushevsky, Alexander,
	Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara, Nafea,
	Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
	Kiyanovski, Arthur, Dagan, Noam, Arinzon, David, Itzko, Shahar,
	Abboud, Osama

On 09/03/2023 15:13, Shay Agroskin wrote:
> +``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` specifies the maximum number of bytes of a
> +transmitted packet a driver can push directly to the underlying device
> +('push' mode). Pushing some of the payload bytes to the device has the
> +advantages of reducing latency for small packets by avoiding DMA mapping (same
> +as ``ETHTOOL_A_RINGS_TX_PUSH`` parameter) as well as allowing the underlying
> +device to process packet headers ahead of fetching its payload.
> +This can help the device to make fast actions based on the packet's headers.

I know Jakub prefers the new parameter, but the description of this
still sounds extremely similar to TX copybreak to me..
TX copybreak was traditionally used to copy packets to preallocated DMA
buffers, but this could be implemented as copying the packet to the
(preallocated) WQE's inline part. That usually means DMA memory, but
could also be device memory in this ENA LLQ case.

Are we drawing a line that TX copybreak is the threshold for DMA memory
and tx_push_buf_len is the threshold for device memory?
Are they mutually exclusive?

BTW, as I mentioned in v1, ENA doesn't advertise tx_push, which is a bit
strange given the fact that it now adds tx_push_buf_len support.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-09 17:15   ` Gal Pressman
@ 2023-03-10  6:53     ` Jakub Kicinski
  2023-03-12 12:41       ` Gal Pressman
  2023-03-14 15:46     ` Shay Agroskin
  1 sibling, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2023-03-10  6:53 UTC (permalink / raw)
  To: Gal Pressman
  Cc: Shay Agroskin, David Miller, netdev, Woodhouse, David, Machulsky,
	Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
	Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
	Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On Thu, 9 Mar 2023 19:15:43 +0200 Gal Pressman wrote:
> I know Jakub prefers the new parameter, but the description of this
> still sounds extremely similar to TX copybreak to me..
> TX copybreak was traditionally used to copy packets to preallocated DMA
> buffers, but this could be implemented as copying the packet to the
> (preallocated) WQE's inline part. That usually means DMA memory, but
> could also be device memory in this ENA LLQ case.
> 
> Are we drawing a line that TX copybreak is the threshold for DMA memory
> and tx_push_buf_len is the threshold for device memory?

Pretty much, yes. Not an amazing distinction but since TX copybreak can
already mean two different things (inline or DMA buf) I'd err on 
the side of not overloading it with another one. 
And Pensando needed a similar knob? Maybe I'm misremembering now.

> Are they mutually exclusive?

Not sure.

> BTW, as I mentioned in v1, ENA doesn't advertise tx_push, which is a bit
> strange given the fact that it now adds tx_push_buf_len support.

Fair point.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len
  2023-03-09 13:13 ` [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
  2023-03-09 16:28   ` Simon Horman
@ 2023-03-10  6:54   ` Jakub Kicinski
  1 sibling, 0 replies; 19+ messages in thread
From: Jakub Kicinski @ 2023-03-10  6:54 UTC (permalink / raw)
  To: Shay Agroskin
  Cc: David Miller, netdev, Woodhouse, David, Machulsky, Zorik,
	Matushevsky, Alexander, Saeed Bshara, Wilson, Matt, Liguori,
	Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
	Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On Thu, 9 Mar 2023 15:13:19 +0200 Shay Agroskin wrote:
> +			snprintf(large_llq_size_str, 40, ", %lu", ENA_LLQ_LARGE_HEADER);
> +
> +			NL_SET_ERR_MSG_FMT_MOD(extack,
> +					       "Supported tx push buff values: [%lu%s]",
> +					       ENA_LLQ_HEADER,
> +					       large_llq_sup ? large_llq_size_str : "");
> +

You need to fix up the formats, it generates build warnings on 32 bit.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-10  6:53     ` Jakub Kicinski
@ 2023-03-12 12:41       ` Gal Pressman
  2023-03-13 19:09         ` Jakub Kicinski
  0 siblings, 1 reply; 19+ messages in thread
From: Gal Pressman @ 2023-03-12 12:41 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Shay Agroskin, David Miller, netdev, Woodhouse, David, Machulsky,
	Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
	Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
	Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On 10/03/2023 8:53, Jakub Kicinski wrote:
> On Thu, 9 Mar 2023 19:15:43 +0200 Gal Pressman wrote:
>> I know Jakub prefers the new parameter, but the description of this
>> still sounds extremely similar to TX copybreak to me..
>> TX copybreak was traditionally used to copy packets to preallocated DMA
>> buffers, but this could be implemented as copying the packet to the
>> (preallocated) WQE's inline part. That usually means DMA memory, but
>> could also be device memory in this ENA LLQ case.
>>
>> Are we drawing a line that TX copybreak is the threshold for DMA memory
>> and tx_push_buf_len is the threshold for device memory?
> 
> Pretty much, yes. Not an amazing distinction but since TX copybreak can
> already mean two different things (inline or DMA buf) I'd err on 
> the side of not overloading it with another one. 

Can we document that please?

> And Pensando needed a similar knob? Maybe I'm misremembering now.
> 
>> Are they mutually exclusive?
> 
> Not sure.

Me neither, but it's not clear who takes precedence when both are set.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-12 12:41       ` Gal Pressman
@ 2023-03-13 19:09         ` Jakub Kicinski
  2023-03-14 15:38           ` Shay Agroskin
  0 siblings, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2023-03-13 19:09 UTC (permalink / raw)
  To: Gal Pressman
  Cc: Shay Agroskin, David Miller, netdev, Woodhouse, David, Machulsky,
	Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
	Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
	Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On Sun, 12 Mar 2023 14:41:39 +0200 Gal Pressman wrote:
> On 10/03/2023 8:53, Jakub Kicinski wrote:
> > On Thu, 9 Mar 2023 19:15:43 +0200 Gal Pressman wrote:  
> >> I know Jakub prefers the new parameter, but the description of this
> >> still sounds extremely similar to TX copybreak to me..
> >> TX copybreak was traditionally used to copy packets to preallocated DMA
> >> buffers, but this could be implemented as copying the packet to the
> >> (preallocated) WQE's inline part. That usually means DMA memory, but
> >> could also be device memory in this ENA LLQ case.
> >>
> >> Are we drawing a line that TX copybreak is the threshold for DMA memory
> >> and tx_push_buf_len is the threshold for device memory?  
> > 
> > Pretty much, yes. Not an amazing distinction but since TX copybreak can
> > already mean two different things (inline or DMA buf) I'd err on 
> > the side of not overloading it with another one.   
> 
> Can we document that please?

Shay, could you add a paragraph in the docs regarding copybreak in v5?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-13 19:09         ` Jakub Kicinski
@ 2023-03-14 15:38           ` Shay Agroskin
  2023-03-16 11:57             ` Gal Pressman
  0 siblings, 1 reply; 19+ messages in thread
From: Shay Agroskin @ 2023-03-14 15:38 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Gal Pressman, David Miller, netdev, Woodhouse, David, Machulsky,
	Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
	Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
	Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama


Jakub Kicinski <kuba@kernel.org> writes:

> On Sun, 12 Mar 2023 14:41:39 +0200 Gal Pressman wrote:
>> On 10/03/2023 8:53, Jakub Kicinski wrote:
>> > On Thu, 9 Mar 2023 19:15:43 +0200 Gal Pressman wrote:
>> >> I know Jakub prefers the new parameter, but the description 
>> >> of this
>> >> still sounds extremely similar to TX copybreak to me..
>> >> TX copybreak was traditionally used to copy packets to 
>> >> preallocated DMA
>> >> buffers, but this could be implemented as copying the packet 
>> >> to the
>> >> (preallocated) WQE's inline part. That usually means DMA 
>> >> memory, but
>> >> could also be device memory in this ENA LLQ case.
>> >>
>> >> Are we drawing a line that TX copybreak is the threshold for 
>> >> DMA memory
>> >> and tx_push_buf_len is the threshold for device memory?
>> >
>> > Pretty much, yes. Not an amazing distinction but since TX 
>> > copybreak can
>> > already mean two different things (inline or DMA buf) I'd err 
>> > on
>> > the side of not overloading it with another one.
>>
>> Can we document that please?
>
> Shay, could you add a paragraph in the docs regarding copybreak 
> in v5?

Document that tx_copybreak defines the threshold below which the 
packet is copied into a preallocated DMA'ed buffer and that 
tx_push_buf defines the same but for device memory?
Are we sure we want to make this distinction? While the meaning 
of both params can overlap in their current definition, the 
motivation to use them is pretty different.
A driver can implement both for different purposes (and still copy 
both into the device).

I'll modify the documentation in the next version.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-09 17:15   ` Gal Pressman
  2023-03-10  6:53     ` Jakub Kicinski
@ 2023-03-14 15:46     ` Shay Agroskin
  1 sibling, 0 replies; 19+ messages in thread
From: Shay Agroskin @ 2023-03-14 15:46 UTC (permalink / raw)
  To: Gal Pressman
  Cc: David Miller, Jakub Kicinski, netdev, Woodhouse, David,
	Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson,
	Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi,
	Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama


Gal Pressman <gal@nvidia.com> writes:

> On 09/03/2023 15:13, Shay Agroskin wrote:
>> +``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` specifies the maximum 
>> number of bytes of a
>> +transmitted packet a driver can push directly to the 
>> underlying device
>> +('push' mode). Pushing some of the payload bytes to the device 
>> has the
>> +advantages of reducing latency for small packets by avoiding 
>> DMA mapping (same
>> +as ``ETHTOOL_A_RINGS_TX_PUSH`` parameter) as well as allowing 
>> the underlying
>> +device to process packet headers ahead of fetching its 
>> payload.
>> +This can help the device to make fast actions based on the 
>> packet's headers.
> ...
>
> BTW, as I mentioned in v1, ENA doesn't advertise tx_push, which 
> is a bit
> strange given the fact that it now adds tx_push_buf_len support.

Yup you're right, sorry for missing it. I'll add advertisement for 
tx_push support in the next version.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-14 15:38           ` Shay Agroskin
@ 2023-03-16 11:57             ` Gal Pressman
  2023-03-16 20:42               ` Jakub Kicinski
  0 siblings, 1 reply; 19+ messages in thread
From: Gal Pressman @ 2023-03-16 11:57 UTC (permalink / raw)
  To: Shay Agroskin, Jakub Kicinski
  Cc: David Miller, netdev, Woodhouse, David, Machulsky, Zorik,
	Matushevsky, Alexander, Saeed Bshara, Wilson, Matt, Liguori,
	Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
	Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On 14/03/2023 17:38, Shay Agroskin wrote:
> 
> Jakub Kicinski <kuba@kernel.org> writes:
> 
>> On Sun, 12 Mar 2023 14:41:39 +0200 Gal Pressman wrote:
>>> On 10/03/2023 8:53, Jakub Kicinski wrote:
>>> > On Thu, 9 Mar 2023 19:15:43 +0200 Gal Pressman wrote:
>>> >> I know Jakub prefers the new parameter, but the description of this
>>> >> still sounds extremely similar to TX copybreak to me..
>>> >> TX copybreak was traditionally used to copy packets to preallocated DMA
>>> >> buffers, but this could be implemented as copying the packet to the
>>> >> (preallocated) WQE's inline part. That usually means DMA memory, but
>>> >> could also be device memory in this ENA LLQ case.
>>> >>
>>> >> Are we drawing a line that TX copybreak is the threshold for DMA memory
>>> >> and tx_push_buf_len is the threshold for device memory?
>>> >
>>> > Pretty much, yes. Not an amazing distinction but since TX copybreak can
>>> > already mean two different things (inline or DMA buf) I'd err on
>>> > the side of not overloading it with another one.
>>>
>>> Can we document that please?
>>
>> Shay, could you add a paragraph in the docs regarding copybreak in v5?
> 
> Document that tx_copybreak defines the threshold below which the packet
> is copied into a preallocated DMA'ed buffer and that tx_push_buf defines
> the same but for device memory?
> Are we sure we want to make this distinction ? While the meaning of both
> params can overlap in their current definition, the motivation to use
> them is pretty different.
> A driver can implement both for different purposes (and still copy both
> into the device).

I don't understand what it means to implement both.

It's confusing because both parameters result in skipping the DMA map
operation, but their usage motivation is different?
What are we instructing our customers? Use copybreak when your iommu is
slow, but when should they use this new parameter?

IMO, a reasonable way to use both would be to only account for the
tx_push_buf_len when tx_push is enabled, otherwise use copybreak.
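
(A minimal sketch of that precedence, using hypothetical ring fields,
just to make the suggestion concrete; not taken from any driver:)

	/* tx_push_buf_len only matters when tx_push is enabled;
	 * otherwise fall back to the copybreak threshold.
	 */
	if (ring->tx_push)
		inline_len = min_t(u32, skb_headlen(skb),
				   ring->tx_push_buf_len);
	else
		inline_len = min_t(u32, skb_headlen(skb),
				   ring->tx_copybreak);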

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len
  2023-03-16 11:57             ` Gal Pressman
@ 2023-03-16 20:42               ` Jakub Kicinski
  0 siblings, 0 replies; 19+ messages in thread
From: Jakub Kicinski @ 2023-03-16 20:42 UTC (permalink / raw)
  To: Gal Pressman
  Cc: Shay Agroskin, David Miller, netdev, Woodhouse, David, Machulsky,
	Zorik, Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
	Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
	Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
	Arinzon, David, Itzko, Shahar, Abboud, Osama

On Thu, 16 Mar 2023 13:57:26 +0200 Gal Pressman wrote:
> On 14/03/2023 17:38, Shay Agroskin wrote:
> >> Shay, could you add a paragraph in the docs regarding copybreak in v5?  
> > 
> > Document that tx_copybreak defines the threshold below which the packet
> > is copied into a preallocated DMA'ed buffer and that tx_push_buf defines
> > the same but for device memory?
> > Are we sure we want to make this distinction ? While the meaning of both
> > params can overlap in their current definition, the motivation to use
> > them is pretty different.
> > A driver can implement both for different purposes (and still copy both
> > into the device).  
> 
> I don't understand what it means to implement both.

If the skb head is large you can copy part of it into the iomem and part
into a pre-mapped space.
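
(Purely illustrative, a rough sketch of such a split with made-up field
names; no driver is claimed to do exactly this:)

	/* Push the first bytes of the linear part to device iomem and
	 * bounce the rest, up to copybreak, into a pre-mapped host buffer.
	 */
	push_len = min_t(u32, skb_headlen(skb), tx_push_buf_len);
	memcpy_toio(txq->push_buf_io, skb->data, push_len);

	copy_len = min_t(u32, skb_headlen(skb) - push_len, tx_copybreak);
	if (copy_len)
		memcpy(txq->bounce_buf, skb->data + push_len, copy_len);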

> It's confusing because both parameters result in skipping the DMA map
> operation, but their usage motivation is different?
> What are we instructing our customers? Use copybreak when your iommu is
> slow, but when should they use this new parameter?

Your customers? Is mlx5 going to implement the iomem based push which
needs explicit slot size control?

> IMO, a reasonable way to use both would be to only account for the
> tx_push_buf_len when tx_push is enabled, otherwise use copybreak.

I thought Shay already agreed. Let's get the v5 out.

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2023-03-16 20:42 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-09 13:13 [PATCH v4 net-next 0/5] Add tx push buf len param to ethtool Shay Agroskin
2023-03-09 13:13 ` [PATCH v4 net-next 1/5] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
2023-03-09 16:25   ` Simon Horman
2023-03-09 17:15   ` Gal Pressman
2023-03-10  6:53     ` Jakub Kicinski
2023-03-12 12:41       ` Gal Pressman
2023-03-13 19:09         ` Jakub Kicinski
2023-03-14 15:38           ` Shay Agroskin
2023-03-16 11:57             ` Gal Pressman
2023-03-16 20:42               ` Jakub Kicinski
2023-03-14 15:46     ` Shay Agroskin
2023-03-09 13:13 ` [PATCH v4 net-next 2/5] net: ena: Make few cosmetic preparations to support large LLQ Shay Agroskin
2023-03-09 16:28   ` Simon Horman
2023-03-09 13:13 ` [PATCH v4 net-next 3/5] net: ena: Add an option to configure large LLQ headers Shay Agroskin
2023-03-09 16:28   ` Simon Horman
2023-03-09 13:13 ` [PATCH v4 net-next 4/5] net: ena: Recalculate TX state variables every device reset Shay Agroskin
2023-03-09 13:13 ` [PATCH v4 net-next 5/5] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
2023-03-09 16:28   ` Simon Horman
2023-03-10  6:54   ` Jakub Kicinski

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.