* [PATCH V1 net-next 0/6] Support for dynamic queue size changes
@ 2019-06-06 11:55 sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 1/6] net: ena: add MAX_QUEUES_EXT get feature admin command sameehj
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: sameehj @ 2019-06-06 11:55 UTC (permalink / raw)
  To: davem, netdev
  Cc: Sameeh Jubran, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

From: Sameeh Jubran <sameehj@amazon.com>

This patchset introduces the following:
* add a new admin command to support different queue sizes for Tx/Rx
* add support for modifying Tx/Rx queue sizes through ethtool
* allow queue allocation backoff when low on memory
* update driver version

Arthur Kiyanovski (1):
  net: ena: add MAX_QUEUES_EXT get feature admin command

Sameeh Jubran (5):
  net: ena: enable negotiating larger Rx ring size
  net: ena: make ethtool show correct current and max queue sizes
  net: ena: allow queue allocation backoff when low on memory
  net: ena: add ethtool function for changing io queue sizes
  net: ena: update driver version from 2.0.3 to 2.1.0

 .../net/ethernet/amazon/ena/ena_admin_defs.h  |  56 +++-
 drivers/net/ethernet/amazon/ena/ena_com.c     |  76 +++--
 drivers/net/ethernet/amazon/ena/ena_com.h     |   3 +
 drivers/net/ethernet/amazon/ena/ena_ethtool.c |  35 +-
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 303 +++++++++++++-----
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |  28 +-
 6 files changed, 382 insertions(+), 119 deletions(-)

-- 
2.17.1



* [PATCH V1 net-next 1/6] net: ena: add MAX_QUEUES_EXT get feature admin command
  2019-06-06 11:55 [PATCH V1 net-next 0/6] Support for dynamic queue size changes sameehj
@ 2019-06-06 11:55 ` sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 2/6] net: ena: enable negotiating larger Rx ring size sameehj
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: sameehj @ 2019-06-06 11:55 UTC (permalink / raw)
  To: davem, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, Sameeh Jubran

From: Arthur Kiyanovski <akiyano@amazon.com>

Add a new admin command to support different queue sizes for Tx/Rx
queues (the change also supports different SQ/CQ sizes).

Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
---
 .../net/ethernet/amazon/ena/ena_admin_defs.h  | 56 +++++++++++++-
 drivers/net/ethernet/amazon/ena/ena_com.c     | 76 ++++++++++++-------
 drivers/net/ethernet/amazon/ena/ena_com.h     |  3 +
 3 files changed, 105 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_admin_defs.h b/drivers/net/ethernet/amazon/ena/ena_admin_defs.h
index 414bae989..c8638f7b5 100644
--- a/drivers/net/ethernet/amazon/ena/ena_admin_defs.h
+++ b/drivers/net/ethernet/amazon/ena/ena_admin_defs.h
@@ -64,6 +64,7 @@ enum ena_admin_aq_feature_id {
 	ENA_ADMIN_LLQ                               = 4,
 	ENA_ADMIN_EXTRA_PROPERTIES_STRINGS          = 5,
 	ENA_ADMIN_EXTRA_PROPERTIES_FLAGS            = 6,
+	ENA_ADMIN_MAX_QUEUES_EXT                    = 7,
 	ENA_ADMIN_RSS_HASH_FUNCTION                 = 10,
 	ENA_ADMIN_STATELESS_OFFLOAD_CONFIG          = 11,
 	ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG      = 12,
@@ -425,7 +426,13 @@ struct ena_admin_get_set_feature_common_desc {
 	/* as appears in ena_admin_aq_feature_id */
 	u8 feature_id;
 
-	u16 reserved16;
+	/* The driver specifies the max feature version it supports and the
+	 * device responds with the currently supported feature version. The
+	 * field is zero based
+	 */
+	u8 feature_version;
+
+	u8 reserved8;
 };
 
 struct ena_admin_device_attr_feature_desc {
@@ -535,6 +542,34 @@ struct ena_admin_feature_llq_desc {
 	u32 max_tx_burst_size;
 };
 
+struct ena_admin_queue_ext_feature_fields {
+	u32 max_tx_sq_num;
+
+	u32 max_tx_cq_num;
+
+	u32 max_rx_sq_num;
+
+	u32 max_rx_cq_num;
+
+	u32 max_tx_sq_depth;
+
+	u32 max_tx_cq_depth;
+
+	u32 max_rx_sq_depth;
+
+	u32 max_rx_cq_depth;
+
+	u32 max_tx_header_size;
+
+	/* Maximum Descriptors number, including meta descriptor, allowed for
+	 * a single Tx packet
+	 */
+	u16 max_per_packet_tx_descs;
+
+	/* Maximum Descriptors number allowed for a single Rx packet */
+	u16 max_per_packet_rx_descs;
+};
+
 struct ena_admin_queue_feature_desc {
 	u32 max_sq_num;
 
@@ -849,6 +884,19 @@ struct ena_admin_get_feat_cmd {
 	u32 raw[11];
 };
 
+struct ena_admin_queue_ext_feature_desc {
+	/* version */
+	u8 version;
+
+	u8 reserved1[3];
+
+	union {
+		struct ena_admin_queue_ext_feature_fields max_queue_ext;
+
+		u32 raw[10];
+	};
+};
+
 struct ena_admin_get_feat_resp {
 	struct ena_admin_acq_common_desc acq_common_desc;
 
@@ -861,6 +909,8 @@ struct ena_admin_get_feat_resp {
 
 		struct ena_admin_queue_feature_desc max_queue;
 
+		struct ena_admin_queue_ext_feature_desc max_queue_ext;
+
 		struct ena_admin_feature_aenq_desc aenq;
 
 		struct ena_admin_get_feature_link_desc link;
@@ -929,7 +979,9 @@ struct ena_admin_aenq_common_desc {
 
 	u16 syndrom;
 
-	/* 0 : phase */
+	/* 0 : phase
+	 * 7:1 : reserved - MBZ
+	 */
 	u8 flags;
 
 	u8 reserved1[3];
diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
index dbc12e383..9efb4be97 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_com.c
@@ -978,7 +978,8 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
 				  struct ena_admin_get_feat_resp *get_resp,
 				  enum ena_admin_aq_feature_id feature_id,
 				  dma_addr_t control_buf_dma_addr,
-				  u32 control_buff_size)
+				  u32 control_buff_size,
+				  u8 feature_ver)
 {
 	struct ena_com_admin_queue *admin_queue;
 	struct ena_admin_get_feat_cmd get_cmd;
@@ -1009,7 +1010,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
 	}
 
 	get_cmd.control_buffer.length = control_buff_size;
-
+	get_cmd.feat_common.feature_version = feature_ver;
 	get_cmd.feat_common.feature_id = feature_id;
 
 	ret = ena_com_execute_admin_command(admin_queue,
@@ -1029,13 +1030,15 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
 
 static int ena_com_get_feature(struct ena_com_dev *ena_dev,
 			       struct ena_admin_get_feat_resp *get_resp,
-			       enum ena_admin_aq_feature_id feature_id)
+			       enum ena_admin_aq_feature_id feature_id,
+			       u8 feature_ver)
 {
 	return ena_com_get_feature_ex(ena_dev,
 				      get_resp,
 				      feature_id,
 				      0,
-				      0);
+				      0,
+				      feature_ver);
 }
 
 static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev)
@@ -1095,7 +1098,7 @@ static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev,
 	int ret;
 
 	ret = ena_com_get_feature(ena_dev, &get_resp,
-				  ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG);
+				  ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, 0);
 	if (unlikely(ret))
 		return ret;
 
@@ -1515,7 +1518,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
 	struct ena_admin_get_feat_resp get_resp;
 	int ret;
 
-	ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG);
+	ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG, 0);
 	if (ret) {
 		pr_info("Can't get aenq configuration\n");
 		return ret;
@@ -1890,7 +1893,7 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid)
 int ena_com_get_link_params(struct ena_com_dev *ena_dev,
 			    struct ena_admin_get_feat_resp *resp)
 {
-	return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG);
+	return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG, 0);
 }
 
 int ena_com_extra_properties_strings_init(struct ena_com_dev *ena_dev)
@@ -1916,7 +1919,7 @@ int ena_com_extra_properties_strings_init(struct ena_com_dev *ena_dev)
 	rc = ena_com_get_feature_ex(ena_dev, &resp,
 				    ENA_ADMIN_EXTRA_PROPERTIES_STRINGS,
 				    extra_properties_strings->dma_addr,
-				    extra_properties_strings->size);
+				    extra_properties_strings->size, 0);
 	if (rc) {
 		pr_debug("Failed to get extra properties strings\n");
 		goto err;
@@ -1946,7 +1949,7 @@ int ena_com_get_extra_properties_flags(struct ena_com_dev *ena_dev,
 				       struct ena_admin_get_feat_resp *resp)
 {
 	return ena_com_get_feature(ena_dev, resp,
-				   ENA_ADMIN_EXTRA_PROPERTIES_FLAGS);
+				   ENA_ADMIN_EXTRA_PROPERTIES_FLAGS, 0);
 }
 
 int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
@@ -1956,7 +1959,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
 	int rc;
 
 	rc = ena_com_get_feature(ena_dev, &get_resp,
-				 ENA_ADMIN_DEVICE_ATTRIBUTES);
+				 ENA_ADMIN_DEVICE_ATTRIBUTES, 0);
 	if (rc)
 		return rc;
 
@@ -1964,17 +1967,34 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
 	       sizeof(get_resp.u.dev_attr));
 	ena_dev->supported_features = get_resp.u.dev_attr.supported_features;
 
-	rc = ena_com_get_feature(ena_dev, &get_resp,
-				 ENA_ADMIN_MAX_QUEUES_NUM);
-	if (rc)
-		return rc;
+	if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+		rc = ena_com_get_feature(ena_dev, &get_resp,
+					 ENA_ADMIN_MAX_QUEUES_EXT,
+					 ENA_FEATURE_MAX_QUEUE_EXT_VER);
+		if (rc)
+			return rc;
 
-	memcpy(&get_feat_ctx->max_queues, &get_resp.u.max_queue,
-	       sizeof(get_resp.u.max_queue));
-	ena_dev->tx_max_header_size = get_resp.u.max_queue.max_header_size;
+		if (get_resp.u.max_queue_ext.version != ENA_FEATURE_MAX_QUEUE_EXT_VER)
+			return -EINVAL;
+
+		memcpy(&get_feat_ctx->max_queue_ext, &get_resp.u.max_queue_ext,
+		       sizeof(get_resp.u.max_queue_ext));
+		ena_dev->tx_max_header_size =
+			get_resp.u.max_queue_ext.max_queue_ext.max_tx_header_size;
+	} else {
+		rc = ena_com_get_feature(ena_dev, &get_resp,
+					 ENA_ADMIN_MAX_QUEUES_NUM, 0);
+		memcpy(&get_feat_ctx->max_queues, &get_resp.u.max_queue,
+		       sizeof(get_resp.u.max_queue));
+		ena_dev->tx_max_header_size =
+			get_resp.u.max_queue.max_header_size;
+
+		if (rc)
+			return rc;
+	}
 
 	rc = ena_com_get_feature(ena_dev, &get_resp,
-				 ENA_ADMIN_AENQ_CONFIG);
+				 ENA_ADMIN_AENQ_CONFIG, 0);
 	if (rc)
 		return rc;
 
@@ -1982,7 +2002,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
 	       sizeof(get_resp.u.aenq));
 
 	rc = ena_com_get_feature(ena_dev, &get_resp,
-				 ENA_ADMIN_STATELESS_OFFLOAD_CONFIG);
+				 ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
 	if (rc)
 		return rc;
 
@@ -1992,7 +2012,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
 	/* Driver hints isn't mandatory admin command. So in case the
 	 * command isn't supported set driver hints to 0
 	 */
-	rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS);
+	rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS, 0);
 
 	if (!rc)
 		memcpy(&get_feat_ctx->hw_hints, &get_resp.u.hw_hints,
@@ -2003,7 +2023,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
 	else
 		return rc;
 
-	rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ);
+	rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ, 0);
 	if (!rc)
 		memcpy(&get_feat_ctx->llq, &get_resp.u.llq,
 		       sizeof(get_resp.u.llq));
@@ -2240,7 +2260,7 @@ int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
 	struct ena_admin_get_feat_resp resp;
 
 	ret = ena_com_get_feature(ena_dev, &resp,
-				  ENA_ADMIN_STATELESS_OFFLOAD_CONFIG);
+				  ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
 	if (unlikely(ret)) {
 		pr_err("Failed to get offload capabilities %d\n", ret);
 		return ret;
@@ -2269,7 +2289,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
 
 	/* Validate hash function is supported */
 	ret = ena_com_get_feature(ena_dev, &get_resp,
-				  ENA_ADMIN_RSS_HASH_FUNCTION);
+				  ENA_ADMIN_RSS_HASH_FUNCTION, 0);
 	if (unlikely(ret))
 		return ret;
 
@@ -2329,7 +2349,7 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
 	rc = ena_com_get_feature_ex(ena_dev, &get_resp,
 				    ENA_ADMIN_RSS_HASH_FUNCTION,
 				    rss->hash_key_dma_addr,
-				    sizeof(*rss->hash_key));
+				    sizeof(*rss->hash_key), 0);
 	if (unlikely(rc))
 		return rc;
 
@@ -2381,7 +2401,7 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
 	rc = ena_com_get_feature_ex(ena_dev, &get_resp,
 				    ENA_ADMIN_RSS_HASH_FUNCTION,
 				    rss->hash_key_dma_addr,
-				    sizeof(*rss->hash_key));
+				    sizeof(*rss->hash_key), 0);
 	if (unlikely(rc))
 		return rc;
 
@@ -2406,7 +2426,7 @@ int ena_com_get_hash_ctrl(struct ena_com_dev *ena_dev,
 	rc = ena_com_get_feature_ex(ena_dev, &get_resp,
 				    ENA_ADMIN_RSS_HASH_INPUT,
 				    rss->hash_ctrl_dma_addr,
-				    sizeof(*rss->hash_ctrl));
+				    sizeof(*rss->hash_ctrl), 0);
 	if (unlikely(rc))
 		return rc;
 
@@ -2642,7 +2662,7 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl)
 	rc = ena_com_get_feature_ex(ena_dev, &get_resp,
 				    ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG,
 				    rss->rss_ind_tbl_dma_addr,
-				    tbl_size);
+				    tbl_size, 0);
 	if (unlikely(rc))
 		return rc;
 
@@ -2857,7 +2877,7 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev)
 	int rc;
 
 	rc = ena_com_get_feature(ena_dev, &get_resp,
-				 ENA_ADMIN_INTERRUPT_MODERATION);
+				 ENA_ADMIN_INTERRUPT_MODERATION, 0);
 
 	if (rc) {
 		if (rc == -EOPNOTSUPP) {
diff --git a/drivers/net/ethernet/amazon/ena/ena_com.h b/drivers/net/ethernet/amazon/ena/ena_com.h
index 6d356cb05..4700d92a3 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_com.h
@@ -101,6 +101,8 @@
 
 #define ENA_HW_HINTS_NO_TIMEOUT				0xFFFF
 
+#define ENA_FEATURE_MAX_QUEUE_EXT_VER	1
+
 enum ena_intr_moder_level {
 	ENA_INTR_MODER_LOWEST = 0,
 	ENA_INTR_MODER_LOW,
@@ -389,6 +391,7 @@ struct ena_com_dev {
 
 struct ena_com_dev_get_features_ctx {
 	struct ena_admin_queue_feature_desc max_queues;
+	struct ena_admin_queue_ext_feature_desc max_queue_ext;
 	struct ena_admin_device_attr_feature_desc dev_attr;
 	struct ena_admin_feature_aenq_desc aenq;
 	struct ena_admin_feature_offload_desc offload;
-- 
2.17.1



* [PATCH V1 net-next 2/6] net: ena: enable negotiating larger Rx ring size
  2019-06-06 11:55 [PATCH V1 net-next 0/6] Support for dynamic queue size changes sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 1/6] net: ena: add MAX_QUEUES_EXT get feature admin command sameehj
@ 2019-06-06 11:55 ` sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 3/6] net: ena: make ethtool show correct current and max queue sizes sameehj
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: sameehj @ 2019-06-06 11:55 UTC (permalink / raw)
  To: davem, netdev
  Cc: Sameeh Jubran, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

From: Sameeh Jubran <sameehj@amazon.com>

Use the MAX_QUEUES_EXT get feature capability to query the device.
When the device advertises this capability, the driver uses the separate
Tx/Rx SQ/CQ number and depth limits that it reports; otherwise it falls
back to the legacy MAX_QUEUES_NUM limits.

Signed-off-by: Netanel Belgazal <netanel@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 144 ++++++++++++-------
 drivers/net/ethernet/amazon/ena/ena_netdev.h |  15 ++
 2 files changed, 110 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 68bed2417..ba5d580e5 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -2465,13 +2465,6 @@ static int ena_device_validate_params(struct ena_adapter *adapter,
 		return -EINVAL;
 	}
 
-	if ((get_feat_ctx->max_queues.max_cq_num < adapter->num_queues) ||
-	    (get_feat_ctx->max_queues.max_sq_num < adapter->num_queues)) {
-		netif_err(adapter, drv, netdev,
-			  "Error, device doesn't support enough queues\n");
-		return -EINVAL;
-	}
-
 	if (get_feat_ctx->dev_attr.max_mtu < netdev->mtu) {
 		netif_err(adapter, drv, netdev,
 			  "Error, device max mtu is smaller than netdev MTU\n");
@@ -3045,18 +3038,32 @@ static int ena_calc_io_queue_num(struct pci_dev *pdev,
 				 struct ena_com_dev *ena_dev,
 				 struct ena_com_dev_get_features_ctx *get_feat_ctx)
 {
-	int io_sq_num, io_queue_num;
+	int io_tx_sq_num, io_tx_cq_num, io_rx_num, io_queue_num;
+
+	if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+			&get_feat_ctx->max_queue_ext.max_queue_ext;
+		io_rx_num = min_t(int, max_queue_ext->max_rx_sq_num,
+				  max_queue_ext->max_rx_cq_num);
 
-	/* In case of LLQ use the llq number in the get feature cmd */
+		io_tx_sq_num = max_queue_ext->max_tx_sq_num;
+		io_tx_cq_num = max_queue_ext->max_tx_cq_num;
+	} else {
+		struct ena_admin_queue_feature_desc *max_queues =
+			&get_feat_ctx->max_queues;
+		io_tx_sq_num = max_queues->max_sq_num;
+		io_tx_cq_num = max_queues->max_cq_num;
+		io_rx_num = min_t(int, io_tx_sq_num, io_tx_cq_num);
+	}
+
+	/* In case of LLQ use the llq fields for the tx SQ/CQ */
 	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
-		io_sq_num = get_feat_ctx->llq.max_llq_num;
-	else
-		io_sq_num = get_feat_ctx->max_queues.max_sq_num;
+		io_tx_sq_num = get_feat_ctx->llq.max_llq_num;
 
 	io_queue_num = min_t(int, num_online_cpus(), ENA_MAX_NUM_IO_QUEUES);
-	io_queue_num = min_t(int, io_queue_num, io_sq_num);
-	io_queue_num = min_t(int, io_queue_num,
-			     get_feat_ctx->max_queues.max_cq_num);
+	io_queue_num = min_t(int, io_queue_num, io_rx_num);
+	io_queue_num = min_t(int, io_queue_num, io_tx_sq_num);
+	io_queue_num = min_t(int, io_queue_num, io_tx_cq_num);
 	/* 1 IRQ for for mgmnt and 1 IRQs for each IO direction */
 	io_queue_num = min_t(int, io_queue_num, pci_msix_vec_count(pdev) - 1);
 	if (unlikely(!io_queue_num)) {
@@ -3239,36 +3246,73 @@ static inline void set_default_llq_configurations(struct ena_llq_configurations
 	llq_config->llq_ring_entry_size_value = 128;
 }
 
-static int ena_calc_queue_size(struct pci_dev *pdev,
-			       struct ena_com_dev *ena_dev,
-			       u16 *max_tx_sgl_size,
-			       u16 *max_rx_sgl_size,
-			       struct ena_com_dev_get_features_ctx *get_feat_ctx)
+static int ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx)
 {
-	u32 queue_size = ENA_DEFAULT_RING_SIZE;
+	struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq;
+	struct ena_com_dev *ena_dev = ctx->ena_dev;
+	u32 tx_queue_size = ENA_DEFAULT_RING_SIZE;
+	u32 rx_queue_size = ENA_DEFAULT_RING_SIZE;
+	u32 max_tx_queue_size;
+	u32 max_rx_queue_size;
 
-	queue_size = min_t(u32, queue_size,
-			   get_feat_ctx->max_queues.max_cq_depth);
-	queue_size = min_t(u32, queue_size,
-			   get_feat_ctx->max_queues.max_sq_depth);
+	if (ctx->ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+			&ctx->get_feat_ctx->max_queue_ext.max_queue_ext;
+		max_rx_queue_size = min_t(u32, max_queue_ext->max_rx_cq_depth,
+					  max_queue_ext->max_rx_sq_depth);
+		max_tx_queue_size = max_queue_ext->max_tx_cq_depth;
 
-	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
-		queue_size = min_t(u32, queue_size,
-				   get_feat_ctx->llq.max_llq_depth);
+		if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  llq->max_llq_depth);
+		else
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  max_queue_ext->max_tx_sq_depth);
 
-	queue_size = rounddown_pow_of_two(queue_size);
+		ctx->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+					     max_queue_ext->max_per_packet_tx_descs);
+		ctx->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+					     max_queue_ext->max_per_packet_rx_descs);
+	} else {
+		struct ena_admin_queue_feature_desc *max_queues =
+			&ctx->get_feat_ctx->max_queues;
+		max_rx_queue_size = min_t(u32, max_queues->max_cq_depth,
+					  max_queues->max_sq_depth);
+		max_tx_queue_size = max_queues->max_cq_depth;
+
+		if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  llq->max_llq_depth);
+		else
+			max_tx_queue_size = min_t(u32, max_tx_queue_size,
+						  max_queues->max_sq_depth);
+
+		ctx->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+					     max_queues->max_packet_tx_descs);
+		ctx->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+					     max_queues->max_packet_rx_descs);
+	}
+
+	max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
+	max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
 
-	if (unlikely(!queue_size)) {
-		dev_err(&pdev->dev, "Invalid queue size\n");
+	tx_queue_size = min_t(u32, tx_queue_size, max_tx_queue_size);
+	rx_queue_size = min_t(u32, rx_queue_size, max_rx_queue_size);
+
+	tx_queue_size = rounddown_pow_of_two(tx_queue_size);
+	rx_queue_size = rounddown_pow_of_two(rx_queue_size);
+
+	if (unlikely(!rx_queue_size || !tx_queue_size)) {
+		dev_err(&ctx->pdev->dev, "Invalid queue size\n");
 		return -EFAULT;
 	}
 
-	*max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
-				 get_feat_ctx->max_queues.max_packet_tx_descs);
-	*max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
-				 get_feat_ctx->max_queues.max_packet_rx_descs);
+	ctx->max_tx_queue_size = max_tx_queue_size;
+	ctx->max_rx_queue_size = max_rx_queue_size;
+	ctx->tx_queue_size = tx_queue_size;
+	ctx->rx_queue_size = rx_queue_size;
 
-	return queue_size;
+	return 0;
 }
 
 /* ena_probe - Device Initialization Routine
@@ -3284,6 +3328,7 @@ static int ena_calc_queue_size(struct pci_dev *pdev,
 static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct ena_com_dev_get_features_ctx get_feat_ctx;
+	struct ena_calc_queue_size_ctx calc_queue_ctx = { 0 };
 	struct ena_llq_configurations llq_config;
 	struct ena_com_dev *ena_dev = NULL;
 	struct ena_adapter *adapter;
@@ -3291,9 +3336,6 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	struct net_device *netdev;
 	static int adapters_found;
 	char *queue_type_str;
-	u16 tx_sgl_size = 0;
-	u16 rx_sgl_size = 0;
-	int queue_size;
 	bool wd_state;
 
 	dev_dbg(&pdev->dev, "%s\n", __func__);
@@ -3350,20 +3392,25 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_device_destroy;
 	}
 
+	calc_queue_ctx.ena_dev = ena_dev;
+	calc_queue_ctx.get_feat_ctx = &get_feat_ctx;
+	calc_queue_ctx.pdev = pdev;
+
 	/* initial Tx interrupt delay, Assumes 1 usec granularity.
 	* Updated during device initialization with the real granularity
 	*/
 	ena_dev->intr_moder_tx_interval = ENA_INTR_INITIAL_TX_INTERVAL_USECS;
 	io_queue_num = ena_calc_io_queue_num(pdev, ena_dev, &get_feat_ctx);
-	queue_size = ena_calc_queue_size(pdev, ena_dev, &tx_sgl_size,
-					 &rx_sgl_size, &get_feat_ctx);
-	if ((queue_size <= 0) || (io_queue_num <= 0)) {
+	rc = ena_calc_queue_size(&calc_queue_ctx);
+	if (rc || io_queue_num <= 0) {
 		rc = -EFAULT;
 		goto err_device_destroy;
 	}
 
-	dev_info(&pdev->dev, "creating %d io queues. queue size: %d. LLQ is %s\n",
-		 io_queue_num, queue_size,
+	dev_info(&pdev->dev, "creating %d io queues. rx queue size: %d tx queue size. %d LLQ is %s\n",
+		 io_queue_num,
+		 calc_queue_ctx.rx_queue_size,
+		 calc_queue_ctx.tx_queue_size,
 		 (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) ?
 		 "ENABLED" : "DISABLED");
 
@@ -3389,11 +3436,10 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
 	adapter->reset_reason = ENA_REGS_RESET_NORMAL;
 
-	adapter->tx_ring_size = queue_size;
-	adapter->rx_ring_size = queue_size;
-
-	adapter->max_tx_sgl_size = tx_sgl_size;
-	adapter->max_rx_sgl_size = rx_sgl_size;
+	adapter->tx_ring_size = calc_queue_ctx.tx_queue_size;
+	adapter->rx_ring_size = calc_queue_ctx.rx_queue_size;
+	adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size;
+	adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
 
 	adapter->num_queues = io_queue_num;
 	adapter->last_monitored_tx_qid = 0;
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index ec111cfc5..afd2769f1 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -154,6 +154,18 @@ struct ena_napi {
 	u32 qid;
 };
 
+struct ena_calc_queue_size_ctx {
+	struct ena_com_dev_get_features_ctx *get_feat_ctx;
+	struct ena_com_dev *ena_dev;
+	struct pci_dev *pdev;
+	u16 tx_queue_size;
+	u16 rx_queue_size;
+	u16 max_tx_queue_size;
+	u16 max_rx_queue_size;
+	u16 max_tx_sgl_size;
+	u16 max_rx_sgl_size;
+};
+
 struct ena_tx_buffer {
 	struct sk_buff *skb;
 	/* num of ena desc for this specific skb
@@ -322,6 +334,9 @@ struct ena_adapter {
 	u32 tx_ring_size;
 	u32 rx_ring_size;
 
+	u32 max_tx_ring_size;
+	u32 max_rx_ring_size;
+
 	u32 msg_enable;
 
 	u16 max_tx_sgl_size;
-- 
2.17.1



* [PATCH V1 net-next 3/6] net: ena: make ethtool show correct current and max queue sizes
  2019-06-06 11:55 [PATCH V1 net-next 0/6] Support for dynamic queue size changes sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 1/6] net: ena: add MAX_QUEUES_EXT get feature admin command sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 2/6] net: ena: enable negotiating larger Rx ring size sameehj
@ 2019-06-06 11:55 ` sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 4/6] net: ena: allow queue allocation backoff when low on memory sameehj
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: sameehj @ 2019-06-06 11:55 UTC (permalink / raw)
  To: davem, netdev
  Cc: Sameeh Jubran, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

From: Sameeh Jubran <sameehj@amazon.com>

Currently ethtool -g reports the same value for both the current and the
maximum queue sizes. Report the actual device maximums and the configured
ring sizes instead.
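
As an illustration (interface name, sizes and exact layout below are
examples, not output from a real run), ethtool -g is then expected to
show the device limits and the configured sizes separately, along these
lines:

  Ring parameters for eth0:
  Pre-set maximums:
  RX:             8192
  TX:             1024
  Current hardware settings:
  RX:             1024
  TX:             1024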

Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_ethtool.c | 10 ++++------
 drivers/net/ethernet/amazon/ena/ena_netdev.c  |  2 ++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 878546a48..62915ad82 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -488,13 +488,11 @@ static void ena_get_ringparam(struct net_device *netdev,
 			      struct ethtool_ringparam *ring)
 {
 	struct ena_adapter *adapter = netdev_priv(netdev);
-	struct ena_ring *tx_ring = &adapter->tx_ring[0];
-	struct ena_ring *rx_ring = &adapter->rx_ring[0];
 
-	ring->rx_max_pending = rx_ring->ring_size;
-	ring->tx_max_pending = tx_ring->ring_size;
-	ring->rx_pending = rx_ring->ring_size;
-	ring->tx_pending = tx_ring->ring_size;
+	ring->tx_max_pending = adapter->max_tx_ring_size;
+	ring->rx_max_pending = adapter->max_rx_ring_size;
+	ring->tx_pending = adapter->tx_ring_size;
+	ring->rx_pending = adapter->rx_ring_size;
 }
 
 static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index ba5d580e5..3fe4a9217 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -3438,6 +3438,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	adapter->tx_ring_size = calc_queue_ctx.tx_queue_size;
 	adapter->rx_ring_size = calc_queue_ctx.rx_queue_size;
+	adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size;
+	adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size;
 	adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size;
 	adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
 
-- 
2.17.1



* [PATCH V1 net-next 4/6] net: ena: allow queue allocation backoff when low on memory
  2019-06-06 11:55 [PATCH V1 net-next 0/6] Support for dynamic queue size changes sameehj
                   ` (2 preceding siblings ...)
  2019-06-06 11:55 ` [PATCH V1 net-next 3/6] net: ena: make ethtool show correct current and max queue sizes sameehj
@ 2019-06-06 11:55 ` sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes sameehj
  2019-06-06 11:55 ` [PATCH V1 net-next 6/6] net: ena: update driver version from 2.0.3 to 2.1.0 sameehj
  5 siblings, 0 replies; 11+ messages in thread
From: sameehj @ 2019-06-06 11:55 UTC (permalink / raw)
  To: davem, netdev
  Cc: Sameeh Jubran, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

From: Sameeh Jubran <sameehj@amazon.com>

If there is not enough memory to allocate the io queues, the driver will
retry with smaller queues.

The backoff algorithm is as follows:

1. Try to allocate the TX and RX queues.
1.1. If successful, return success.

2. Divide the size of the larger of the RX and TX queues by 2 (or of
   both, if they are the same size).

3. If the TX or the RX queue size is now smaller than 256
3.1. return failure.
4. else
4.1. go back to step 1.
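
As a purely illustrative walk-through: with requested sizes TX=1024 and
RX=1024, a failed allocation is retried with TX=512/RX=512, then with
TX=256/RX=256; with TX=1024 and RX=256, only TX is halved, so the first
retry uses TX=512/RX=256. The backoff gives up once a queue would drop
below the 256 entry minimum.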

Also rename the tx_ring_size and rx_ring_size fields of struct
ena_adapter to requested_tx_ring_size and requested_rx_ring_size, and use
the ring_size of TX and RX queue 0 for the actual queue sizes.
Explanation:
The original fields were only used to assign values once to each of the
adapter's queues in ena_probe(), so they could simply have been deleted.
With the backoff feature, however, they become useful: after a backoff
the requested queue sizes and the actual sizes may differ, so the
requested sizes need to be saved for future retries of queue allocation
(for example, if allocation fails and the interface is then brought down
and up again, the allocation should start again from the originally
requested queue sizes).

Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_ethtool.c |   4 +-
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 159 +++++++++++++-----
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |   6 +-
 3 files changed, 127 insertions(+), 42 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 62915ad82..101d93f16 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -491,8 +491,8 @@ static void ena_get_ringparam(struct net_device *netdev,
 
 	ring->tx_max_pending = adapter->max_tx_ring_size;
 	ring->rx_max_pending = adapter->max_rx_ring_size;
-	ring->tx_pending = adapter->tx_ring_size;
-	ring->rx_pending = adapter->rx_ring_size;
+	ring->tx_pending = adapter->tx_ring[0].ring_size;
+	ring->rx_pending = adapter->rx_ring[0].ring_size;
 }
 
 static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 3fe4a9217..938aca254 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -182,7 +182,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter)
 		ena_init_io_rings_common(adapter, rxr, i);
 
 		/* TX specific ring state */
-		txr->ring_size = adapter->tx_ring_size;
+		txr->ring_size = adapter->requested_tx_ring_size;
 		txr->tx_max_header_size = ena_dev->tx_max_header_size;
 		txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type;
 		txr->sgl_size = adapter->max_tx_sgl_size;
@@ -190,7 +190,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter)
 			ena_com_get_nonadaptive_moderation_interval_tx(ena_dev);
 
 		/* RX specific ring state */
-		rxr->ring_size = adapter->rx_ring_size;
+		rxr->ring_size = adapter->requested_rx_ring_size;
 		rxr->rx_copybreak = adapter->rx_copybreak;
 		rxr->sgl_size = adapter->max_rx_sgl_size;
 		rxr->smoothed_interval =
@@ -594,7 +594,6 @@ static void ena_free_rx_bufs(struct ena_adapter *adapter,
 
 /* ena_refill_all_rx_bufs - allocate all queues Rx buffers
  * @adapter: board private structure
- *
  */
 static void ena_refill_all_rx_bufs(struct ena_adapter *adapter)
 {
@@ -1638,7 +1637,7 @@ static int ena_create_io_tx_queue(struct ena_adapter *adapter, int qid)
 	ctx.qid = ena_qid;
 	ctx.mem_queue_type = ena_dev->tx_mem_queue_type;
 	ctx.msix_vector = msix_vector;
-	ctx.queue_size = adapter->tx_ring_size;
+	ctx.queue_size = tx_ring->ring_size;
 	ctx.numa_node = cpu_to_node(tx_ring->cpu);
 
 	rc = ena_com_create_io_queue(ena_dev, &ctx);
@@ -1705,7 +1704,7 @@ static int ena_create_io_rx_queue(struct ena_adapter *adapter, int qid)
 	ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
 	ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
 	ctx.msix_vector = msix_vector;
-	ctx.queue_size = adapter->rx_ring_size;
+	ctx.queue_size = rx_ring->ring_size;
 	ctx.numa_node = cpu_to_node(rx_ring->cpu);
 
 	rc = ena_com_create_io_queue(ena_dev, &ctx);
@@ -1752,6 +1751,112 @@ create_err:
 	return rc;
 }
 
+static inline void set_io_rings_size(struct ena_adapter *adapter,
+				     int new_tx_size, int new_rx_size)
+{
+	int i;
+
+	for (i = 0; i < adapter->num_queues; i++) {
+		adapter->tx_ring[i].ring_size = new_tx_size;
+		adapter->rx_ring[i].ring_size = new_rx_size;
+	}
+}
+
+/* This function allows queue allocation to backoff when the system is
+ * low on memory. If there is not enough memory to allocate io queues
+ * the driver will try to allocate smaller queues.
+ *
+ * The backoff algorithm is as follows:
+ *  1. Try to allocate TX and RX and if successful.
+ *  1.1. return success
+ *
+ *  2. Divide by 2 the size of the larger of RX and TX queues (or both if their size is the same).
+ *
+ *  3. If TX or RX is smaller than 256
+ *  3.1. return failure.
+ *  4. else
+ *  4.1. go back to 1.
+ */
+static int create_queues_with_size_backoff(struct ena_adapter *adapter)
+{
+	int rc, cur_rx_ring_size, cur_tx_ring_size;
+	int new_rx_ring_size, new_tx_ring_size;
+
+	/* current queue sizes might be set to smaller than the requested
+	 * ones due to past queue allocation failures.
+	 */
+	set_io_rings_size(adapter, adapter->requested_tx_ring_size,
+			  adapter->requested_rx_ring_size);
+
+	while (1) {
+		rc = ena_setup_all_tx_resources(adapter);
+		if (rc)
+			goto err_setup_tx;
+
+		rc = ena_create_all_io_tx_queues(adapter);
+		if (rc)
+			goto err_create_tx_queues;
+
+		rc = ena_setup_all_rx_resources(adapter);
+		if (rc)
+			goto err_setup_rx;
+
+		rc = ena_create_all_io_rx_queues(adapter);
+		if (rc)
+			goto err_create_rx_queues;
+
+		return 0;
+
+err_create_rx_queues:
+		ena_free_all_io_rx_resources(adapter);
+err_setup_rx:
+		ena_destroy_all_tx_queues(adapter);
+err_create_tx_queues:
+		ena_free_all_io_tx_resources(adapter);
+err_setup_tx:
+		if (rc != -ENOMEM) {
+			netif_err(adapter, ifup, adapter->netdev,
+				  "Queue creation failed with error code %d\n",
+				  rc);
+			return rc;
+		}
+
+		cur_tx_ring_size = adapter->tx_ring[0].ring_size;
+		cur_rx_ring_size = adapter->rx_ring[0].ring_size;
+
+		netif_err(adapter, ifup, adapter->netdev,
+			  "Not enough memory to create queues with sizes TX=%d, RX=%d\n",
+			  cur_tx_ring_size, cur_rx_ring_size);
+
+		new_tx_ring_size = cur_tx_ring_size;
+		new_rx_ring_size = cur_rx_ring_size;
+
+		/* Decrease the size of the larger queue, or
+		 * decrease both if they are the same size.
+		 */
+		if (cur_rx_ring_size <= cur_tx_ring_size)
+			new_tx_ring_size = cur_tx_ring_size / 2;
+		if (cur_rx_ring_size >= cur_tx_ring_size)
+			new_rx_ring_size = cur_rx_ring_size / 2;
+
+		if (cur_tx_ring_size < ENA_MIN_RING_SIZE ||
+		    cur_rx_ring_size < ENA_MIN_RING_SIZE) {
+			netif_err(adapter, ifup, adapter->netdev,
+				  "Queue creation failed with the smallest possible queue size of %d for both queues. Not retrying with smaller queues\n",
+				  ENA_MIN_RING_SIZE);
+			return rc;
+		}
+
+		netif_err(adapter, ifup, adapter->netdev,
+			  "Retrying queue creation with sizes TX=%d, RX=%d\n",
+			  new_tx_ring_size,
+			  new_rx_ring_size);
+
+		set_io_rings_size(adapter, new_tx_ring_size,
+				  new_rx_ring_size);
+	}
+}
+
 static int ena_up(struct ena_adapter *adapter)
 {
 	int rc, i;
@@ -1771,25 +1876,9 @@ static int ena_up(struct ena_adapter *adapter)
 	if (rc)
 		goto err_req_irq;
 
-	/* allocate transmit descriptors */
-	rc = ena_setup_all_tx_resources(adapter);
+	rc = create_queues_with_size_backoff(adapter);
 	if (rc)
-		goto err_setup_tx;
-
-	/* allocate receive descriptors */
-	rc = ena_setup_all_rx_resources(adapter);
-	if (rc)
-		goto err_setup_rx;
-
-	/* Create TX queues */
-	rc = ena_create_all_io_tx_queues(adapter);
-	if (rc)
-		goto err_create_tx_queues;
-
-	/* Create RX queues */
-	rc = ena_create_all_io_rx_queues(adapter);
-	if (rc)
-		goto err_create_rx_queues;
+		goto err_create_queues_with_backoff;
 
 	rc = ena_up_complete(adapter);
 	if (rc)
@@ -1818,14 +1907,11 @@ static int ena_up(struct ena_adapter *adapter)
 	return rc;
 
 err_up:
-	ena_destroy_all_rx_queues(adapter);
-err_create_rx_queues:
 	ena_destroy_all_tx_queues(adapter);
-err_create_tx_queues:
-	ena_free_all_io_rx_resources(adapter);
-err_setup_rx:
 	ena_free_all_io_tx_resources(adapter);
-err_setup_tx:
+	ena_destroy_all_rx_queues(adapter);
+	ena_free_all_io_rx_resources(adapter);
+err_create_queues_with_backoff:
 	ena_free_io_irq(adapter);
 err_req_irq:
 	ena_del_napi(adapter);
@@ -3296,17 +3382,14 @@ static int ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx)
 	max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
 	max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
 
-	tx_queue_size = min_t(u32, tx_queue_size, max_tx_queue_size);
-	rx_queue_size = min_t(u32, rx_queue_size, max_rx_queue_size);
+	tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
+				  max_tx_queue_size);
+	rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
+				  max_rx_queue_size);
 
 	tx_queue_size = rounddown_pow_of_two(tx_queue_size);
 	rx_queue_size = rounddown_pow_of_two(rx_queue_size);
 
-	if (unlikely(!rx_queue_size || !tx_queue_size)) {
-		dev_err(&ctx->pdev->dev, "Invalid queue size\n");
-		return -EFAULT;
-	}
-
 	ctx->max_tx_queue_size = max_tx_queue_size;
 	ctx->max_rx_queue_size = max_rx_queue_size;
 	ctx->tx_queue_size = tx_queue_size;
@@ -3436,8 +3519,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
 	adapter->reset_reason = ENA_REGS_RESET_NORMAL;
 
-	adapter->tx_ring_size = calc_queue_ctx.tx_queue_size;
-	adapter->rx_ring_size = calc_queue_ctx.rx_queue_size;
+	adapter->requested_tx_ring_size = calc_queue_ctx.tx_queue_size;
+	adapter->requested_rx_ring_size = calc_queue_ctx.rx_queue_size;
 	adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size;
 	adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size;
 	adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size;
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index afd2769f1..e8fe08cb7 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -79,6 +79,8 @@
 #define ENA_BAR_MASK (BIT(ENA_REG_BAR) | BIT(ENA_MEM_BAR))
 
 #define ENA_DEFAULT_RING_SIZE	(1024)
+#define ENA_MIN_RING_SIZE	(256)
+
 
 #define ENA_TX_WAKEUP_THRESH		(MAX_SKB_FRAGS + 2)
 #define ENA_DEFAULT_RX_COPYBREAK	(256 - NET_IP_ALIGN)
@@ -331,8 +333,8 @@ struct ena_adapter {
 	u32 tx_usecs, rx_usecs; /* interrupt moderation */
 	u32 tx_frames, rx_frames; /* interrupt moderation */
 
-	u32 tx_ring_size;
-	u32 rx_ring_size;
+	u32 requested_tx_ring_size;
+	u32 requested_rx_ring_size;
 
 	u32 max_tx_ring_size;
 	u32 max_rx_ring_size;
-- 
2.17.1



* [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes
  2019-06-06 11:55 [PATCH V1 net-next 0/6] Support for dynamic queue size changes sameehj
                   ` (3 preceding siblings ...)
  2019-06-06 11:55 ` [PATCH V1 net-next 4/6] net: ena: allow queue allocation backoff when low on memory sameehj
@ 2019-06-06 11:55 ` sameehj
  2019-06-06 14:48   ` Michal Kubecek
  2019-06-06 17:11   ` Michal Kubecek
  2019-06-06 11:55 ` [PATCH V1 net-next 6/6] net: ena: update driver version from 2.0.3 to 2.1.0 sameehj
  5 siblings, 2 replies; 11+ messages in thread
From: sameehj @ 2019-06-06 11:55 UTC (permalink / raw)
  To: davem, netdev
  Cc: Sameeh Jubran, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

From: Sameeh Jubran <sameehj@amazon.com>

Implement the set_ringparam() function of the ethtool interface
to enable the changing of io queue sizes.
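
For example, once this patch is applied the ring sizes can be changed
with the standard ethtool -G command (interface name and sizes here are
only an illustration):

  ethtool -G eth0 rx 2048 tx 1024

The requested values are clamped to the device maximums and rounded down
to a power of two before being applied.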

Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_ethtool.c | 25 +++++++++++++++++++
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 14 +++++++++++
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |  5 +++-
 3 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 101d93f16..33e28ad71 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -495,6 +495,30 @@ static void ena_get_ringparam(struct net_device *netdev,
 	ring->rx_pending = adapter->rx_ring[0].ring_size;
 }
 
+static int ena_set_ringparam(struct net_device *netdev,
+			     struct ethtool_ringparam *ring)
+{
+	struct ena_adapter *adapter = netdev_priv(netdev);
+	u32 new_tx_size, new_rx_size;
+
+	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
+		return -EINVAL;
+
+	new_tx_size = clamp_val(ring->tx_pending, ENA_MIN_RING_SIZE,
+				adapter->max_tx_ring_size);
+	new_tx_size = rounddown_pow_of_two(new_tx_size);
+
+	new_rx_size = clamp_val(ring->rx_pending, ENA_MIN_RING_SIZE,
+				adapter->max_rx_ring_size);
+	new_rx_size = rounddown_pow_of_two(new_rx_size);
+
+	if (new_tx_size == adapter->requested_tx_ring_size &&
+	    new_rx_size == adapter->requested_rx_ring_size)
+		return 0;
+
+	return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
+}
+
 static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
 {
 	u32 data = 0;
@@ -860,6 +884,7 @@ static const struct ethtool_ops ena_ethtool_ops = {
 	.get_coalesce		= ena_get_coalesce,
 	.set_coalesce		= ena_set_coalesce,
 	.get_ringparam		= ena_get_ringparam,
+	.set_ringparam		= ena_set_ringparam,
 	.get_sset_count         = ena_get_sset_count,
 	.get_strings		= ena_get_strings,
 	.get_ethtool_stats      = ena_get_ethtool_stats,
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 938aca254..7d3837c13 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -2031,6 +2031,20 @@ static int ena_close(struct net_device *netdev)
 	return 0;
 }
 
+int ena_update_queue_sizes(struct ena_adapter *adapter,
+			   int new_tx_size,
+			   int new_rx_size)
+{
+	bool dev_up;
+
+	dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
+	ena_close(adapter->netdev);
+	adapter->requested_tx_ring_size = new_tx_size;
+	adapter->requested_rx_ring_size = new_rx_size;
+	ena_init_io_rings(adapter);
+	return dev_up ? ena_up(adapter) : 0;
+}
+
 static void ena_tx_csum(struct ena_com_tx_ctx *ena_tx_ctx, struct sk_buff *skb)
 {
 	u32 mss = skb_shinfo(skb)->gso_size;
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index e8fe08cb7..2da20ad5a 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -81,7 +81,6 @@
 #define ENA_DEFAULT_RING_SIZE	(1024)
 #define ENA_MIN_RING_SIZE	(256)
 
-
 #define ENA_TX_WAKEUP_THRESH		(MAX_SKB_FRAGS + 2)
 #define ENA_DEFAULT_RX_COPYBREAK	(256 - NET_IP_ALIGN)
 
@@ -389,6 +388,10 @@ void ena_dump_stats_to_dmesg(struct ena_adapter *adapter);
 
 void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
 
+int ena_update_queue_sizes(struct ena_adapter *adapter,
+			   int new_tx_size,
+			   int new_rx_size);
+
 int ena_get_sset_count(struct net_device *netdev, int sset);
 
 #endif /* !(ENA_H) */
-- 
2.17.1



* [PATCH V1 net-next 6/6] net: ena: update driver version from 2.0.3 to 2.1.0
  2019-06-06 11:55 [PATCH V1 net-next 0/6] Support for dynamic queue size changes sameehj
                   ` (4 preceding siblings ...)
  2019-06-06 11:55 ` [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes sameehj
@ 2019-06-06 11:55 ` sameehj
  5 siblings, 0 replies; 11+ messages in thread
From: sameehj @ 2019-06-06 11:55 UTC (permalink / raw)
  To: davem, netdev
  Cc: Sameeh Jubran, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

From: Sameeh Jubran <sameehj@amazon.com>

Update driver version to match device specification.

Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 2da20ad5a..e83a5e041 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -44,8 +44,8 @@
 #include "ena_eth_com.h"
 
 #define DRV_MODULE_VER_MAJOR	2
-#define DRV_MODULE_VER_MINOR	0
-#define DRV_MODULE_VER_SUBMINOR 3
+#define DRV_MODULE_VER_MINOR	1
+#define DRV_MODULE_VER_SUBMINOR 0
 
 #define DRV_MODULE_NAME		"ena"
 #ifndef DRV_MODULE_VERSION
-- 
2.17.1



* Re: [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes
  2019-06-06 11:55 ` [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes sameehj
@ 2019-06-06 14:48   ` Michal Kubecek
  2019-06-10  7:46     ` Jubran, Samih
  2019-06-06 17:11   ` Michal Kubecek
  1 sibling, 1 reply; 11+ messages in thread
From: Michal Kubecek @ 2019-06-06 14:48 UTC (permalink / raw)
  To: netdev
  Cc: sameehj, davem, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

On Thu, Jun 06, 2019 at 02:55:19PM +0300, sameehj@amazon.com wrote:
> From: Sameeh Jubran <sameehj@amazon.com>
> 
> Implement the set_ringparam() function of the ethtool interface
> to enable the changing of io queue sizes.
> 
> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
> Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
> ---
>  drivers/net/ethernet/amazon/ena/ena_ethtool.c | 25 +++++++++++++++++++
>  drivers/net/ethernet/amazon/ena/ena_netdev.c  | 14 +++++++++++
>  drivers/net/ethernet/amazon/ena/ena_netdev.h  |  5 +++-
>  3 files changed, 43 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> index 101d93f16..33e28ad71 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> +++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> @@ -495,6 +495,30 @@ static void ena_get_ringparam(struct net_device *netdev,
>  	ring->rx_pending = adapter->rx_ring[0].ring_size;
>  }
>  
> +static int ena_set_ringparam(struct net_device *netdev,
> +			     struct ethtool_ringparam *ring)
> +{
> +	struct ena_adapter *adapter = netdev_priv(netdev);
> +	u32 new_tx_size, new_rx_size;
> +
> +	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> +		return -EINVAL;

This check is superfluous as ethtool_set_ringparam() checks supplied
values against maximum returned by ->get_ringparam() which will be 0 in
this case.
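
The core check in question looks roughly like this (sketch, not a
verbatim copy of net/core/ethtool.c):

	struct ethtool_ringparam max = { .cmd = ETHTOOL_GRINGPARAM };

	dev->ethtool_ops->get_ringparam(dev, &max);

	if (ringparam.rx_pending > max.rx_max_pending ||
	    ringparam.rx_mini_pending > max.rx_mini_max_pending ||
	    ringparam.rx_jumbo_pending > max.rx_jumbo_max_pending ||
	    ringparam.tx_pending > max.tx_max_pending)
		return -EINVAL;

Since ena_get_ringparam() leaves rx_mini_max_pending and
rx_jumbo_max_pending at zero, non-zero rx_mini_pending or
rx_jumbo_pending values are already rejected here, before the driver
callback is reached.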

> +
> +	new_tx_size = clamp_val(ring->tx_pending, ENA_MIN_RING_SIZE,
> +				adapter->max_tx_ring_size);
> +	new_tx_size = rounddown_pow_of_two(new_tx_size);
> +
> +	new_rx_size = clamp_val(ring->rx_pending, ENA_MIN_RING_SIZE,
> +				adapter->max_rx_ring_size);
> +	new_rx_size = rounddown_pow_of_two(new_rx_size);

For the same reason, clamping from below would suffice here.

Michal Kubecek

> +
> +	if (new_tx_size == adapter->requested_tx_ring_size &&
> +	    new_rx_size == adapter->requested_rx_ring_size)
> +		return 0;
> +
> +	return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
> +}


* Re: [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes
  2019-06-06 11:55 ` [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes sameehj
  2019-06-06 14:48   ` Michal Kubecek
@ 2019-06-06 17:11   ` Michal Kubecek
  1 sibling, 0 replies; 11+ messages in thread
From: Michal Kubecek @ 2019-06-06 17:11 UTC (permalink / raw)
  To: netdev
  Cc: sameehj, davem, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, akiyano

On Thu, Jun 06, 2019 at 02:55:19PM +0300, sameehj@amazon.com wrote:
> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> index 938aca254..7d3837c13 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> @@ -2031,6 +2031,20 @@ static int ena_close(struct net_device *netdev)
>  	return 0;
>  }
>  
> +int ena_update_queue_sizes(struct ena_adapter *adapter,
> +			   int new_tx_size,
> +			   int new_rx_size)
> +{
> +	bool dev_up;
> +
> +	dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
> +	ena_close(adapter->netdev);
> +	adapter->requested_tx_ring_size = new_tx_size;
> +	adapter->requested_rx_ring_size = new_rx_size;
> +	ena_init_io_rings(adapter);
> +	return dev_up ? ena_up(adapter) : 0;
> +}

This function is called with u32 values as arguments by its only caller
and copies them into u32 members of struct ena_adapter. Why are its
arguments new_tx_size and new_rx_size declared as int?
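
In other words (illustrative sketch), the prototype could simply be:

	int ena_update_queue_sizes(struct ena_adapter *adapter,
				   u32 new_tx_size,
				   u32 new_rx_size);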

Michal Kubecek


* RE: [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes
  2019-06-06 14:48   ` Michal Kubecek
@ 2019-06-10  7:46     ` Jubran, Samih
  2019-06-10 12:21       ` Michal Kubecek
  0 siblings, 1 reply; 11+ messages in thread
From: Jubran, Samih @ 2019-06-10  7:46 UTC (permalink / raw)
  To: Michal Kubecek, netdev
  Cc: davem, Woodhouse, David, Machulsky, Zorik, Matushevsky,
	Alexander, Bshara, Saeed, Wilson, Matt, Liguori, Anthony, Bshara,
	Nafea, Tzalik, Guy, Belgazal, Netanel, Saidi, Ali, Herrenschmidt,
	Benjamin, Kiyanovski, Arthur



> -----Original Message-----
> From: Michal Kubecek <mkubecek@suse.cz>
> Sent: Thursday, June 6, 2019 5:48 PM
> To: netdev@vger.kernel.org
> Cc: Jubran, Samih <sameehj@amazon.com>; davem@davemloft.net;
> Woodhouse, David <dwmw@amazon.co.uk>; Machulsky, Zorik
> <zorik@amazon.com>; Matushevsky, Alexander <matua@amazon.com>;
> Bshara, Saeed <saeedb@amazon.com>; Wilson, Matt <msw@amazon.com>;
> Liguori, Anthony <aliguori@amazon.com>; Bshara, Nafea
> <nafea@amazon.com>; Tzalik, Guy <gtzalik@amazon.com>; Belgazal,
> Netanel <netanel@amazon.com>; Saidi, Ali <alisaidi@amazon.com>;
> Herrenschmidt, Benjamin <benh@amazon.com>; Kiyanovski, Arthur
> <akiyano@amazon.com>
> Subject: Re: [PATCH V1 net-next 5/6] net: ena: add ethtool function for
> changing io queue sizes
> 
> On Thu, Jun 06, 2019 at 02:55:19PM +0300, sameehj@amazon.com wrote:
> > From: Sameeh Jubran <sameehj@amazon.com>
> >
> > Implement the set_ringparam() function of the ethtool interface to
> > enable the changing of io queue sizes.
> >
> > Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
> > Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
> > ---
> >  drivers/net/ethernet/amazon/ena/ena_ethtool.c | 25
> > +++++++++++++++++++
> drivers/net/ethernet/amazon/ena/ena_netdev.c  |
> > 14 +++++++++++  drivers/net/ethernet/amazon/ena/ena_netdev.h  |  5
> > +++-
> >  3 files changed, 43 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > index 101d93f16..33e28ad71 100644
> > --- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > +++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > @@ -495,6 +495,30 @@ static void ena_get_ringparam(struct net_device
> *netdev,
> >  	ring->rx_pending = adapter->rx_ring[0].ring_size;  }
> >
> > +static int ena_set_ringparam(struct net_device *netdev,
> > +			     struct ethtool_ringparam *ring) {
> > +	struct ena_adapter *adapter = netdev_priv(netdev);
> > +	u32 new_tx_size, new_rx_size;
> > +
> > +	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> > +		return -EINVAL;
> 
> This check is superfluous as ethtool_set_ringparam() checks supplied values
> against maximum returned by ->get_ringparam() which will be 0 in this case.
> 
> > +
> > +	new_tx_size = clamp_val(ring->tx_pending, ENA_MIN_RING_SIZE,
> > +				adapter->max_tx_ring_size);
> > +	new_tx_size = rounddown_pow_of_two(new_tx_size);
> > +
> > +	new_rx_size = clamp_val(ring->rx_pending, ENA_MIN_RING_SIZE,
> > +				adapter->max_rx_ring_size);
> > +	new_rx_size = rounddown_pow_of_two(new_rx_size);
> 
> For the same reason, clamping from below would suffice here.
> 
> Michal Kubecek
> 
> > +
> > +	if (new_tx_size == adapter->requested_tx_ring_size &&
> > +	    new_rx_size == adapter->requested_rx_ring_size)
> > +		return 0;
> > +
> > +	return ena_update_queue_sizes(adapter, new_tx_size,
> new_rx_size); }

You are right on both points given the way the code is written now;
however, the code might change in the future and we prefer to be extra
cautious.


* Re: [PATCH V1 net-next 5/6] net: ena: add ethtool function for changing io queue sizes
  2019-06-10  7:46     ` Jubran, Samih
@ 2019-06-10 12:21       ` Michal Kubecek
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Kubecek @ 2019-06-10 12:21 UTC (permalink / raw)
  To: netdev
  Cc: Jubran, Samih, davem, Woodhouse, David, Machulsky, Zorik,
	Matushevsky, Alexander, Bshara, Saeed, Wilson, Matt, Liguori,
	Anthony, Bshara, Nafea, Tzalik, Guy, Belgazal, Netanel, Saidi,
	Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur

On Mon, Jun 10, 2019 at 07:46:08AM +0000, Jubran, Samih wrote:
> > -----Original Message-----
> > From: Michal Kubecek <mkubecek@suse.cz>
> > Sent: Thursday, June 6, 2019 5:48 PM
> > To: netdev@vger.kernel.org
> > Cc: Jubran, Samih <sameehj@amazon.com>; davem@davemloft.net;
> > Woodhouse, David <dwmw@amazon.co.uk>; Machulsky, Zorik
> > <zorik@amazon.com>; Matushevsky, Alexander <matua@amazon.com>;
> > Bshara, Saeed <saeedb@amazon.com>; Wilson, Matt <msw@amazon.com>;
> > Liguori, Anthony <aliguori@amazon.com>; Bshara, Nafea
> > <nafea@amazon.com>; Tzalik, Guy <gtzalik@amazon.com>; Belgazal,
> > Netanel <netanel@amazon.com>; Saidi, Ali <alisaidi@amazon.com>;
> > Herrenschmidt, Benjamin <benh@amazon.com>; Kiyanovski, Arthur
> > <akiyano@amazon.com>
> > Subject: Re: [PATCH V1 net-next 5/6] net: ena: add ethtool function for
> > changing io queue sizes
> > 
> > On Thu, Jun 06, 2019 at 02:55:19PM +0300, sameehj@amazon.com wrote:
> > > From: Sameeh Jubran <sameehj@amazon.com>
> > >
> > > Implement the set_ringparam() function of the ethtool interface to
> > > enable the changing of io queue sizes.
> > >
> > > Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
> > > Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
> > > ---
> > >  drivers/net/ethernet/amazon/ena/ena_ethtool.c | 25
> > > +++++++++++++++++++
> > drivers/net/ethernet/amazon/ena/ena_netdev.c  |
> > > 14 +++++++++++  drivers/net/ethernet/amazon/ena/ena_netdev.h  |  5
> > > +++-
> > >  3 files changed, 43 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > > b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > > index 101d93f16..33e28ad71 100644
> > > --- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > > +++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
> > > @@ -495,6 +495,30 @@ static void ena_get_ringparam(struct net_device
> > *netdev,
> > >  	ring->rx_pending = adapter->rx_ring[0].ring_size;  }
> > >
> > > +static int ena_set_ringparam(struct net_device *netdev,
> > > +			     struct ethtool_ringparam *ring) {
> > > +	struct ena_adapter *adapter = netdev_priv(netdev);
> > > +	u32 new_tx_size, new_rx_size;
> > > +
> > > +	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> > > +		return -EINVAL;
> > 
> > This check is superfluous as ethtool_set_ringparam() checks supplied values
> > against maximum returned by ->get_ringparam() which will be 0 in this case.
> > 
> > > +
> > > +	new_tx_size = clamp_val(ring->tx_pending, ENA_MIN_RING_SIZE,
> > > +				adapter->max_tx_ring_size);
> > > +	new_tx_size = rounddown_pow_of_two(new_tx_size);
> > > +
> > > +	new_rx_size = clamp_val(ring->rx_pending, ENA_MIN_RING_SIZE,
> > > +				adapter->max_rx_ring_size);
> > > +	new_rx_size = rounddown_pow_of_two(new_rx_size);
> > 
> > For the same reason, clamping from below would suffice here.
> > 
> > Michal Kubecek
> > 
> > > +
> > > +	if (new_tx_size == adapter->requested_tx_ring_size &&
> > > +	    new_rx_size == adapter->requested_rx_ring_size)
> > > +		return 0;
> > > +
> > > +	return ena_update_queue_sizes(adapter, new_tx_size,
> > new_rx_size); }
> 
> You are right with both arguments the way the code is written now,
> however, in the future the code might change and we prefer to be extra
> cautious.

If we accept this logic, commit 37e2d99b59c4 ("ethtool: Ensure new ring
parameters are within bounds during SRINGPARAM") would be useless as
every driver would have to duplicate the checks anyway.

Michal Kubecek
