* [RFC PATCH net-next 0/9] gve: Implement netdev queue api
@ 2024-04-18 19:51 Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 1/9] queue_api: define " Shailend Chand
                   ` (9 more replies)
  0 siblings, 10 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

Following the discussion on
https://patchwork.kernel.org/project/linux-media/patch/20240305020153.2787423-2-almasrymina@google.com/,
the queue api defined by Mina is implemented for gve.

The first patch is just Mina's introduction of the api. The rest of the
patches make surgical changes in gve to enable it to work correctly with
only a subset of queues present (thus far it had assumed that either all
queues are up or all are down). The final patch has the api
implementation.

Mina Almasry (1):
  queue_api: define queue api

Shailend Chand (8):
  gve: Make the RX free queue funcs idempotent
  gve: Add adminq funcs to add/remove a single Rx queue
  gve: Make gve_turn(up|down) ignore stopped queues
  gve: Make gve_turnup work for nonempty queues
  gve: Avoid rescheduling napi if on wrong cpu
  gve: Reset Rx ring state in the ring-stop funcs
  gve: Account for stopped queues when reading NIC stats
  gve: Implement queue api

 drivers/net/ethernet/google/gve/gve.h         |   7 +
 drivers/net/ethernet/google/gve/gve_adminq.c  |  79 +++++--
 drivers/net/ethernet/google/gve/gve_adminq.h  |   2 +
 drivers/net/ethernet/google/gve/gve_dqo.h     |   6 +
 drivers/net/ethernet/google/gve/gve_ethtool.c |  13 +-
 drivers/net/ethernet/google/gve/gve_main.c    | 200 +++++++++++++++++-
 drivers/net/ethernet/google/gve/gve_rx.c      |  89 +++++---
 drivers/net/ethernet/google/gve/gve_rx_dqo.c  | 114 +++++++---
 include/linux/netdevice.h                     |   3 +
 include/net/netdev_queues.h                   |  27 +++
 10 files changed, 459 insertions(+), 81 deletions(-)

-- 
2.44.0.769.g3c40516874-goog


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH net-next 1/9] queue_api: define queue api
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 2/9] gve: Make the RX free queue funcs idempotent Shailend Chand
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev; +Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb

From: Mina Almasry <almasrymina@google.com>

This API enables the net stack to reset the queues used for devmem TCP.

Signed-off-by: Mina Almasry <almasrymina@google.com>
---
 include/linux/netdevice.h   |  3 +++
 include/net/netdev_queues.h | 27 +++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d45f330d083d..5b67dee39818 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1957,6 +1957,7 @@ enum netdev_reg_state {
  *	@sysfs_rx_queue_group:	Space for optional per-rx queue attributes
  *	@rtnl_link_ops:	Rtnl_link_ops
  *	@stat_ops:	Optional ops for queue-aware statistics
+ *	@queue_mgmt_ops:	Optional ops for queue management
  *
  *	@gso_max_size:	Maximum size of generic segmentation offload
  *	@tso_max_size:	Device (as in HW) limit on the max TSO request size
@@ -2340,6 +2341,8 @@ struct net_device {
 
 	const struct netdev_stat_ops *stat_ops;
 
+	const struct netdev_queue_mgmt_ops *queue_mgmt_ops;
+
 	/* for setting kernel sock attribute on TCP connection setup */
 #define GSO_MAX_SEGS		65535u
 #define GSO_LEGACY_MAX_SIZE	65536u
diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index 1ec408585373..337df0860ae6 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -60,6 +60,33 @@ struct netdev_stat_ops {
 			       struct netdev_queue_stats_tx *tx);
 };
 
+/**
+ * struct netdev_queue_mgmt_ops - netdev ops for queue management
+ *
+ * @ndo_queue_mem_alloc: Allocate memory for an RX queue. The memory returned
+ *			 as a void * can be passed to ndo_queue_mem_free()
+ *			 for freeing, or to ndo_queue_start() to create an
+ *			 RX queue backed by this memory.
+ *
+ * @ndo_queue_mem_free:	Free memory from an RX queue.
+ *
+ * @ndo_queue_start:	Start an RX queue at the specified index.
+ *
+ * @ndo_queue_stop:	Stop the RX queue at the specified index.
+ */
+struct netdev_queue_mgmt_ops {
+	void *			(*ndo_queue_mem_alloc)(struct net_device *dev,
+						       int idx);
+	void			(*ndo_queue_mem_free)(struct net_device *dev,
+						      void *queue_mem);
+	int			(*ndo_queue_start)(struct net_device *dev,
+						   int idx,
+						   void *queue_mem);
+	int			(*ndo_queue_stop)(struct net_device *dev,
+						  int idx,
+						  void **out_queue_mem);
+};
+
 /**
  * DOC: Lockless queue stopping / waking helpers.
  *
-- 
2.44.0.769.g3c40516874-goog



* [RFC PATCH net-next 2/9] gve: Make the RX free queue funcs idempotent
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 1/9] queue_api: define " Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 3/9] gve: Add adminq funcs to add/remove a single Rx queue Shailend Chand
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

Although this does not fix any existing double-free bug, making these
functions idempotent allows for a simpler implementation of future ndo
hooks that act on a single queue.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve_rx.c | 29 ++++++++++++++++--------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index cd727e55ae0f..fde7503940f7 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -30,6 +30,9 @@ static void gve_rx_unfill_pages(struct gve_priv *priv,
 	u32 slots = rx->mask + 1;
 	int i;
 
+	if (!rx->data.page_info)
+		return;
+
 	if (rx->data.raw_addressing) {
 		for (i = 0; i < slots; i++)
 			gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i],
@@ -70,20 +73,26 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
 	int idx = rx->q_num;
 	size_t bytes;
 
-	bytes = sizeof(struct gve_rx_desc) * cfg->ring_size;
-	dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus);
-	rx->desc.desc_ring = NULL;
+	if (rx->desc.desc_ring) {
+		bytes = sizeof(struct gve_rx_desc) * cfg->ring_size;
+		dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus);
+		rx->desc.desc_ring = NULL;
+	}
 
-	dma_free_coherent(dev, sizeof(*rx->q_resources),
-			  rx->q_resources, rx->q_resources_bus);
-	rx->q_resources = NULL;
+	if (rx->q_resources) {
+		dma_free_coherent(dev, sizeof(*rx->q_resources),
+				  rx->q_resources, rx->q_resources_bus);
+		rx->q_resources = NULL;
+	}
 
 	gve_rx_unfill_pages(priv, rx, cfg);
 
-	bytes = sizeof(*rx->data.data_ring) * slots;
-	dma_free_coherent(dev, bytes, rx->data.data_ring,
-			  rx->data.data_bus);
-	rx->data.data_ring = NULL;
+	if (rx->data.data_ring) {
+		bytes = sizeof(*rx->data.data_ring) * slots;
+		dma_free_coherent(dev, bytes, rx->data.data_ring,
+				  rx->data.data_bus);
+		rx->data.data_ring = NULL;
+	}
 
 	kvfree(rx->qpl_copy_pool);
 	rx->qpl_copy_pool = NULL;
-- 
2.44.0.769.g3c40516874-goog



* [RFC PATCH net-next 3/9] gve: Add adminq funcs to add/remove a single Rx queue
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 1/9] queue_api: define " Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 2/9] gve: Make the RX free queue funcs idempotent Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 4/9] gve: Make gve_turn(up|down) ignore stopped queues Shailend Chand
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

This allows for implementing future ndo hooks that act on a single
queue.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 79 ++++++++++++++------
 drivers/net/ethernet/google/gve/gve_adminq.h |  2 +
 2 files changed, 58 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index b2b619aa2310..1b066c92d812 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -630,14 +630,15 @@ int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_que
 	return gve_adminq_kick_and_wait(priv);
 }
 
-static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+static void gve_adminq_get_create_rx_queue_cmd(struct gve_priv *priv,
+					       union gve_adminq_command *cmd,
+					       u32 queue_index)
 {
 	struct gve_rx_ring *rx = &priv->rx[queue_index];
-	union gve_adminq_command cmd;
 
-	memset(&cmd, 0, sizeof(cmd));
-	cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
-	cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) {
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
+	cmd->create_rx_queue = (struct gve_adminq_create_rx_queue) {
 		.queue_id = cpu_to_be32(queue_index),
 		.ntfy_id = cpu_to_be32(rx->ntfy_id),
 		.queue_resources_addr = cpu_to_be64(rx->q_resources_bus),
@@ -648,13 +649,13 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		u32 qpl_id = priv->queue_format == GVE_GQI_RDA_FORMAT ?
 			GVE_RAW_ADDRESSING_QPL_ID : rx->data.qpl->id;
 
-		cmd.create_rx_queue.rx_desc_ring_addr =
+		cmd->create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->desc.bus),
-		cmd.create_rx_queue.rx_data_ring_addr =
+		cmd->create_rx_queue.rx_data_ring_addr =
 			cpu_to_be64(rx->data.data_bus),
-		cmd.create_rx_queue.index = cpu_to_be32(queue_index);
-		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
+		cmd->create_rx_queue.index = cpu_to_be32(queue_index);
+		cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
+		cmd->create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
 	} else {
 		u32 qpl_id = 0;
 
@@ -662,25 +663,39 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 			qpl_id = GVE_RAW_ADDRESSING_QPL_ID;
 		else
 			qpl_id = rx->dqo.qpl->id;
-		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.rx_desc_ring_addr =
+		cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
+		cmd->create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->dqo.complq.bus);
-		cmd.create_rx_queue.rx_data_ring_addr =
+		cmd->create_rx_queue.rx_data_ring_addr =
 			cpu_to_be64(rx->dqo.bufq.bus);
-		cmd.create_rx_queue.packet_buffer_size =
+		cmd->create_rx_queue.packet_buffer_size =
 			cpu_to_be16(priv->data_buffer_size_dqo);
-		cmd.create_rx_queue.rx_buff_ring_size =
+		cmd->create_rx_queue.rx_buff_ring_size =
 			cpu_to_be16(priv->rx_desc_cnt);
-		cmd.create_rx_queue.enable_rsc =
+		cmd->create_rx_queue.enable_rsc =
 			!!(priv->dev->features & NETIF_F_LRO);
 		if (priv->header_split_enabled)
-			cmd.create_rx_queue.header_buffer_size =
+			cmd->create_rx_queue.header_buffer_size =
 				cpu_to_be16(priv->header_buf_size);
 	}
+}
 
+static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
+
+	gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index);
 	return gve_adminq_issue_cmd(priv, &cmd);
 }
 
+int gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
+
+	gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index);
+	return gve_adminq_execute_cmd(priv, &cmd);
+}
+
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues)
 {
 	int err;
@@ -727,17 +742,22 @@ int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_qu
 	return gve_adminq_kick_and_wait(priv);
 }
 
+static void gve_adminq_make_destroy_rx_queue_cmd(union gve_adminq_command *cmd,
+						 u32 queue_index)
+{
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE);
+	cmd->destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) {
+		.queue_id = cpu_to_be32(queue_index),
+	};
+}
+
 static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
 {
 	union gve_adminq_command cmd;
 	int err;
 
-	memset(&cmd, 0, sizeof(cmd));
-	cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE);
-	cmd.destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) {
-		.queue_id = cpu_to_be32(queue_index),
-	};
-
+	gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index);
 	err = gve_adminq_issue_cmd(priv, &cmd);
 	if (err)
 		return err;
@@ -745,6 +765,19 @@ static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
 	return 0;
 }
 
+int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
+	int err;
+
+	gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index);
+	err = gve_adminq_execute_cmd(priv, &cmd);
+	if (err)
+		return err;
+
+	return 0;
+}
+
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 {
 	int err;
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index beedf2353847..e64f0dbe744d 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -451,7 +451,9 @@ int gve_adminq_configure_device_resources(struct gve_priv *priv,
 int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
 int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
 int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
+int gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index);
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
+int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index);
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
 int gve_adminq_register_page_list(struct gve_priv *priv,
 				  struct gve_queue_page_list *qpl);
-- 
2.44.0.769.g3c40516874-goog



* [RFC PATCH net-next 4/9] gve: Make gve_turn(up|down) ignore stopped queues
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
                   ` (2 preceding siblings ...)
  2024-04-18 19:51 ` [RFC PATCH net-next 3/9] gve: Add adminq funcs to add/remove a single Rx queue Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 5/9] gve: Make gve_turnup work for nonempty queues Shailend Chand
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

Currently the queues are either all live or all dead, toggling from one
state to the other via the ndo open and stop hooks. The future addition
of single-queue ndo hooks changes this, and thus gve_turnup and
gve_turndown should evolve to account for a state where some queues are
live and some aren't.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve_main.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index a515e5af843c..8e875b598e78 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1965,12 +1965,16 @@ static void gve_turndown(struct gve_priv *priv)
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_tx_was_added_to_block(priv, idx))
+			continue;
 		napi_disable(&block->napi);
 	}
 	for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_rx_was_added_to_block(priv, idx))
+			continue;
 		napi_disable(&block->napi);
 	}
 
@@ -1993,6 +1997,9 @@ static void gve_turnup(struct gve_priv *priv)
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_tx_was_added_to_block(priv, idx))
+			continue;
+
 		napi_enable(&block->napi);
 		if (gve_is_gqi(priv)) {
 			iowrite32be(0, gve_irq_doorbell(priv, block));
@@ -2005,6 +2012,9 @@ static void gve_turnup(struct gve_priv *priv)
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_rx_was_added_to_block(priv, idx))
+			continue;
+
 		napi_enable(&block->napi);
 		if (gve_is_gqi(priv)) {
 			iowrite32be(0, gve_irq_doorbell(priv, block));
-- 
2.44.0.769.g3c40516874-goog



* [RFC PATCH net-next 5/9] gve: Make gve_turnup work for nonempty queues
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
                   ` (3 preceding siblings ...)
  2024-04-18 19:51 ` [RFC PATCH net-next 4/9] gve: Make gve_turn(up|down) ignore stopped queues Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 6/9] gve: Avoid rescheduling napi if on wrong cpu Shailend Chand
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

gVNIC requires that all queues be quiesced before any queue is operated
on (created or destroyed). To enable the implementation of future ndo
hooks that work on a single queue, gve_turnup needs to account for
queues that already have unprocessed descriptors in the ring.

Say rxq 4 is being stopped and started via the queue api. Due to gve's
requirement of quiescence, queues 0 through 3 are not processing their
rings while queue 4 is being toggled. Once they are made live, these
queues need to be poked to cause them to check their rings for
descriptors that were written during their brief period of quiescence.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve_main.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 8e875b598e78..4ab90d3eb1cb 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -2007,6 +2007,13 @@ static void gve_turnup(struct gve_priv *priv)
 			gve_set_itr_coalesce_usecs_dqo(priv, block,
 						       priv->tx_coalesce_usecs);
 		}
+
+		/* Any descs written by the NIC before this barrier will be
+		 * handled by the one-off napi schedule below. Whereas any
+		 * descs after the barrier will generate interrupts.
+		 */
+		mb();
+		napi_schedule(&block->napi);
 	}
 	for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -2022,6 +2029,13 @@ static void gve_turnup(struct gve_priv *priv)
 			gve_set_itr_coalesce_usecs_dqo(priv, block,
 						       priv->rx_coalesce_usecs);
 		}
+
+		/* Any descs written by the NIC before this barrier will be
+		 * handled by the one-off napi schedule below. Whereas any
+		 * descs after the barrier will generate interrupts.
+		 */
+		mb();
+		napi_schedule(&block->napi);
 	}
 
 	gve_set_napi_enabled(priv);
-- 
2.44.0.769.g3c40516874-goog



* [RFC PATCH net-next 6/9] gve: Avoid rescheduling napi if on wrong cpu
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
                   ` (4 preceding siblings ...)
  2024-04-18 19:51 ` [RFC PATCH net-next 5/9] gve: Make gve_turnup work for nonempty queues Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 7/9] gve: Reset Rx ring state in the ring-stop funcs Shailend Chand
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

To make per-queue ndo hooks possible, gve_turnup was changed in a
previous patch to account for queues that already have unprocessed
descriptors: it does a one-off napi_schedule to handle them. If
consistently high traffic persists in the immediate aftermath, the poll
routine for a queue can get "stuck" on the cpu on which the ndo hooks
ran, instead of the cpu its irq has affinity with.

This situation is exacerbated by the fact that the ndo hooks for all the
queues are invoked on the same cpu, potentially causing all the napi
poll routines to be residing on the same cpu.

A self-correcting mechanism in the poll method itself solves this
problem.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve.h      |  1 +
 drivers/net/ethernet/google/gve/gve_main.c | 33 ++++++++++++++++++++--
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index e97633b68e25..9f6a897c87cb 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -610,6 +610,7 @@ struct gve_notify_block {
 	struct gve_priv *priv;
 	struct gve_tx_ring *tx; /* tx rings on this block */
 	struct gve_rx_ring *rx; /* rx rings on this block */
+	u32 irq;
 };
 
 /* Tracks allowed and current queue settings */
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 4ab90d3eb1cb..c348dff7cca6 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -9,6 +9,7 @@
 #include <linux/etherdevice.h>
 #include <linux/filter.h>
 #include <linux/interrupt.h>
+#include <linux/irq.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 #include <linux/sched.h>
@@ -253,6 +254,18 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
 	return IRQ_HANDLED;
 }
 
+static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq)
+{
+	int cpu_curr = smp_processor_id();
+	const struct cpumask *aff_mask;
+
+	aff_mask = irq_get_effective_affinity_mask(irq);
+	if (unlikely(!aff_mask))
+		return 1;
+
+	return cpumask_test_cpu(cpu_curr, aff_mask);
+}
+
 int gve_napi_poll(struct napi_struct *napi, int budget)
 {
 	struct gve_notify_block *block;
@@ -322,8 +335,21 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
 		reschedule |= work_done == budget;
 	}
 
-	if (reschedule)
-		return budget;
+	if (reschedule) {
+		/* Reschedule by returning budget only if already on the correct
+		 * cpu.
+		 */
+		if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
+			return budget;
+
+		/* If not on the cpu with which this queue's irq has
+		 * affinity, avoid rescheduling napi and arm the irq
+		 * instead so that napi eventually gets rescheduled back
+		 * onto the right cpu.
+		 */
+		if (work_done == budget)
+			work_done--;
+	}
 
 	if (likely(napi_complete_done(napi, work_done))) {
 		/* Enable interrupts again.
@@ -428,6 +454,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 				"Failed to receive msix vector %d\n", i);
 			goto abort_with_some_ntfy_blocks;
 		}
+		block->irq = priv->msix_vectors[msix_idx].vector;
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      get_cpu_mask(i % active_cpus));
 		block->irq_db_index = &priv->irq_db_indices[i].index;
@@ -441,6 +468,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      NULL);
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
+		block->irq = 0;
 	}
 	kvfree(priv->ntfy_blocks);
 	priv->ntfy_blocks = NULL;
@@ -474,6 +502,7 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      NULL);
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
+		block->irq = 0;
 	}
 	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 	kvfree(priv->ntfy_blocks);
-- 
2.44.0.769.g3c40516874-goog



* [RFC PATCH net-next 7/9] gve: Reset Rx ring state in the ring-stop funcs
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
                   ` (5 preceding siblings ...)
  2024-04-18 19:51 ` [RFC PATCH net-next 6/9] gve: Avoid rescheduling napi if on wrong cpu Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 8/9] gve: Account for stopped queues when reading NIC stats Shailend Chand
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

This does not fix any existing bug. In anticipation of the ndo queue api
hooks that alloc/free/start/stop a single Rx queue, the already existing
per-queue stop functions are being made more robust. Specifically for
this use case: rx_queue_n.stop() + rx_queue_n.start()

Note that this is not the use case exercised by devmem tcp (the first
place these new ndo hooks would be used). There the use case is:
new_queue.alloc() + old_queue.stop() + new_queue.start() + old_queue.free()

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve_rx.c     |  48 +++++++--
 drivers/net/ethernet/google/gve/gve_rx_dqo.c | 102 +++++++++++++++----
 2 files changed, 120 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index fde7503940f7..1d235caab4c5 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -54,6 +54,41 @@ static void gve_rx_unfill_pages(struct gve_priv *priv,
 	rx->data.page_info = NULL;
 }
 
+static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
+{
+	ctx->skb_head = NULL;
+	ctx->skb_tail = NULL;
+	ctx->total_size = 0;
+	ctx->frag_cnt = 0;
+	ctx->drop_pkt = false;
+}
+
+static void gve_rx_init_ring_state_gqi(struct gve_rx_ring *rx)
+{
+	rx->desc.seqno = 1;
+	rx->cnt = 0;
+	gve_rx_ctx_clear(&rx->ctx);
+}
+
+static void gve_rx_reset_ring_gqi(struct gve_priv *priv, int idx)
+{
+	struct gve_rx_ring *rx = &priv->rx[idx];
+	const u32 slots = priv->rx_desc_cnt;
+	size_t size;
+
+	/* Reset desc ring */
+	if (rx->desc.desc_ring) {
+		size = slots * sizeof(rx->desc.desc_ring[0]);
+		memset(rx->desc.desc_ring, 0, size);
+	}
+
+	/* Reset q_resources */
+	if (rx->q_resources)
+		memset(rx->q_resources, 0, sizeof(*rx->q_resources));
+
+	gve_rx_init_ring_state_gqi(rx);
+}
+
 void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -63,6 +98,7 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 
 	gve_remove_napi(priv, ntfy_idx);
 	gve_rx_remove_from_block(priv, idx);
+	gve_rx_reset_ring_gqi(priv, idx);
 }
 
 static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
@@ -226,15 +262,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx,
 	return err;
 }
 
-static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
-{
-	ctx->skb_head = NULL;
-	ctx->skb_tail = NULL;
-	ctx->total_size = 0;
-	ctx->frag_cnt = 0;
-	ctx->drop_pkt = false;
-}
-
 void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -313,9 +340,8 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 		err = -ENOMEM;
 		goto abort_with_q_resources;
 	}
-	rx->cnt = 0;
 	rx->db_threshold = slots / 2;
-	rx->desc.seqno = 1;
+	gve_rx_init_ring_state_gqi(rx);
 
 	rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE;
 	gve_rx_ctx_clear(&rx->ctx);
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 15108407b54f..dc2c6bd92e82 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -211,6 +211,82 @@ static void gve_rx_free_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx)
 	}
 }
 
+static void gve_rx_init_ring_state_dqo(struct gve_rx_ring *rx,
+				       const u32 buffer_queue_slots,
+				       const u32 completion_queue_slots)
+{
+	int i;
+
+	/* Set buffer queue state */
+	rx->dqo.bufq.mask = buffer_queue_slots - 1;
+	rx->dqo.bufq.head = 0;
+	rx->dqo.bufq.tail = 0;
+
+	/* Set completion queue state */
+	rx->dqo.complq.num_free_slots = completion_queue_slots;
+	rx->dqo.complq.mask = completion_queue_slots - 1;
+	rx->dqo.complq.cur_gen_bit = 0;
+	rx->dqo.complq.head = 0;
+
+	/* Set RX SKB context */
+	rx->ctx.skb_head = NULL;
+	rx->ctx.skb_tail = NULL;
+
+	/* Set up linked list of buffer IDs */
+	if (rx->dqo.buf_states) {
+		for (i = 0; i < rx->dqo.num_buf_states - 1; i++)
+			rx->dqo.buf_states[i].next = i + 1;
+		rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1;
+	}
+
+	rx->dqo.free_buf_states = 0;
+	rx->dqo.recycled_buf_states.head = -1;
+	rx->dqo.recycled_buf_states.tail = -1;
+	rx->dqo.used_buf_states.head = -1;
+	rx->dqo.used_buf_states.tail = -1;
+}
+
+static void gve_rx_reset_ring_dqo(struct gve_priv *priv, int idx)
+{
+	struct gve_rx_ring *rx = &priv->rx[idx];
+	size_t size;
+	int i;
+
+	const u32 buffer_queue_slots = priv->rx_desc_cnt;
+	const u32 completion_queue_slots = priv->rx_desc_cnt;
+
+	/* Reset buffer queue */
+	if (rx->dqo.bufq.desc_ring) {
+		size = sizeof(rx->dqo.bufq.desc_ring[0]) *
+			buffer_queue_slots;
+		memset(rx->dqo.bufq.desc_ring, 0, size);
+	}
+
+	/* Reset completion queue */
+	if (rx->dqo.complq.desc_ring) {
+		size = sizeof(rx->dqo.complq.desc_ring[0]) *
+			completion_queue_slots;
+		memset(rx->dqo.complq.desc_ring, 0, size);
+	}
+
+	/* Reset q_resources */
+	if (rx->q_resources)
+		memset(rx->q_resources, 0, sizeof(*rx->q_resources));
+
+	/* Reset buf states */
+	if (rx->dqo.buf_states) {
+		for (i = 0; i < rx->dqo.num_buf_states; i++) {
+			struct gve_rx_buf_state_dqo *bs = &rx->dqo.buf_states[i];
+
+			if (bs->page_info.page)
+				gve_free_page_dqo(priv, bs, !rx->dqo.qpl);
+		}
+	}
+
+	gve_rx_init_ring_state_dqo(rx, buffer_queue_slots,
+				   completion_queue_slots);
+}
+
 void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -220,6 +296,7 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 
 	gve_remove_napi(priv, ntfy_idx);
 	gve_rx_remove_from_block(priv, idx);
+	gve_rx_reset_ring_dqo(priv, idx);
 }
 
 static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
@@ -275,10 +352,10 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
 	netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx);
 }
 
-static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx)
+static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx,
+				 const u32 buf_count)
 {
 	struct device *hdev = &priv->pdev->dev;
-	int buf_count = rx->dqo.bufq.mask + 1;
 
 	rx->dqo.hdr_bufs.data = dma_alloc_coherent(hdev, priv->header_buf_size * buf_count,
 						   &rx->dqo.hdr_bufs.addr, GFP_KERNEL);
@@ -303,7 +380,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t size;
-	int i;
 
 	const u32 buffer_queue_slots = cfg->ring_size;
 	const u32 completion_queue_slots = cfg->ring_size;
@@ -313,11 +389,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	memset(rx, 0, sizeof(*rx));
 	rx->gve = priv;
 	rx->q_num = idx;
-	rx->dqo.bufq.mask = buffer_queue_slots - 1;
-	rx->dqo.complq.num_free_slots = completion_queue_slots;
-	rx->dqo.complq.mask = completion_queue_slots - 1;
-	rx->ctx.skb_head = NULL;
-	rx->ctx.skb_tail = NULL;
 
 	rx->dqo.num_buf_states = cfg->raw_addressing ?
 		min_t(s16, S16_MAX, buffer_queue_slots * 4) :
@@ -330,19 +401,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 
 	/* Allocate header buffers for header-split */
 	if (cfg->enable_header_split)
-		if (gve_rx_alloc_hdr_bufs(priv, rx))
+		if (gve_rx_alloc_hdr_bufs(priv, rx, buffer_queue_slots))
 			goto err;
 
-	/* Set up linked list of buffer IDs */
-	for (i = 0; i < rx->dqo.num_buf_states - 1; i++)
-		rx->dqo.buf_states[i].next = i + 1;
-
-	rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1;
-	rx->dqo.recycled_buf_states.head = -1;
-	rx->dqo.recycled_buf_states.tail = -1;
-	rx->dqo.used_buf_states.head = -1;
-	rx->dqo.used_buf_states.tail = -1;
-
 	/* Allocate RX completion queue */
 	size = sizeof(rx->dqo.complq.desc_ring[0]) *
 		completion_queue_slots;
@@ -370,6 +431,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	if (!rx->q_resources)
 		goto err;
 
+	gve_rx_init_ring_state_dqo(rx, buffer_queue_slots,
+				   completion_queue_slots);
+
 	return 0;
 
 err:
-- 
2.44.0.769.g3c40516874-goog


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH net-next 8/9] gve: Account for stopped queues when reading NIC stats
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
                   ` (6 preceding siblings ...)
  2024-04-18 19:51 ` [RFC PATCH net-next 7/9] gve: Reset Rx ring state in the ring-stop funcs Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-18 19:51 ` [RFC PATCH net-next 9/9] gve: Implement queue api Shailend Chand
  2024-04-18 21:55 ` [RFC PATCH net-next 0/9] gve: Implement netdev " Mina Almasry
  9 siblings, 0 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

We now account for the fact that the NIC might send us stats for only a
subset of queues. Without this change, gve_get_ethtool_stats might make
an out-of-bounds access into the priv->stats_report->stats array.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve_ethtool.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index 299206d15c73..2a451aba8328 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -181,12 +181,17 @@ gve_get_ethtool_stats(struct net_device *netdev,
 					    sizeof(int), GFP_KERNEL);
 	if (!rx_qid_to_stats_idx)
 		return;
+	for (ring = 0; ring < priv->rx_cfg.num_queues; ring++)
+		rx_qid_to_stats_idx[ring] = -1;
 	tx_qid_to_stats_idx = kmalloc_array(num_tx_queues,
 					    sizeof(int), GFP_KERNEL);
 	if (!tx_qid_to_stats_idx) {
 		kfree(rx_qid_to_stats_idx);
 		return;
 	}
+	for (ring = 0; ring < num_tx_queues; ring++)
+		tx_qid_to_stats_idx[ring] = -1;
+
 	for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0,
 	     rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0,
 	     rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0,
@@ -308,11 +313,11 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			data[i++] = rx->rx_copybreak_pkt;
 			data[i++] = rx->rx_copied_pkt;
 			/* stats from NIC */
-			if (skip_nic_stats) {
+			stats_idx = rx_qid_to_stats_idx[ring];
+			if (skip_nic_stats || stats_idx == -1) {
 				/* skip NIC rx stats */
 				i += NIC_RX_STATS_REPORT_NUM;
 			} else {
-				stats_idx = rx_qid_to_stats_idx[ring];
 				for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
 					u64 value =
 						be64_to_cpu(report_stats[stats_idx + j].value);
@@ -383,11 +388,11 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			data[i++] = gve_tx_load_event_counter(priv, tx);
 			data[i++] = tx->dma_mapping_error;
 			/* stats from NIC */
-			if (skip_nic_stats) {
+			stats_idx = tx_qid_to_stats_idx[ring];
+			if (skip_nic_stats || stats_idx == -1) {
 				/* skip NIC tx stats */
 				i += NIC_TX_STATS_REPORT_NUM;
 			} else {
-				stats_idx = tx_qid_to_stats_idx[ring];
 				for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) {
 					u64 value =
 						be64_to_cpu(report_stats[stats_idx + j].value);
-- 
2.44.0.769.g3c40516874-goog



* [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
                   ` (7 preceding siblings ...)
  2024-04-18 19:51 ` [RFC PATCH net-next 8/9] gve: Account for stopped queues when reading NIC stats Shailend Chand
@ 2024-04-18 19:51 ` Shailend Chand
  2024-04-19  1:48   ` Jakub Kicinski
                     ` (2 more replies)
  2024-04-18 21:55 ` [RFC PATCH net-next 0/9] gve: Implement netdev " Mina Almasry
  9 siblings, 3 replies; 19+ messages in thread
From: Shailend Chand @ 2024-04-18 19:51 UTC (permalink / raw)
  To: netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb, Shailend Chand

An api enabling the net stack to reset driver queues is implemented for
gve.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve.h        |   6 +
 drivers/net/ethernet/google/gve/gve_dqo.h    |   6 +
 drivers/net/ethernet/google/gve/gve_main.c   | 143 +++++++++++++++++++
 drivers/net/ethernet/google/gve/gve_rx.c     |  12 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  12 +-
 5 files changed, 167 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 9f6a897c87cb..d752e525bde7 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1147,6 +1147,12 @@ bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
 void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
 int gve_rx_poll(struct gve_notify_block *block, int budget);
 bool gve_rx_work_pending(struct gve_rx_ring *rx);
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx);
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings(struct gve_priv *priv);
 int gve_rx_alloc_rings_gqi(struct gve_priv *priv,
 			   struct gve_rx_alloc_rings_cfg *cfg);
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index b81584829c40..e83773fb891f 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -44,6 +44,12 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv,
 			   struct gve_tx_alloc_rings_cfg *cfg);
 void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx);
 void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx);
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx);
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings_dqo(struct gve_priv *priv,
 			   struct gve_rx_alloc_rings_cfg *cfg);
 void gve_rx_free_rings_dqo(struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index c348dff7cca6..5e652958f10f 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -17,6 +17,7 @@
 #include <linux/workqueue.h>
 #include <linux/utsname.h>
 #include <linux/version.h>
+#include <net/netdev_queues.h>
 #include <net/sch_generic.h>
 #include <net/xdp_sock_drv.h>
 #include "gve.h"
@@ -2070,6 +2071,15 @@ static void gve_turnup(struct gve_priv *priv)
 	gve_set_napi_enabled(priv);
 }
 
+static void gve_turnup_and_check_status(struct gve_priv *priv)
+{
+	u32 status;
+
+	gve_turnup(priv);
+	status = ioread32be(&priv->reg_bar0->device_status);
+	gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
+}
+
 static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct gve_notify_block *block;
@@ -2530,6 +2540,138 @@ static void gve_write_version(u8 __iomem *driver_version_register)
 	writeb('\n', driver_version_register);
 }
 
+static int gve_rx_queue_stop(struct net_device *dev, int idx,
+			     void **out_per_q_mem)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_ring *rx;
+	int err;
+
+	if (!priv->rx)
+		return -EAGAIN;
+	if (idx < 0 || idx >= priv->rx_cfg.max_queues)
+		return -ERANGE;
+
+	/* Destroying queue 0 while other queues exist is not supported in DQO */
+	if (!gve_is_gqi(priv) && idx == 0)
+		return -ERANGE;
+
+	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
+	if (!rx)
+		return -ENOMEM;
+	*rx = priv->rx[idx];
+
+	/* Single-queue destruction requires quiescence on all queues */
+	gve_turndown(priv);
+
+	/* This failure will trigger a reset - no need to clean up */
+	err = gve_adminq_destroy_single_rx_queue(priv, idx);
+	if (err) {
+		kvfree(rx);
+		return err;
+	}
+
+	if (gve_is_gqi(priv))
+		gve_rx_stop_ring_gqi(priv, idx);
+	else
+		gve_rx_stop_ring_dqo(priv, idx);
+
+	/* Turn the unstopped queues back up */
+	gve_turnup_and_check_status(priv);
+
+	*out_per_q_mem = rx;
+	return 0;
+}
+
+static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_alloc_rings_cfg cfg = {0};
+	struct gve_rx_ring *rx;
+
+	gve_rx_get_curr_alloc_cfg(priv, &cfg);
+	rx = (struct gve_rx_ring *)per_q_mem;
+	if (!rx)
+		return;
+
+	if (gve_is_gqi(priv))
+		gve_rx_free_ring_gqi(priv, rx, &cfg);
+	else
+		gve_rx_free_ring_dqo(priv, rx, &cfg);
+
+	kvfree(per_q_mem);
+}
+
+static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_alloc_rings_cfg cfg = {0};
+	struct gve_rx_ring *rx;
+	int err;
+
+	gve_rx_get_curr_alloc_cfg(priv, &cfg);
+	if (idx < 0 || idx >= cfg.qcfg->max_queues)
+		return NULL;
+
+	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
+	if (!rx)
+		return NULL;
+
+	if (gve_is_gqi(priv))
+		err = gve_rx_alloc_ring_gqi(priv, &cfg, rx, idx);
+	else
+		err = gve_rx_alloc_ring_dqo(priv, &cfg, rx, idx);
+
+	if (err) {
+		kvfree(rx);
+		return NULL;
+	}
+	return rx;
+}
+
+static int gve_rx_queue_start(struct net_device *dev, int idx, void *per_q_mem)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_ring *rx;
+	int err;
+
+	if (!priv->rx)
+		return -EAGAIN;
+	if (idx < 0 || idx >= priv->rx_cfg.max_queues)
+		return -ERANGE;
+	rx = (struct gve_rx_ring *)per_q_mem;
+	priv->rx[idx] = *rx;
+
+	/* Single-queue creation requires quiescence on all queues */
+	gve_turndown(priv);
+
+	if (gve_is_gqi(priv))
+		gve_rx_start_ring_gqi(priv, idx);
+	else
+		gve_rx_start_ring_dqo(priv, idx);
+
+	/* This failure will trigger a reset - no need to clean up */
+	err = gve_adminq_create_single_rx_queue(priv, idx);
+	if (err)
+		return err;
+
+	if (gve_is_gqi(priv))
+		gve_rx_write_doorbell(priv, &priv->rx[idx]);
+	else
+		gve_rx_post_buffers_dqo(&priv->rx[idx]);
+
+	/* Turn the unstopped queues back up */
+	gve_turnup_and_check_status(priv);
+	return 0;
+}
+
+static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
+	.ndo_queue_mem_alloc	=	gve_rx_queue_mem_alloc,
+	.ndo_queue_mem_free	=	gve_rx_queue_mem_free,
+	.ndo_queue_start	=	gve_rx_queue_start,
+	.ndo_queue_stop		=	gve_rx_queue_stop,
+};
+
 static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	int max_tx_queues, max_rx_queues;
@@ -2584,6 +2726,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	pci_set_drvdata(pdev, dev);
 	dev->ethtool_ops = &gve_ethtool_ops;
 	dev->netdev_ops = &gve_netdev_ops;
+	dev->queue_mgmt_ops = &gve_queue_mgmt_ops;
 
 	/* Set default and supported features.
 	 *
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 1d235caab4c5..307bf97d4778 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -101,8 +101,8 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 	gve_rx_reset_ring_gqi(priv, idx);
 }
 
-static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
-				 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg)
 {
 	struct device *dev = &priv->pdev->dev;
 	u32 slots = rx->mask + 1;
@@ -270,10 +270,10 @@ void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
 	gve_add_napi(priv, ntfy_idx, gve_napi_poll);
 }
 
-static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
-				 struct gve_rx_alloc_rings_cfg *cfg,
-				 struct gve_rx_ring *rx,
-				 int idx)
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
 	u32 slots = cfg->ring_size;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index dc2c6bd92e82..dcbc37118870 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -299,8 +299,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 	gve_rx_reset_ring_dqo(priv, idx);
 }
 
-static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
-				 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg)
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t completion_queue_slots;
@@ -373,10 +373,10 @@ void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx)
 	gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
 }
 
-static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
-				 struct gve_rx_alloc_rings_cfg *cfg,
-				 struct gve_rx_ring *rx,
-				 int idx)
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t size;
-- 
2.44.0.769.g3c40516874-goog



* Re: [RFC PATCH net-next 0/9] gve: Implement netdev queue api
  2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
                   ` (8 preceding siblings ...)
  2024-04-18 19:51 ` [RFC PATCH net-next 9/9] gve: Implement queue api Shailend Chand
@ 2024-04-18 21:55 ` Mina Almasry
  9 siblings, 0 replies; 19+ messages in thread
From: Mina Almasry @ 2024-04-18 21:55 UTC (permalink / raw)
  To: Shailend Chand; +Cc: netdev, davem, edumazet, kuba, pabeni, willemb

On Thu, Apr 18, 2024 at 12:52 PM Shailend Chand <shailend@google.com> wrote:
>
> Following the discussion on
> https://patchwork.kernel.org/project/linux-media/patch/20240305020153.2787423-2-almasrymina@google.com/,
> the queue api defined by Mina is implemented for gve.
>
> The first patch is just Mina's introduction of the api. The rest of the
> patches make surgical changes in gve to enable it to work correctly with
> only a subset of queues present (thus far it had assumed that either all
> queues are up or all are down). The final patch has the api
> implementation.
>

I applied the series locally and tested it with devmem TCP. All my
tests pass. You can add to the individual patches:

Tested-by: Mina Almasry <almasrymina@google.com>

> Mina Almasry (1):
>   queue_api: define queue api
>
> Shailend Chand (8):
>   gve: Make the RX free queue funcs idempotent
>   gve: Add adminq funcs to add/remove a single Rx queue
>   gve: Make gve_turn(up|down) ignore stopped queues
>   gve: Make gve_turnup work for nonempty queues
>   gve: Avoid rescheduling napi if on wrong cpu
>   gve: Reset Rx ring state in the ring-stop funcs
>   gve: Account for stopped queues when reading NIC stats
>   gve: Implement queue api
>
>  drivers/net/ethernet/google/gve/gve.h         |   7 +
>  drivers/net/ethernet/google/gve/gve_adminq.c  |  79 +++++--
>  drivers/net/ethernet/google/gve/gve_adminq.h  |   2 +
>  drivers/net/ethernet/google/gve/gve_dqo.h     |   6 +
>  drivers/net/ethernet/google/gve/gve_ethtool.c |  13 +-
>  drivers/net/ethernet/google/gve/gve_main.c    | 200 +++++++++++++++++-
>  drivers/net/ethernet/google/gve/gve_rx.c      |  89 +++++---
>  drivers/net/ethernet/google/gve/gve_rx_dqo.c  | 114 +++++++---
>  include/linux/netdevice.h                     |   3 +
>  include/net/netdev_queues.h                   |  27 +++
>  10 files changed, 459 insertions(+), 81 deletions(-)
>
> --
> 2.44.0.769.g3c40516874-goog
>


-- 
Thanks,
Mina


* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-18 19:51 ` [RFC PATCH net-next 9/9] gve: Implement queue api Shailend Chand
@ 2024-04-19  1:48   ` Jakub Kicinski
  2024-04-19 16:10     ` Mina Almasry
  2024-04-19 22:23   ` Shailend Chand
  2024-04-23 17:33   ` David Wei
  2 siblings, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2024-04-19  1:48 UTC (permalink / raw)
  To: Shailend Chand; +Cc: netdev, almasrymina, davem, edumazet, pabeni, willemb

On Thu, 18 Apr 2024 19:51:59 +0000 Shailend Chand wrote:
> +static int gve_rx_queue_stop(struct net_device *dev, int idx,
> +			     void **out_per_q_mem)
> +{
> +	struct gve_priv *priv = netdev_priv(dev);
> +	struct gve_rx_ring *rx;
> +	int err;
> +
> +	if (!priv->rx)
> +		return -EAGAIN;
> +	if (idx < 0 || idx >= priv->rx_cfg.max_queues)
> +		return -ERANGE;

A little too defensive? Core should not issue these > current real num
queues.

> +	/* Destroying queue 0 while other queues exist is not supported in DQO */
> +	if (!gve_is_gqi(priv) && idx == 0)
> +		return -ERANGE;
> +
> +	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);

Why allocate in the driver rather than let the core allocate based on
the declared size ?



* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-19  1:48   ` Jakub Kicinski
@ 2024-04-19 16:10     ` Mina Almasry
  2024-04-20  3:25       ` Jakub Kicinski
  0 siblings, 1 reply; 19+ messages in thread
From: Mina Almasry @ 2024-04-19 16:10 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: Shailend Chand, netdev, davem, edumazet, pabeni, willemb

On Thu, Apr 18, 2024 at 6:48 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Thu, 18 Apr 2024 19:51:59 +0000 Shailend Chand wrote:
> > +static int gve_rx_queue_stop(struct net_device *dev, int idx,
> > +                          void **out_per_q_mem)
> > +{
> > +     struct gve_priv *priv = netdev_priv(dev);
> > +     struct gve_rx_ring *rx;
> > +     int err;
> > +
> > +     if (!priv->rx)
> > +             return -EAGAIN;
> > +     if (idx < 0 || idx >= priv->rx_cfg.max_queues)
> > +             return -ERANGE;
>
> A little too defensive? Core should not issue these > current real num
> queues.
>
> > +     /* Destroying queue 0 while other queues exist is not supported in DQO */
> > +     if (!gve_is_gqi(priv) && idx == 0)
> > +             return -ERANGE;
> > +
> > +     rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
>
> Why allocate in the driver rather than let the core allocate based on
> the declared size ?
>

Currently the ndos don't include an interface for the driver to
declare the size, right? In theory we could add it to the ndos like
so, if I understood you correctly (untested, just to illustrate what
I'm thinking):

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 7c38dc06a392..efe3944b529a 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -2579,11 +2579,16 @@ static void gve_write_version(u8 __iomem *driver_version_register)
 	writeb('\n', driver_version_register);
 }
 
+static size_t gve_rx_queue_mem_get_size(void)
+{
+	return sizeof(struct gve_rx_ring);
+}
+
 static int gve_rx_queue_stop(struct net_device *dev, int idx,
-			     void **out_per_q_mem)
+			     void *out_per_q_mem)
 {
 	struct gve_priv *priv = netdev_priv(dev);
-	struct gve_rx_ring *rx;
+	struct gve_rx_ring *rx = out_per_q_mem;
 	int err;
 
 	if (!priv->rx)
@@ -2595,9 +2600,6 @@ static int gve_rx_queue_stop(struct net_device *dev, int idx,
 	if (!gve_is_gqi(priv) && idx == 0)
 		return -ERANGE;
 
-	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
-	if (!rx)
-		return -ENOMEM;
 	*rx = priv->rx[idx];
 
 	/* Single-queue destruction requires quiescence on all queues */
@@ -2606,7 +2608,6 @@ static int gve_rx_queue_stop(struct net_device *dev, int idx,
 	/* This failure will trigger a reset - no need to clean up */
 	err = gve_adminq_destroy_single_rx_queue(priv, idx);
 	if (err) {
-		kvfree(rx);
 		return err;
 	}
 
@@ -2618,7 +2619,6 @@ static int gve_rx_queue_stop(struct net_device *dev, int idx,
 	/* Turn the unstopped queues back up */
 	gve_turnup_and_check_status(priv);
 
-	*out_per_q_mem = rx;
 	return 0;
 }
 
@@ -2709,6 +2709,7 @@ static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
 	.ndo_queue_mem_free	=	gve_rx_queue_mem_free,
 	.ndo_queue_start	=	gve_rx_queue_start,
 	.ndo_queue_stop		=	gve_rx_queue_stop,
+	.ndo_queue_mem_get_size	=	gve_rx_queue_mem_get_size,
 };
 
 static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index 337df0860ae6..0af08414d8eb 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -75,6 +75,7 @@ struct netdev_stat_ops {
  * @ndo_queue_stop:	Stop the RX queue at the specified index.
  */
 struct netdev_queue_mgmt_ops {
+	size_t			(*ndo_queue_mem_get_size)(void);
 	void *			(*ndo_queue_mem_alloc)(struct net_device *dev,
 						       int idx);
 	void			(*ndo_queue_mem_free)(struct net_device *dev,
@@ -84,7 +85,7 @@ struct netdev_queue_mgmt_ops {
 					      void *queue_mem);
 	int			(*ndo_queue_stop)(struct net_device *dev,
 						  int idx,
-						  void **out_queue_mem);
+						  void *out_queue_mem);
 };
 
 /**
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 01337de7d6a4..89c90e21f083 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -60,7 +60,8 @@ static int net_devmem_restart_rx_queue(struct net_device *dev, int rxq_idx)
 	void *old_mem;
 	int err;
 
-	if (!dev->queue_mgmt_ops->ndo_queue_stop ||
+	if (!dev->queue_mgmt_ops->ndo_queue_mem_get_size ||
+	    !dev->queue_mgmt_ops->ndo_queue_stop ||
 	    !dev->queue_mgmt_ops->ndo_queue_mem_free ||
 	    !dev->queue_mgmt_ops->ndo_queue_mem_alloc ||
 	    !dev->queue_mgmt_ops->ndo_queue_start)
@@ -70,7 +71,11 @@ static int net_devmem_restart_rx_queue(struct net_device *dev, int rxq_idx)
 	if (!new_mem)
 		return -ENOMEM;
 
-	err = dev->queue_mgmt_ops->ndo_queue_stop(dev, rxq_idx, &old_mem);
+	old_mem = kvzalloc(dev->queue_mgmt_ops->ndo_queue_mem_get_size(),
+			   GFP_KERNEL);
+	BUG_ON(!old_mem); /* TODO */
+
+	err = dev->queue_mgmt_ops->ndo_queue_stop(dev, rxq_idx, old_mem);
 	if (err)
 		goto err_free_new_mem;
 
@@ -79,6 +84,7 @@ static int net_devmem_restart_rx_queue(struct net_device *dev, int rxq_idx)
 		goto err_start_queue;
 
 	dev->queue_mgmt_ops->ndo_queue_mem_free(dev, old_mem);
+	kvfree(old_mem);
 
 	return 0;


I think if we want to apply this change to ndo_queue_stop, then we
should probably apply it to ndo_queue_mem_alloc as well, right? I.e.
core would allocate the memory, and ndo_queue_mem_alloc would allocate
the actual resources and fill in the entries behind the pointer? Is
this what you're looking for here?

--
Thanks,
Mina


* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-18 19:51 ` [RFC PATCH net-next 9/9] gve: Implement queue api Shailend Chand
  2024-04-19  1:48   ` Jakub Kicinski
@ 2024-04-19 22:23   ` Shailend Chand
  2024-04-23 17:55     ` David Wei
  2024-04-23 17:33   ` David Wei
  2 siblings, 1 reply; 19+ messages in thread
From: Shailend Chand @ 2024-04-19 22:23 UTC (permalink / raw)
  To: netdev; +Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb

On Thu, Apr 18, 2024 at 12:52 PM Shailend Chand <shailend@google.com> wrote:
>
> An api enabling the net stack to reset driver queues is implemented for
> gve.
>
> Signed-off-by: Shailend Chand <shailend@google.com>
> ---
>  drivers/net/ethernet/google/gve/gve.h        |   6 +
>  drivers/net/ethernet/google/gve/gve_dqo.h    |   6 +
>  drivers/net/ethernet/google/gve/gve_main.c   | 143 +++++++++++++++++++
>  drivers/net/ethernet/google/gve/gve_rx.c     |  12 +-
>  drivers/net/ethernet/google/gve/gve_rx_dqo.c |  12 +-
>  5 files changed, 167 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> index 9f6a897c87cb..d752e525bde7 100644
> --- a/drivers/net/ethernet/google/gve/gve.h
> +++ b/drivers/net/ethernet/google/gve/gve.h
> @@ -1147,6 +1147,12 @@ bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
>  void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
>  int gve_rx_poll(struct gve_notify_block *block, int budget);
>  bool gve_rx_work_pending(struct gve_rx_ring *rx);
> +int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
> +                         struct gve_rx_alloc_rings_cfg *cfg,
> +                         struct gve_rx_ring *rx,
> +                         int idx);
> +void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
> +                         struct gve_rx_alloc_rings_cfg *cfg);
>  int gve_rx_alloc_rings(struct gve_priv *priv);
>  int gve_rx_alloc_rings_gqi(struct gve_priv *priv,
>                            struct gve_rx_alloc_rings_cfg *cfg);
> diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
> index b81584829c40..e83773fb891f 100644
> --- a/drivers/net/ethernet/google/gve/gve_dqo.h
> +++ b/drivers/net/ethernet/google/gve/gve_dqo.h
> @@ -44,6 +44,12 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv,
>                            struct gve_tx_alloc_rings_cfg *cfg);
>  void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx);
>  void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx);
> +int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
> +                         struct gve_rx_alloc_rings_cfg *cfg,
> +                         struct gve_rx_ring *rx,
> +                         int idx);
> +void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
> +                         struct gve_rx_alloc_rings_cfg *cfg);
>  int gve_rx_alloc_rings_dqo(struct gve_priv *priv,
>                            struct gve_rx_alloc_rings_cfg *cfg);
>  void gve_rx_free_rings_dqo(struct gve_priv *priv,
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> index c348dff7cca6..5e652958f10f 100644
> --- a/drivers/net/ethernet/google/gve/gve_main.c
> +++ b/drivers/net/ethernet/google/gve/gve_main.c
> @@ -17,6 +17,7 @@
>  #include <linux/workqueue.h>
>  #include <linux/utsname.h>
>  #include <linux/version.h>
> +#include <net/netdev_queues.h>
>  #include <net/sch_generic.h>
>  #include <net/xdp_sock_drv.h>
>  #include "gve.h"
> @@ -2070,6 +2071,15 @@ static void gve_turnup(struct gve_priv *priv)
>         gve_set_napi_enabled(priv);
>  }
>
> +static void gve_turnup_and_check_status(struct gve_priv *priv)
> +{
> +       u32 status;
> +
> +       gve_turnup(priv);
> +       status = ioread32be(&priv->reg_bar0->device_status);
> +       gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
> +}
> +
>  static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
>  {
>         struct gve_notify_block *block;
> @@ -2530,6 +2540,138 @@ static void gve_write_version(u8 __iomem *driver_version_register)
>         writeb('\n', driver_version_register);
>  }
>
> +static int gve_rx_queue_stop(struct net_device *dev, int idx,
> +                            void **out_per_q_mem)
> +{
> +       struct gve_priv *priv = netdev_priv(dev);
> +       struct gve_rx_ring *rx;
> +       int err;
> +
> +       if (!priv->rx)
> +               return -EAGAIN;
> +       if (idx < 0 || idx >= priv->rx_cfg.max_queues)
> +               return -ERANGE;
> +
> +       /* Destroying queue 0 while other queues exist is not supported in DQO */
> +       if (!gve_is_gqi(priv) && idx == 0)
> +               return -ERANGE;
> +
> +       rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
> +       if (!rx)
> +               return -ENOMEM;
> +       *rx = priv->rx[idx];
> +
> +       /* Single-queue destruction requires quiescence on all queues */
> +       gve_turndown(priv);
> +
> +       /* This failure will trigger a reset - no need to clean up */
> +       err = gve_adminq_destroy_single_rx_queue(priv, idx);
> +       if (err) {
> +               kvfree(rx);
> +               return err;
> +       }
> +
> +       if (gve_is_gqi(priv))
> +               gve_rx_stop_ring_gqi(priv, idx);
> +       else
> +               gve_rx_stop_ring_dqo(priv, idx);
> +
> +       /* Turn the unstopped queues back up */
> +       gve_turnup_and_check_status(priv);
> +
> +       *out_per_q_mem = rx;
> +       return 0;
> +}
> +
> +static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
> +{
> +       struct gve_priv *priv = netdev_priv(dev);
> +       struct gve_rx_alloc_rings_cfg cfg = {0};
> +       struct gve_rx_ring *rx;
> +
> +       gve_rx_get_curr_alloc_cfg(priv, &cfg);
> +       rx = (struct gve_rx_ring *)per_q_mem;
> +       if (!rx)
> +               return;
> +
> +       if (gve_is_gqi(priv))
> +               gve_rx_free_ring_gqi(priv, rx, &cfg);
> +       else
> +               gve_rx_free_ring_dqo(priv, rx, &cfg);
> +
> +       kvfree(per_q_mem);
> +}
> +
> +static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
> +{
> +       struct gve_priv *priv = netdev_priv(dev);
> +       struct gve_rx_alloc_rings_cfg cfg = {0};
> +       struct gve_rx_ring *rx;
> +       int err;
> +
> +       gve_rx_get_curr_alloc_cfg(priv, &cfg);
> +       if (idx < 0 || idx >= cfg.qcfg->max_queues)
> +               return NULL;
> +
> +       rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
> +       if (!rx)
> +               return NULL;
> +
> +       if (gve_is_gqi(priv))
> +               err = gve_rx_alloc_ring_gqi(priv, &cfg, rx, idx);
> +       else
> +               err = gve_rx_alloc_ring_dqo(priv, &cfg, rx, idx);
> +
> +       if (err) {
> +               kvfree(rx);
> +               return NULL;
> +       }
> +       return rx;
> +}
> +
> +static int gve_rx_queue_start(struct net_device *dev, int idx, void *per_q_mem)
> +{
> +       struct gve_priv *priv = netdev_priv(dev);
> +       struct gve_rx_ring *rx;
> +       int err;
> +
> +       if (!priv->rx)
> +               return -EAGAIN;
> +       if (idx < 0 || idx >= priv->rx_cfg.max_queues)
> +               return -ERANGE;
> +       rx = (struct gve_rx_ring *)per_q_mem;
> +       priv->rx[idx] = *rx;
> +
> +       /* Single-queue creation requires quiescence on all queues */
> +       gve_turndown(priv);
> +
> +       if (gve_is_gqi(priv))
> +               gve_rx_start_ring_gqi(priv, idx);
> +       else
> +               gve_rx_start_ring_dqo(priv, idx);
> +
> +       /* This failure will trigger a reset - no need to clean up */
> +       err = gve_adminq_create_single_rx_queue(priv, idx);
> +       if (err)
> +               return err;
> +
> +       if (gve_is_gqi(priv))
> +               gve_rx_write_doorbell(priv, &priv->rx[idx]);
> +       else
> +               gve_rx_post_buffers_dqo(&priv->rx[idx]);
> +
> +       /* Turn the unstopped queues back up */
> +       gve_turnup_and_check_status(priv);
> +       return 0;
> +}

I realized that by not kvfree-ing the passed-in `per_q_mem`, the start
hook leaks it. The alloc and stop hooks kvzalloc a temp ring struct, so
to keep symmetry and avoid leaks the start and free hooks ought to
kvfree it. The free hook does so, but I forgot to do it in the start
hook.

If we go down the route of making core aware of the ring struct size,
then none of the four hooks
need to worry about the temp struct: core can just alloc and free it
for both the old and new queue.

> +
> +static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
> +       .ndo_queue_mem_alloc    =       gve_rx_queue_mem_alloc,
> +       .ndo_queue_mem_free     =       gve_rx_queue_mem_free,
> +       .ndo_queue_start        =       gve_rx_queue_start,
> +       .ndo_queue_stop         =       gve_rx_queue_stop,
> +};
> +
>  static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>  {
>         int max_tx_queues, max_rx_queues;
> @@ -2584,6 +2726,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>         pci_set_drvdata(pdev, dev);
>         dev->ethtool_ops = &gve_ethtool_ops;
>         dev->netdev_ops = &gve_netdev_ops;
> +       dev->queue_mgmt_ops = &gve_queue_mgmt_ops;
>
>         /* Set default and supported features.
>          *
> diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
> index 1d235caab4c5..307bf97d4778 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx.c
> @@ -101,8 +101,8 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
>         gve_rx_reset_ring_gqi(priv, idx);
>  }
>
> -static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
> -                                struct gve_rx_alloc_rings_cfg *cfg)
> +void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
> +                         struct gve_rx_alloc_rings_cfg *cfg)
>  {
>         struct device *dev = &priv->pdev->dev;
>         u32 slots = rx->mask + 1;
> @@ -270,10 +270,10 @@ void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
>         gve_add_napi(priv, ntfy_idx, gve_napi_poll);
>  }
>
> -static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
> -                                struct gve_rx_alloc_rings_cfg *cfg,
> -                                struct gve_rx_ring *rx,
> -                                int idx)
> +int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
> +                         struct gve_rx_alloc_rings_cfg *cfg,
> +                         struct gve_rx_ring *rx,
> +                         int idx)
>  {
>         struct device *hdev = &priv->pdev->dev;
>         u32 slots = cfg->ring_size;
> diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> index dc2c6bd92e82..dcbc37118870 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> @@ -299,8 +299,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
>         gve_rx_reset_ring_dqo(priv, idx);
>  }
>
> -static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
> -                                struct gve_rx_alloc_rings_cfg *cfg)
> +void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
> +                         struct gve_rx_alloc_rings_cfg *cfg)
>  {
>         struct device *hdev = &priv->pdev->dev;
>         size_t completion_queue_slots;
> @@ -373,10 +373,10 @@ void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx)
>         gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
>  }
>
> -static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
> -                                struct gve_rx_alloc_rings_cfg *cfg,
> -                                struct gve_rx_ring *rx,
> -                                int idx)
> +int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
> +                         struct gve_rx_alloc_rings_cfg *cfg,
> +                         struct gve_rx_ring *rx,
> +                         int idx)
>  {
>         struct device *hdev = &priv->pdev->dev;
>         size_t size;
> --
> 2.44.0.769.g3c40516874-goog
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-19 16:10     ` Mina Almasry
@ 2024-04-20  3:25       ` Jakub Kicinski
  2024-04-22 16:58         ` Mina Almasry
  0 siblings, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2024-04-20  3:25 UTC (permalink / raw)
  To: Mina Almasry; +Cc: Shailend Chand, netdev, davem, edumazet, pabeni, willemb

On Fri, 19 Apr 2024 09:10:42 -0700 Mina Almasry wrote:
> Currently the ndos don't include an interface for the driver to
> declare the size, right? In theory we could add it to the ndos like
> so, if I understood you correctly (untested yet, just to illustrate
> what I'm thinking):
> 
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c
> b/drivers/net/ethernet/google/gve/gve_main.c
> index 7c38dc06a392..efe3944b529a 100644
> --- a/drivers/net/ethernet/google/gve/gve_main.c
> +++ b/drivers/net/ethernet/google/gve/gve_main.c
> @@ -2579,11 +2579,16 @@ static void gve_write_version(u8 __iomem
> *driver_version_register)
>   writeb('\n', driver_version_register);
>  }
> 
> +static size_t gve_rx_queue_mem_get_size(void)
> +{
> + return sizeof(struct gve_rx_ring);
> +}

> @@ -2709,6 +2709,7 @@ static const struct netdev_queue_mgmt_ops
> gve_queue_mgmt_ops = {
>   .ndo_queue_mem_free = gve_rx_queue_mem_free,
>   .ndo_queue_start = gve_rx_queue_start,
>   .ndo_queue_stop = gve_rx_queue_stop,
> + .ndo_queue_mem_get_size = gve_rx_queue_mem_get_size,
>  };

I don't think we need to make it a callback, even, directly:

const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
+	.queue_mem_size		= sizeof(struct gve_rx_ring),
 	.ndo_queue_mem_free	= gve_rx_queue_mem_free,
 	.ndo_queue_start	= gve_rx_queue_start,
 	.ndo_queue_stop		= gve_rx_queue_stop,

> I think maybe if we want to apply this change to mem_stop, then we
> should probably also apply this change to queue_mem_alloc as well,
> right? I.e. core will allocate the pointer, and ndo_queue_mem_alloc
> would allocate the actual resources and would fill in the entries of
> the pointer? Is this what you're looking for here?

Yup. But thinking about it again, this may be more natural once we also
have the open/close path use the queue API. IIUC for now the driver
allocates the queue resources on open, without going via .ndo_queue_*.
If that's the case we can keep the code as you have it here.


* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-20  3:25       ` Jakub Kicinski
@ 2024-04-22 16:58         ` Mina Almasry
  2024-04-22 18:41           ` Jakub Kicinski
  0 siblings, 1 reply; 19+ messages in thread
From: Mina Almasry @ 2024-04-22 16:58 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: Shailend Chand, netdev, davem, edumazet, pabeni, willemb

On Fri, Apr 19, 2024 at 8:25 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Fri, 19 Apr 2024 09:10:42 -0700 Mina Almasry wrote:
> > Currently the ndos don't include an interface for the driver to
> > declare the size, right? In theory we could add it to the ndos like
> > so, if I understood you correctly (untested yet, just to illustrate
> > what I'm thinking):
> >
> > diff --git a/drivers/net/ethernet/google/gve/gve_main.c
> > b/drivers/net/ethernet/google/gve/gve_main.c
> > index 7c38dc06a392..efe3944b529a 100644
> > --- a/drivers/net/ethernet/google/gve/gve_main.c
> > +++ b/drivers/net/ethernet/google/gve/gve_main.c
> > @@ -2579,11 +2579,16 @@ static void gve_write_version(u8 __iomem
> > *driver_version_register)
> >   writeb('\n', driver_version_register);
> >  }
> >
> > +static size_t gve_rx_queue_mem_get_size(void)
> > +{
> > + return sizeof(struct gve_rx_ring);
> > +}
>
> > @@ -2709,6 +2709,7 @@ static const struct netdev_queue_mgmt_ops
> > gve_queue_mgmt_ops = {
> >   .ndo_queue_mem_free = gve_rx_queue_mem_free,
> >   .ndo_queue_start = gve_rx_queue_start,
> >   .ndo_queue_stop = gve_rx_queue_stop,
> > + .ndo_queue_mem_get_size = gve_rx_queue_mem_get_size,
> >  };
>
> I don't think we need to make it a callback, even, directly:
>
> const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
> +       .queue_mem_size         = sizeof(struct gve_rx_ring),
>         .ndo_queue_mem_free     = gve_rx_queue_mem_free,
>         .ndo_queue_start        = gve_rx_queue_start,
>         .ndo_queue_stop         = gve_rx_queue_stop,
>
> > I think maybe if we want to apply this change to mem_stop, then we
> > should probably also apply this change to queue_mem_alloc as well,
> > right? I.e. core will allocate the pointer, and ndo_queue_mem_alloc
> > would allocate the actual resources and would fill in the entries of
> > the pointer? Is this what you're looking for here?
>
> Yup. But thinking about it again, this may be more natural once we also
> have the open/close path use the queue API. IIUC for now the driver
> allocates the queue resources on open, without going via .ndo_queue_*.
> If that's the case we can keep the code as you have it here.

Yes, as currently written the queue API funcs & gve_open/close use the
same internal gve helpers, but they do not go via .ndo_queue_*.

OK, sounds like you would like us to keep this bit of the code as we
have it here. Let us know if you have any other feedback. I think
we're working through final testing/polishing/code reviewing
internally and will be submitting something very similar to this as
non-RFC for review, with whatever feedback we receive in the meantime.

-- 
Thanks,
Mina


* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-22 16:58         ` Mina Almasry
@ 2024-04-22 18:41           ` Jakub Kicinski
  0 siblings, 0 replies; 19+ messages in thread
From: Jakub Kicinski @ 2024-04-22 18:41 UTC (permalink / raw)
  To: Mina Almasry; +Cc: Shailend Chand, netdev, davem, edumazet, pabeni, willemb

On Mon, 22 Apr 2024 09:58:38 -0700 Mina Almasry wrote:
> OK, sounds like you would like us to keep this bit of the code as we
> have it here. Let us know if you have any other feedback. I think
> we're working through final testing/polishing/code reviewing
> internally and will be submitting something very similar to this as
> non-RFC for review, with whatever feedback we receive in the meantime.

No other notes, thanks!


* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-18 19:51 ` [RFC PATCH net-next 9/9] gve: Implement queue api Shailend Chand
  2024-04-19  1:48   ` Jakub Kicinski
  2024-04-19 22:23   ` Shailend Chand
@ 2024-04-23 17:33   ` David Wei
  2 siblings, 0 replies; 19+ messages in thread
From: David Wei @ 2024-04-23 17:33 UTC (permalink / raw)
  To: Shailend Chand, netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb

On 2024-04-18 12:51 pm, Shailend Chand wrote:
> An api enabling the net stack to reset driver queues is implemented for
> gve.
> 
> Signed-off-by: Shailend Chand <shailend@google.com>
> ---
>  drivers/net/ethernet/google/gve/gve.h        |   6 +
>  drivers/net/ethernet/google/gve/gve_dqo.h    |   6 +
>  drivers/net/ethernet/google/gve/gve_main.c   | 143 +++++++++++++++++++
>  drivers/net/ethernet/google/gve/gve_rx.c     |  12 +-
>  drivers/net/ethernet/google/gve/gve_rx_dqo.c |  12 +-
>  5 files changed, 167 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> index 9f6a897c87cb..d752e525bde7 100644
> --- a/drivers/net/ethernet/google/gve/gve.h
> +++ b/drivers/net/ethernet/google/gve/gve.h
> @@ -1147,6 +1147,12 @@ bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
>  void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
>  int gve_rx_poll(struct gve_notify_block *block, int budget);
>  bool gve_rx_work_pending(struct gve_rx_ring *rx);
> +int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
> +			  struct gve_rx_alloc_rings_cfg *cfg,
> +			  struct gve_rx_ring *rx,
> +			  int idx);
> +void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
> +			  struct gve_rx_alloc_rings_cfg *cfg);
>  int gve_rx_alloc_rings(struct gve_priv *priv);
>  int gve_rx_alloc_rings_gqi(struct gve_priv *priv,
>  			   struct gve_rx_alloc_rings_cfg *cfg);
> diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
> index b81584829c40..e83773fb891f 100644
> --- a/drivers/net/ethernet/google/gve/gve_dqo.h
> +++ b/drivers/net/ethernet/google/gve/gve_dqo.h
> @@ -44,6 +44,12 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv,
>  			   struct gve_tx_alloc_rings_cfg *cfg);
>  void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx);
>  void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx);
> +int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
> +			  struct gve_rx_alloc_rings_cfg *cfg,
> +			  struct gve_rx_ring *rx,
> +			  int idx);
> +void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
> +			  struct gve_rx_alloc_rings_cfg *cfg);
>  int gve_rx_alloc_rings_dqo(struct gve_priv *priv,
>  			   struct gve_rx_alloc_rings_cfg *cfg);
>  void gve_rx_free_rings_dqo(struct gve_priv *priv,
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> index c348dff7cca6..5e652958f10f 100644
> --- a/drivers/net/ethernet/google/gve/gve_main.c
> +++ b/drivers/net/ethernet/google/gve/gve_main.c
> @@ -17,6 +17,7 @@
>  #include <linux/workqueue.h>
>  #include <linux/utsname.h>
>  #include <linux/version.h>
> +#include <net/netdev_queues.h>
>  #include <net/sch_generic.h>
>  #include <net/xdp_sock_drv.h>
>  #include "gve.h"
> @@ -2070,6 +2071,15 @@ static void gve_turnup(struct gve_priv *priv)
>  	gve_set_napi_enabled(priv);
>  }
>  
> +static void gve_turnup_and_check_status(struct gve_priv *priv)
> +{
> +	u32 status;
> +
> +	gve_turnup(priv);
> +	status = ioread32be(&priv->reg_bar0->device_status);
> +	gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
> +}
> +
>  static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
>  {
>  	struct gve_notify_block *block;
> @@ -2530,6 +2540,138 @@ static void gve_write_version(u8 __iomem *driver_version_register)
>  	writeb('\n', driver_version_register);
>  }
>  
> +static int gve_rx_queue_stop(struct net_device *dev, int idx,
> +			     void **out_per_q_mem)
> +{
> +	struct gve_priv *priv = netdev_priv(dev);
> +	struct gve_rx_ring *rx;
> +	int err;
> +
> +	if (!priv->rx)
> +		return -EAGAIN;
> +	if (idx < 0 || idx >= priv->rx_cfg.max_queues)
> +		return -ERANGE;
> +
> +	/* Destroying queue 0 while other queues exist is not supported in DQO */
> +	if (!gve_is_gqi(priv) && idx == 0)
> +		return -ERANGE;
> +
> +	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
> +	if (!rx)
> +		return -ENOMEM;
> +	*rx = priv->rx[idx];
> +
> +	/* Single-queue destruction requires quiescence on all queues */
> +	gve_turndown(priv);
> +
> +	/* This failure will trigger a reset - no need to clean up */
> +	err = gve_adminq_destroy_single_rx_queue(priv, idx);
> +	if (err) {
> +		kvfree(rx);
> +		return err;
> +	}
> +
> +	if (gve_is_gqi(priv))
> +		gve_rx_stop_ring_gqi(priv, idx);
> +	else
> +		gve_rx_stop_ring_dqo(priv, idx);

Could these be pulled out into a helper? I see it repeated a lot.

> +
> +	/* Turn the unstopped queues back up */
> +	gve_turnup_and_check_status(priv);
> +
> +	*out_per_q_mem = rx;
> +	return 0;
> +}
> +
> +static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
> +{
> +	struct gve_priv *priv = netdev_priv(dev);
> +	struct gve_rx_alloc_rings_cfg cfg = {0};
> +	struct gve_rx_ring *rx;
> +
> +	gve_rx_get_curr_alloc_cfg(priv, &cfg);
> +	rx = (struct gve_rx_ring *)per_q_mem;
> +	if (!rx)
> +		return;

This can be checked earlier.

> +
> +	if (gve_is_gqi(priv))
> +		gve_rx_free_ring_gqi(priv, rx, &cfg);
> +	else
> +		gve_rx_free_ring_dqo(priv, rx, &cfg);
> +
> +	kvfree(per_q_mem);
> +}
> +
> +static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
> +{
> +	struct gve_priv *priv = netdev_priv(dev);
> +	struct gve_rx_alloc_rings_cfg cfg = {0};
> +	struct gve_rx_ring *rx;
> +	int err;
> +
> +	gve_rx_get_curr_alloc_cfg(priv, &cfg);
> +	if (idx < 0 || idx >= cfg.qcfg->max_queues)
> +		return NULL;
> +
> +	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
> +	if (!rx)
> +		return NULL;
> +
> +	if (gve_is_gqi(priv))
> +		err = gve_rx_alloc_ring_gqi(priv, &cfg, rx, idx);
> +	else
> +		err = gve_rx_alloc_ring_dqo(priv, &cfg, rx, idx);
> +
> +	if (err) {
> +		kvfree(rx);
> +		return NULL;
> +	}
> +	return rx;
> +}
> +
> +static int gve_rx_queue_start(struct net_device *dev, int idx, void *per_q_mem)
> +{
> +	struct gve_priv *priv = netdev_priv(dev);
> +	struct gve_rx_ring *rx;
> +	int err;
> +
> +	if (!priv->rx)
> +		return -EAGAIN;
> +	if (idx < 0 || idx >= priv->rx_cfg.max_queues)
> +		return -ERANGE;
> +	rx = (struct gve_rx_ring *)per_q_mem;
> +	priv->rx[idx] = *rx;
> +
> +	/* Single-queue creation requires quiescence on all queues */
> +	gve_turndown(priv);
> +
> +	if (gve_is_gqi(priv))
> +		gve_rx_start_ring_gqi(priv, idx);
> +	else
> +		gve_rx_start_ring_dqo(priv, idx);
> +
> +	/* This failure will trigger a reset - no need to clean up */
> +	err = gve_adminq_create_single_rx_queue(priv, idx);
> +	if (err)
> +		return err;
> +
> +	if (gve_is_gqi(priv))
> +		gve_rx_write_doorbell(priv, &priv->rx[idx]);
> +	else
> +		gve_rx_post_buffers_dqo(&priv->rx[idx]);
> +
> +	/* Turn the unstopped queues back up */
> +	gve_turnup_and_check_status(priv);
> +	return 0;
> +}
> +
> +static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
> +	.ndo_queue_mem_alloc	=	gve_rx_queue_mem_alloc,
> +	.ndo_queue_mem_free	=	gve_rx_queue_mem_free,
> +	.ndo_queue_start	=	gve_rx_queue_start,
> +	.ndo_queue_stop		=	gve_rx_queue_stop,
> +};
> +
>  static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>  {
>  	int max_tx_queues, max_rx_queues;
> @@ -2584,6 +2726,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>  	pci_set_drvdata(pdev, dev);
>  	dev->ethtool_ops = &gve_ethtool_ops;
>  	dev->netdev_ops = &gve_netdev_ops;
> +	dev->queue_mgmt_ops = &gve_queue_mgmt_ops;
>  
>  	/* Set default and supported features.
>  	 *
> diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
> index 1d235caab4c5..307bf97d4778 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx.c
> @@ -101,8 +101,8 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
>  	gve_rx_reset_ring_gqi(priv, idx);
>  }
>  
> -static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
> -				 struct gve_rx_alloc_rings_cfg *cfg)
> +void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
> +			  struct gve_rx_alloc_rings_cfg *cfg)
>  {
>  	struct device *dev = &priv->pdev->dev;
>  	u32 slots = rx->mask + 1;
> @@ -270,10 +270,10 @@ void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
>  	gve_add_napi(priv, ntfy_idx, gve_napi_poll);
>  }
>  
> -static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
> -				 struct gve_rx_alloc_rings_cfg *cfg,
> -				 struct gve_rx_ring *rx,
> -				 int idx)
> +int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
> +			  struct gve_rx_alloc_rings_cfg *cfg,
> +			  struct gve_rx_ring *rx,
> +			  int idx)
>  {
>  	struct device *hdev = &priv->pdev->dev;
>  	u32 slots = cfg->ring_size;
> diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> index dc2c6bd92e82..dcbc37118870 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> @@ -299,8 +299,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
>  	gve_rx_reset_ring_dqo(priv, idx);
>  }
>  
> -static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
> -				 struct gve_rx_alloc_rings_cfg *cfg)
> +void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
> +			  struct gve_rx_alloc_rings_cfg *cfg)
>  {
>  	struct device *hdev = &priv->pdev->dev;
>  	size_t completion_queue_slots;
> @@ -373,10 +373,10 @@ void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx)
>  	gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
>  }
>  
> -static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
> -				 struct gve_rx_alloc_rings_cfg *cfg,
> -				 struct gve_rx_ring *rx,
> -				 int idx)
> +int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
> +			  struct gve_rx_alloc_rings_cfg *cfg,
> +			  struct gve_rx_ring *rx,
> +			  int idx)
>  {
>  	struct device *hdev = &priv->pdev->dev;
>  	size_t size;


* Re: [RFC PATCH net-next 9/9] gve: Implement queue api
  2024-04-19 22:23   ` Shailend Chand
@ 2024-04-23 17:55     ` David Wei
  0 siblings, 0 replies; 19+ messages in thread
From: David Wei @ 2024-04-23 17:55 UTC (permalink / raw)
  To: Shailend Chand, netdev
  Cc: almasrymina, davem, edumazet, kuba, pabeni, willemb

On 2024-04-19 3:23 pm, Shailend Chand wrote:
> On Thu, Apr 18, 2024 at 12:52 PM Shailend Chand <shailend@google.com> wrote:
>>
>> An api enabling the net stack to reset driver queues is implemented for
>> gve.
>>
>> Signed-off-by: Shailend Chand <shailend@google.com>
>> ---
>>  drivers/net/ethernet/google/gve/gve.h        |   6 +
>>  drivers/net/ethernet/google/gve/gve_dqo.h    |   6 +
>>  drivers/net/ethernet/google/gve/gve_main.c   | 143 +++++++++++++++++++
>>  drivers/net/ethernet/google/gve/gve_rx.c     |  12 +-
>>  drivers/net/ethernet/google/gve/gve_rx_dqo.c |  12 +-
>>  5 files changed, 167 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
>> index 9f6a897c87cb..d752e525bde7 100644
>> --- a/drivers/net/ethernet/google/gve/gve.h
>> +++ b/drivers/net/ethernet/google/gve/gve.h
>> @@ -1147,6 +1147,12 @@ bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
>>  void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
>>  int gve_rx_poll(struct gve_notify_block *block, int budget);
>>  bool gve_rx_work_pending(struct gve_rx_ring *rx);
>> +int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
>> +                         struct gve_rx_alloc_rings_cfg *cfg,
>> +                         struct gve_rx_ring *rx,
>> +                         int idx);
>> +void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
>> +                         struct gve_rx_alloc_rings_cfg *cfg);
>>  int gve_rx_alloc_rings(struct gve_priv *priv);
>>  int gve_rx_alloc_rings_gqi(struct gve_priv *priv,
>>                            struct gve_rx_alloc_rings_cfg *cfg);
>> diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
>> index b81584829c40..e83773fb891f 100644
>> --- a/drivers/net/ethernet/google/gve/gve_dqo.h
>> +++ b/drivers/net/ethernet/google/gve/gve_dqo.h
>> @@ -44,6 +44,12 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv,
>>                            struct gve_tx_alloc_rings_cfg *cfg);
>>  void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx);
>>  void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx);
>> +int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
>> +                         struct gve_rx_alloc_rings_cfg *cfg,
>> +                         struct gve_rx_ring *rx,
>> +                         int idx);
>> +void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
>> +                         struct gve_rx_alloc_rings_cfg *cfg);
>>  int gve_rx_alloc_rings_dqo(struct gve_priv *priv,
>>                            struct gve_rx_alloc_rings_cfg *cfg);
>>  void gve_rx_free_rings_dqo(struct gve_priv *priv,
>> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
>> index c348dff7cca6..5e652958f10f 100644
>> --- a/drivers/net/ethernet/google/gve/gve_main.c
>> +++ b/drivers/net/ethernet/google/gve/gve_main.c
>> @@ -17,6 +17,7 @@
>>  #include <linux/workqueue.h>
>>  #include <linux/utsname.h>
>>  #include <linux/version.h>
>> +#include <net/netdev_queues.h>
>>  #include <net/sch_generic.h>
>>  #include <net/xdp_sock_drv.h>
>>  #include "gve.h"
>> @@ -2070,6 +2071,15 @@ static void gve_turnup(struct gve_priv *priv)
>>         gve_set_napi_enabled(priv);
>>  }
>>
>> +static void gve_turnup_and_check_status(struct gve_priv *priv)
>> +{
>> +       u32 status;
>> +
>> +       gve_turnup(priv);
>> +       status = ioread32be(&priv->reg_bar0->device_status);
>> +       gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
>> +}
>> +
>>  static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
>>  {
>>         struct gve_notify_block *block;
>> @@ -2530,6 +2540,138 @@ static void gve_write_version(u8 __iomem *driver_version_register)
>>         writeb('\n', driver_version_register);
>>  }
>>
>> +static int gve_rx_queue_stop(struct net_device *dev, int idx,
>> +                            void **out_per_q_mem)
>> +{
>> +       struct gve_priv *priv = netdev_priv(dev);
>> +       struct gve_rx_ring *rx;
>> +       int err;
>> +
>> +       if (!priv->rx)
>> +               return -EAGAIN;
>> +       if (idx < 0 || idx >= priv->rx_cfg.max_queues)
>> +               return -ERANGE;
>> +
>> +       /* Destroying queue 0 while other queues exist is not supported in DQO */
>> +       if (!gve_is_gqi(priv) && idx == 0)
>> +               return -ERANGE;
>> +
>> +       rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
>> +       if (!rx)
>> +               return -ENOMEM;
>> +       *rx = priv->rx[idx];
>> +
>> +       /* Single-queue destruction requires quiescence on all queues */
>> +       gve_turndown(priv);
>> +
>> +       /* This failure will trigger a reset - no need to clean up */
>> +       err = gve_adminq_destroy_single_rx_queue(priv, idx);
>> +       if (err) {
>> +               kvfree(rx);
>> +               return err;
>> +       }
>> +
>> +       if (gve_is_gqi(priv))
>> +               gve_rx_stop_ring_gqi(priv, idx);
>> +       else
>> +               gve_rx_stop_ring_dqo(priv, idx);
>> +
>> +       /* Turn the unstopped queues back up */
>> +       gve_turnup_and_check_status(priv);
>> +
>> +       *out_per_q_mem = rx;
>> +       return 0;
>> +}
>> +
>> +static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
>> +{
>> +       struct gve_priv *priv = netdev_priv(dev);
>> +       struct gve_rx_alloc_rings_cfg cfg = {0};
>> +       struct gve_rx_ring *rx;
>> +
>> +       gve_rx_get_curr_alloc_cfg(priv, &cfg);
>> +       rx = (struct gve_rx_ring *)per_q_mem;
>> +       if (!rx)
>> +               return;
>> +
>> +       if (gve_is_gqi(priv))
>> +               gve_rx_free_ring_gqi(priv, rx, &cfg);
>> +       else
>> +               gve_rx_free_ring_dqo(priv, rx, &cfg);
>> +
>> +       kvfree(per_q_mem);
>> +}
>> +
>> +static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
>> +{
>> +       struct gve_priv *priv = netdev_priv(dev);
>> +       struct gve_rx_alloc_rings_cfg cfg = {0};
>> +       struct gve_rx_ring *rx;
>> +       int err;
>> +
>> +       gve_rx_get_curr_alloc_cfg(priv, &cfg);
>> +       if (idx < 0 || idx >= cfg.qcfg->max_queues)
>> +               return NULL;
>> +
>> +       rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
>> +       if (!rx)
>> +               return NULL;
>> +
>> +       if (gve_is_gqi(priv))
>> +               err = gve_rx_alloc_ring_gqi(priv, &cfg, rx, idx);
>> +       else
>> +               err = gve_rx_alloc_ring_dqo(priv, &cfg, rx, idx);
>> +
>> +       if (err) {
>> +               kvfree(rx);
>> +               return NULL;
>> +       }
>> +       return rx;
>> +}
>> +
>> +static int gve_rx_queue_start(struct net_device *dev, int idx, void *per_q_mem)
>> +{
>> +       struct gve_priv *priv = netdev_priv(dev);
>> +       struct gve_rx_ring *rx;
>> +       int err;
>> +
>> +       if (!priv->rx)
>> +               return -EAGAIN;
>> +       if (idx < 0 || idx >= priv->rx_cfg.max_queues)
>> +               return -ERANGE;
>> +       rx = (struct gve_rx_ring *)per_q_mem;
>> +       priv->rx[idx] = *rx;
>> +
>> +       /* Single-queue creation requires quiescence on all queues */
>> +       gve_turndown(priv);
>> +
>> +       if (gve_is_gqi(priv))
>> +               gve_rx_start_ring_gqi(priv, idx);
>> +       else
>> +               gve_rx_start_ring_dqo(priv, idx);
>> +
>> +       /* This failure will trigger a reset - no need to clean up */
>> +       err = gve_adminq_create_single_rx_queue(priv, idx);
>> +       if (err)
>> +               return err;
>> +
>> +       if (gve_is_gqi(priv))
>> +               gve_rx_write_doorbell(priv, &priv->rx[idx]);
>> +       else
>> +               gve_rx_post_buffers_dqo(&priv->rx[idx]);
>> +
>> +       /* Turn the unstopped queues back up */
>> +       gve_turnup_and_check_status(priv);
>> +       return 0;
>> +}
> 
> I realized that by not kvfree-ing the passed-in `per_q_mem`, the start
> hook leaks it. The alloc and stop hooks kvzalloc a temp ring struct, so
> to keep symmetry and avoid leaks the start and free hooks ought to
> kvfree it. The free hook does so, but I forgot to do it in the start
> hook.
> 
> If we go down the route of making core aware of the ring struct size,
> then none of the four hooks
> need to worry about the temp struct: core can just alloc and free it
> for both the old and new queue.

Having the core own qmem is the direction to move towards. I have
patches that do this, so we could merge what you have so far (i.e. have
the ndos alloc/free qmem), and then I'll build on top of it for
FBNIC/bnxt.

What might be missing in this patchset is a user of
netdev_queue_mgmt_ops. I know Mina has a restart_rx_queue() somewhere
and perhaps that should be included.
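For illustration only, a core-side consumer of the four hooks as declared in this patch might look roughly like the userspace model below. The name restart_rx_queue is borrowed from the discussion but its body is an assumption, `struct net_device` is a stub, and the stub start hook is written as if it already consumed and freed the new container (i.e. with the leak fixed):

```c
#include <errno.h>
#include <stddef.h>

struct net_device { void *priv; };  /* stub */

/* Hook signatures as declared in this patch. */
struct netdev_queue_mgmt_ops {
	void *(*ndo_queue_mem_alloc)(struct net_device *dev, int idx);
	void  (*ndo_queue_mem_free)(struct net_device *dev, void *per_q_mem);
	int   (*ndo_queue_start)(struct net_device *dev, int idx, void *per_q_mem);
	int   (*ndo_queue_stop)(struct net_device *dev, int idx, void **out_per_q_mem);
};

/* Swap a live queue's backing memory: alloc new, stop old, start new,
 * free old. If stop fails the old queue is still up, so only the new
 * memory is released; a failed start would trigger a reset in gve. */
static int restart_rx_queue(struct net_device *dev,
			    const struct netdev_queue_mgmt_ops *ops, int idx)
{
	void *new_mem, *old_mem = NULL;
	int err;

	new_mem = ops->ndo_queue_mem_alloc(dev, idx);
	if (!new_mem)
		return -ENOMEM;

	err = ops->ndo_queue_stop(dev, idx, &old_mem);
	if (err) {
		ops->ndo_queue_mem_free(dev, new_mem);
		return err;
	}

	err = ops->ndo_queue_start(dev, idx, new_mem);
	if (err)
		return err;  /* device reset path; no cleanup here */

	ops->ndo_queue_mem_free(dev, old_mem);
	return 0;
}
```

The ordering matters: the new memory is fully allocated before the old queue is stopped, so an allocation failure leaves the existing queue untouched.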

> 
>> +
>> +static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
>> +       .ndo_queue_mem_alloc    =       gve_rx_queue_mem_alloc,
>> +       .ndo_queue_mem_free     =       gve_rx_queue_mem_free,
>> +       .ndo_queue_start        =       gve_rx_queue_start,
>> +       .ndo_queue_stop         =       gve_rx_queue_stop,
>> +};
>> +
>>  static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>  {
>>         int max_tx_queues, max_rx_queues;
>> @@ -2584,6 +2726,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>         pci_set_drvdata(pdev, dev);
>>         dev->ethtool_ops = &gve_ethtool_ops;
>>         dev->netdev_ops = &gve_netdev_ops;
>> +       dev->queue_mgmt_ops = &gve_queue_mgmt_ops;
>>
>>         /* Set default and supported features.
>>          *
>> diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
>> index 1d235caab4c5..307bf97d4778 100644
>> --- a/drivers/net/ethernet/google/gve/gve_rx.c
>> +++ b/drivers/net/ethernet/google/gve/gve_rx.c
>> @@ -101,8 +101,8 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
>>         gve_rx_reset_ring_gqi(priv, idx);
>>  }
>>
>> -static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
>> -                                struct gve_rx_alloc_rings_cfg *cfg)
>> +void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
>> +                         struct gve_rx_alloc_rings_cfg *cfg)
>>  {
>>         struct device *dev = &priv->pdev->dev;
>>         u32 slots = rx->mask + 1;
>> @@ -270,10 +270,10 @@ void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
>>         gve_add_napi(priv, ntfy_idx, gve_napi_poll);
>>  }
>>
>> -static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
>> -                                struct gve_rx_alloc_rings_cfg *cfg,
>> -                                struct gve_rx_ring *rx,
>> -                                int idx)
>> +int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
>> +                         struct gve_rx_alloc_rings_cfg *cfg,
>> +                         struct gve_rx_ring *rx,
>> +                         int idx)
>>  {
>>         struct device *hdev = &priv->pdev->dev;
>>         u32 slots = cfg->ring_size;
>> diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
>> index dc2c6bd92e82..dcbc37118870 100644
>> --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
>> +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
>> @@ -299,8 +299,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
>>         gve_rx_reset_ring_dqo(priv, idx);
>>  }
>>
>> -static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
>> -                                struct gve_rx_alloc_rings_cfg *cfg)
>> +void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
>> +                         struct gve_rx_alloc_rings_cfg *cfg)
>>  {
>>         struct device *hdev = &priv->pdev->dev;
>>         size_t completion_queue_slots;
>> @@ -373,10 +373,10 @@ void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx)
>>         gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
>>  }
>>
>> -static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
>> -                                struct gve_rx_alloc_rings_cfg *cfg,
>> -                                struct gve_rx_ring *rx,
>> -                                int idx)
>> +int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
>> +                         struct gve_rx_alloc_rings_cfg *cfg,
>> +                         struct gve_rx_ring *rx,
>> +                         int idx)
>>  {
>>         struct device *hdev = &priv->pdev->dev;
>>         size_t size;
>> --
>> 2.44.0.769.g3c40516874-goog
>>


end of thread

Thread overview: 19+ messages
2024-04-18 19:51 [RFC PATCH net-next 0/9] gve: Implement netdev queue api Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 1/9] queue_api: define " Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 2/9] gve: Make the RX free queue funcs idempotent Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 3/9] gve: Add adminq funcs to add/remove a single Rx queue Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 4/9] gve: Make gve_turn(up|down) ignore stopped queues Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 5/9] gve: Make gve_turnup work for nonempty queues Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 6/9] gve: Avoid rescheduling napi if on wrong cpu Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 7/9] gve: Reset Rx ring state in the ring-stop funcs Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 8/9] gve: Account for stopped queues when reading NIC stats Shailend Chand
2024-04-18 19:51 ` [RFC PATCH net-next 9/9] gve: Implement queue api Shailend Chand
2024-04-19  1:48   ` Jakub Kicinski
2024-04-19 16:10     ` Mina Almasry
2024-04-20  3:25       ` Jakub Kicinski
2024-04-22 16:58         ` Mina Almasry
2024-04-22 18:41           ` Jakub Kicinski
2024-04-19 22:23   ` Shailend Chand
2024-04-23 17:55     ` David Wei
2024-04-23 17:33   ` David Wei
2024-04-18 21:55 ` [RFC PATCH net-next 0/9] gve: Implement netdev " Mina Almasry
