* [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver
@ 2020-12-04 12:11 akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 1/9] net: ena: use constant value for net_device allocation akiyano
                   ` (8 more replies)
  0 siblings, 9 replies; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

From: Arthur Kiyanovski <akiyano@amazon.com>

V4 Changes:
-----------
Added an explanation for the decision to use netdev_* prints
in functions that are also called before the netdev is registered.


V3 Changes:
-----------
1. Removed RFC: the commits that this patchset relies on were
already merged from net, so no more conflicts are expected.

2. Fixed checkpatch errors.


V2 Changes:
---------------
1. Changed this to an RFC, since we are waiting for the recently merged
net patches to reach net-next so that there will be no merge conflicts.
We would still like to get review comments beforehand.

2. Removed the word "atomic" from the name of
ena_increase_stat_atomic(), as it is indeed not atomic.


V1 Cover Letter:
----------------
Hi all,
We are adding XDP Redirect support to the ENA driver, along with some
other small tweaks.

This series adds the following:

- Make log messages in the driver have a uniform format using the
  netdev_* functions
- Improve code readability and add explicit masking
- Add support for XDP Redirect (a schematic ndo_xdp_xmit() sketch
  follows this list)
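
For orientation, here is a minimal, schematic sketch of what an
ndo_xdp_xmit() callback generally looks like. It is NOT the ENA
implementation from patch 9/9 (not shown in this excerpt); the foo_*
helpers are hypothetical, and frame-freeing and error-handling
conventions are simplified.

static int foo_xdp_xmit(struct net_device *dev, int n,
			struct xdp_frame **frames, u32 flags)
{
	/* Hypothetical per-CPU/queue selection helper */
	struct foo_ring *ring = foo_select_xdp_ring(dev);
	int i, nxmit = 0;

	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
		return -EINVAL;

	for (i = 0; i < n; i++) {
		/* Hypothetical helper that maps and posts one xdp_frame */
		if (foo_xmit_xdp_frame(ring, frames[i]))
			break;
		nxmit++;
	}

	/* XDP_XMIT_FLUSH asks the driver to ring the doorbell now */
	if (flags & XDP_XMIT_FLUSH)
		foo_ring_doorbell(ring);

	return nxmit;
}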

Arthur Kiyanovski (9):
  net: ena: use constant value for net_device allocation
  net: ena: add device distinct log prefix to files
  net: ena: add explicit casting to variables
  net: ena: fix coding style nits
  net: ena: aggregate stats increase into a function
  net: ena: use xdp_frame in XDP TX flow
  net: ena: introduce XDP redirect implementation
  net: ena: use xdp_return_frame() to free xdp frames
  net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT

 drivers/net/ethernet/amazon/ena/ena_com.c     | 394 ++++++++++-------
 drivers/net/ethernet/amazon/ena/ena_com.h     |  21 +
 drivers/net/ethernet/amazon/ena/ena_eth_com.c |  71 ++--
 drivers/net/ethernet/amazon/ena/ena_eth_com.h |  23 +-
 drivers/net/ethernet/amazon/ena/ena_ethtool.c |   3 +-
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 395 ++++++++++--------
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |  12 +-
 7 files changed, 544 insertions(+), 375 deletions(-)

-- 
2.23.3



* [PATCH V4 net-next 1/9] net: ena: use constant value for net_device allocation
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 2/9] net: ena: add device distinct log prefix to files akiyano
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

From: Arthur Kiyanovski <akiyano@amazon.com>

This patch changes the maximum number of RX/TX queues the driver
advertises to the kernel (via alloc_etherdev_mq()) from a value
received from the device to a constant value: the minimum between 128
and the number of CPUs in the system.

By allocating the net_device struct with a constant number of queues,
the driver is able to allocate it at a much earlier stage, before
calling any ena_com functions. This allows all log prints in ena_com to
use the netdev_* log functions instead of the current pr_* ones.

Note:
netdev_* prints in ena_com functions that are called before
net_device registration in ena_probe() might print messages that are
a bit ugly (with strings like "(unnamed net_device) (uninitialized)").
However, we decided to use netdev_* prints in these functions anyway,
for the sake of getting better messages later, when ena_com functions
are called after ena_probe() from other parts of the driver.
See the discussion about this decision in [1].

[1] http://www.mail-archive.com/netdev@vger.kernel.org/msg353590.html
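
For reference, the core of this change can be restated as a short
probe-time sketch. This is a simplified restatement of the hunk below,
not additional code; error paths and the rest of ena_probe() are
omitted.

#define ENA_MAX_RINGS min_t(unsigned int, ENA_MAX_NUM_IO_QUEUES, num_possible_cpus())

	/* The queue count no longer depends on values read from the
	 * device, so the net_device can be allocated before any ena_com
	 * call and ena_com logging can go through netdev_*().
	 */
	netdev = alloc_etherdev_mq(sizeof(struct ena_adapter), ENA_MAX_RINGS);
	if (!netdev)
		return -ENOMEM;
	SET_NETDEV_DEV(netdev, &pdev->dev);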

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 46 ++++++++++----------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index df1884d57d1a..985dea1870b5 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -29,6 +29,8 @@ MODULE_LICENSE("GPL");
 /* Time in jiffies before concluding the transmitter is hung. */
 #define TX_TIMEOUT  (5 * HZ)
 
+#define ENA_MAX_RINGS min_t(unsigned int, ENA_MAX_NUM_IO_QUEUES, num_possible_cpus())
+
 #define ENA_NAPI_BUDGET 64
 
 #define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | \
@@ -4176,18 +4178,34 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	ena_dev->dmadev = &pdev->dev;
 
+	netdev = alloc_etherdev_mq(sizeof(struct ena_adapter), ENA_MAX_RINGS);
+	if (!netdev) {
+		dev_err(&pdev->dev, "alloc_etherdev_mq failed\n");
+		rc = -ENOMEM;
+		goto err_free_region;
+	}
+
+	SET_NETDEV_DEV(netdev, &pdev->dev);
+	adapter = netdev_priv(netdev);
+	adapter->ena_dev = ena_dev;
+	adapter->netdev = netdev;
+	adapter->pdev = pdev;
+	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
+
+	pci_set_drvdata(pdev, adapter);
+
 	rc = ena_device_init(ena_dev, pdev, &get_feat_ctx, &wd_state);
 	if (rc) {
 		dev_err(&pdev->dev, "ENA device init failed\n");
 		if (rc == -ETIME)
 			rc = -EPROBE_DEFER;
-		goto err_free_region;
+		goto err_netdev_destroy;
 	}
 
 	rc = ena_map_llq_mem_bar(pdev, ena_dev, bars);
 	if (rc) {
 		dev_err(&pdev->dev, "ENA llq bar mapping failed\n");
-		goto err_free_ena_dev;
+		goto err_device_destroy;
 	}
 
 	calc_queue_ctx.ena_dev = ena_dev;
@@ -4207,26 +4225,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_device_destroy;
 	}
 
-	/* dev zeroed in init_etherdev */
-	netdev = alloc_etherdev_mq(sizeof(struct ena_adapter), max_num_io_queues);
-	if (!netdev) {
-		dev_err(&pdev->dev, "alloc_etherdev_mq failed\n");
-		rc = -ENOMEM;
-		goto err_device_destroy;
-	}
-
-	SET_NETDEV_DEV(netdev, &pdev->dev);
-
-	adapter = netdev_priv(netdev);
-	pci_set_drvdata(pdev, adapter);
-
-	adapter->ena_dev = ena_dev;
-	adapter->netdev = netdev;
-	adapter->pdev = pdev;
-
 	ena_set_conf_feat_params(adapter, &get_feat_ctx);
 
-	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
 	adapter->reset_reason = ENA_REGS_RESET_NORMAL;
 
 	adapter->requested_tx_ring_size = calc_queue_ctx.tx_queue_size;
@@ -4257,7 +4257,7 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (rc) {
 		dev_err(&pdev->dev,
 			"Failed to query interrupt moderation feature\n");
-		goto err_netdev_destroy;
+		goto err_device_destroy;
 	}
 	ena_init_io_rings(adapter,
 			  0,
@@ -4335,11 +4335,11 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	ena_disable_msix(adapter);
 err_worker_destroy:
 	del_timer(&adapter->timer_service);
-err_netdev_destroy:
-	free_netdev(netdev);
 err_device_destroy:
 	ena_com_delete_host_info(ena_dev);
 	ena_com_admin_destroy(ena_dev);
+err_netdev_destroy:
+	free_netdev(netdev);
 err_free_region:
 	ena_release_bars(ena_dev, pdev);
 err_free_ena_dev:
-- 
2.23.3



* [PATCH V4 net-next 2/9] net: ena: add device distinct log prefix to files
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 1/9] net: ena: use constant value for net_device allocation akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables akiyano
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj, Amit Bernstein

From: Arthur Kiyanovski <akiyano@amazon.com>

ENA logs are adjusted to display the full device representation in
order to distinguish between ENA devices when multiple interfaces are
present. Using the netdev_err/warn and dev_info functions for logging
provides uniform printing with a clear indication of the device and
interface.

This patch changes all printouts in the ena_com files to use netdev_*
logging functions, except for info-level messages. Those are printed
with dev_info() because of the early stage at which they are called,
when the net_device struct is not yet registered.

To allow using netdev_* functions in all ena_com functions, a pointer
to the net_device was added to the ena_com_dev struct.

The patch also adds some log messages to make driver debugging easier.
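
To illustrate the plumbing described above: a minimal sketch, assuming
the new net_device pointer is filled in during ena_probe() (the
ena_netdev.c hunk is truncated in this excerpt, so the exact location
is not shown), followed by a typical conversion taken from the
ena_com.c hunks below.

	/* Assumed wiring in ena_probe(), after alloc_etherdev_mq() */
	adapter->ena_dev = ena_dev;
	ena_dev->net_device = netdev;

	/* Typical ena_com.c conversion (as in the hunks below):
	 *
	 * before:	pr_err("Memory allocation failed\n");
	 * after:	netdev_err(ena_dev->net_device,
	 *			   "Memory allocation failed\n");
	 */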

Signed-off-by: Amit Bernstein <amitbern@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_com.c     | 384 +++++++++++-------
 drivers/net/ethernet/amazon/ena/ena_com.h     |  21 +
 drivers/net/ethernet/amazon/ena/ena_eth_com.c |  66 +--
 drivers/net/ethernet/amazon/ena/ena_eth_com.h |  23 +-
 drivers/net/ethernet/amazon/ena/ena_netdev.c  |   2 +
 5 files changed, 312 insertions(+), 184 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
index 5f8769aa469d..e168edf3c930 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_com.c
@@ -71,7 +71,8 @@ static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
 				       dma_addr_t addr)
 {
 	if ((addr & GENMASK_ULL(ena_dev->dma_addr_bits - 1, 0)) != addr) {
-		pr_err("DMA address has more bits that the device supports\n");
+		netdev_err(ena_dev->net_device,
+			   "DMA address has more bits that the device supports\n");
 		return -EINVAL;
 	}
 
@@ -83,6 +84,7 @@ static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
 
 static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue)
 {
+	struct ena_com_dev *ena_dev = admin_queue->ena_dev;
 	struct ena_com_admin_sq *sq = &admin_queue->sq;
 	u16 size = ADMIN_SQ_SIZE(admin_queue->q_depth);
 
@@ -90,7 +92,7 @@ static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue)
 					 &sq->dma_addr, GFP_KERNEL);
 
 	if (!sq->entries) {
-		pr_err("Memory allocation failed\n");
+		netdev_err(ena_dev->net_device, "Memory allocation failed\n");
 		return -ENOMEM;
 	}
 
@@ -105,6 +107,7 @@ static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue)
 
 static int ena_com_admin_init_cq(struct ena_com_admin_queue *admin_queue)
 {
+	struct ena_com_dev *ena_dev = admin_queue->ena_dev;
 	struct ena_com_admin_cq *cq = &admin_queue->cq;
 	u16 size = ADMIN_CQ_SIZE(admin_queue->q_depth);
 
@@ -112,7 +115,7 @@ static int ena_com_admin_init_cq(struct ena_com_admin_queue *admin_queue)
 					 &cq->dma_addr, GFP_KERNEL);
 
 	if (!cq->entries) {
-		pr_err("Memory allocation failed\n");
+		netdev_err(ena_dev->net_device, "Memory allocation failed\n");
 		return -ENOMEM;
 	}
 
@@ -135,7 +138,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev,
 					   &aenq->dma_addr, GFP_KERNEL);
 
 	if (!aenq->entries) {
-		pr_err("Memory allocation failed\n");
+		netdev_err(ena_dev->net_device, "Memory allocation failed\n");
 		return -ENOMEM;
 	}
 
@@ -156,7 +159,8 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev,
 	writel(aenq_caps, ena_dev->reg_bar + ENA_REGS_AENQ_CAPS_OFF);
 
 	if (unlikely(!aenq_handlers)) {
-		pr_err("AENQ handlers pointer is NULL\n");
+		netdev_err(ena_dev->net_device,
+			   "AENQ handlers pointer is NULL\n");
 		return -EINVAL;
 	}
 
@@ -176,18 +180,21 @@ static struct ena_comp_ctx *get_comp_ctxt(struct ena_com_admin_queue *admin_queu
 					  u16 command_id, bool capture)
 {
 	if (unlikely(command_id >= admin_queue->q_depth)) {
-		pr_err("Command id is larger than the queue size. cmd_id: %u queue size %d\n",
-		       command_id, admin_queue->q_depth);
+		netdev_err(admin_queue->ena_dev->net_device,
+			   "Command id is larger than the queue size. cmd_id: %u queue size %d\n",
+			   command_id, admin_queue->q_depth);
 		return NULL;
 	}
 
 	if (unlikely(!admin_queue->comp_ctx)) {
-		pr_err("Completion context is NULL\n");
+		netdev_err(admin_queue->ena_dev->net_device,
+			   "Completion context is NULL\n");
 		return NULL;
 	}
 
 	if (unlikely(admin_queue->comp_ctx[command_id].occupied && capture)) {
-		pr_err("Completion context is occupied\n");
+		netdev_err(admin_queue->ena_dev->net_device,
+			   "Completion context is occupied\n");
 		return NULL;
 	}
 
@@ -217,7 +224,8 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queu
 	/* In case of queue FULL */
 	cnt = (u16)atomic_read(&admin_queue->outstanding_cmds);
 	if (cnt >= admin_queue->q_depth) {
-		pr_debug("Admin queue is full.\n");
+		netdev_dbg(admin_queue->ena_dev->net_device,
+			   "Admin queue is full.\n");
 		admin_queue->stats.out_of_space++;
 		return ERR_PTR(-ENOSPC);
 	}
@@ -259,6 +267,7 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queu
 
 static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *admin_queue)
 {
+	struct ena_com_dev *ena_dev = admin_queue->ena_dev;
 	size_t size = admin_queue->q_depth * sizeof(struct ena_comp_ctx);
 	struct ena_comp_ctx *comp_ctx;
 	u16 i;
@@ -266,7 +275,7 @@ static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *admin_queue)
 	admin_queue->comp_ctx =
 		devm_kzalloc(admin_queue->q_dmadev, size, GFP_KERNEL);
 	if (unlikely(!admin_queue->comp_ctx)) {
-		pr_err("Memory allocation failed\n");
+		netdev_err(ena_dev->net_device, "Memory allocation failed\n");
 		return -ENOMEM;
 	}
 
@@ -337,7 +346,8 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
 		}
 
 		if (!io_sq->desc_addr.virt_addr) {
-			pr_err("Memory allocation failed\n");
+			netdev_err(ena_dev->net_device,
+				   "Memory allocation failed\n");
 			return -ENOMEM;
 		}
 	}
@@ -363,7 +373,8 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
 				devm_kzalloc(ena_dev->dmadev, size, GFP_KERNEL);
 
 		if (!io_sq->bounce_buf_ctrl.base_buffer) {
-			pr_err("Bounce buffer memory allocation failed\n");
+			netdev_err(ena_dev->net_device,
+				   "Bounce buffer memory allocation failed\n");
 			return -ENOMEM;
 		}
 
@@ -423,7 +434,7 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_dev,
 	}
 
 	if (!io_cq->cdesc_addr.virt_addr) {
-		pr_err("Memory allocation failed\n");
+		netdev_err(ena_dev->net_device, "Memory allocation failed\n");
 		return -ENOMEM;
 	}
 
@@ -444,7 +455,8 @@ static void ena_com_handle_single_admin_completion(struct ena_com_admin_queue *a
 
 	comp_ctx = get_comp_ctxt(admin_queue, cmd_id, false);
 	if (unlikely(!comp_ctx)) {
-		pr_err("comp_ctx is NULL. Changing the admin queue running state\n");
+		netdev_err(admin_queue->ena_dev->net_device,
+			   "comp_ctx is NULL. Changing the admin queue running state\n");
 		admin_queue->running_state = false;
 		return;
 	}
@@ -496,10 +508,12 @@ static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_qu
 	admin_queue->stats.completed_cmd += comp_num;
 }
 
-static int ena_com_comp_status_to_errno(u8 comp_status)
+static int ena_com_comp_status_to_errno(struct ena_com_admin_queue *admin_queue,
+					u8 comp_status)
 {
 	if (unlikely(comp_status != 0))
-		pr_err("Admin command failed[%u]\n", comp_status);
+		netdev_err(admin_queue->ena_dev->net_device,
+			   "Admin command failed[%u]\n", comp_status);
 
 	switch (comp_status) {
 	case ENA_ADMIN_SUCCESS:
@@ -546,7 +560,8 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c
 			break;
 
 		if (time_is_before_jiffies(timeout)) {
-			pr_err("Wait for completion (polling) timeout\n");
+			netdev_err(admin_queue->ena_dev->net_device,
+				   "Wait for completion (polling) timeout\n");
 			/* ENA didn't have any completion */
 			spin_lock_irqsave(&admin_queue->q_lock, flags);
 			admin_queue->stats.no_completion++;
@@ -562,7 +577,8 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c
 	}
 
 	if (unlikely(comp_ctx->status == ENA_CMD_ABORTED)) {
-		pr_err("Command was aborted\n");
+		netdev_err(admin_queue->ena_dev->net_device,
+			   "Command was aborted\n");
 		spin_lock_irqsave(&admin_queue->q_lock, flags);
 		admin_queue->stats.aborted_cmd++;
 		spin_unlock_irqrestore(&admin_queue->q_lock, flags);
@@ -573,7 +589,7 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c
 	WARN(comp_ctx->status != ENA_CMD_COMPLETED, "Invalid comp status %d\n",
 	     comp_ctx->status);
 
-	ret = ena_com_comp_status_to_errno(comp_ctx->comp_status);
+	ret = ena_com_comp_status_to_errno(admin_queue, comp_ctx->comp_status);
 err:
 	comp_ctxt_release(admin_queue, comp_ctx);
 	return ret;
@@ -615,7 +631,8 @@ static int ena_com_set_llq(struct ena_com_dev *ena_dev)
 					    sizeof(resp));
 
 	if (unlikely(ret))
-		pr_err("Failed to set LLQ configurations: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to set LLQ configurations: %d\n", ret);
 
 	return ret;
 }
@@ -637,8 +654,9 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
 		llq_info->header_location_ctrl =
 			llq_default_cfg->llq_header_location;
 	} else {
-		pr_err("Invalid header location control, supported: 0x%x\n",
-		       supported_feat);
+		netdev_err(ena_dev->net_device,
+			   "Invalid header location control, supported: 0x%x\n",
+			   supported_feat);
 		return -EINVAL;
 	}
 
@@ -652,14 +670,16 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
 			} else if (supported_feat & ENA_ADMIN_SINGLE_DESC_PER_ENTRY) {
 				llq_info->desc_stride_ctrl = ENA_ADMIN_SINGLE_DESC_PER_ENTRY;
 			} else {
-				pr_err("Invalid desc_stride_ctrl, supported: 0x%x\n",
-				       supported_feat);
+				netdev_err(ena_dev->net_device,
+					   "Invalid desc_stride_ctrl, supported: 0x%x\n",
+					   supported_feat);
 				return -EINVAL;
 			}
 
-			pr_err("Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
-			       llq_default_cfg->llq_stride_ctrl, supported_feat,
-			       llq_info->desc_stride_ctrl);
+			netdev_err(ena_dev->net_device,
+				   "Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
+				   llq_default_cfg->llq_stride_ctrl,
+				   supported_feat, llq_info->desc_stride_ctrl);
 		}
 	} else {
 		llq_info->desc_stride_ctrl = 0;
@@ -680,20 +700,23 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
 			llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_256B;
 			llq_info->desc_list_entry_size = 256;
 		} else {
-			pr_err("Invalid entry_size_ctrl, supported: 0x%x\n",
-			       supported_feat);
+			netdev_err(ena_dev->net_device,
+				   "Invalid entry_size_ctrl, supported: 0x%x\n",
+				   supported_feat);
 			return -EINVAL;
 		}
 
-		pr_err("Default llq ring entry size is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
-		       llq_default_cfg->llq_ring_entry_size, supported_feat,
-		       llq_info->desc_list_entry_size);
+		netdev_err(ena_dev->net_device,
+			   "Default llq ring entry size is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
+			   llq_default_cfg->llq_ring_entry_size, supported_feat,
+			   llq_info->desc_list_entry_size);
 	}
 	if (unlikely(llq_info->desc_list_entry_size & 0x7)) {
 		/* The desc list entry size should be whole multiply of 8
 		 * This requirement comes from __iowrite64_copy()
 		 */
-		pr_err("Illegal entry size %d\n", llq_info->desc_list_entry_size);
+		netdev_err(ena_dev->net_device, "Illegal entry size %d\n",
+			   llq_info->desc_list_entry_size);
 		return -EINVAL;
 	}
 
@@ -716,14 +739,16 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
 		} else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8) {
 			llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8;
 		} else {
-			pr_err("Invalid descs_num_before_header, supported: 0x%x\n",
-			       supported_feat);
+			netdev_err(ena_dev->net_device,
+				   "Invalid descs_num_before_header, supported: 0x%x\n",
+				   supported_feat);
 			return -EINVAL;
 		}
 
-		pr_err("Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
-		       llq_default_cfg->llq_num_decs_before_header,
-		       supported_feat, llq_info->descs_num_before_header);
+		netdev_err(ena_dev->net_device,
+			   "Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
+			   llq_default_cfg->llq_num_decs_before_header,
+			   supported_feat, llq_info->descs_num_before_header);
 	}
 	/* Check for accelerated queue supported */
 	llq_accel_mode_get = llq_features->accel_mode.u.get;
@@ -739,7 +764,8 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
 
 	rc = ena_com_set_llq(ena_dev);
 	if (rc)
-		pr_err("Cannot set LLQ configuration: %d\n", rc);
+		netdev_err(ena_dev->net_device,
+			   "Cannot set LLQ configuration: %d\n", rc);
 
 	return rc;
 }
@@ -766,15 +792,17 @@ static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *com
 		spin_unlock_irqrestore(&admin_queue->q_lock, flags);
 
 		if (comp_ctx->status == ENA_CMD_COMPLETED) {
-			pr_err("The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n",
-			       comp_ctx->cmd_opcode,
-			       admin_queue->auto_polling ? "ON" : "OFF");
+			netdev_err(admin_queue->ena_dev->net_device,
+				   "The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n",
+				   comp_ctx->cmd_opcode,
+				   admin_queue->auto_polling ? "ON" : "OFF");
 			/* Check if fallback to polling is enabled */
 			if (admin_queue->auto_polling)
 				admin_queue->polling = true;
 		} else {
-			pr_err("The ena device didn't send a completion for the admin cmd %d status %d\n",
-			       comp_ctx->cmd_opcode, comp_ctx->status);
+			netdev_err(admin_queue->ena_dev->net_device,
+				   "The ena device didn't send a completion for the admin cmd %d status %d\n",
+				   comp_ctx->cmd_opcode, comp_ctx->status);
 		}
 		/* Check if shifted to polling mode.
 		 * This will happen if there is a completion without an interrupt
@@ -787,7 +815,7 @@ static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *com
 		}
 	}
 
-	ret = ena_com_comp_status_to_errno(comp_ctx->comp_status);
+	ret = ena_com_comp_status_to_errno(admin_queue, comp_ctx->comp_status);
 err:
 	comp_ctxt_release(admin_queue, comp_ctx);
 	return ret;
@@ -834,15 +862,17 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
 	}
 
 	if (unlikely(i == timeout)) {
-		pr_err("Reading reg failed for timeout. expected: req id[%hu] offset[%hu] actual: req id[%hu] offset[%hu]\n",
-		       mmio_read->seq_num, offset, read_resp->req_id,
-		       read_resp->reg_off);
+		netdev_err(ena_dev->net_device,
+			   "Reading reg failed for timeout. expected: req id[%hu] offset[%hu] actual: req id[%hu] offset[%hu]\n",
+			   mmio_read->seq_num, offset, read_resp->req_id,
+			   read_resp->reg_off);
 		ret = ENA_MMIO_READ_TIMEOUT;
 		goto err;
 	}
 
 	if (read_resp->reg_off != offset) {
-		pr_err("Read failure: wrong offset provided\n");
+		netdev_err(ena_dev->net_device,
+			   "Read failure: wrong offset provided\n");
 		ret = ENA_MMIO_READ_TIMEOUT;
 	} else {
 		ret = read_resp->reg_val;
@@ -901,7 +931,8 @@ static int ena_com_destroy_io_sq(struct ena_com_dev *ena_dev,
 					    sizeof(destroy_resp));
 
 	if (unlikely(ret && (ret != -ENODEV)))
-		pr_err("Failed to destroy io sq error: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to destroy io sq error: %d\n", ret);
 
 	return ret;
 }
@@ -951,7 +982,8 @@ static int wait_for_reset_state(struct ena_com_dev *ena_dev, u32 timeout,
 		val = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF);
 
 		if (unlikely(val == ENA_MMIO_READ_TIMEOUT)) {
-			pr_err("Reg read timeout occurred\n");
+			netdev_err(ena_dev->net_device,
+				   "Reg read timeout occurred\n");
 			return -ETIME;
 		}
 
@@ -991,7 +1023,8 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
 	int ret;
 
 	if (!ena_com_check_supported_feature_id(ena_dev, feature_id)) {
-		pr_debug("Feature %d isn't supported\n", feature_id);
+		netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
+			   feature_id);
 		return -EOPNOTSUPP;
 	}
 
@@ -1010,7 +1043,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
 				   &get_cmd.control_buffer.address,
 				   control_buf_dma_addr);
 	if (unlikely(ret)) {
-		pr_err("Memory address set failed\n");
+		netdev_err(ena_dev->net_device, "Memory address set failed\n");
 		return ret;
 	}
 
@@ -1027,8 +1060,9 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
 					    sizeof(*get_resp));
 
 	if (unlikely(ret))
-		pr_err("Failed to submit get_feature command %d error: %d\n",
-		       feature_id, ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to submit get_feature command %d error: %d\n",
+			   feature_id, ret);
 
 	return ret;
 }
@@ -1130,9 +1164,10 @@ static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev,
 
 	if ((get_resp.u.ind_table.min_size > log_size) ||
 	    (get_resp.u.ind_table.max_size < log_size)) {
-		pr_err("Indirect table size doesn't fit. requested size: %d while min is:%d and max %d\n",
-		       1 << log_size, 1 << get_resp.u.ind_table.min_size,
-		       1 << get_resp.u.ind_table.max_size);
+		netdev_err(ena_dev->net_device,
+			   "Indirect table size doesn't fit. requested size: %d while min is:%d and max %d\n",
+			   1 << log_size, 1 << get_resp.u.ind_table.min_size,
+			   1 << get_resp.u.ind_table.max_size);
 		return -EINVAL;
 	}
 
@@ -1223,7 +1258,8 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
 					   &create_cmd.sq_ba,
 					   io_sq->desc_addr.phys_addr);
 		if (unlikely(ret)) {
-			pr_err("Memory address set failed\n");
+			netdev_err(ena_dev->net_device,
+				   "Memory address set failed\n");
 			return ret;
 		}
 	}
@@ -1234,7 +1270,8 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
 					    (struct ena_admin_acq_entry *)&cmd_completion,
 					    sizeof(cmd_completion));
 	if (unlikely(ret)) {
-		pr_err("Failed to create IO SQ. error: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to create IO SQ. error: %d\n", ret);
 		return ret;
 	}
 
@@ -1252,7 +1289,8 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
 			cmd_completion.llq_descriptors_offset);
 	}
 
-	pr_debug("Created sq[%u], depth[%u]\n", io_sq->idx, io_sq->q_depth);
+	netdev_dbg(ena_dev->net_device, "Created sq[%u], depth[%u]\n",
+		   io_sq->idx, io_sq->q_depth);
 
 	return ret;
 }
@@ -1286,7 +1324,8 @@ static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev,
 	u16 prev_intr_delay_resolution = ena_dev->intr_delay_resolution;
 
 	if (unlikely(!intr_delay_resolution)) {
-		pr_err("Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n");
+		netdev_err(ena_dev->net_device,
+			   "Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n");
 		intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
 	}
 
@@ -1322,11 +1361,13 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
 					    comp, comp_size);
 	if (IS_ERR(comp_ctx)) {
 		if (comp_ctx == ERR_PTR(-ENODEV))
-			pr_debug("Failed to submit command [%ld]\n",
-				 PTR_ERR(comp_ctx));
+			netdev_dbg(admin_queue->ena_dev->net_device,
+				   "Failed to submit command [%ld]\n",
+				   PTR_ERR(comp_ctx));
 		else
-			pr_err("Failed to submit command [%ld]\n",
-			       PTR_ERR(comp_ctx));
+			netdev_err(admin_queue->ena_dev->net_device,
+				   "Failed to submit command [%ld]\n",
+				   PTR_ERR(comp_ctx));
 
 		return PTR_ERR(comp_ctx);
 	}
@@ -1334,9 +1375,11 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
 	ret = ena_com_wait_and_process_admin_cq(comp_ctx, admin_queue);
 	if (unlikely(ret)) {
 		if (admin_queue->running_state)
-			pr_err("Failed to process command. ret = %d\n", ret);
+			netdev_err(admin_queue->ena_dev->net_device,
+				   "Failed to process command. ret = %d\n", ret);
 		else
-			pr_debug("Failed to process command. ret = %d\n", ret);
+			netdev_dbg(admin_queue->ena_dev->net_device,
+				   "Failed to process command. ret = %d\n", ret);
 	}
 	return ret;
 }
@@ -1365,7 +1408,7 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
 				   &create_cmd.cq_ba,
 				   io_cq->cdesc_addr.phys_addr);
 	if (unlikely(ret)) {
-		pr_err("Memory address set failed\n");
+		netdev_err(ena_dev->net_device, "Memory address set failed\n");
 		return ret;
 	}
 
@@ -1375,7 +1418,8 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
 					    (struct ena_admin_acq_entry *)&cmd_completion,
 					    sizeof(cmd_completion));
 	if (unlikely(ret)) {
-		pr_err("Failed to create IO CQ. error: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to create IO CQ. error: %d\n", ret);
 		return ret;
 	}
 
@@ -1394,7 +1438,8 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
 			(u32 __iomem *)((uintptr_t)ena_dev->reg_bar +
 			cmd_completion.numa_node_register_offset);
 
-	pr_debug("Created cq[%u], depth[%u]\n", io_cq->idx, io_cq->q_depth);
+	netdev_dbg(ena_dev->net_device, "Created cq[%u], depth[%u]\n",
+		   io_cq->idx, io_cq->q_depth);
 
 	return ret;
 }
@@ -1404,8 +1449,9 @@ int ena_com_get_io_handlers(struct ena_com_dev *ena_dev, u16 qid,
 			    struct ena_com_io_cq **io_cq)
 {
 	if (qid >= ENA_TOTAL_NUM_QUEUES) {
-		pr_err("Invalid queue number %d but the max is %d\n", qid,
-		       ENA_TOTAL_NUM_QUEUES);
+		netdev_err(ena_dev->net_device,
+			   "Invalid queue number %d but the max is %d\n", qid,
+			   ENA_TOTAL_NUM_QUEUES);
 		return -EINVAL;
 	}
 
@@ -1471,7 +1517,8 @@ int ena_com_destroy_io_cq(struct ena_com_dev *ena_dev,
 					    sizeof(destroy_resp));
 
 	if (unlikely(ret && (ret != -ENODEV)))
-		pr_err("Failed to destroy IO CQ. error: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to destroy IO CQ. error: %d\n", ret);
 
 	return ret;
 }
@@ -1513,13 +1560,14 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
 
 	ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG, 0);
 	if (ret) {
-		pr_info("Can't get aenq configuration\n");
+		dev_info(ena_dev->dmadev, "Can't get aenq configuration\n");
 		return ret;
 	}
 
 	if ((get_resp.u.aenq.supported_groups & groups_flag) != groups_flag) {
-		pr_warn("Trying to set unsupported aenq events. supported flag: 0x%x asked flag: 0x%x\n",
-			get_resp.u.aenq.supported_groups, groups_flag);
+		netdev_warn(ena_dev->net_device,
+			    "Trying to set unsupported aenq events. supported flag: 0x%x asked flag: 0x%x\n",
+			    get_resp.u.aenq.supported_groups, groups_flag);
 		return -EOPNOTSUPP;
 	}
 
@@ -1538,7 +1586,8 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
 					    sizeof(resp));
 
 	if (unlikely(ret))
-		pr_err("Failed to config AENQ ret: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to config AENQ ret: %d\n", ret);
 
 	return ret;
 }
@@ -1549,17 +1598,18 @@ int ena_com_get_dma_width(struct ena_com_dev *ena_dev)
 	int width;
 
 	if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) {
-		pr_err("Reg read timeout occurred\n");
+		netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
 		return -ETIME;
 	}
 
 	width = (caps & ENA_REGS_CAPS_DMA_ADDR_WIDTH_MASK) >>
 		ENA_REGS_CAPS_DMA_ADDR_WIDTH_SHIFT;
 
-	pr_debug("ENA dma width: %d\n", width);
+	netdev_dbg(ena_dev->net_device, "ENA dma width: %d\n", width);
 
 	if ((width < 32) || width > ENA_MAX_PHYS_ADDR_SIZE_BITS) {
-		pr_err("DMA width illegal value: %d\n", width);
+		netdev_err(ena_dev->net_device, "DMA width illegal value: %d\n",
+			   width);
 		return -EINVAL;
 	}
 
@@ -1583,23 +1633,24 @@ int ena_com_validate_version(struct ena_com_dev *ena_dev)
 
 	if (unlikely((ver == ENA_MMIO_READ_TIMEOUT) ||
 		     (ctrl_ver == ENA_MMIO_READ_TIMEOUT))) {
-		pr_err("Reg read timeout occurred\n");
+		netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
 		return -ETIME;
 	}
 
-	pr_info("ENA device version: %d.%d\n",
-		(ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >>
-			ENA_REGS_VERSION_MAJOR_VERSION_SHIFT,
-		ver & ENA_REGS_VERSION_MINOR_VERSION_MASK);
+	dev_info(ena_dev->dmadev, "ENA device version: %d.%d\n",
+		 (ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >>
+			 ENA_REGS_VERSION_MAJOR_VERSION_SHIFT,
+		 ver & ENA_REGS_VERSION_MINOR_VERSION_MASK);
 
-	pr_info("ENA controller version: %d.%d.%d implementation version %d\n",
-		(ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) >>
-			ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT,
-		(ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK) >>
-			ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT,
-		(ctrl_ver & ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK),
-		(ctrl_ver & ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK) >>
-			ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT);
+	dev_info(ena_dev->dmadev,
+		 "ENA controller version: %d.%d.%d implementation version %d\n",
+		 (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) >>
+			 ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT,
+		 (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK) >>
+			 ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT,
+		 (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK),
+		 (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK) >>
+			 ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT);
 
 	ctrl_ver_masked =
 		(ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) |
@@ -1608,7 +1659,8 @@ int ena_com_validate_version(struct ena_com_dev *ena_dev)
 
 	/* Validate the ctrl version without the implementation ID */
 	if (ctrl_ver_masked < MIN_ENA_CTRL_VER) {
-		pr_err("ENA ctrl version is lower than the minimal ctrl version the driver supports\n");
+		netdev_err(ena_dev->net_device,
+			   "ENA ctrl version is lower than the minimal ctrl version the driver supports\n");
 		return -1;
 	}
 
@@ -1741,12 +1793,13 @@ int ena_com_admin_init(struct ena_com_dev *ena_dev,
 	dev_sts = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF);
 
 	if (unlikely(dev_sts == ENA_MMIO_READ_TIMEOUT)) {
-		pr_err("Reg read timeout occurred\n");
+		netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
 		return -ETIME;
 	}
 
 	if (!(dev_sts & ENA_REGS_DEV_STS_READY_MASK)) {
-		pr_err("Device isn't ready, abort com init\n");
+		netdev_err(ena_dev->net_device,
+			   "Device isn't ready, abort com init\n");
 		return -ENODEV;
 	}
 
@@ -1823,8 +1876,9 @@ int ena_com_create_io_queue(struct ena_com_dev *ena_dev,
 	int ret;
 
 	if (ctx->qid >= ENA_TOTAL_NUM_QUEUES) {
-		pr_err("Qid (%d) is bigger than max num of queues (%d)\n",
-		       ctx->qid, ENA_TOTAL_NUM_QUEUES);
+		netdev_err(ena_dev->net_device,
+			   "Qid (%d) is bigger than max num of queues (%d)\n",
+			   ctx->qid, ENA_TOTAL_NUM_QUEUES);
 		return -EINVAL;
 	}
 
@@ -1882,8 +1936,9 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid)
 	struct ena_com_io_cq *io_cq;
 
 	if (qid >= ENA_TOTAL_NUM_QUEUES) {
-		pr_err("Qid (%d) is bigger than max num of queues (%d)\n", qid,
-		       ENA_TOTAL_NUM_QUEUES);
+		netdev_err(ena_dev->net_device,
+			   "Qid (%d) is bigger than max num of queues (%d)\n",
+			   qid, ENA_TOTAL_NUM_QUEUES);
 		return;
 	}
 
@@ -2035,8 +2090,9 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data)
 		timestamp = (u64)aenq_common->timestamp_low |
 			((u64)aenq_common->timestamp_high << 32);
 
-		pr_debug("AENQ! Group[%x] Syndrome[%x] timestamp: [%llus]\n",
-			 aenq_common->group, aenq_common->syndrome, timestamp);
+		netdev_dbg(ena_dev->net_device,
+			   "AENQ! Group[%x] Syndrome[%x] timestamp: [%llus]\n",
+			   aenq_common->group, aenq_common->syndrome, timestamp);
 
 		/* Handle specific event*/
 		handler_cb = ena_com_get_specific_aenq_cb(ena_dev,
@@ -2079,19 +2135,20 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev,
 
 	if (unlikely((stat == ENA_MMIO_READ_TIMEOUT) ||
 		     (cap == ENA_MMIO_READ_TIMEOUT))) {
-		pr_err("Reg read32 timeout occurred\n");
+		netdev_err(ena_dev->net_device, "Reg read32 timeout occurred\n");
 		return -ETIME;
 	}
 
 	if ((stat & ENA_REGS_DEV_STS_READY_MASK) == 0) {
-		pr_err("Device isn't ready, can't reset device\n");
+		netdev_err(ena_dev->net_device,
+			   "Device isn't ready, can't reset device\n");
 		return -EINVAL;
 	}
 
 	timeout = (cap & ENA_REGS_CAPS_RESET_TIMEOUT_MASK) >>
 			ENA_REGS_CAPS_RESET_TIMEOUT_SHIFT;
 	if (timeout == 0) {
-		pr_err("Invalid timeout value\n");
+		netdev_err(ena_dev->net_device, "Invalid timeout value\n");
 		return -EINVAL;
 	}
 
@@ -2107,7 +2164,8 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev,
 	rc = wait_for_reset_state(ena_dev, timeout,
 				  ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK);
 	if (rc != 0) {
-		pr_err("Reset indication didn't turn on\n");
+		netdev_err(ena_dev->net_device,
+			   "Reset indication didn't turn on\n");
 		return rc;
 	}
 
@@ -2115,7 +2173,8 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev,
 	writel(0, ena_dev->reg_bar + ENA_REGS_DEV_CTL_OFF);
 	rc = wait_for_reset_state(ena_dev, timeout, 0);
 	if (rc != 0) {
-		pr_err("Reset indication didn't turn off\n");
+		netdev_err(ena_dev->net_device,
+			   "Reset indication didn't turn off\n");
 		return rc;
 	}
 
@@ -2152,7 +2211,8 @@ static int ena_get_dev_stats(struct ena_com_dev *ena_dev,
 					     sizeof(*get_resp));
 
 	if (unlikely(ret))
-		pr_err("Failed to get stats. error: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to get stats. error: %d\n", ret);
 
 	return ret;
 }
@@ -2195,7 +2255,8 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
 	int ret;
 
 	if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_MTU)) {
-		pr_debug("Feature %d isn't supported\n", ENA_ADMIN_MTU);
+		netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
+			   ENA_ADMIN_MTU);
 		return -EOPNOTSUPP;
 	}
 
@@ -2214,7 +2275,8 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
 					    sizeof(resp));
 
 	if (unlikely(ret))
-		pr_err("Failed to set mtu %d. error: %d\n", mtu, ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to set mtu %d. error: %d\n", mtu, ret);
 
 	return ret;
 }
@@ -2228,7 +2290,8 @@ int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
 	ret = ena_com_get_feature(ena_dev, &resp,
 				  ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
 	if (unlikely(ret)) {
-		pr_err("Failed to get offload capabilities %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to get offload capabilities %d\n", ret);
 		return ret;
 	}
 
@@ -2248,8 +2311,8 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
 
 	if (!ena_com_check_supported_feature_id(ena_dev,
 						ENA_ADMIN_RSS_HASH_FUNCTION)) {
-		pr_debug("Feature %d isn't supported\n",
-			 ENA_ADMIN_RSS_HASH_FUNCTION);
+		netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
+			   ENA_ADMIN_RSS_HASH_FUNCTION);
 		return -EOPNOTSUPP;
 	}
 
@@ -2260,8 +2323,9 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
 		return ret;
 
 	if (!(get_resp.u.flow_hash_func.supported_func & BIT(rss->hash_func))) {
-		pr_err("Func hash %d isn't supported by device, abort\n",
-		       rss->hash_func);
+		netdev_err(ena_dev->net_device,
+			   "Func hash %d isn't supported by device, abort\n",
+			   rss->hash_func);
 		return -EOPNOTSUPP;
 	}
 
@@ -2278,7 +2342,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
 				   &cmd.control_buffer.address,
 				   rss->hash_key_dma_addr);
 	if (unlikely(ret)) {
-		pr_err("Memory address set failed\n");
+		netdev_err(ena_dev->net_device, "Memory address set failed\n");
 		return ret;
 	}
 
@@ -2290,8 +2354,9 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
 					    (struct ena_admin_acq_entry *)&resp,
 					    sizeof(resp));
 	if (unlikely(ret)) {
-		pr_err("Failed to set hash function %d. error: %d\n",
-		       rss->hash_func, ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to set hash function %d. error: %d\n",
+			   rss->hash_func, ret);
 		return -EINVAL;
 	}
 
@@ -2322,7 +2387,8 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
 		return rc;
 
 	if (!(BIT(func) & get_resp.u.flow_hash_func.supported_func)) {
-		pr_err("Flow hash function %d isn't supported\n", func);
+		netdev_err(ena_dev->net_device,
+			   "Flow hash function %d isn't supported\n", func);
 		return -EOPNOTSUPP;
 	}
 
@@ -2330,8 +2396,9 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
 	case ENA_ADMIN_TOEPLITZ:
 		if (key) {
 			if (key_len != sizeof(hash_key->key)) {
-				pr_err("key len (%hu) doesn't equal the supported size (%zu)\n",
-				       key_len, sizeof(hash_key->key));
+				netdev_err(ena_dev->net_device,
+					   "key len (%hu) doesn't equal the supported size (%zu)\n",
+					   key_len, sizeof(hash_key->key));
 				return -EINVAL;
 			}
 			memcpy(hash_key->key, key, key_len);
@@ -2343,7 +2410,8 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
 		rss->hash_init_val = init_val;
 		break;
 	default:
-		pr_err("Invalid hash function (%d)\n", func);
+		netdev_err(ena_dev->net_device, "Invalid hash function (%d)\n",
+			   func);
 		return -EINVAL;
 	}
 
@@ -2429,8 +2497,8 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev)
 
 	if (!ena_com_check_supported_feature_id(ena_dev,
 						ENA_ADMIN_RSS_HASH_INPUT)) {
-		pr_debug("Feature %d isn't supported\n",
-			 ENA_ADMIN_RSS_HASH_INPUT);
+		netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
+			   ENA_ADMIN_RSS_HASH_INPUT);
 		return -EOPNOTSUPP;
 	}
 
@@ -2448,7 +2516,7 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev)
 				   &cmd.control_buffer.address,
 				   rss->hash_ctrl_dma_addr);
 	if (unlikely(ret)) {
-		pr_err("Memory address set failed\n");
+		netdev_err(ena_dev->net_device, "Memory address set failed\n");
 		return ret;
 	}
 	cmd.control_buffer.length = sizeof(*hash_ctrl);
@@ -2459,7 +2527,8 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev)
 					    (struct ena_admin_acq_entry *)&resp,
 					    sizeof(resp));
 	if (unlikely(ret))
-		pr_err("Failed to set hash input. error: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to set hash input. error: %d\n", ret);
 
 	return ret;
 }
@@ -2509,9 +2578,10 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev)
 		available_fields = hash_ctrl->selected_fields[i].fields &
 				hash_ctrl->supported_fields[i].fields;
 		if (available_fields != hash_ctrl->selected_fields[i].fields) {
-			pr_err("Hash control doesn't support all the desire configuration. proto %x supported %x selected %x\n",
-			       i, hash_ctrl->supported_fields[i].fields,
-			       hash_ctrl->selected_fields[i].fields);
+			netdev_err(ena_dev->net_device,
+				   "Hash control doesn't support all the desire configuration. proto %x supported %x selected %x\n",
+				   i, hash_ctrl->supported_fields[i].fields,
+				   hash_ctrl->selected_fields[i].fields);
 			return -EOPNOTSUPP;
 		}
 	}
@@ -2535,7 +2605,8 @@ int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
 	int rc;
 
 	if (proto >= ENA_ADMIN_RSS_PROTO_NUM) {
-		pr_err("Invalid proto num (%u)\n", proto);
+		netdev_err(ena_dev->net_device, "Invalid proto num (%u)\n",
+			   proto);
 		return -EINVAL;
 	}
 
@@ -2547,8 +2618,9 @@ int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
 	/* Make sure all the fields are supported */
 	supported_fields = hash_ctrl->supported_fields[proto].fields;
 	if ((hash_fields & supported_fields) != hash_fields) {
-		pr_err("Proto %d doesn't support the required fields %x. supports only: %x\n",
-		       proto, hash_fields, supported_fields);
+		netdev_err(ena_dev->net_device,
+			   "Proto %d doesn't support the required fields %x. supports only: %x\n",
+			   proto, hash_fields, supported_fields);
 	}
 
 	hash_ctrl->selected_fields[proto].fields = hash_fields;
@@ -2588,14 +2660,15 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev)
 
 	if (!ena_com_check_supported_feature_id(
 		    ena_dev, ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG)) {
-		pr_debug("Feature %d isn't supported\n",
-			 ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG);
+		netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
+			   ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG);
 		return -EOPNOTSUPP;
 	}
 
 	ret = ena_com_ind_tbl_convert_to_device(ena_dev);
 	if (ret) {
-		pr_err("Failed to convert host indirection table to device table\n");
+		netdev_err(ena_dev->net_device,
+			   "Failed to convert host indirection table to device table\n");
 		return ret;
 	}
 
@@ -2612,7 +2685,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev)
 				   &cmd.control_buffer.address,
 				   rss->rss_ind_tbl_dma_addr);
 	if (unlikely(ret)) {
-		pr_err("Memory address set failed\n");
+		netdev_err(ena_dev->net_device, "Memory address set failed\n");
 		return ret;
 	}
 
@@ -2626,7 +2699,8 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev)
 					    sizeof(resp));
 
 	if (unlikely(ret))
-		pr_err("Failed to set indirect table. error: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to set indirect table. error: %d\n", ret);
 
 	return ret;
 }
@@ -2782,7 +2856,7 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev)
 				   &cmd.u.host_attr.debug_ba,
 				   host_attr->debug_area_dma_addr);
 	if (unlikely(ret)) {
-		pr_err("Memory address set failed\n");
+		netdev_err(ena_dev->net_device, "Memory address set failed\n");
 		return ret;
 	}
 
@@ -2790,7 +2864,7 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev)
 				   &cmd.u.host_attr.os_info_ba,
 				   host_attr->host_info_dma_addr);
 	if (unlikely(ret)) {
-		pr_err("Memory address set failed\n");
+		netdev_err(ena_dev->net_device, "Memory address set failed\n");
 		return ret;
 	}
 
@@ -2803,7 +2877,8 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev)
 					    sizeof(resp));
 
 	if (unlikely(ret))
-		pr_err("Failed to set host attributes: %d\n", ret);
+		netdev_err(ena_dev->net_device,
+			   "Failed to set host attributes: %d\n", ret);
 
 	return ret;
 }
@@ -2815,12 +2890,14 @@ bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev)
 						  ENA_ADMIN_INTERRUPT_MODERATION);
 }
 
-static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs,
+static int ena_com_update_nonadaptive_moderation_interval(struct ena_com_dev *ena_dev,
+							  u32 coalesce_usecs,
 							  u32 intr_delay_resolution,
 							  u32 *intr_moder_interval)
 {
 	if (!intr_delay_resolution) {
-		pr_err("Illegal interrupt delay granularity value\n");
+		netdev_err(ena_dev->net_device,
+			   "Illegal interrupt delay granularity value\n");
 		return -EFAULT;
 	}
 
@@ -2832,7 +2909,8 @@ static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs,
 int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev,
 						      u32 tx_coalesce_usecs)
 {
-	return ena_com_update_nonadaptive_moderation_interval(tx_coalesce_usecs,
+	return ena_com_update_nonadaptive_moderation_interval(ena_dev,
+							      tx_coalesce_usecs,
 							      ena_dev->intr_delay_resolution,
 							      &ena_dev->intr_moder_tx_interval);
 }
@@ -2840,7 +2918,8 @@ int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_de
 int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev,
 						      u32 rx_coalesce_usecs)
 {
-	return ena_com_update_nonadaptive_moderation_interval(rx_coalesce_usecs,
+	return ena_com_update_nonadaptive_moderation_interval(ena_dev,
+							      rx_coalesce_usecs,
 							      ena_dev->intr_delay_resolution,
 							      &ena_dev->intr_moder_rx_interval);
 }
@@ -2856,12 +2935,14 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev)
 
 	if (rc) {
 		if (rc == -EOPNOTSUPP) {
-			pr_debug("Feature %d isn't supported\n",
-				 ENA_ADMIN_INTERRUPT_MODERATION);
+			netdev_dbg(ena_dev->net_device,
+				   "Feature %d isn't supported\n",
+				   ENA_ADMIN_INTERRUPT_MODERATION);
 			rc = 0;
 		} else {
-			pr_err("Failed to get interrupt moderation admin cmd. rc: %d\n",
-			       rc);
+			netdev_err(ena_dev->net_device,
+				   "Failed to get interrupt moderation admin cmd. rc: %d\n",
+				   rc);
 		}
 
 		/* no moderation supported, disable adaptive support */
@@ -2909,7 +2990,8 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
 		(llq_info->descs_num_before_header * sizeof(struct ena_eth_io_tx_desc));
 
 	if (unlikely(ena_dev->tx_max_header_size == 0)) {
-		pr_err("The size of the LLQ entry is smaller than needed\n");
+		netdev_err(ena_dev->net_device,
+			   "The size of the LLQ entry is smaller than needed\n");
 		return -EINVAL;
 	}
 
diff --git a/drivers/net/ethernet/amazon/ena/ena_com.h b/drivers/net/ethernet/amazon/ena/ena_com.h
index 55097750d062..b0f76fb3b1d7 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_com.h
@@ -303,6 +303,7 @@ struct ena_com_dev {
 	u8 __iomem *reg_bar;
 	void __iomem *mem_bar;
 	void *dmadev;
+	struct net_device *net_device;
 
 	enum ena_admin_placement_policy_type tx_mem_queue_type;
 	u32 tx_max_header_size;
@@ -931,6 +932,26 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
 			    struct ena_admin_feature_llq_desc *llq_features,
 			    struct ena_llq_configurations *llq_default_config);
 
+/* ena_com_io_sq_to_ena_dev - Extract ena_com_dev using contained field io_sq.
+ * @io_sq: IO submit queue struct
+ *
+ * @return - ena_com_dev struct extracted from io_sq
+ */
+static inline struct ena_com_dev *ena_com_io_sq_to_ena_dev(struct ena_com_io_sq *io_sq)
+{
+	return container_of(io_sq, struct ena_com_dev, io_sq_queues[io_sq->qid]);
+}
+
+/* ena_com_io_cq_to_ena_dev - Extract ena_com_dev using contained field io_cq.
+ * @io_sq: IO submit queue struct
+ *
+ * @return - ena_com_dev struct extracted from io_sq
+ */
+static inline struct ena_com_dev *ena_com_io_cq_to_ena_dev(struct ena_com_io_cq *io_cq)
+{
+	return container_of(io_cq, struct ena_com_dev, io_cq_queues[io_cq->qid]);
+}
+
 static inline bool ena_com_get_adaptive_moderation_enabled(struct ena_com_dev *ena_dev)
 {
 	return ena_dev->adaptive_coalescing;
diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
index 032ab9f20438..85daeac219ec 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
@@ -58,13 +58,15 @@ static int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq,
 
 	if (is_llq_max_tx_burst_exists(io_sq)) {
 		if (unlikely(!io_sq->entries_in_tx_burst_left)) {
-			pr_err("Error: trying to send more packets than tx burst allows\n");
+			netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+				   "Error: trying to send more packets than tx burst allows\n");
 			return -ENOSPC;
 		}
 
 		io_sq->entries_in_tx_burst_left--;
-		pr_debug("Decreasing entries_in_tx_burst_left of queue %d to %d\n",
-			 io_sq->qid, io_sq->entries_in_tx_burst_left);
+		netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Decreasing entries_in_tx_burst_left of queue %d to %d\n",
+			   io_sq->qid, io_sq->entries_in_tx_burst_left);
 	}
 
 	/* Make sure everything was written into the bounce buffer before
@@ -102,12 +104,14 @@ static int ena_com_write_header_to_bounce(struct ena_com_io_sq *io_sq,
 
 	if (unlikely((header_offset + header_len) >
 		     llq_info->desc_list_entry_size)) {
-		pr_err("Trying to write header larger than llq entry can accommodate\n");
+		netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Trying to write header larger than llq entry can accommodate\n");
 		return -EFAULT;
 	}
 
 	if (unlikely(!bounce_buffer)) {
-		pr_err("Bounce buffer is NULL\n");
+		netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Bounce buffer is NULL\n");
 		return -EFAULT;
 	}
 
@@ -125,7 +129,8 @@ static void *get_sq_desc_llq(struct ena_com_io_sq *io_sq)
 	bounce_buffer = pkt_ctrl->curr_bounce_buf;
 
 	if (unlikely(!bounce_buffer)) {
-		pr_err("Bounce buffer is NULL\n");
+		netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Bounce buffer is NULL\n");
 		return NULL;
 	}
 
@@ -250,8 +255,9 @@ static u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
 		io_cq->cur_rx_pkt_cdesc_count = 0;
 		io_cq->cur_rx_pkt_cdesc_start_idx = head_masked;
 
-		pr_debug("ENA q_id: %d packets were completed. first desc idx %u descs# %d\n",
-			 io_cq->qid, *first_cdesc_idx, count);
+		netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
+			   "ENA q_id: %d packets were completed. first desc idx %u descs# %d\n",
+			   io_cq->qid, *first_cdesc_idx, count);
 	} else {
 		io_cq->cur_rx_pkt_cdesc_count += count;
 		count = 0;
@@ -335,7 +341,8 @@ static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
 	return 0;
 }
 
-static void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx,
+static void ena_com_rx_set_flags(struct ena_com_io_cq *io_cq,
+				 struct ena_com_rx_ctx *ena_rx_ctx,
 				 struct ena_eth_io_rx_cdesc_base *cdesc)
 {
 	ena_rx_ctx->l3_proto = cdesc->status &
@@ -357,10 +364,11 @@ static void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx,
 		(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) >>
 		ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT;
 
-	pr_debug("l3_proto %d l4_proto %d l3_csum_err %d l4_csum_err %d hash %d frag %d cdesc_status %x\n",
-		 ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto,
-		 ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err,
-		 ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status);
+	netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
+		   "l3_proto %d l4_proto %d l3_csum_err %d l4_csum_err %d hash %d frag %d cdesc_status %x\n",
+		   ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto,
+		   ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err,
+		   ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status);
 }
 
 /*****************************************************************************/
@@ -385,13 +393,15 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
 
 	/* num_bufs +1 for potential meta desc */
 	if (unlikely(!ena_com_sq_have_enough_space(io_sq, num_bufs + 1))) {
-		pr_debug("Not enough space in the tx queue\n");
+		netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Not enough space in the tx queue\n");
 		return -ENOMEM;
 	}
 
 	if (unlikely(header_len > io_sq->tx_max_header_size)) {
-		pr_err("Header size is too large %d max header: %d\n",
-		       header_len, io_sq->tx_max_header_size);
+		netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Header size is too large %d max header: %d\n",
+			   header_len, io_sq->tx_max_header_size);
 		return -EINVAL;
 	}
 
@@ -405,7 +415,8 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
 
 	rc = ena_com_create_and_store_tx_meta_desc(io_sq, ena_tx_ctx, &have_meta);
 	if (unlikely(rc)) {
-		pr_err("Failed to create and store tx meta desc\n");
+		netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Failed to create and store tx meta desc\n");
 		return rc;
 	}
 
@@ -529,12 +540,14 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
 		return 0;
 	}
 
-	pr_debug("Fetch rx packet: queue %d completed desc: %d\n", io_cq->qid,
-		 nb_hw_desc);
+	netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
+		   "Fetch rx packet: queue %d completed desc: %d\n", io_cq->qid,
+		   nb_hw_desc);
 
 	if (unlikely(nb_hw_desc > ena_rx_ctx->max_bufs)) {
-		pr_err("Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc,
-		       ena_rx_ctx->max_bufs);
+		netdev_err(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
+			   "Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc,
+			   ena_rx_ctx->max_bufs);
 		return -ENOSPC;
 	}
 
@@ -557,11 +570,12 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
 	/* Update SQ head ptr */
 	io_sq->next_to_comp += nb_hw_desc;
 
-	pr_debug("[%s][QID#%d] Updating SQ head to: %d\n", __func__, io_sq->qid,
-		 io_sq->next_to_comp);
+	netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
+		   "[%s][QID#%d] Updating SQ head to: %d\n", __func__,
+		   io_sq->qid, io_sq->next_to_comp);
 
 	/* Get rx flags from the last pkt */
-	ena_com_rx_set_flags(ena_rx_ctx, cdesc);
+	ena_com_rx_set_flags(io_cq, ena_rx_ctx, cdesc);
 
 	ena_rx_ctx->descs = nb_hw_desc;
 	return 0;
@@ -593,6 +607,10 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 
 	desc->req_id = req_id;
 
+	netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+		   "[%s] Adding single RX desc, Queue: %u, req_id: %u\n",
+		   __func__, io_sq->qid, req_id);
+
 	desc->buff_addr_lo = (u32)ena_buf->paddr;
 	desc->buff_addr_hi =
 		((ena_buf->paddr & GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32);
diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.h b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
index 2c16c218818a..689313ee25a8 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
@@ -140,8 +140,9 @@ static inline bool ena_com_is_doorbell_needed(struct ena_com_io_sq *io_sq,
 						   llq_info->descs_per_entry);
 	}
 
-	pr_debug("Queue: %d num_descs: %d num_entries_needed: %d\n", io_sq->qid,
-		 num_descs, num_entries_needed);
+	netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+		   "Queue: %d num_descs: %d num_entries_needed: %d\n",
+		   io_sq->qid, num_descs, num_entries_needed);
 
 	return num_entries_needed > io_sq->entries_in_tx_burst_left;
 }
@@ -151,14 +152,16 @@ static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
 	u16 max_entries_in_tx_burst = io_sq->llq_info.max_entries_in_tx_burst;
 	u16 tail = io_sq->tail;
 
-	pr_debug("Write submission queue doorbell for queue: %d tail: %d\n",
-		 io_sq->qid, tail);
+	netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+		   "Write submission queue doorbell for queue: %d tail: %d\n",
+		   io_sq->qid, tail);
 
 	writel(tail, io_sq->db_addr);
 
 	if (is_llq_max_tx_burst_exists(io_sq)) {
-		pr_debug("Reset available entries in tx burst for queue %d to %d\n",
-			 io_sq->qid, max_entries_in_tx_burst);
+		netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
+			   "Reset available entries in tx burst for queue %d to %d\n",
+			   io_sq->qid, max_entries_in_tx_burst);
 		io_sq->entries_in_tx_burst_left = max_entries_in_tx_burst;
 	}
 
@@ -176,8 +179,9 @@ static inline int ena_com_update_dev_comp_head(struct ena_com_io_cq *io_cq)
 		need_update = unreported_comp > (io_cq->q_depth / ENA_COMP_HEAD_THRESH);
 
 		if (unlikely(need_update)) {
-			pr_debug("Write completion queue doorbell for queue %d: head: %d\n",
-				 io_cq->qid, head);
+			netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
+				   "Write completion queue doorbell for queue %d: head: %d\n",
+				   io_cq->qid, head);
 			writel(head, io_cq->cq_head_db_reg);
 			io_cq->last_head_update = head;
 		}
@@ -240,7 +244,8 @@ static inline int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq,
 
 	*req_id = READ_ONCE(cdesc->req_id);
 	if (unlikely(*req_id >= io_cq->q_depth)) {
-		pr_err("Invalid req id %d\n", cdesc->req_id);
+		netdev_err(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
+			   "Invalid req id %d\n", cdesc->req_id);
 		return -EINVAL;
 	}
 
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 985dea1870b5..371593ed0400 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -4192,6 +4192,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	adapter->pdev = pdev;
 	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
 
+	ena_dev->net_device = netdev;
+
 	pci_set_drvdata(pdev, adapter);
 
 	rc = ena_device_init(ena_dev, pdev, &get_feat_ctx, &wd_state);
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 1/9] net: ena: use constant value for net_device allocation akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 2/9] net: ena: add device distinct log prefix to files akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-07 19:00   ` Alexander Duyck
  2020-12-04 12:11 ` [PATCH V4 net-next 4/9] net: ena: fix coding style nits akiyano
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj, Ido Segev, Igor Chauskin

From: Arthur Kiyanovski <akiyano@amazon.com>

This patch adds explicit casts to some implicit conversions in the ena
driver. The implicit conversions trip some of our static checkers, which
search for accidental conversions in the driver.
Adding these casts won't affect the end result, and it soothes the
checkers.

Signed-off-by: Ido Segev <idose@amazon.com>
Signed-off-by: Igor Chauskin <igorch@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_com.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
index e168edf3c930..7910d8e68a99 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_com.c
@@ -1369,7 +1369,7 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
 				   "Failed to submit command [%ld]\n",
 				   PTR_ERR(comp_ctx));
 
-		return PTR_ERR(comp_ctx);
+		return (int)PTR_ERR(comp_ctx);
 	}
 
 	ret = ena_com_wait_and_process_admin_cq(comp_ctx, admin_queue);
@@ -1595,7 +1595,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
 int ena_com_get_dma_width(struct ena_com_dev *ena_dev)
 {
 	u32 caps = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CAPS_OFF);
-	int width;
+	u32 width;
 
 	if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) {
 		netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
@@ -2266,7 +2266,7 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
 	cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE;
 	cmd.aq_common_descriptor.flags = 0;
 	cmd.feat_common.feature_id = ENA_ADMIN_MTU;
-	cmd.u.mtu.mtu = mtu;
+	cmd.u.mtu.mtu = (u32)mtu;
 
 	ret = ena_com_execute_admin_command(admin_queue,
 					    (struct ena_admin_aq_entry *)&cmd,
@@ -2689,7 +2689,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev)
 		return ret;
 	}
 
-	cmd.control_buffer.length = (1ULL << rss->tbl_log_size) *
+	cmd.control_buffer.length = (u32)(1ULL << rss->tbl_log_size) *
 		sizeof(struct ena_admin_rss_ind_table_entry);
 
 	ret = ena_com_execute_admin_command(admin_queue,
@@ -2712,7 +2712,7 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl)
 	u32 tbl_size;
 	int i, rc;
 
-	tbl_size = (1ULL << rss->tbl_log_size) *
+	tbl_size = (u32)(1ULL << rss->tbl_log_size) *
 		sizeof(struct ena_admin_rss_ind_table_entry);
 
 	rc = ena_com_get_feature_ex(ena_dev, &get_resp,
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH V4 net-next 4/9] net: ena: fix coding style nits
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
                   ` (2 preceding siblings ...)
  2020-12-04 12:11 ` [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 5/9] net: ena: aggregate stats increase into a function akiyano
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj, Kuniyuki Iwashima

From: Arthur Kiyanovski <akiyano@amazon.com>

This commit fixes two nits. It does not generate any change to the
binary, thanks to gcc's optimizations.

  - use `count` instead of `channels->combined_count`
  - change return type from `int` to `bool`

Also add spaces and change the macro order in an OR assignment to make
the code easier to read.

Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_eth_com.c | 5 +++--
 drivers/net/ethernet/amazon/ena/ena_ethtool.c | 2 +-
 drivers/net/ethernet/amazon/ena/ena_netdev.h  | 4 ++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
index 85daeac219ec..c3be751e7379 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
@@ -578,6 +578,7 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
 	ena_com_rx_set_flags(io_cq, ena_rx_ctx, cdesc);
 
 	ena_rx_ctx->descs = nb_hw_desc;
+
 	return 0;
 }
 
@@ -602,8 +603,8 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 
 	desc->ctrl = ENA_ETH_IO_RX_DESC_FIRST_MASK |
 		     ENA_ETH_IO_RX_DESC_LAST_MASK |
-		     (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK) |
-		     ENA_ETH_IO_RX_DESC_COMP_REQ_MASK;
+		     ENA_ETH_IO_RX_DESC_COMP_REQ_MASK |
+		     (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK);
 
 	desc->req_id = req_id;
 
diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 6cdd9efe8df3..2ad44ae74cf6 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -839,7 +839,7 @@ static int ena_set_channels(struct net_device *netdev,
 	/* The check for max value is already done in ethtool */
 	if (count < ENA_MIN_NUM_IO_QUEUES ||
 	    (ena_xdp_present(adapter) &&
-	    !ena_xdp_legal_queue_count(adapter, channels->combined_count)))
+	    !ena_xdp_legal_queue_count(adapter, count)))
 		return -EINVAL;
 
 	return ena_update_queue_count(adapter, count);
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 30eb686749dc..c39f41711c31 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -433,8 +433,8 @@ static inline bool ena_xdp_present_ring(struct ena_ring *ring)
 	return !!ring->xdp_bpf_prog;
 }
 
-static inline int ena_xdp_legal_queue_count(struct ena_adapter *adapter,
-					    u32 queues)
+static inline bool ena_xdp_legal_queue_count(struct ena_adapter *adapter,
+					     u32 queues)
 {
 	return 2 * queues <= adapter->max_num_io_queues;
 }
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH V4 net-next 5/9] net: ena: aggregate stats increase into a function
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
                   ` (3 preceding siblings ...)
  2020-12-04 12:11 ` [PATCH V4 net-next 4/9] net: ena: fix coding style nits akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 6/9] net: ena: use xdp_frame in XDP TX flow akiyano
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

From: Arthur Kiyanovski <akiyano@amazon.com>

Introduce the ena_increase_stat() function to increase a statistic by a
given amount.
The function includes the
    - lock acquire (on 32bit machines)
    - stat increase
    - lock release (on 32bit machines)

line sequence that is ubiquitous across the driver.

The function increases a single stat at a time. Places where several
stats are increased together were deliberately not converted to use it,
to avoid calling the function once per stat, which both looks bad and
might decrease performance.
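
For reference, a rough sketch (not part of this patch) of the read side
that pairs with this helper; the ring and stat names mirror the driver,
but the function itself is illustrative only:

static u64 ena_read_tx_doorbells(struct ena_ring *tx_ring)
{
	unsigned int start;
	u64 val;

	/* On 32bit machines the seqcount retry loop guarantees a
	 * torn-free 64-bit snapshot; on 64bit it is a plain read.
	 */
	do {
		start = u64_stats_fetch_begin_irq(&tx_ring->syncp);
		val = tx_ring->tx_stats.doorbells;
	} while (u64_stats_fetch_retry_irq(&tx_ring->syncp, start));

	return val;
}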

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 167 ++++++++-----------
 1 file changed, 68 insertions(+), 99 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 371593ed0400..222bb576e30e 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -80,6 +80,15 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring,
 static int ena_create_io_tx_queues_in_range(struct ena_adapter *adapter,
 					    int first_index, int count);
 
+/* Increase a stat by cnt while holding syncp seqlock on 32bit machines */
+static void ena_increase_stat(u64 *statp, u64 cnt,
+			      struct u64_stats_sync *syncp)
+{
+	u64_stats_update_begin(syncp);
+	(*statp) += cnt;
+	u64_stats_update_end(syncp);
+}
+
 static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct ena_adapter *adapter = netdev_priv(dev);
@@ -92,9 +101,7 @@ static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue)
 		return;
 
 	adapter->reset_reason = ENA_REGS_RESET_OS_NETDEV_WD;
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.tx_timeout++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.tx_timeout, 1, &adapter->syncp);
 
 	netif_err(adapter, tx_err, dev, "Transmit time out\n");
 }
@@ -154,9 +161,8 @@ static int ena_xmit_common(struct net_device *dev,
 	if (unlikely(rc)) {
 		netif_err(adapter, tx_queued, dev,
 			  "Failed to prepare tx bufs\n");
-		u64_stats_update_begin(&ring->syncp);
-		ring->tx_stats.prepare_ctx_err++;
-		u64_stats_update_end(&ring->syncp);
+		ena_increase_stat(&ring->tx_stats.prepare_ctx_err, 1,
+				  &ring->syncp);
 		if (rc != -ENOMEM) {
 			adapter->reset_reason =
 				ENA_REGS_RESET_DRIVER_INVALID_STATE;
@@ -264,9 +270,8 @@ static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring,
 	return 0;
 
 error_report_dma_error:
-	u64_stats_update_begin(&xdp_ring->syncp);
-	xdp_ring->tx_stats.dma_mapping_err++;
-	u64_stats_update_end(&xdp_ring->syncp);
+	ena_increase_stat(&xdp_ring->tx_stats.dma_mapping_err, 1,
+			  &xdp_ring->syncp);
 	netif_warn(adapter, tx_queued, adapter->netdev, "Failed to map xdp buff\n");
 
 	xdp_return_frame_rx_napi(tx_info->xdpf);
@@ -320,9 +325,7 @@ static int ena_xdp_xmit_buff(struct net_device *dev,
 	 * has a mb
 	 */
 	ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
-	u64_stats_update_begin(&xdp_ring->syncp);
-	xdp_ring->tx_stats.doorbells++;
-	u64_stats_update_end(&xdp_ring->syncp);
+	ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, &xdp_ring->syncp);
 
 	return NETDEV_TX_OK;
 
@@ -369,9 +372,7 @@ static int ena_xdp_execute(struct ena_ring *rx_ring,
 		xdp_stat = &rx_ring->rx_stats.xdp_invalid;
 	}
 
-	u64_stats_update_begin(&rx_ring->syncp);
-	(*xdp_stat)++;
-	u64_stats_update_end(&rx_ring->syncp);
+	ena_increase_stat(xdp_stat, 1, &rx_ring->syncp);
 out:
 	rcu_read_unlock();
 
@@ -924,9 +925,8 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring,
 
 	page = alloc_page(gfp);
 	if (unlikely(!page)) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.page_alloc_fail++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.page_alloc_fail, 1,
+				  &rx_ring->syncp);
 		return -ENOMEM;
 	}
 
@@ -936,9 +936,8 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring,
 	dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE,
 			   DMA_BIDIRECTIONAL);
 	if (unlikely(dma_mapping_error(rx_ring->dev, dma))) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.dma_mapping_err++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.dma_mapping_err, 1,
+				  &rx_ring->syncp);
 
 		__free_page(page);
 		return -EIO;
@@ -1011,9 +1010,8 @@ static int ena_refill_rx_bufs(struct ena_ring *rx_ring, u32 num)
 	}
 
 	if (unlikely(i < num)) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.refil_partial++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.refil_partial, 1,
+				  &rx_ring->syncp);
 		netif_warn(rx_ring->adapter, rx_err, rx_ring->netdev,
 			   "Refilled rx qid %d with only %d buffers (from %d)\n",
 			   rx_ring->qid, i, num);
@@ -1189,9 +1187,7 @@ static int handle_invalid_req_id(struct ena_ring *ring, u16 req_id,
 			  "Invalid req_id: %hu\n",
 			  req_id);
 
-	u64_stats_update_begin(&ring->syncp);
-	ring->tx_stats.bad_req_id++;
-	u64_stats_update_end(&ring->syncp);
+	ena_increase_stat(&ring->tx_stats.bad_req_id, 1, &ring->syncp);
 
 	/* Trigger device reset */
 	ring->adapter->reset_reason = ENA_REGS_RESET_INV_TX_REQ_ID;
@@ -1302,9 +1298,8 @@ static int ena_clean_tx_irq(struct ena_ring *tx_ring, u32 budget)
 		if (netif_tx_queue_stopped(txq) && above_thresh &&
 		    test_bit(ENA_FLAG_DEV_UP, &tx_ring->adapter->flags)) {
 			netif_tx_wake_queue(txq);
-			u64_stats_update_begin(&tx_ring->syncp);
-			tx_ring->tx_stats.queue_wakeup++;
-			u64_stats_update_end(&tx_ring->syncp);
+			ena_increase_stat(&tx_ring->tx_stats.queue_wakeup, 1,
+					  &tx_ring->syncp);
 		}
 		__netif_tx_unlock(txq);
 	}
@@ -1323,9 +1318,8 @@ static struct sk_buff *ena_alloc_skb(struct ena_ring *rx_ring, bool frags)
 						rx_ring->rx_copybreak);
 
 	if (unlikely(!skb)) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.skb_alloc_fail++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.skb_alloc_fail, 1,
+				  &rx_ring->syncp);
 		netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev,
 			  "Failed to allocate skb. frags: %d\n", frags);
 		return NULL;
@@ -1453,9 +1447,8 @@ static void ena_rx_checksum(struct ena_ring *rx_ring,
 		     (ena_rx_ctx->l3_csum_err))) {
 		/* ipv4 checksum error */
 		skb->ip_summed = CHECKSUM_NONE;
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.bad_csum++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.bad_csum, 1,
+				  &rx_ring->syncp);
 		netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev,
 			  "RX IPv4 header checksum error\n");
 		return;
@@ -1466,9 +1459,8 @@ static void ena_rx_checksum(struct ena_ring *rx_ring,
 		   (ena_rx_ctx->l4_proto == ENA_ETH_IO_L4_PROTO_UDP))) {
 		if (unlikely(ena_rx_ctx->l4_csum_err)) {
 			/* TCP/UDP checksum error */
-			u64_stats_update_begin(&rx_ring->syncp);
-			rx_ring->rx_stats.bad_csum++;
-			u64_stats_update_end(&rx_ring->syncp);
+			ena_increase_stat(&rx_ring->rx_stats.bad_csum, 1,
+					  &rx_ring->syncp);
 			netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev,
 				  "RX L4 checksum error\n");
 			skb->ip_summed = CHECKSUM_NONE;
@@ -1477,13 +1469,11 @@ static void ena_rx_checksum(struct ena_ring *rx_ring,
 
 		if (likely(ena_rx_ctx->l4_csum_checked)) {
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
-			u64_stats_update_begin(&rx_ring->syncp);
-			rx_ring->rx_stats.csum_good++;
-			u64_stats_update_end(&rx_ring->syncp);
+			ena_increase_stat(&rx_ring->rx_stats.csum_good, 1,
+					  &rx_ring->syncp);
 		} else {
-			u64_stats_update_begin(&rx_ring->syncp);
-			rx_ring->rx_stats.csum_unchecked++;
-			u64_stats_update_end(&rx_ring->syncp);
+			ena_increase_stat(&rx_ring->rx_stats.csum_unchecked, 1,
+					  &rx_ring->syncp);
 			skb->ip_summed = CHECKSUM_NONE;
 		}
 	} else {
@@ -1675,14 +1665,12 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 	adapter = netdev_priv(rx_ring->netdev);
 
 	if (rc == -ENOSPC) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.bad_desc_num++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.bad_desc_num, 1,
+				  &rx_ring->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS;
 	} else {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.bad_req_id++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.bad_req_id, 1,
+				  &rx_ring->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID;
 	}
 
@@ -1743,9 +1731,8 @@ static void ena_unmask_interrupt(struct ena_ring *tx_ring,
 				tx_ring->smoothed_interval,
 				true);
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.unmask_interrupt++;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.unmask_interrupt, 1,
+			  &tx_ring->syncp);
 
 	/* It is a shared MSI-X.
 	 * Tx and Rx CQ have pointer to it.
@@ -2552,9 +2539,8 @@ static int ena_up(struct ena_adapter *adapter)
 	if (test_bit(ENA_FLAG_LINK_UP, &adapter->flags))
 		netif_carrier_on(adapter->netdev);
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.interface_up++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.interface_up, 1,
+			  &adapter->syncp);
 
 	set_bit(ENA_FLAG_DEV_UP, &adapter->flags);
 
@@ -2592,9 +2578,8 @@ static void ena_down(struct ena_adapter *adapter)
 
 	clear_bit(ENA_FLAG_DEV_UP, &adapter->flags);
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.interface_down++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.interface_down, 1,
+			  &adapter->syncp);
 
 	netif_carrier_off(adapter->netdev);
 	netif_tx_disable(adapter->netdev);
@@ -2822,15 +2807,12 @@ static int ena_check_and_linearize_skb(struct ena_ring *tx_ring,
 	    (header_len < tx_ring->tx_max_header_size))
 		return 0;
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.linearize++;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.linearize, 1, &tx_ring->syncp);
 
 	rc = skb_linearize(skb);
 	if (unlikely(rc)) {
-		u64_stats_update_begin(&tx_ring->syncp);
-		tx_ring->tx_stats.linearize_failed++;
-		u64_stats_update_end(&tx_ring->syncp);
+		ena_increase_stat(&tx_ring->tx_stats.linearize_failed, 1,
+				  &tx_ring->syncp);
 	}
 
 	return rc;
@@ -2870,9 +2852,8 @@ static int ena_tx_map_skb(struct ena_ring *tx_ring,
 					       tx_ring->push_buf_intermediate_buf);
 		*header_len = push_len;
 		if (unlikely(skb->data != *push_hdr)) {
-			u64_stats_update_begin(&tx_ring->syncp);
-			tx_ring->tx_stats.llq_buffer_copy++;
-			u64_stats_update_end(&tx_ring->syncp);
+			ena_increase_stat(&tx_ring->tx_stats.llq_buffer_copy, 1,
+					  &tx_ring->syncp);
 
 			delta = push_len - skb_head_len;
 		}
@@ -2929,9 +2910,8 @@ static int ena_tx_map_skb(struct ena_ring *tx_ring,
 	return 0;
 
 error_report_dma_error:
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.dma_mapping_err++;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.dma_mapping_err, 1,
+			  &tx_ring->syncp);
 	netif_warn(adapter, tx_queued, adapter->netdev, "Failed to map skb\n");
 
 	tx_info->skb = NULL;
@@ -3008,9 +2988,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			  __func__, qid);
 
 		netif_tx_stop_queue(txq);
-		u64_stats_update_begin(&tx_ring->syncp);
-		tx_ring->tx_stats.queue_stop++;
-		u64_stats_update_end(&tx_ring->syncp);
+		ena_increase_stat(&tx_ring->tx_stats.queue_stop, 1,
+				  &tx_ring->syncp);
 
 		/* There is a rare condition where this function decide to
 		 * stop the queue but meanwhile clean_tx_irq updates
@@ -3025,9 +3004,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq,
 						 ENA_TX_WAKEUP_THRESH)) {
 			netif_tx_wake_queue(txq);
-			u64_stats_update_begin(&tx_ring->syncp);
-			tx_ring->tx_stats.queue_wakeup++;
-			u64_stats_update_end(&tx_ring->syncp);
+			ena_increase_stat(&tx_ring->tx_stats.queue_wakeup, 1,
+					  &tx_ring->syncp);
 		}
 	}
 
@@ -3036,9 +3014,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		 * has a mb
 		 */
 		ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
-		u64_stats_update_begin(&tx_ring->syncp);
-		tx_ring->tx_stats.doorbells++;
-		u64_stats_update_end(&tx_ring->syncp);
+		ena_increase_stat(&tx_ring->tx_stats.doorbells, 1,
+				  &tx_ring->syncp);
 	}
 
 	return NETDEV_TX_OK;
@@ -3673,9 +3650,8 @@ static int check_missing_comp_in_tx_queue(struct ena_adapter *adapter,
 		rc = -EIO;
 	}
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.missed_tx += missed_tx;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.missed_tx, missed_tx,
+			  &tx_ring->syncp);
 
 	return rc;
 }
@@ -3758,9 +3734,8 @@ static void check_for_empty_rx_ring(struct ena_adapter *adapter)
 			rx_ring->empty_rx_queue++;
 
 			if (rx_ring->empty_rx_queue >= EMPTY_RX_REFILL) {
-				u64_stats_update_begin(&rx_ring->syncp);
-				rx_ring->rx_stats.empty_rx_ring++;
-				u64_stats_update_end(&rx_ring->syncp);
+				ena_increase_stat(&rx_ring->rx_stats.empty_rx_ring, 1,
+						  &rx_ring->syncp);
 
 				netif_err(adapter, drv, adapter->netdev,
 					  "Trigger refill for ring %d\n", i);
@@ -3790,9 +3765,8 @@ static void check_for_missing_keep_alive(struct ena_adapter *adapter)
 	if (unlikely(time_is_before_jiffies(keep_alive_expired))) {
 		netif_err(adapter, drv, adapter->netdev,
 			  "Keep alive watchdog timeout.\n");
-		u64_stats_update_begin(&adapter->syncp);
-		adapter->dev_stats.wd_expired++;
-		u64_stats_update_end(&adapter->syncp);
+		ena_increase_stat(&adapter->dev_stats.wd_expired, 1,
+				  &adapter->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_KEEP_ALIVE_TO;
 		set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
 	}
@@ -3803,9 +3777,8 @@ static void check_for_admin_com_state(struct ena_adapter *adapter)
 	if (unlikely(!ena_com_get_admin_running_state(adapter->ena_dev))) {
 		netif_err(adapter, drv, adapter->netdev,
 			  "ENA admin queue is not in running state!\n");
-		u64_stats_update_begin(&adapter->syncp);
-		adapter->dev_stats.admin_q_pause++;
-		u64_stats_update_end(&adapter->syncp);
+		ena_increase_stat(&adapter->dev_stats.admin_q_pause, 1,
+				  &adapter->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO;
 		set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
 	}
@@ -4441,9 +4414,7 @@ static int __maybe_unused ena_suspend(struct device *dev_d)
 	struct pci_dev *pdev = to_pci_dev(dev_d);
 	struct ena_adapter *adapter = pci_get_drvdata(pdev);
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.suspend++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.suspend, 1, &adapter->syncp);
 
 	rtnl_lock();
 	if (unlikely(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) {
@@ -4464,9 +4435,7 @@ static int __maybe_unused ena_resume(struct device *dev_d)
 	struct ena_adapter *adapter = dev_get_drvdata(dev_d);
 	int rc;
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.resume++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.resume, 1, &adapter->syncp);
 
 	rtnl_lock();
 	rc = ena_restore_device(adapter);
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH V4 net-next 6/9] net: ena: use xdp_frame in XDP TX flow
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
                   ` (4 preceding siblings ...)
  2020-12-04 12:11 ` [PATCH V4 net-next 5/9] net: ena: aggregate stats increase into a function akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-06 20:10   ` Maciej Fijalkowski
  2020-12-04 12:11 ` [PATCH V4 net-next 7/9] net: ena: introduce XDP redirect implementation akiyano
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

From: Arthur Kiyanovski <akiyano@amazon.com>

Rename the ena_xdp_xmit_buff() function to ena_xdp_xmit_frame() and pass
it an xdp_frame struct instead of an xdp_buff.
This change lays the groundwork for the XDP redirect implementation,
which uses xdp_frames when 'xmit'ing packets.

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 46 ++++++++++----------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 222bb576e30e..cbb07548409a 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -233,18 +233,18 @@ static int ena_xdp_io_poll(struct napi_struct *napi, int budget)
 	return ret;
 }
 
-static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring,
-			       struct ena_tx_buffer *tx_info,
-			       struct xdp_buff *xdp,
-			       void **push_hdr,
-			       u32 *push_len)
+static int ena_xdp_tx_map_frame(struct ena_ring *xdp_ring,
+				struct ena_tx_buffer *tx_info,
+				struct xdp_frame *xdpf,
+				void **push_hdr,
+				u32 *push_len)
 {
 	struct ena_adapter *adapter = xdp_ring->adapter;
 	struct ena_com_buf *ena_buf;
 	dma_addr_t dma = 0;
 	u32 size;
 
-	tx_info->xdpf = xdp_convert_buff_to_frame(xdp);
+	tx_info->xdpf = xdpf;
 	size = tx_info->xdpf->len;
 	ena_buf = tx_info->bufs;
 
@@ -281,29 +281,31 @@ static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring,
 	return -EINVAL;
 }
 
-static int ena_xdp_xmit_buff(struct net_device *dev,
-			     struct xdp_buff *xdp,
-			     int qid,
-			     struct ena_rx_buffer *rx_info)
+static int ena_xdp_xmit_frame(struct net_device *dev,
+			      struct xdp_frame *xdpf,
+			      int qid)
 {
 	struct ena_adapter *adapter = netdev_priv(dev);
 	struct ena_com_tx_ctx ena_tx_ctx = {};
 	struct ena_tx_buffer *tx_info;
 	struct ena_ring *xdp_ring;
+	struct page *rx_buff_page;
 	u16 next_to_use, req_id;
 	int rc;
 	void *push_hdr;
 	u32 push_len;
 
+	rx_buff_page = virt_to_page(xdpf->data);
+
 	xdp_ring = &adapter->tx_ring[qid];
 	next_to_use = xdp_ring->next_to_use;
 	req_id = xdp_ring->free_ids[next_to_use];
 	tx_info = &xdp_ring->tx_buffer_info[req_id];
 	tx_info->num_of_bufs = 0;
-	page_ref_inc(rx_info->page);
-	tx_info->xdp_rx_page = rx_info->page;
+	page_ref_inc(rx_buff_page);
+	tx_info->xdp_rx_page = rx_buff_page;
 
-	rc = ena_xdp_tx_map_buff(xdp_ring, tx_info, xdp, &push_hdr, &push_len);
+	rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len);
 	if (unlikely(rc))
 		goto error_drop_packet;
 
@@ -318,7 +320,7 @@ static int ena_xdp_xmit_buff(struct net_device *dev,
 			     tx_info,
 			     &ena_tx_ctx,
 			     next_to_use,
-			     xdp->data_end - xdp->data);
+			     xdpf->len);
 	if (rc)
 		goto error_unmap_dma;
 	/* trigger the dma engine. ena_com_write_sq_doorbell()
@@ -337,12 +339,11 @@ static int ena_xdp_xmit_buff(struct net_device *dev,
 	return NETDEV_TX_OK;
 }
 
-static int ena_xdp_execute(struct ena_ring *rx_ring,
-			   struct xdp_buff *xdp,
-			   struct ena_rx_buffer *rx_info)
+static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 {
 	struct bpf_prog *xdp_prog;
 	u32 verdict = XDP_PASS;
+	struct xdp_frame *xdpf;
 	u64 *xdp_stat;
 
 	rcu_read_lock();
@@ -354,10 +355,9 @@ static int ena_xdp_execute(struct ena_ring *rx_ring,
 	verdict = bpf_prog_run_xdp(xdp_prog, xdp);
 
 	if (verdict == XDP_TX) {
-		ena_xdp_xmit_buff(rx_ring->netdev,
-				  xdp,
-				  rx_ring->qid + rx_ring->adapter->num_io_queues,
-				  rx_info);
+		xdpf = xdp_convert_buff_to_frame(xdp);
+		ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
+				   rx_ring->qid + rx_ring->adapter->num_io_queues);
 
 		xdp_stat = &rx_ring->rx_stats.xdp_tx;
 	} else if (unlikely(verdict == XDP_ABORTED)) {
@@ -1521,7 +1521,7 @@ static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 	if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU))
 		return XDP_DROP;
 
-	ret = ena_xdp_execute(rx_ring, xdp, rx_info);
+	ret = ena_xdp_execute(rx_ring, xdp);
 
 	/* The xdp program might expand the headers */
 	if (ret == XDP_PASS) {
@@ -1600,7 +1600,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 		if (unlikely(!skb)) {
 			/* The page might not actually be freed here since the
 			 * page reference count is incremented in
-			 * ena_xdp_xmit_buff(), and it will be decreased only
+			 * ena_xdp_xmit_frame(), and it will be decreased only
 			 * when send completion was received from the device
 			 */
 			if (xdp_verdict == XDP_TX)
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH V4 net-next 7/9] net: ena: introduce XDP redirect implementation
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
                   ` (5 preceding siblings ...)
  2020-12-04 12:11 ` [PATCH V4 net-next 6/9] net: ena: use xdp_frame in XDP TX flow akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-06  0:15   ` Jakub Kicinski
  2020-12-04 12:11 ` [PATCH V4 net-next 8/9] net: ena: use xdp_return_frame() to free xdp frames akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT akiyano
  8 siblings, 1 reply; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

From: Arthur Kiyanovski <akiyano@amazon.com>

This patch adds partial support for the XDP_REDIRECT directive, which
instructs the driver to pass the packet to an interface specified by the
program. The directive is passed to the driver by calling bpf_redirect()
or bpf_redirect_map() functions from the eBPF program.

To lay the groundwork for integration with the existing XDP TX
implementation, the patch removes the redundant page ref count increase
in ena_xdp_xmit_frame() and the matching decrease in ena_clean_rx_irq().
Instead, it DMA unmaps only the descriptors for which an XDP TX or
REDIRECT verdict was received.

The XDP Redirect support is still missing the .ndo_xdp_xmit()
implementation, which allows redirecting packets to an ENA interface;
it is added in a later patch of this series.
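
As an illustration only (not part of this patch), a minimal XDP program
that produces the XDP_REDIRECT verdict handled here; the DEVMAP name and
the fixed key are assumptions made for the example:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 8);
	__type(key, __u32);
	__type(value, __u32);
} tx_port SEC(".maps");

SEC("xdp")
int redirect_all(struct xdp_md *ctx)
{
	__u32 key = 0;	/* egress slot filled in by the loader */

	/* On success this returns XDP_REDIRECT, which makes the driver
	 * call xdp_do_redirect() and, at the end of the NAPI loop,
	 * xdp_do_flush_map().
	 */
	return bpf_redirect_map(&tx_port, key, 0);
}

char _license[] SEC("license") = "GPL";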

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_ethtool.c |  1 +
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 74 +++++++++++--------
 drivers/net/ethernet/amazon/ena/ena_netdev.h  |  1 +
 3 files changed, 47 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 2ad44ae74cf6..d6cc7aa612b7 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -95,6 +95,7 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 	ENA_STAT_RX_ENTRY(xdp_pass),
 	ENA_STAT_RX_ENTRY(xdp_tx),
 	ENA_STAT_RX_ENTRY(xdp_invalid),
+	ENA_STAT_RX_ENTRY(xdp_redirect),
 };
 
 static const struct ena_stats ena_stats_ena_com_strings[] = {
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index cbb07548409a..25e2e2369f45 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -289,21 +289,17 @@ static int ena_xdp_xmit_frame(struct net_device *dev,
 	struct ena_com_tx_ctx ena_tx_ctx = {};
 	struct ena_tx_buffer *tx_info;
 	struct ena_ring *xdp_ring;
-	struct page *rx_buff_page;
 	u16 next_to_use, req_id;
 	int rc;
 	void *push_hdr;
 	u32 push_len;
 
-	rx_buff_page = virt_to_page(xdpf->data);
-
 	xdp_ring = &adapter->tx_ring[qid];
 	next_to_use = xdp_ring->next_to_use;
 	req_id = xdp_ring->free_ids[next_to_use];
 	tx_info = &xdp_ring->tx_buffer_info[req_id];
 	tx_info->num_of_bufs = 0;
-	page_ref_inc(rx_buff_page);
-	tx_info->xdp_rx_page = rx_buff_page;
+	tx_info->xdp_rx_page = virt_to_page(xdpf->data);
 
 	rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len);
 	if (unlikely(rc))
@@ -335,7 +331,7 @@ static int ena_xdp_xmit_frame(struct net_device *dev,
 	ena_unmap_tx_buff(xdp_ring, tx_info);
 	tx_info->xdpf = NULL;
 error_drop_packet:
-	__free_page(tx_info->xdp_rx_page);
+	xdp_return_frame(xdpf);
 	return NETDEV_TX_OK;
 }
 
@@ -354,20 +350,28 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 
 	verdict = bpf_prog_run_xdp(xdp_prog, xdp);
 
-	if (verdict == XDP_TX) {
+	switch (verdict) {
+	case XDP_TX:
 		xdpf = xdp_convert_buff_to_frame(xdp);
 		ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
 				   rx_ring->qid + rx_ring->adapter->num_io_queues);
-
 		xdp_stat = &rx_ring->rx_stats.xdp_tx;
-	} else if (unlikely(verdict == XDP_ABORTED)) {
+		break;
+	case XDP_REDIRECT:
+		xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
+		xdp_stat = &rx_ring->rx_stats.xdp_redirect;
+		break;
+	case XDP_ABORTED:
 		trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);
 		xdp_stat = &rx_ring->rx_stats.xdp_aborted;
-	} else if (unlikely(verdict == XDP_DROP)) {
+		break;
+	case XDP_DROP:
 		xdp_stat = &rx_ring->rx_stats.xdp_drop;
-	} else if (unlikely(verdict == XDP_PASS)) {
+		break;
+	case XDP_PASS:
 		xdp_stat = &rx_ring->rx_stats.xdp_pass;
-	} else {
+		break;
+	default:
 		bpf_warn_invalid_xdp_action(verdict);
 		xdp_stat = &rx_ring->rx_stats.xdp_invalid;
 	}
@@ -953,11 +957,20 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring,
 	return 0;
 }
 
+static void ena_unmap_rx_buff(struct ena_ring *rx_ring,
+			      struct ena_rx_buffer *rx_info)
+{
+	struct ena_com_buf *ena_buf = &rx_info->ena_buf;
+
+	dma_unmap_page(rx_ring->dev, ena_buf->paddr - rx_ring->rx_headroom,
+		       ENA_PAGE_SIZE,
+		       DMA_BIDIRECTIONAL);
+}
+
 static void ena_free_rx_page(struct ena_ring *rx_ring,
 			     struct ena_rx_buffer *rx_info)
 {
 	struct page *page = rx_info->page;
-	struct ena_com_buf *ena_buf = &rx_info->ena_buf;
 
 	if (unlikely(!page)) {
 		netif_warn(rx_ring->adapter, rx_err, rx_ring->netdev,
@@ -965,9 +978,7 @@ static void ena_free_rx_page(struct ena_ring *rx_ring,
 		return;
 	}
 
-	dma_unmap_page(rx_ring->dev, ena_buf->paddr - rx_ring->rx_headroom,
-		       ENA_PAGE_SIZE,
-		       DMA_BIDIRECTIONAL);
+	ena_unmap_rx_buff(rx_ring, rx_info);
 
 	__free_page(page);
 	rx_info->page = NULL;
@@ -1391,9 +1402,7 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
 		return NULL;
 
 	do {
-		dma_unmap_page(rx_ring->dev,
-			       dma_unmap_addr(&rx_info->ena_buf, paddr),
-			       ENA_PAGE_SIZE, DMA_BIDIRECTIONAL);
+		ena_unmap_rx_buff(rx_ring, rx_info);
 
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page,
 				rx_info->page_offset, len, ENA_PAGE_SIZE);
@@ -1551,6 +1560,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 	struct sk_buff *skb;
 	int refill_required;
 	struct xdp_buff xdp;
+	int xdp_flags = 0;
 	int total_len = 0;
 	int xdp_verdict;
 	int rc = 0;
@@ -1598,22 +1608,25 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 					 &next_to_clean);
 
 		if (unlikely(!skb)) {
-			/* The page might not actually be freed here since the
-			 * page reference count is incremented in
-			 * ena_xdp_xmit_frame(), and it will be decreased only
-			 * when send completion was received from the device
-			 */
-			if (xdp_verdict == XDP_TX)
-				ena_free_rx_page(rx_ring,
-						 &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id]);
 			for (i = 0; i < ena_rx_ctx.descs; i++) {
-				rx_ring->free_ids[next_to_clean] =
-					rx_ring->ena_bufs[i].req_id;
+				int req_id = rx_ring->ena_bufs[i].req_id;
+
+				rx_ring->free_ids[next_to_clean] = req_id;
 				next_to_clean =
 					ENA_RX_RING_IDX_NEXT(next_to_clean,
 							     rx_ring->ring_size);
+
+				/* Packet was passed for transmission, unmap it
+				 * from RX side.
+				 */
+				if (xdp_verdict == XDP_TX || xdp_verdict == XDP_REDIRECT) {
+					ena_unmap_rx_buff(rx_ring,
+							  &rx_ring->rx_buffer_info[req_id]);
+					rx_ring->rx_buffer_info[req_id].page = NULL;
+				}
 			}
 			if (xdp_verdict != XDP_PASS) {
+				xdp_flags |= xdp_verdict;
 				res_budget--;
 				continue;
 			}
@@ -1659,6 +1672,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 		ena_refill_rx_bufs(rx_ring, refill_required);
 	}
 
+	if (xdp_flags & XDP_REDIRECT)
+		xdp_do_flush_map();
+
 	return work_done;
 
 error:
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index c39f41711c31..0fef876c23eb 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -239,6 +239,7 @@ struct ena_stats_rx {
 	u64 xdp_pass;
 	u64 xdp_tx;
 	u64 xdp_invalid;
+	u64 xdp_redirect;
 };
 
 struct ena_ring {
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH V4 net-next 8/9] net: ena: use xdp_return_frame() to free xdp frames
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
                   ` (6 preceding siblings ...)
  2020-12-04 12:11 ` [PATCH V4 net-next 7/9] net: ena: introduce XDP redirect implementation akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-04 12:11 ` [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT akiyano
  8 siblings, 0 replies; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

From: Arthur Kiyanovski <akiyano@amazon.com>

The XDP subsystem has a function to free XDP frames and their associated
pages. Using this function helps the driver's XDP implementation adjust
to future changes in the kernel's XDP subsystem (e.g. the introduction
of XDP multi-buffer).

Also, remove 'xdp_rx_page' field from ena_tx_buffer struct since it is
no longer used.

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 3 +--
 drivers/net/ethernet/amazon/ena/ena_netdev.h | 6 ------
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 25e2e2369f45..0d077a626604 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -299,7 +299,6 @@ static int ena_xdp_xmit_frame(struct net_device *dev,
 	req_id = xdp_ring->free_ids[next_to_use];
 	tx_info = &xdp_ring->tx_buffer_info[req_id];
 	tx_info->num_of_bufs = 0;
-	tx_info->xdp_rx_page = virt_to_page(xdpf->data);
 
 	rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len);
 	if (unlikely(rc))
@@ -1828,7 +1827,7 @@ static int ena_clean_xdp_irq(struct ena_ring *xdp_ring, u32 budget)
 		tx_pkts++;
 		total_done += tx_info->tx_descs;
 
-		__free_page(tx_info->xdp_rx_page);
+		xdp_return_frame(xdpf);
 		xdp_ring->free_ids[next_to_clean] = req_id;
 		next_to_clean = ENA_TX_RING_IDX_NEXT(next_to_clean,
 						     xdp_ring->ring_size);
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 0fef876c23eb..fed79c50a870 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -170,12 +170,6 @@ struct ena_tx_buffer {
 	 * the xdp queues
 	 */
 	struct xdp_frame *xdpf;
-	/* The rx page for the rx buffer that was received in rx and
-	 * re transmitted on xdp tx queues as a result of XDP_TX action.
-	 * We need to free the page once we finished cleaning the buffer in
-	 * clean_xdp_irq()
-	 */
-	struct page *xdp_rx_page;
 
 	/* Indicate if bufs[0] map the linear data of the skb. */
 	u8 map_linear_data;
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT
  2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
                   ` (7 preceding siblings ...)
  2020-12-04 12:11 ` [PATCH V4 net-next 8/9] net: ena: use xdp_return_frame() to free xdp frames akiyano
@ 2020-12-04 12:11 ` akiyano
  2020-12-06 20:22   ` Maciej Fijalkowski
  8 siblings, 1 reply; 19+ messages in thread
From: akiyano @ 2020-12-04 12:11 UTC (permalink / raw)
  To: davem, kuba, netdev
  Cc: Arthur Kiyanovski, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

From: Arthur Kiyanovski <akiyano@amazon.com>

This patch implements the ndo_xdp_xmit() net_device operation, which is
called when a packet is redirected to this driver using an
XDP_REDIRECT directive.

The function receives an array of xdp frames that it needs to xmit.
The TX queues that are used to xmit these frames are the XDP
queues used by the XDP_TX flow. Therefore a lock is added to synchronize
both flows (XDP_TX and XDP_REDIRECT).
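
To illustrate how this entry point gets exercised (a sketch under
assumed names only -- the object file, map, program and interface names
are examples, not part of the patch), a loader can attach a redirect
program on one device and point its DEVMAP entry at the ENA interface,
after which the redirected frames arrive here via .ndo_xdp_xmit:

#include <net/if.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int attach_redirect_to_ena(void)
{
	struct bpf_object *obj;
	int map_fd, prog_fd;
	__u32 key = 0;
	__u32 ena_ifindex = if_nametoindex("eth1");	/* redirect target (ENA) */
	int rx_ifindex = if_nametoindex("eth0");	/* device running the program */

	obj = bpf_object__open_file("xdp_redirect.o", NULL);
	if (libbpf_get_error(obj) || bpf_object__load(obj))
		return -1;

	/* Point the DEVMAP slot used by the program at the ENA device */
	map_fd = bpf_object__find_map_fd_by_name(obj, "tx_port");
	if (map_fd < 0 || bpf_map_update_elem(map_fd, &key, &ena_ifindex, 0))
		return -1;

	/* Attach the program to the ingress device */
	prog_fd = bpf_program__fd(
		bpf_object__find_program_by_name(obj, "redirect_all"));
	return bpf_set_link_xdp_fd(rx_ifindex, prog_fd, 0);
}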

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 83 +++++++++++++++++---
 drivers/net/ethernet/amazon/ena/ena_netdev.h |  1 +
 2 files changed, 72 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 0d077a626604..6c5d8b8c4d13 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -281,20 +281,18 @@ static int ena_xdp_tx_map_frame(struct ena_ring *xdp_ring,
 	return -EINVAL;
 }
 
-static int ena_xdp_xmit_frame(struct net_device *dev,
+static int ena_xdp_xmit_frame(struct ena_ring *xdp_ring,
+			      struct net_device *dev,
 			      struct xdp_frame *xdpf,
-			      int qid)
+			      int flags)
 {
-	struct ena_adapter *adapter = netdev_priv(dev);
 	struct ena_com_tx_ctx ena_tx_ctx = {};
 	struct ena_tx_buffer *tx_info;
-	struct ena_ring *xdp_ring;
 	u16 next_to_use, req_id;
-	int rc;
 	void *push_hdr;
 	u32 push_len;
+	int rc;
 
-	xdp_ring = &adapter->tx_ring[qid];
 	next_to_use = xdp_ring->next_to_use;
 	req_id = xdp_ring->free_ids[next_to_use];
 	tx_info = &xdp_ring->tx_buffer_info[req_id];
@@ -321,25 +319,76 @@ static int ena_xdp_xmit_frame(struct net_device *dev,
 	/* trigger the dma engine. ena_com_write_sq_doorbell()
 	 * has a mb
 	 */
-	ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
-	ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, &xdp_ring->syncp);
+	if (flags & XDP_XMIT_FLUSH) {
+		ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
+		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
+				  &xdp_ring->syncp);
+	}
 
-	return NETDEV_TX_OK;
+	return rc;
 
 error_unmap_dma:
 	ena_unmap_tx_buff(xdp_ring, tx_info);
 	tx_info->xdpf = NULL;
 error_drop_packet:
 	xdp_return_frame(xdpf);
-	return NETDEV_TX_OK;
+	return rc;
+}
+
+static int ena_xdp_xmit(struct net_device *dev, int n,
+			struct xdp_frame **frames, u32 flags)
+{
+	struct ena_adapter *adapter = netdev_priv(dev);
+	int qid, i, err, drops = 0;
+	struct ena_ring *xdp_ring;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	if (!test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
+		return -ENETDOWN;
+
+	/* We assume that all rings have the same XDP program */
+	if (!READ_ONCE(adapter->rx_ring->xdp_bpf_prog))
+		return -ENXIO;
+
+	qid = smp_processor_id() % adapter->xdp_num_queues;
+	qid += adapter->xdp_first_ring;
+	xdp_ring = &adapter->tx_ring[qid];
+
+	/* Other CPU ids might try to send through this queue */
+	spin_lock(&xdp_ring->xdp_tx_lock);
+
+	for (i = 0; i < n; i++) {
+		err = ena_xdp_xmit_frame(xdp_ring, dev, frames[i], 0);
+		/* The descriptor is freed by ena_xdp_xmit_frame in case
+		 * of an error.
+		 */
+		if (err)
+			drops++;
+	}
+
+	/* Ring doorbell to make device aware of the packets */
+	if (flags & XDP_XMIT_FLUSH) {
+		ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
+		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
+				  &xdp_ring->syncp);
+	}
+
+	spin_unlock(&xdp_ring->xdp_tx_lock);
+
+	/* Return number of packets sent */
+	return n - drops;
 }
 
 static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 {
 	struct bpf_prog *xdp_prog;
+	struct ena_ring *xdp_ring;
 	u32 verdict = XDP_PASS;
 	struct xdp_frame *xdpf;
 	u64 *xdp_stat;
+	int qid;
 
 	rcu_read_lock();
 	xdp_prog = READ_ONCE(rx_ring->xdp_bpf_prog);
@@ -352,8 +401,16 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 	switch (verdict) {
 	case XDP_TX:
 		xdpf = xdp_convert_buff_to_frame(xdp);
-		ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
-				   rx_ring->qid + rx_ring->adapter->num_io_queues);
+		/* Find xmit queue */
+		qid = rx_ring->qid + rx_ring->adapter->num_io_queues;
+		xdp_ring = &rx_ring->adapter->tx_ring[qid];
+
+		/* The XDP queues are shared between XDP_TX and XDP_REDIRECT */
+		spin_lock(&xdp_ring->xdp_tx_lock);
+
+		ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, xdpf, XDP_XMIT_FLUSH);
+
+		spin_unlock(&xdp_ring->xdp_tx_lock);
 		xdp_stat = &rx_ring->rx_stats.xdp_tx;
 		break;
 	case XDP_REDIRECT:
@@ -644,6 +701,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter,
 		txr->smoothed_interval =
 			ena_com_get_nonadaptive_moderation_interval_tx(ena_dev);
 		txr->disable_meta_caching = adapter->disable_meta_caching;
+		spin_lock_init(&txr->xdp_tx_lock);
 
 		/* Don't init RX queues for xdp queues */
 		if (!ENA_IS_XDP_INDEX(adapter, i)) {
@@ -3236,6 +3294,7 @@ static const struct net_device_ops ena_netdev_ops = {
 	.ndo_set_mac_address	= NULL,
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_bpf		= ena_xdp,
+	.ndo_xdp_xmit		= ena_xdp_xmit,
 };
 
 static int ena_device_validate_params(struct ena_adapter *adapter,
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index fed79c50a870..74af15d62ee1 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -258,6 +258,7 @@ struct ena_ring {
 	struct ena_com_io_sq *ena_com_io_sq;
 	struct bpf_prog *xdp_bpf_prog;
 	struct xdp_rxq_info xdp_rxq;
+	spinlock_t xdp_tx_lock;	/* synchronize XDP TX/Redirect traffic */
 
 	u16 next_to_use;
 	u16 next_to_clean;
-- 
2.23.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 7/9] net: ena: introduce XDP redirect implementation
  2020-12-04 12:11 ` [PATCH V4 net-next 7/9] net: ena: introduce XDP redirect implementation akiyano
@ 2020-12-06  0:15   ` Jakub Kicinski
  2020-12-07 19:18     ` Shay Agroskin
  0 siblings, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2020-12-06  0:15 UTC (permalink / raw)
  To: akiyano
  Cc: davem, netdev, dwmw, zorik, matua, saeedb, msw, aliguori, nafea,
	gtzalik, netanel, alisaidi, benh, ndagan, shayagr, sameehj

On Fri, 4 Dec 2020 14:11:13 +0200 akiyano@amazon.com wrote:
> +	case XDP_REDIRECT:
> +		xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
> +		xdp_stat = &rx_ring->rx_stats.xdp_redirect;
> +		break;

Don't you have to check if xdp_do_redirect() returned an error or not?

You should CC XDP maintainers on the XDP patches.
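
(For context, one rough sketch of such a check -- an illustration of the
reviewer's point, not necessarily the final fix -- would be to fall back
to the aborted accounting when the redirect fails:)

	case XDP_REDIRECT:
		if (likely(!xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog))) {
			xdp_stat = &rx_ring->rx_stats.xdp_redirect;
			break;
		}
		/* Redirect failed: record the exception and count it as
		 * aborted instead of as a successful redirect.
		 */
		trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);
		xdp_stat = &rx_ring->rx_stats.xdp_aborted;
		break;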

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 6/9] net: ena: use xdp_frame in XDP TX flow
  2020-12-04 12:11 ` [PATCH V4 net-next 6/9] net: ena: use xdp_frame in XDP TX flow akiyano
@ 2020-12-06 20:10   ` Maciej Fijalkowski
  2020-12-07 19:16     ` Shay Agroskin
  0 siblings, 1 reply; 19+ messages in thread
From: Maciej Fijalkowski @ 2020-12-06 20:10 UTC (permalink / raw)
  To: akiyano
  Cc: davem, kuba, netdev, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

On Fri, Dec 04, 2020 at 02:11:12PM +0200, akiyano@amazon.com wrote:
> From: Arthur Kiyanovski <akiyano@amazon.com>
> 
> Rename the ena_xdp_xmit_buff() function to ena_xdp_xmit_frame() and pass
> it an xdp_frame struct instead of xdp_buff.
> This change lays the ground for XDP redirect implementation which uses
> xdp_frames when 'xmit'ing packets.
> 
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
> ---
>  drivers/net/ethernet/amazon/ena/ena_netdev.c | 46 ++++++++++----------
>  1 file changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> index 222bb576e30e..cbb07548409a 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> @@ -233,18 +233,18 @@ static int ena_xdp_io_poll(struct napi_struct *napi, int budget)
>  	return ret;
>  }
>  
> -static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring,
> -			       struct ena_tx_buffer *tx_info,
> -			       struct xdp_buff *xdp,
> -			       void **push_hdr,
> -			       u32 *push_len)
> +static int ena_xdp_tx_map_frame(struct ena_ring *xdp_ring,
> +				struct ena_tx_buffer *tx_info,
> +				struct xdp_frame *xdpf,
> +				void **push_hdr,
> +				u32 *push_len)
>  {
>  	struct ena_adapter *adapter = xdp_ring->adapter;
>  	struct ena_com_buf *ena_buf;
>  	dma_addr_t dma = 0;
>  	u32 size;
>  
> -	tx_info->xdpf = xdp_convert_buff_to_frame(xdp);
> +	tx_info->xdpf = xdpf;
>  	size = tx_info->xdpf->len;
>  	ena_buf = tx_info->bufs;
>  
> @@ -281,29 +281,31 @@ static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring,
>  	return -EINVAL;
>  }
>  
> -static int ena_xdp_xmit_buff(struct net_device *dev,
> -			     struct xdp_buff *xdp,
> -			     int qid,
> -			     struct ena_rx_buffer *rx_info)
> +static int ena_xdp_xmit_frame(struct net_device *dev,
> +			      struct xdp_frame *xdpf,
> +			      int qid)
>  {
>  	struct ena_adapter *adapter = netdev_priv(dev);
>  	struct ena_com_tx_ctx ena_tx_ctx = {};
>  	struct ena_tx_buffer *tx_info;
>  	struct ena_ring *xdp_ring;
> +	struct page *rx_buff_page;
>  	u16 next_to_use, req_id;
>  	int rc;
>  	void *push_hdr;
>  	u32 push_len;
>  
> +	rx_buff_page = virt_to_page(xdpf->data);
> +
>  	xdp_ring = &adapter->tx_ring[qid];
>  	next_to_use = xdp_ring->next_to_use;
>  	req_id = xdp_ring->free_ids[next_to_use];
>  	tx_info = &xdp_ring->tx_buffer_info[req_id];
>  	tx_info->num_of_bufs = 0;
> -	page_ref_inc(rx_info->page);
> -	tx_info->xdp_rx_page = rx_info->page;
> +	page_ref_inc(rx_buff_page);
> +	tx_info->xdp_rx_page = rx_buff_page;
>  
> -	rc = ena_xdp_tx_map_buff(xdp_ring, tx_info, xdp, &push_hdr, &push_len);
> +	rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len);
>  	if (unlikely(rc))
>  		goto error_drop_packet;
>  
> @@ -318,7 +320,7 @@ static int ena_xdp_xmit_buff(struct net_device *dev,
>  			     tx_info,
>  			     &ena_tx_ctx,
>  			     next_to_use,
> -			     xdp->data_end - xdp->data);
> +			     xdpf->len);
>  	if (rc)
>  		goto error_unmap_dma;
>  	/* trigger the dma engine. ena_com_write_sq_doorbell()
> @@ -337,12 +339,11 @@ static int ena_xdp_xmit_buff(struct net_device *dev,
>  	return NETDEV_TX_OK;
>  }
>  
> -static int ena_xdp_execute(struct ena_ring *rx_ring,
> -			   struct xdp_buff *xdp,
> -			   struct ena_rx_buffer *rx_info)
> +static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
>  {
>  	struct bpf_prog *xdp_prog;
>  	u32 verdict = XDP_PASS;
> +	struct xdp_frame *xdpf;
>  	u64 *xdp_stat;
>  
>  	rcu_read_lock();
> @@ -354,10 +355,9 @@ static int ena_xdp_execute(struct ena_ring *rx_ring,
>  	verdict = bpf_prog_run_xdp(xdp_prog, xdp);
>  
>  	if (verdict == XDP_TX) {
> -		ena_xdp_xmit_buff(rx_ring->netdev,
> -				  xdp,
> -				  rx_ring->qid + rx_ring->adapter->num_io_queues,
> -				  rx_info);
> +		xdpf = xdp_convert_buff_to_frame(xdp);

Similar to Jakub's comment on another patch, xdp_convert_buff_to_frame can
return NULL and from what I can tell you never check that in
ena_xdp_xmit_frame.
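
(As an illustration of that point -- a sketch only, not the final fix --
the conversion result could be checked before handing it to the xmit
path, e.g.:)

	if (verdict == XDP_TX) {
		xdpf = xdp_convert_buff_to_frame(xdp);
		if (unlikely(!xdpf)) {
			/* Conversion can fail (e.g. not enough headroom
			 * for the frame metadata); drop instead of passing
			 * NULL to ena_xdp_xmit_frame().
			 */
			trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);
			xdp_stat = &rx_ring->rx_stats.xdp_aborted;
		} else {
			ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
					   rx_ring->qid + rx_ring->adapter->num_io_queues);
			xdp_stat = &rx_ring->rx_stats.xdp_tx;
		}
	}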

> +		ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
> +				   rx_ring->qid + rx_ring->adapter->num_io_queues);
>  
>  		xdp_stat = &rx_ring->rx_stats.xdp_tx;
>  	} else if (unlikely(verdict == XDP_ABORTED)) {
> @@ -1521,7 +1521,7 @@ static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
>  	if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU))
>  		return XDP_DROP;
>  
> -	ret = ena_xdp_execute(rx_ring, xdp, rx_info);
> +	ret = ena_xdp_execute(rx_ring, xdp);
>  
>  	/* The xdp program might expand the headers */
>  	if (ret == XDP_PASS) {
> @@ -1600,7 +1600,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
>  		if (unlikely(!skb)) {
>  			/* The page might not actually be freed here since the
>  			 * page reference count is incremented in
> -			 * ena_xdp_xmit_buff(), and it will be decreased only
> +			 * ena_xdp_xmit_frame(), and it will be decreased only
>  			 * when send completion was received from the device
>  			 */
>  			if (xdp_verdict == XDP_TX)
> -- 
> 2.23.3
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT
  2020-12-04 12:11 ` [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT akiyano
@ 2020-12-06 20:22   ` Maciej Fijalkowski
  2020-12-07 19:28     ` Shay Agroskin
  0 siblings, 1 reply; 19+ messages in thread
From: Maciej Fijalkowski @ 2020-12-06 20:22 UTC (permalink / raw)
  To: akiyano
  Cc: davem, kuba, netdev, dwmw, zorik, matua, saeedb, msw, aliguori,
	nafea, gtzalik, netanel, alisaidi, benh, ndagan, shayagr,
	sameehj

On Fri, Dec 04, 2020 at 02:11:15PM +0200, akiyano@amazon.com wrote:
> From: Arthur Kiyanovski <akiyano@amazon.com>
> 
> This patch implements the ndo_xdp_xmit() net_device function which is
> called when a packet is redirected to this driver using an
> XDP_REDIRECT directive.
> 
> The function receives an array of xdp frames that it needs to xmit.
> The TX queues that are used to xmit these frames are the XDP
> queues used by the XDP_TX flow. Therefore a lock is added to synchronize
> both flows (XDP_TX and XDP_REDIRECT).
> 
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
> ---
>  drivers/net/ethernet/amazon/ena/ena_netdev.c | 83 +++++++++++++++++---
>  drivers/net/ethernet/amazon/ena/ena_netdev.h |  1 +
>  2 files changed, 72 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> index 0d077a626604..6c5d8b8c4d13 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> @@ -281,20 +281,18 @@ static int ena_xdp_tx_map_frame(struct ena_ring *xdp_ring,
>  	return -EINVAL;
>  }
>  
> -static int ena_xdp_xmit_frame(struct net_device *dev,
> +static int ena_xdp_xmit_frame(struct ena_ring *xdp_ring,
> +			      struct net_device *dev,
>  			      struct xdp_frame *xdpf,
> -			      int qid)
> +			      int flags)
>  {
> -	struct ena_adapter *adapter = netdev_priv(dev);
>  	struct ena_com_tx_ctx ena_tx_ctx = {};
>  	struct ena_tx_buffer *tx_info;
> -	struct ena_ring *xdp_ring;
>  	u16 next_to_use, req_id;
> -	int rc;
>  	void *push_hdr;
>  	u32 push_len;
> +	int rc;
>  
> -	xdp_ring = &adapter->tx_ring[qid];
>  	next_to_use = xdp_ring->next_to_use;
>  	req_id = xdp_ring->free_ids[next_to_use];
>  	tx_info = &xdp_ring->tx_buffer_info[req_id];
> @@ -321,25 +319,76 @@ static int ena_xdp_xmit_frame(struct net_device *dev,
>  	/* trigger the dma engine. ena_com_write_sq_doorbell()
>  	 * has a mb
>  	 */
> -	ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
> -	ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, &xdp_ring->syncp);
> +	if (flags & XDP_XMIT_FLUSH) {
> +		ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
> +		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
> +				  &xdp_ring->syncp);
> +	}
>  
> -	return NETDEV_TX_OK;
> +	return rc;
>  
>  error_unmap_dma:
>  	ena_unmap_tx_buff(xdp_ring, tx_info);
>  	tx_info->xdpf = NULL;
>  error_drop_packet:
>  	xdp_return_frame(xdpf);
> -	return NETDEV_TX_OK;
> +	return rc;
> +}
> +
> +static int ena_xdp_xmit(struct net_device *dev, int n,
> +			struct xdp_frame **frames, u32 flags)
> +{
> +	struct ena_adapter *adapter = netdev_priv(dev);
> +	int qid, i, err, drops = 0;
> +	struct ena_ring *xdp_ring;
> +
> +	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
> +		return -EINVAL;
> +
> +	if (!test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
> +		return -ENETDOWN;
> +
> +	/* We assume that all rings have the same XDP program */
> +	if (!READ_ONCE(adapter->rx_ring->xdp_bpf_prog))
> +		return -ENXIO;
> +
> +	qid = smp_processor_id() % adapter->xdp_num_queues;
> +	qid += adapter->xdp_first_ring;
> +	xdp_ring = &adapter->tx_ring[qid];
> +
> +	/* Other CPU ids might try to send thorugh this queue */
> +	spin_lock(&xdp_ring->xdp_tx_lock);

I have a feeling that we are not consistent with this locking approach as
some drivers do that and some don't.

> +
> +	for (i = 0; i < n; i++) {
> +		err = ena_xdp_xmit_frame(xdp_ring, dev, frames[i], 0);
> +		/* The descriptor is freed by ena_xdp_xmit_frame in case
> +		 * of an error.
> +		 */
> +		if (err)
> +			drops++;
> +	}
> +
> +	/* Ring doorbell to make device aware of the packets */
> +	if (flags & XDP_XMIT_FLUSH) {
> +		ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
> +		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
> +				  &xdp_ring->syncp);

Have you thought of ringing the doorbell once per a batch of xmitted
frames?

> +	}
> +
> +	spin_unlock(&xdp_ring->xdp_tx_lock);
> +
> +	/* Return number of packets sent */
> +	return n - drops;
>  }
>  
>  static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
>  {
>  	struct bpf_prog *xdp_prog;
> +	struct ena_ring *xdp_ring;
>  	u32 verdict = XDP_PASS;
>  	struct xdp_frame *xdpf;
>  	u64 *xdp_stat;
> +	int qid;
>  
>  	rcu_read_lock();
>  	xdp_prog = READ_ONCE(rx_ring->xdp_bpf_prog);
> @@ -352,8 +401,16 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
>  	switch (verdict) {
>  	case XDP_TX:
>  		xdpf = xdp_convert_buff_to_frame(xdp);
> -		ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
> -				   rx_ring->qid + rx_ring->adapter->num_io_queues);
> +		/* Find xmit queue */
> +		qid = rx_ring->qid + rx_ring->adapter->num_io_queues;
> +		xdp_ring = &rx_ring->adapter->tx_ring[qid];
> +
> +		/* The XDP queues are shared between XDP_TX and XDP_REDIRECT */
> +		spin_lock(&xdp_ring->xdp_tx_lock);
> +
> +		ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, xdpf, XDP_XMIT_FLUSH);

Once again you don't check retval over here.

> +
> +		spin_unlock(&xdp_ring->xdp_tx_lock);
>  		xdp_stat = &rx_ring->rx_stats.xdp_tx;
>  		break;
>  	case XDP_REDIRECT:
> @@ -644,6 +701,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter,
>  		txr->smoothed_interval =
>  			ena_com_get_nonadaptive_moderation_interval_tx(ena_dev);
>  		txr->disable_meta_caching = adapter->disable_meta_caching;
> +		spin_lock_init(&txr->xdp_tx_lock);
>  
>  		/* Don't init RX queues for xdp queues */
>  		if (!ENA_IS_XDP_INDEX(adapter, i)) {
> @@ -3236,6 +3294,7 @@ static const struct net_device_ops ena_netdev_ops = {
>  	.ndo_set_mac_address	= NULL,
>  	.ndo_validate_addr	= eth_validate_addr,
>  	.ndo_bpf		= ena_xdp,
> +	.ndo_xdp_xmit		= ena_xdp_xmit,
>  };
>  
>  static int ena_device_validate_params(struct ena_adapter *adapter,
> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
> index fed79c50a870..74af15d62ee1 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
> @@ -258,6 +258,7 @@ struct ena_ring {
>  	struct ena_com_io_sq *ena_com_io_sq;
>  	struct bpf_prog *xdp_bpf_prog;
>  	struct xdp_rxq_info xdp_rxq;
> +	spinlock_t xdp_tx_lock;	/* synchronize XDP TX/Redirect traffic */
>  
>  	u16 next_to_use;
>  	u16 next_to_clean;
> -- 
> 2.23.3
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables
  2020-12-04 12:11 ` [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables akiyano
@ 2020-12-07 19:00   ` Alexander Duyck
  2020-12-07 20:19     ` Shay Agroskin
  2020-12-08 18:11     ` Shay Agroskin
  0 siblings, 2 replies; 19+ messages in thread
From: Alexander Duyck @ 2020-12-07 19:00 UTC (permalink / raw)
  To: akiyano
  Cc: David Miller, Jakub Kicinski, Netdev, dwmw, zorik, matua, saeedb,
	msw, aliguori, nafea, gtzalik, netanel, alisaidi, benh, ndagan,
	shayagr, Jubran, Samih, Ido Segev, Igor Chauskin

On Fri, Dec 4, 2020 at 4:15 AM <akiyano@amazon.com> wrote:
>
> From: Arthur Kiyanovski <akiyano@amazon.com>
>
> This patch adds explicit casting to some implicit conversions in the ena
> driver. The implicit conversions fail some of our static checkers that
> search for accidental conversions in our driver.
> Adding this cast won't affect the end results, and would sooth the
> checkers.
>
> Signed-off-by: Ido Segev <idose@amazon.com>
> Signed-off-by: Igor Chauskin <igorch@amazon.com>
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
> ---
>  drivers/net/ethernet/amazon/ena/ena_com.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
> index e168edf3c930..7910d8e68a99 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_com.c
> +++ b/drivers/net/ethernet/amazon/ena/ena_com.c
> @@ -1369,7 +1369,7 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
>                                    "Failed to submit command [%ld]\n",
>                                    PTR_ERR(comp_ctx));
>
> -               return PTR_ERR(comp_ctx);
> +               return (int)PTR_ERR(comp_ctx);
>         }
>
>         ret = ena_com_wait_and_process_admin_cq(comp_ctx, admin_queue);

I'm not a big fan of resolving it this way as we are going to end up
with the pattern throughout the kernel if this is really needed. It
might make more sense to either come up with a new define that returns
int instead of long, or to tweak the existing PTR_ERR define so that
it returns an int instead of a long.

An alternative here would be to just pass PTR_ERR into ret and then
process it that way within this if block. As it stands the comparison
to ERR_PTR(-ENODEV) doesn't read very well anyway.
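A rough sketch of that alternative (the admin_queue->ena_dev
expression and the dbg/err split for -ENODEV are assumptions based on
the context visible in this thread; 'ret' is the int already declared
in this function):

	if (IS_ERR(comp_ctx)) {
		ret = PTR_ERR(comp_ctx);

		if (ret == -ENODEV)
			netdev_dbg(admin_queue->ena_dev->net_device,
				   "Failed to submit command [%d]\n", ret);
		else
			netdev_err(admin_queue->ena_dev->net_device,
				   "Failed to submit command [%d]\n", ret);

		return ret;
	}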

> @@ -1595,7 +1595,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
>  int ena_com_get_dma_width(struct ena_com_dev *ena_dev)
>  {
>         u32 caps = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CAPS_OFF);
> -       int width;
> +       u32 width;
>
>         if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) {
>                 netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
> @@ -2266,7 +2266,7 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
>         cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE;
>         cmd.aq_common_descriptor.flags = 0;
>         cmd.feat_common.feature_id = ENA_ADMIN_MTU;
> -       cmd.u.mtu.mtu = mtu;
> +       cmd.u.mtu.mtu = (u32)mtu;
>
>         ret = ena_com_execute_admin_command(admin_queue,
>                                             (struct ena_admin_aq_entry *)&cmd,

Wouldn't it make more sense to define mtu as a u32 in the first place
and address this in the function that calls this rather than doing the
cast at the last minute?
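As a sketch of that direction (the matching prototype in ena_com.h
would need the same change, and it assumes the netdev layer has
already validated the MTU as non-negative before this call):

-int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
+int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, u32 mtu)
 ...
-       cmd.u.mtu.mtu = (u32)mtu;
+       cmd.u.mtu.mtu = mtu;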

> @@ -2689,7 +2689,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev)
>                 return ret;
>         }
>
> -       cmd.control_buffer.length = (1ULL << rss->tbl_log_size) *
> +       cmd.control_buffer.length = (u32)(1ULL << rss->tbl_log_size) *
>                 sizeof(struct ena_admin_rss_ind_table_entry);
>
>         ret = ena_com_execute_admin_command(admin_queue,
> @@ -2712,7 +2712,7 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl)
>         u32 tbl_size;
>         int i, rc;
>
> -       tbl_size = (1ULL << rss->tbl_log_size) *
> +       tbl_size = (u32)(1ULL << rss->tbl_log_size) *
>                 sizeof(struct ena_admin_rss_ind_table_entry);
>
>         rc = ena_com_get_feature_ex(ena_dev, &get_resp,

For these last two why not make it 1u instead of 1ull for the bit
being shifted? At least that way you are not implying possible
truncation in the conversion.
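i.e., roughly (this only removes the implied truncation of the shift
itself, the multiply by sizeof() still promotes, so treat it purely
as an illustration of the suggestion above):

-       tbl_size = (u32)(1ULL << rss->tbl_log_size) *
+       tbl_size = (1U << rss->tbl_log_size) *
                sizeof(struct ena_admin_rss_ind_table_entry);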

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 6/9] net: ena: use xdp_frame in XDP TX flow
  2020-12-06 20:10   ` Maciej Fijalkowski
@ 2020-12-07 19:16     ` Shay Agroskin
  0 siblings, 0 replies; 19+ messages in thread
From: Shay Agroskin @ 2020-12-07 19:16 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: akiyano, davem, kuba, netdev, dwmw, zorik, matua, saeedb, msw,
	aliguori, nafea, gtzalik, netanel, alisaidi, benh, ndagan,
	sameehj


Maciej Fijalkowski <maciej.fijalkowski@intel.com> writes:

> On Fri, Dec 04, 2020 at 02:11:12PM +0200, akiyano@amazon.com 
> wrote:
>> From: Arthur Kiyanovski <akiyano@amazon.com>
>> 
>> Rename the ena_xdp_xmit_buff() function to ena_xdp_xmit_frame() 
>> and pass
>> it an xdp_frame struct instead of xdp_buff.
>> This change lays the ground for XDP redirect implementation 
>> which uses
>> xdp_frames when 'xmit'ing packets.
>> 
>> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
>> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
>> ---
>>  drivers/net/ethernet/amazon/ena/ena_netdev.c | 46 
>>  ++++++++++----------
>>  1 file changed, 23 insertions(+), 23 deletions(-)
>> 
>> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c 
>> b/drivers/net/ethernet/amazon/ena/ena_netdev.c
>> index 222bb576e30e..cbb07548409a 100644
>> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
>> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
>> @@ -233,18 +233,18 @@ static int ena_xdp_io_poll(struct 
>> napi_struct *napi, int budget)
>>  	return ret;
>>  }
>>  
>>  ...
>>  	if (verdict == XDP_TX) {
>> -		ena_xdp_xmit_buff(rx_ring->netdev,
>> -				  xdp,
>> -				  rx_ring->qid + 
>> rx_ring->adapter->num_io_queues,
>> -				  rx_info);
>> +		xdpf = xdp_convert_buff_to_frame(xdp);
>
> Similar to Jakub's comment on another patch, 
> xdp_convert_buff_to_frame can
> return NULL and from what I can tell you never check that in
> ena_xdp_xmit_frame.
>

Hi, thanks for reviewing the code (:

Going over xdp_convert_buff_to_frame(), it seems (to me) that the
function fails only if either:
- we're using an AF_XDP socket, or
- the driver failed to leave enough room for the xdp_frame and
  skb_shared_info structs.

The first isn't supported by ENA, and the second doesn't seem to
be possible, since the driver leaves enough space on the RX page
and bpf_xdp_adjust_head()/bpf_xdp_adjust_tail() seem to make sure
enough space is left on the page for the structs.

Nevertheless, the correct approach is to check the return value of 
the function. I'll add it in the next patchset. Thanks

>> +		ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
>> +				   rx_ring->qid + 
>> rx_ring->adapter->num_io_queues);
>>  
>>  		xdp_stat = &rx_ring->rx_stats.xdp_tx;
>>  	} else if (unlikely(verdict == XDP_ABORTED)) {
>> @@ -1521,7 +1521,7 @@ static int ena_xdp_handle_buff(struct 
>> ena_ring *rx_ring, struct xdp_buff *xdp)
>>  	if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU))
>>  		return XDP_DROP;
>>  
>> -	ret = ena_xdp_execute(rx_ring, xdp, rx_info);
>> +	ret = ena_xdp_execute(rx_ring, xdp);
>>  
>>  	/* The xdp program might expand the headers */
>> ...
>>  			 */
>>  			if (xdp_verdict == XDP_TX)
>> -- 
>> 2.23.3
>> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 7/9] net: ena: introduce XDP redirect implementation
  2020-12-06  0:15   ` Jakub Kicinski
@ 2020-12-07 19:18     ` Shay Agroskin
  0 siblings, 0 replies; 19+ messages in thread
From: Shay Agroskin @ 2020-12-07 19:18 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: akiyano, davem, netdev, dwmw, zorik, matua, saeedb, msw,
	aliguori, nafea, gtzalik, netanel, alisaidi, benh, ndagan,
	sameehj


Jakub Kicinski <kuba@kernel.org> writes:

> On Fri, 4 Dec 2020 14:11:13 +0200 akiyano@amazon.com wrote:
>> +	case XDP_REDIRECT:
>> +		xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
>> +		xdp_stat = &rx_ring->rx_stats.xdp_redirect;
>> +		break;
>
> Don't you have to check if xdp_do_redirect() returned an error 
> or not?
>
> You should CC XDP maintainers on the XDP patches.

Thanks for reviewing the code (:
I'll add a return value check in the next patchset and CC the XDP
maintainers.
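A minimal sketch of such a check (the xdp_aborted counter name and
the verdict override are assumptions for the sketch, not the final
patch):

	case XDP_REDIRECT:
		if (likely(!xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog))) {
			xdp_stat = &rx_ring->rx_stats.xdp_redirect;
			break;
		}
		/* redirect failed - account for it and drop the buffer */
		xdp_stat = &rx_ring->rx_stats.xdp_aborted;
		verdict = XDP_ABORTED;
		break;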

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT
  2020-12-06 20:22   ` Maciej Fijalkowski
@ 2020-12-07 19:28     ` Shay Agroskin
  0 siblings, 0 replies; 19+ messages in thread
From: Shay Agroskin @ 2020-12-07 19:28 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: akiyano, davem, kuba, netdev, dwmw, zorik, matua, saeedb, msw,
	aliguori, nafea, gtzalik, netanel, alisaidi, benh, ndagan,
	sameehj


Maciej Fijalkowski <maciej.fijalkowski@intel.com> writes:

> On Fri, Dec 04, 2020 at 02:11:15PM +0200, akiyano@amazon.com 
> wrote:
>> From: Arthur Kiyanovski <akiyano@amazon.com>
>> 
>> This patch implements the ndo_xdp_xmit() net_device function 
>> which is
>> called when a packet is redirected to this driver using an
>> XDP_REDIRECT directive.
>> 
>> The function receives an array of xdp frames that it needs to 
>> xmit.
>> The TX queues that are used to xmit these frames are the XDP
>> queues used by the XDP_TX flow. Therefore a lock is added to 
>> synchronize
>> both flows (XDP_TX and XDP_REDIRECT).
>> 
>> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
>> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
>> ...
>> +	xdp_ring = &adapter->tx_ring[qid];
>> +
>> +	/* Other CPU ids might try to send thorugh this queue */
>> +	spin_lock(&xdp_ring->xdp_tx_lock);
>
> I have a feeling that we are not consistent with this locking 
> approach as
> some drivers do that and some don't.
>

Not sure what you mean here. The ENA driver uses a lock in its XDP
xmit function because the XDP_TX and XDP_REDIRECT flows share the
same egress queues. This is a design choice that was taken. Some
drivers (e.g. mlx5) seem to have separate queues for regular TX,
XDP_TX, XDP_REDIRECT and RX flows, and so are able to avoid
locking.

>> +
>> +	for (i = 0; i < n; i++) {
>> +		err = ena_xdp_xmit_frame(xdp_ring, dev, frames[i], 
>> 0);
>> +		/* The descriptor is freed by ena_xdp_xmit_frame 
>> in case
>> +		 * of an error.
>> +		 */
>> +		if (err)
>> +			drops++;
>> +	}
>> +
>> +	/* Ring doorbell to make device aware of the packets */
>> +	if (flags & XDP_XMIT_FLUSH) {
>> + 
>> ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
>> +		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 
>> 1,
>> +				  &xdp_ring->syncp);
>
> Have you thought of ringing the doorbell once per a batch of 
> xmitted
> frames?
>

For XDP_REDIRECT the packets are indeed batched before ringing the
doorbell. The XDP_TX flow will get the same improvement in a
future patchset.
Thanks for this idea (:
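As a rough sketch of that future change (ENA_XDP_TX here is an
invented per-poll flag, not something in this series), the XDP_TX
doorbell could be deferred to the end of the RX poll:

	/* in ena_clean_rx_irq(), after the RX descriptor loop */
	if (xdp_flags & ENA_XDP_TX) {
		xdp_ring = &rx_ring->adapter->tx_ring[rx_ring->qid +
				rx_ring->adapter->num_io_queues];
		ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
				  &xdp_ring->syncp);
	}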

>> +	}
>> +
>> +	spin_unlock(&xdp_ring->xdp_tx_lock);
>> +
>> +	/* Return number of packets sent */
>> +	return n - drops;
>>  }
>> ...
>>  
>> -				   rx_ring->qid + 
>> rx_ring->adapter->num_io_queues);
>> +		/* Find xmit queue */
>> +		qid = rx_ring->qid + 
>> rx_ring->adapter->num_io_queues;
>> +		xdp_ring = &rx_ring->adapter->tx_ring[qid];
>> +
>> +		/* The XDP queues are shared between XDP_TX and 
>> XDP_REDIRECT */
>> +		spin_lock(&xdp_ring->xdp_tx_lock);
>> +
>> +		ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, 
>> xdpf, XDP_XMIT_FLUSH);
>
> Once again you don't check retval over here.
>

The ena_xdp_xmit_frame() function handles failure internally
(reducing the ref-count of the RX page, increasing an error stat,
etc.), therefore there is no need for special handling of its
failure here.
In the XDP_REDIRECT flow, ena_xdp_xmit() returns to the kernel the
number of packets that were successfully sent, and so it does need
to monitor the return value of ena_xdp_xmit_frame().
This is not the case here though.

>> +
>> +		spin_unlock(&xdp_ring->xdp_tx_lock);
>>  		xdp_stat = &rx_ring->rx_stats.xdp_tx;
>>  		break;
>>  	case XDP_REDIRECT:
>> @@ -644,6 +701,7 @@ static void ena_init_io_rings(struct 
>> ena_adapter *adapter,
>>  		txr->smoothed_interval =
>>  			ena_com_get_nonadaptive_moderation_interval_tx(ena_dev);
>>  		txr->disable_meta_caching = 
>>  adapter->disable_meta_caching;
>> +		spin_lock_init(&txr->xdp_tx_lock);
>>  
>>  		/* Don't init RX queues for xdp queues */
>>  		if (!ENA_IS_XDP_INDEX(adapter, i)) {
>> @@ -3236,6 +3294,7 @@ static const struct net_device_ops 
>> ena_netdev_ops = {
>>  	.ndo_set_mac_address	= NULL,
>>  	.ndo_validate_addr	= eth_validate_addr,
>>  	.ndo_bpf		= ena_xdp,
>> +	.ndo_xdp_xmit		= ena_xdp_xmit,
>>  };
>> ...
>> +	spinlock_t xdp_tx_lock;	/* synchronize XDP TX/Redirect 
>> traffic */
>>  
>>  	u16 next_to_use;
>>  	u16 next_to_clean;
>> -- 
>> 2.23.3
>> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables
  2020-12-07 19:00   ` Alexander Duyck
@ 2020-12-07 20:19     ` Shay Agroskin
  2020-12-08 18:11     ` Shay Agroskin
  1 sibling, 0 replies; 19+ messages in thread
From: Shay Agroskin @ 2020-12-07 20:19 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: akiyano, David Miller, Jakub Kicinski, Netdev, dwmw, zorik,
	matua, saeedb, msw, aliguori, nafea, gtzalik, netanel, alisaidi,
	benh, ndagan, Jubran, Samih, Ido Segev, Igor Chauskin


Alexander Duyck <alexander.duyck@gmail.com> writes:

> On Fri, Dec 4, 2020 at 4:15 AM <akiyano@amazon.com> wrote:
>>
>> From: Arthur Kiyanovski <akiyano@amazon.com>
>>
>> This patch adds explicit casting to some implicit conversions 
>> in the ena
>> driver. The implicit conversions fail some of our static 
>> checkers that
>> search for accidental conversions in our driver.
>> Adding this cast won't affect the end results, and would sooth 
>> the
>> checkers.
>>
>> Signed-off-by: Ido Segev <idose@amazon.com>
>> Signed-off-by: Igor Chauskin <igorch@amazon.com>
>> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
>> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
>> ---
>>  drivers/net/ethernet/amazon/ena/ena_com.c | 10 +++++-----
>>  1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c 
>> b/drivers/net/ethernet/amazon/ena/ena_com.c
>> index e168edf3c930..7910d8e68a99 100644
>> --- a/drivers/net/ethernet/amazon/ena/ena_com.c
>> +++ b/drivers/net/ethernet/amazon/ena/ena_com.c
>> @@ -1369,7 +1369,7 @@ int ena_com_execute_admin_command(struct 
>> ena_com_admin_queue *admin_queue,
>>                                    "Failed to submit command 
>>                                    [%ld]\n",
>>                                    PTR_ERR(comp_ctx));
>>
>> -               return PTR_ERR(comp_ctx);
>> +               return (int)PTR_ERR(comp_ctx);
>>         }
>>
>>         ret = ena_com_wait_and_process_admin_cq(comp_ctx, 
>>         admin_queue);
>
> I'm not a big fan of resolving it this way as we are going to 
> end up
> with the pattern throughout the kernel if this is really 
> needed. It
> might make more sense to either come up with a new define that 
> returns
> int instead of long, or to tweak the existing PTR_ERR define so 
> that
> it returns an int instead of a long.
>
> An alternative here would be to just pass PTR_ERR into ret and 
> then
> process it that way within this if block. As it stands the 
> comparison
> to ERR_PTR(-ENODEV) doesn't read very well anyway.
>

Hi, thanks for reviewing the code. Looking at it I agree it looks 
rather ugly. I'll try to rework it to something less hideous.

Regarding the idea to make it more generic (a kernel-wide change),
our static checkers test the ena_com code on various platforms, and
the 'implicit cast' warning might not repeat for other drivers.

>> @@ -1595,7 +1595,7 @@ int ena_com_set_aenq_config(struct 
>> ena_com_dev *ena_dev, u32 groups_flag)
>>  int ena_com_get_dma_width(struct ena_com_dev *ena_dev)
>>  {
>>         u32 caps = ena_com_reg_bar_read32(ena_dev, 
>>         ENA_REGS_CAPS_OFF);
>> -       int width;
>> +       u32 width;
>>
>>         if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) {
>>                 netdev_err(ena_dev->net_device, "Reg read 
>>                 timeout occurred\n");
>> @@ -2266,7 +2266,7 @@ int ena_com_set_dev_mtu(struct 
>> ena_com_dev *ena_dev, int mtu)
>>         cmd.aq_common_descriptor.opcode = 
>>         ENA_ADMIN_SET_FEATURE;
>>         cmd.aq_common_descriptor.flags = 0;
>>         cmd.feat_common.feature_id = ENA_ADMIN_MTU;
>> -       cmd.u.mtu.mtu = mtu;
>> +       cmd.u.mtu.mtu = (u32)mtu;
>>
>>         ret = ena_com_execute_admin_command(admin_queue,
>>                                             (struct 
>>                                             ena_admin_aq_entry 
>>                                             *)&cmd,
>
> Wouldn't it make more sense to define mtu as a u32 in the first 
> place
> and address this in the function that calls this rather than 
> doing the
> cast at the last minute?
>

It would make the code prettier at the very least. I'll try to 
tweak this a little.

>> @@ -2689,7 +2689,7 @@ int ena_com_indirect_table_set(struct 
>> ena_com_dev *ena_dev)
>>                 return ret;
>>         }
>>
>> -       cmd.control_buffer.length = (1ULL << rss->tbl_log_size) 
>> *
>> +       cmd.control_buffer.length = (u32)(1ULL << 
>> rss->tbl_log_size) *
>>                 sizeof(struct ena_admin_rss_ind_table_entry);
>>
>>         ret = ena_com_execute_admin_command(admin_queue,
>> @@ -2712,7 +2712,7 @@ int ena_com_indirect_table_get(struct 
>> ena_com_dev *ena_dev, u32 *ind_tbl)
>>         u32 tbl_size;
>>         int i, rc;
>>
>> -       tbl_size = (1ULL << rss->tbl_log_size) *
>> +       tbl_size = (u32)(1ULL << rss->tbl_log_size) *
>>                 sizeof(struct ena_admin_rss_ind_table_entry);
>>
>>         rc = ena_com_get_feature_ex(ena_dev, &get_resp,
>
> For these last two why not make it 1u instead of 1ull for the 
> bit
> being shifted? At least that way you are not implying possible
> truncation in the conversion.

This sounds correct, I'll give it a try. Thanks

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables
  2020-12-07 19:00   ` Alexander Duyck
  2020-12-07 20:19     ` Shay Agroskin
@ 2020-12-08 18:11     ` Shay Agroskin
  1 sibling, 0 replies; 19+ messages in thread
From: Shay Agroskin @ 2020-12-08 18:11 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: akiyano, David Miller, Jakub Kicinski, Netdev, dwmw, zorik,
	matua, saeedb, msw, aliguori, nafea, gtzalik, netanel, alisaidi,
	benh, ndagan, Jubran, Samih, Ido Segev, Igor Chauskin


Alexander Duyck <alexander.duyck@gmail.com> writes:

> On Fri, Dec 4, 2020 at 4:15 AM <akiyano@amazon.com> wrote:
>>
>> From: Arthur Kiyanovski <akiyano@amazon.com>
>>
>> This patch adds explicit casting to some implicit conversions 
>> in the ena
>> driver. The implicit conversions fail some of our static 
>> checkers that
>> search for accidental conversions in our driver.
>> Adding this cast won't affect the end results, and would sooth 
>> the
>> checkers.
>>
>> Signed-off-by: Ido Segev <idose@amazon.com>
>> Signed-off-by: Igor Chauskin <igorch@amazon.com>
>> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
>> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
>> ---
>> ...
>> @@ -2712,7 +2712,7 @@ int ena_com_indirect_table_get(struct 
>> ena_com_dev *ena_dev, u32 *ind_tbl)
>>         u32 tbl_size;
>>         int i, rc;
>>
>> -       tbl_size = (1ULL << rss->tbl_log_size) *
>> +       tbl_size = (u32)(1ULL << rss->tbl_log_size) *
>>                 sizeof(struct ena_admin_rss_ind_table_entry);
>>
>>         rc = ena_com_get_feature_ex(ena_dev, &get_resp,
>
> For these last two why not make it 1u instead of 1ull for the 
> bit
> being shifted? At least that way you are not implying possible
> truncation in the conversion.

We decided to remove this conversion from the patch altogether. We 
might do something different in the future to achieve the same 
result. Thanks for your comment

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2020-12-08 18:13 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
2020-12-04 12:11 [PATCH V4 net-next 0/9] XDP Redirect implementation for ENA driver akiyano
2020-12-04 12:11 ` [PATCH V4 net-next 1/9] net: ena: use constant value for net_device allocation akiyano
2020-12-04 12:11 ` [PATCH V4 net-next 2/9] net: ena: add device distinct log prefix to files akiyano
2020-12-04 12:11 ` [PATCH V4 net-next 3/9] net: ena: add explicit casting to variables akiyano
2020-12-07 19:00   ` Alexander Duyck
2020-12-07 20:19     ` Shay Agroskin
2020-12-08 18:11     ` Shay Agroskin
2020-12-04 12:11 ` [PATCH V4 net-next 4/9] net: ena: fix coding style nits akiyano
2020-12-04 12:11 ` [PATCH V4 net-next 5/9] net: ena: aggregate stats increase into a function akiyano
2020-12-04 12:11 ` [PATCH V4 net-next 6/9] net: ena: use xdp_frame in XDP TX flow akiyano
2020-12-06 20:10   ` Maciej Fijalkowski
2020-12-07 19:16     ` Shay Agroskin
2020-12-04 12:11 ` [PATCH V4 net-next 7/9] net: ena: introduce XDP redirect implementation akiyano
2020-12-06  0:15   ` Jakub Kicinski
2020-12-07 19:18     ` Shay Agroskin
2020-12-04 12:11 ` [PATCH V4 net-next 8/9] net: ena: use xdp_return_frame() to free xdp frames akiyano
2020-12-04 12:11 ` [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT akiyano
2020-12-06 20:22   ` Maciej Fijalkowski
2020-12-07 19:28     ` Shay Agroskin
