All of lore.kernel.org
* [PATCH net-next v3 0/5] gve: Add XDP support for GQI-QPL format
@ 2023-03-13 20:26 Praveen Kaligineedi
  2023-03-13 20:26 ` [PATCH net-next v3 1/5] gve: XDP support GQI-QPL: helper function changes Praveen Kaligineedi
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Praveen Kaligineedi @ 2023-03-13 20:26 UTC (permalink / raw)
  To: netdev; +Cc: davem, kuba, maciej.fijalkowski, Praveen Kaligineedi

Add support for the XDP DROP, PASS, TX, and REDIRECT actions for the
GQI-QPL format, along with AF_XDP zero-copy support.

When an XDP program is installed, dedicated TX queues are created to
handle XDP traffic. The user needs to ensure that the number of
configured TX queues equals the number of configured RX queues, and
that each is at most half the maximum number of TX/RX queues.
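The queue-count constraint above can be sketched as a standalone check.
This is illustrative only; the function name and parameters are
hypothetical, not the driver's actual API:

```c
#include <stdbool.h>

/* Hypothetical sketch of the XDP queue-count constraint: with an XDP
 * program installed, half of the TX queues are reserved for XDP, so the
 * configured counts must fit in the remaining half.
 */
static bool xdp_queue_config_ok(int num_tx, int num_rx,
				int max_tx, int max_rx)
{
	/* Configured TX and RX queue counts must match... */
	if (num_tx != num_rx)
		return false;
	/* ...and each must be at most half of the respective maximum. */
	return num_tx <= max_tx / 2 && num_rx <= max_rx / 2;
}
```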

The XDP traffic from AF_XDP sockets and from other NICs (arriving via
XDP_REDIRECT) will also egress through the dedicated XDP TX queues.

Although these changes support AF_XDP sockets in zero-copy mode, a copy
still happens within the driver between the XSK buffer pool and the QPL
bounce buffers used by the GQI-QPL format.

The following example demonstrates how the XDP packets are mapped to
TX queues:

Example configuration:
Max RX queues : 2N, Max TX queues : 2N
Configured RX queues : N, Configured TX queues : N

TX queue mapping:
TX queues with queue id 0,...,N-1 will handle traffic from the stack.
TX queues with queue id N,...,2N-1 will handle XDP traffic.

For the XDP packets transmitted using XDP_TX action:
<Egress TX queue id> = N + <Ingress RX queue id>

For the XDP packets that arrive from other NICs via XDP_REDIRECT action:
<Egress TX queue id> = N + ( smp_processor_id % N )

For AF_XDP zero-copy mode:
<Egress TX queue id> = N + <AF_XDP TX queue id>
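The three mappings above can be sketched as simple helpers, where n is
the number of configured queues. The function names here are
illustrative, not the driver's actual API:

```c
/* Hypothetical helpers mirroring the egress TX queue mappings above:
 * queues 0..n-1 serve the stack, queues n..2n-1 serve XDP.
 */
static int xdp_tx_egress_qid(int n, int rx_qid)
{
	return n + rx_qid;		/* XDP_TX: paired with ingress RX queue */
}

static int xdp_redirect_egress_qid(int n, int cpu_id)
{
	return n + (cpu_id % n);	/* XDP_REDIRECT: spread by CPU id */
}

static int xsk_egress_qid(int n, int xsk_qid)
{
	return n + xsk_qid;		/* AF_XDP zero-copy */
}
```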

Changes in v2:
- Removed gve_close/gve_open when adding XDP dedicated queues. Instead,
we add and register additional TX queues when the XDP program is
installed. If the allocation/registration fails, we return an error and
do not install the XDP program. Added a new patch to enable adding TX
queues without gve_close/gve_open
- Removed xdp tx spin lock from this patch. It is needed for XDP_REDIRECT
support as both XDP_REDIRECT and XDP_TX traffic share the dedicated XDP
queues. Moved the code to add xdp tx spinlock to the subsequent patch
that adds XDP_REDIRECT support.
- Added netdev_err when the user tries to set rx/tx queues to values
not supported while XDP is enabled.
- Removed rcu annotation for xdp_prog. We disable the napi prior to
adding/removing the xdp_prog and re-enable it after the program has
been installed for all the queues.
- Ring the tx doorbell once per napi poll instead of once per XDP TX
packet.
- Added a new helper function for freeing the FIFO buffer
- Unregister xdp rxq for all the queues when the registration
fails during XDP program installation
- Register xsk rxq only when XSK buff pool is enabled
- Removed code accessing internal xsk_buff_pool fields
- Removed sleep-driven code when disabling XSK buff pool. Disable
napi and re-enable it after disabling the XSK pool.
- Make sure that we clean up dma mappings on XSK pool disable
- Use napi_if_scheduled_mark_missed to avoid unnecessarily moving napi
to the CPU calling ndo_xsk_wakeup()

Changes in v3:
- Padding bytes are used if the XDP TX packet headers do not
fit at the tail of the TX FIFO. Take these padding bytes into account
when checking whether enough space is available in the TX FIFO.

Praveen Kaligineedi (5):
  gve: XDP support GQI-QPL: helper function changes
  gve: Changes to add new TX queues
  gve: Add XDP DROP and TX support for GQI-QPL format
  gve: Add XDP REDIRECT support for GQI-QPL format
  gve: Add AF_XDP zero-copy support for GQI-QPL format

 drivers/net/ethernet/google/gve/gve.h         | 112 ++-
 drivers/net/ethernet/google/gve/gve_adminq.c  |   8 +-
 drivers/net/ethernet/google/gve/gve_adminq.h  |   4 +-
 drivers/net/ethernet/google/gve/gve_ethtool.c |  91 ++-
 drivers/net/ethernet/google/gve/gve_main.c    | 670 +++++++++++++++++-
 drivers/net/ethernet/google/gve/gve_rx.c      | 147 +++-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c  |   2 +-
 drivers/net/ethernet/google/gve/gve_tx.c      | 298 +++++++-
 drivers/net/ethernet/google/gve/gve_utils.c   |   6 +-
 drivers/net/ethernet/google/gve/gve_utils.h   |   3 +-
 10 files changed, 1220 insertions(+), 121 deletions(-)

-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH net-next v3 1/5] gve: XDP support GQI-QPL: helper function changes
  2023-03-13 20:26 [PATCH net-next v3 0/5] gve: Add XDP support for GQI-QPL format Praveen Kaligineedi
@ 2023-03-13 20:26 ` Praveen Kaligineedi
  2023-03-15 17:13   ` Michal Kubiak
  2023-03-13 20:26 ` [PATCH net-next v3 2/5] gve: Changes to add new TX queues Praveen Kaligineedi
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Praveen Kaligineedi @ 2023-03-13 20:26 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, maciej.fijalkowski, Praveen Kaligineedi, Jeroen de Borst

This patch adds/modifies helper functions needed to add XDP
support.

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>

---
Changed in v2:
- No changes

Changed in v3:
- No changes
---

 drivers/net/ethernet/google/gve/gve.h         |  5 +++
 drivers/net/ethernet/google/gve/gve_ethtool.c | 26 +++++++----
 drivers/net/ethernet/google/gve/gve_main.c    | 27 +++++++-----
 drivers/net/ethernet/google/gve/gve_rx.c      |  2 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c  |  2 +-
 drivers/net/ethernet/google/gve/gve_tx.c      | 43 +++++++++++--------
 drivers/net/ethernet/google/gve/gve_utils.c   |  6 +--
 drivers/net/ethernet/google/gve/gve_utils.h   |  3 +-
 8 files changed, 70 insertions(+), 44 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 64eb0442c82f..f52f23198278 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -855,6 +855,11 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
 		priv->queue_format == GVE_GQI_QPL_FORMAT;
 }
 
+static inline u32 gve_num_tx_queues(struct gve_priv *priv)
+{
+	return priv->tx_cfg.num_queues;
+}
+
 /* buffers */
 int gve_alloc_page(struct gve_priv *priv, struct device *dev,
 		   struct page **page, dma_addr_t *dma,
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index ce574d097e28..5b6e31812fae 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -81,8 +81,10 @@ static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
 {
 	struct gve_priv *priv = netdev_priv(netdev);
 	char *s = (char *)data;
+	int num_tx_queues;
 	int i, j;
 
+	num_tx_queues = gve_num_tx_queues(priv);
 	switch (stringset) {
 	case ETH_SS_STATS:
 		memcpy(s, *gve_gstrings_main_stats,
@@ -97,7 +99,7 @@ static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
 			}
 		}
 
-		for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+		for (i = 0; i < num_tx_queues; i++) {
 			for (j = 0; j < NUM_GVE_TX_CNTS; j++) {
 				snprintf(s, ETH_GSTRING_LEN,
 					 gve_gstrings_tx_stats[j], i);
@@ -124,12 +126,14 @@ static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
 static int gve_get_sset_count(struct net_device *netdev, int sset)
 {
 	struct gve_priv *priv = netdev_priv(netdev);
+	int num_tx_queues;
 
+	num_tx_queues = gve_num_tx_queues(priv);
 	switch (sset) {
 	case ETH_SS_STATS:
 		return GVE_MAIN_STATS_LEN + GVE_ADMINQ_STATS_LEN +
 		       (priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS) +
-		       (priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS);
+		       (num_tx_queues * NUM_GVE_TX_CNTS);
 	case ETH_SS_PRIV_FLAGS:
 		return GVE_PRIV_FLAGS_STR_LEN;
 	default:
@@ -153,18 +157,20 @@ gve_get_ethtool_stats(struct net_device *netdev,
 	struct gve_priv *priv;
 	bool skip_nic_stats;
 	unsigned int start;
+	int num_tx_queues;
 	int ring;
 	int i, j;
 
 	ASSERT_RTNL();
 
 	priv = netdev_priv(netdev);
+	num_tx_queues = gve_num_tx_queues(priv);
 	report_stats = priv->stats_report->stats;
 	rx_qid_to_stats_idx = kmalloc_array(priv->rx_cfg.num_queues,
 					    sizeof(int), GFP_KERNEL);
 	if (!rx_qid_to_stats_idx)
 		return;
-	tx_qid_to_stats_idx = kmalloc_array(priv->tx_cfg.num_queues,
+	tx_qid_to_stats_idx = kmalloc_array(num_tx_queues,
 					    sizeof(int), GFP_KERNEL);
 	if (!tx_qid_to_stats_idx) {
 		kfree(rx_qid_to_stats_idx);
@@ -195,7 +201,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
 		}
 	}
 	for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0;
-	     ring < priv->tx_cfg.num_queues; ring++) {
+	     ring < num_tx_queues; ring++) {
 		if (priv->tx) {
 			do {
 				start =
@@ -232,7 +238,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
 	i = GVE_MAIN_STATS_LEN;
 
 	/* For rx cross-reporting stats, start from nic rx stats in report */
-	base_stats_idx = GVE_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues +
+	base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues +
 		GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues;
 	max_stats_idx = NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues +
 		base_stats_idx;
@@ -298,7 +304,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
 
 	/* For tx cross-reporting stats, start from nic tx stats in report */
 	base_stats_idx = max_stats_idx;
-	max_stats_idx = NIC_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues +
+	max_stats_idx = NIC_TX_STATS_REPORT_NUM * num_tx_queues +
 		max_stats_idx;
 	/* Preprocess the stats report for tx, map queue id to start index */
 	skip_nic_stats = false;
@@ -316,7 +322,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
 	}
 	/* walk TX rings */
 	if (priv->tx) {
-		for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+		for (ring = 0; ring < num_tx_queues; ring++) {
 			struct gve_tx_ring *tx = &priv->tx[ring];
 
 			if (gve_is_gqi(priv)) {
@@ -355,7 +361,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			}
 		}
 	} else {
-		i += priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS;
+		i += num_tx_queues * NUM_GVE_TX_CNTS;
 	}
 
 	kfree(rx_qid_to_stats_idx);
@@ -502,7 +508,9 @@ static int gve_set_priv_flags(struct net_device *netdev, u32 flags)
 {
 	struct gve_priv *priv = netdev_priv(netdev);
 	u64 ori_flags, new_flags;
+	int num_tx_queues;
 
+	num_tx_queues = gve_num_tx_queues(priv);
 	ori_flags = READ_ONCE(priv->ethtool_flags);
 	new_flags = ori_flags;
 
@@ -522,7 +530,7 @@ static int gve_set_priv_flags(struct net_device *netdev, u32 flags)
 	/* delete report stats timer. */
 	if (!(flags & BIT(0)) && (ori_flags & BIT(0))) {
 		int tx_stats_num = GVE_TX_STATS_REPORT_NUM *
-			priv->tx_cfg.num_queues;
+			num_tx_queues;
 		int rx_stats_num = GVE_RX_STATS_REPORT_NUM *
 			priv->rx_cfg.num_queues;
 
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 07111c241e0e..3cfdeeb74f60 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -90,8 +90,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
 	struct gve_priv *priv = netdev_priv(dev);
 	unsigned int start;
 	u64 packets, bytes;
+	int num_tx_queues;
 	int ring;
 
+	num_tx_queues = gve_num_tx_queues(priv);
 	if (priv->rx) {
 		for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
 			do {
@@ -106,7 +108,7 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
 		}
 	}
 	if (priv->tx) {
-		for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+		for (ring = 0; ring < num_tx_queues; ring++) {
 			do {
 				start =
 				  u64_stats_fetch_begin(&priv->tx[ring].statss);
@@ -180,7 +182,7 @@ static int gve_alloc_stats_report(struct gve_priv *priv)
 	int tx_stats_num, rx_stats_num;
 
 	tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) *
-		       priv->tx_cfg.num_queues;
+		       gve_num_tx_queues(priv);
 	rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) *
 		       priv->rx_cfg.num_queues;
 	priv->stats_report_len = struct_size(priv->stats_report, stats,
@@ -622,20 +624,21 @@ static int gve_unregister_qpls(struct gve_priv *priv)
 
 static int gve_create_rings(struct gve_priv *priv)
 {
+	int num_tx_queues = gve_num_tx_queues(priv);
 	int err;
 	int i;
 
-	err = gve_adminq_create_tx_queues(priv, priv->tx_cfg.num_queues);
+	err = gve_adminq_create_tx_queues(priv, num_tx_queues);
 	if (err) {
 		netif_err(priv, drv, priv->dev, "failed to create %d tx queues\n",
-			  priv->tx_cfg.num_queues);
+			  num_tx_queues);
 		/* This failure will trigger a reset - no need to clean
 		 * up
 		 */
 		return err;
 	}
 	netif_dbg(priv, drv, priv->dev, "created %d tx queues\n",
-		  priv->tx_cfg.num_queues);
+		  num_tx_queues);
 
 	err = gve_adminq_create_rx_queues(priv, priv->rx_cfg.num_queues);
 	if (err) {
@@ -675,7 +678,7 @@ static void add_napi_init_sync_stats(struct gve_priv *priv,
 	int i;
 
 	/* Add tx napi & init sync stats*/
-	for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+	for (i = 0; i < gve_num_tx_queues(priv); i++) {
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
 
 		u64_stats_init(&priv->tx[i].statss);
@@ -753,9 +756,10 @@ static int gve_alloc_rings(struct gve_priv *priv)
 
 static int gve_destroy_rings(struct gve_priv *priv)
 {
+	int num_tx_queues = gve_num_tx_queues(priv);
 	int err;
 
-	err = gve_adminq_destroy_tx_queues(priv, priv->tx_cfg.num_queues);
+	err = gve_adminq_destroy_tx_queues(priv, num_tx_queues);
 	if (err) {
 		netif_err(priv, drv, priv->dev,
 			  "failed to destroy tx queues\n");
@@ -784,11 +788,12 @@ static void gve_rx_free_rings(struct gve_priv *priv)
 
 static void gve_free_rings(struct gve_priv *priv)
 {
+	int num_tx_queues = gve_num_tx_queues(priv);
 	int ntfy_idx;
 	int i;
 
 	if (priv->tx) {
-		for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+		for (i = 0; i < num_tx_queues; i++) {
 			ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
 			gve_remove_napi(priv, ntfy_idx);
 		}
@@ -1118,7 +1123,7 @@ static void gve_turndown(struct gve_priv *priv)
 		return;
 
 	/* Disable napi to prevent more work from coming in */
-	for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+	for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
@@ -1146,7 +1151,7 @@ static void gve_turnup(struct gve_priv *priv)
 	netif_tx_start_all_queues(priv->dev);
 
 	/* Enable napi and unmask interrupts for all queues */
-	for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+	for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
@@ -1306,7 +1311,7 @@ void gve_handle_report_stats(struct gve_priv *priv)
 	be64_add_cpu(&priv->stats_report->written_count, 1);
 	/* tx stats */
 	if (priv->tx) {
-		for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+		for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
 			u32 last_completion = 0;
 			u32 tx_frames = 0;
 
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 1f55137722b0..db1c74b1d7d3 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -556,7 +556,7 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
 
 	if (len <= priv->rx_copybreak && is_only_frag)  {
 		/* Just copy small packets */
-		skb = gve_rx_copy(netdev, napi, page_info, len, GVE_RX_PAD);
+		skb = gve_rx_copy(netdev, napi, page_info, len);
 		if (skb) {
 			u64_stats_update_begin(&rx->statss);
 			rx->rx_copied_pkt++;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 630f42a3037b..e57b73eb70f6 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -568,7 +568,7 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
 
 	if (eop && buf_len <= priv->rx_copybreak) {
 		rx->ctx.skb_head = gve_rx_copy(priv->dev, napi,
-					       &buf_state->page_info, buf_len, 0);
+					       &buf_state->page_info, buf_len);
 		if (unlikely(!rx->ctx.skb_head))
 			goto error;
 		rx->ctx.skb_tail = rx->ctx.skb_head;
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 4888bf05fbed..0fb052ce9e0b 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -374,18 +374,18 @@ static int gve_maybe_stop_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
 }
 
 static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
-				 struct sk_buff *skb, bool is_gso,
+				 u16 csum_offset, u8 ip_summed, bool is_gso,
 				 int l4_hdr_offset, u32 desc_cnt,
-				 u16 hlen, u64 addr)
+				 u16 hlen, u64 addr, u16 pkt_len)
 {
 	/* l4_hdr_offset and csum_offset are in units of 16-bit words */
 	if (is_gso) {
 		pkt_desc->pkt.type_flags = GVE_TXD_TSO | GVE_TXF_L4CSUM;
-		pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
+		pkt_desc->pkt.l4_csum_offset = csum_offset >> 1;
 		pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
-	} else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+	} else if (likely(ip_summed == CHECKSUM_PARTIAL)) {
 		pkt_desc->pkt.type_flags = GVE_TXD_STD | GVE_TXF_L4CSUM;
-		pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
+		pkt_desc->pkt.l4_csum_offset = csum_offset >> 1;
 		pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
 	} else {
 		pkt_desc->pkt.type_flags = GVE_TXD_STD;
@@ -393,7 +393,7 @@ static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
 		pkt_desc->pkt.l4_hdr_offset = 0;
 	}
 	pkt_desc->pkt.desc_cnt = desc_cnt;
-	pkt_desc->pkt.len = cpu_to_be16(skb->len);
+	pkt_desc->pkt.len = cpu_to_be16(pkt_len);
 	pkt_desc->pkt.seg_len = cpu_to_be16(hlen);
 	pkt_desc->pkt.seg_addr = cpu_to_be64(addr);
 }
@@ -412,15 +412,16 @@ static void gve_tx_fill_mtd_desc(union gve_tx_desc *mtd_desc,
 }
 
 static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
-				 struct sk_buff *skb, bool is_gso,
+				 u16 l3_offset, u16 gso_size,
+				 bool is_gso_v6, bool is_gso,
 				 u16 len, u64 addr)
 {
 	seg_desc->seg.type_flags = GVE_TXD_SEG;
 	if (is_gso) {
-		if (skb_is_gso_v6(skb))
+		if (is_gso_v6)
 			seg_desc->seg.type_flags |= GVE_TXSF_IPV6;
-		seg_desc->seg.l3_offset = skb_network_offset(skb) >> 1;
-		seg_desc->seg.mss = cpu_to_be16(skb_shinfo(skb)->gso_size);
+		seg_desc->seg.l3_offset = l3_offset >> 1;
+		seg_desc->seg.mss = cpu_to_be16(gso_size);
 	}
 	seg_desc->seg.seg_len = cpu_to_be16(len);
 	seg_desc->seg.seg_addr = cpu_to_be64(addr);
@@ -473,9 +474,10 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
 	payload_nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, skb->len - hlen,
 					   &info->iov[payload_iov]);
 
-	gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
+	gve_tx_fill_pkt_desc(pkt_desc, skb->csum_offset, skb->ip_summed,
+			     is_gso, l4_hdr_offset,
 			     1 + mtd_desc_nr + payload_nfrags, hlen,
-			     info->iov[hdr_nfrags - 1].iov_offset);
+			     info->iov[hdr_nfrags - 1].iov_offset, skb->len);
 
 	skb_copy_bits(skb, 0,
 		      tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset,
@@ -494,7 +496,9 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
 		next_idx = (tx->req + 1 + mtd_desc_nr + i - payload_iov) & tx->mask;
 		seg_desc = &tx->desc[next_idx];
 
-		gve_tx_fill_seg_desc(seg_desc, skb, is_gso,
+		gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
+				     skb_shinfo(skb)->gso_size,
+				     skb_is_gso_v6(skb), is_gso,
 				     info->iov[i].iov_len,
 				     info->iov[i].iov_offset);
 
@@ -552,8 +556,9 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
 	if (mtd_desc_nr)
 		num_descriptors++;
 
-	gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
-			     num_descriptors, hlen, addr);
+	gve_tx_fill_pkt_desc(pkt_desc, skb->csum_offset, skb->ip_summed,
+			     is_gso, l4_hdr_offset,
+			     num_descriptors, hlen, addr, skb->len);
 
 	if (mtd_desc_nr) {
 		idx = (idx + 1) & tx->mask;
@@ -569,7 +574,9 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
 		addr += hlen;
 		idx = (idx + 1) & tx->mask;
 		seg_desc = &tx->desc[idx];
-		gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
+		gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
+				     skb_shinfo(skb)->gso_size,
+				     skb_is_gso_v6(skb), is_gso, len, addr);
 	}
 
 	for (i = 0; i < shinfo->nr_frags; i++) {
@@ -587,7 +594,9 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
 		dma_unmap_len_set(&tx->info[idx], len, len);
 		dma_unmap_addr_set(&tx->info[idx], dma, addr);
 
-		gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
+		gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
+				     skb_shinfo(skb)->gso_size,
+				     skb_is_gso_v6(skb), is_gso, len, addr);
 	}
 
 	return num_descriptors;
diff --git a/drivers/net/ethernet/google/gve/gve_utils.c b/drivers/net/ethernet/google/gve/gve_utils.c
index 6ba46adaaee3..26e08d753270 100644
--- a/drivers/net/ethernet/google/gve/gve_utils.c
+++ b/drivers/net/ethernet/google/gve/gve_utils.c
@@ -49,10 +49,10 @@ void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx)
 }
 
 struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
-			    struct gve_rx_slot_page_info *page_info, u16 len,
-			    u16 padding)
+			    struct gve_rx_slot_page_info *page_info, u16 len)
 {
-	void *va = page_info->page_address + padding + page_info->page_offset;
+	void *va = page_info->page_address + page_info->page_offset +
+		page_info->pad;
 	struct sk_buff *skb;
 
 	skb = napi_alloc_skb(napi, len);
diff --git a/drivers/net/ethernet/google/gve/gve_utils.h b/drivers/net/ethernet/google/gve/gve_utils.h
index 79595940b351..324fd98a6112 100644
--- a/drivers/net/ethernet/google/gve/gve_utils.h
+++ b/drivers/net/ethernet/google/gve/gve_utils.h
@@ -18,8 +18,7 @@ void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx);
 void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx);
 
 struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
-			    struct gve_rx_slot_page_info *page_info, u16 len,
-			    u16 pad);
+			    struct gve_rx_slot_page_info *page_info, u16 len);
 
 /* Decrement pagecnt_bias. Set it back to INT_MAX if it reached zero. */
 void gve_dec_pagecnt_bias(struct gve_rx_slot_page_info *page_info);
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH net-next v3 2/5] gve: Changes to add new TX queues
  2023-03-13 20:26 [PATCH net-next v3 0/5] gve: Add XDP support for GQI-QPL format Praveen Kaligineedi
  2023-03-13 20:26 ` [PATCH net-next v3 1/5] gve: XDP support GQI-QPL: helper function changes Praveen Kaligineedi
@ 2023-03-13 20:26 ` Praveen Kaligineedi
  2023-03-15 17:14   ` Michal Kubiak
  2023-03-13 20:26 ` [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format Praveen Kaligineedi
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Praveen Kaligineedi @ 2023-03-13 20:26 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, maciej.fijalkowski, Praveen Kaligineedi, Jeroen de Borst

Changes to enable adding and removing TX queues without calling
gve_close() and gve_open().

Made the following changes:
1) The priv->tx, priv->rx and priv->qpls arrays are allocated based on
   max tx queues and max rx queues.
2) Changed gve_adminq_create_tx_queues(), gve_adminq_destroy_tx_queues(),
   gve_tx_alloc_rings() and gve_tx_free_rings() to take a start id and a
   count, so they can add/remove a subset of TX queues rather than all
   the TX queues.

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>

---
Changed in v2:
- Added this patch to address the issue raised by Jakub Kicinski about
  implications of resource allocation failing after reconfig.

Changed in v3:
- No changes
---
 drivers/net/ethernet/google/gve/gve.h        | 45 +++++++----
 drivers/net/ethernet/google/gve/gve_adminq.c |  8 +-
 drivers/net/ethernet/google/gve/gve_adminq.h |  4 +-
 drivers/net/ethernet/google/gve/gve_main.c   | 83 ++++++++++++++------
 drivers/net/ethernet/google/gve/gve_rx.c     |  2 +-
 drivers/net/ethernet/google/gve/gve_tx.c     | 12 +--
 6 files changed, 104 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index f52f23198278..f354a6448c25 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -798,16 +798,35 @@ static inline u32 gve_num_rx_qpls(struct gve_priv *priv)
 	return priv->rx_cfg.num_queues;
 }
 
+static inline u32 gve_tx_qpl_id(struct gve_priv *priv, int tx_qid)
+{
+	return tx_qid;
+}
+
+static inline u32 gve_rx_qpl_id(struct gve_priv *priv, int rx_qid)
+{
+	return priv->tx_cfg.max_queues + rx_qid;
+}
+
+static inline u32 gve_tx_start_qpl_id(struct gve_priv *priv)
+{
+	return gve_tx_qpl_id(priv, 0);
+}
+
+static inline u32 gve_rx_start_qpl_id(struct gve_priv *priv)
+{
+	return gve_rx_qpl_id(priv, 0);
+}
+
 /* Returns a pointer to the next available tx qpl in the list of qpls
  */
 static inline
-struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv)
+struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv, int tx_qid)
 {
-	int id = find_first_zero_bit(priv->qpl_cfg.qpl_id_map,
-				     priv->qpl_cfg.qpl_map_size);
+	int id = gve_tx_qpl_id(priv, tx_qid);
 
-	/* we are out of tx qpls */
-	if (id >= gve_num_tx_qpls(priv))
+	/* QPL already in use */
+	if (test_bit(id, priv->qpl_cfg.qpl_id_map))
 		return NULL;
 
 	set_bit(id, priv->qpl_cfg.qpl_id_map);
@@ -817,14 +836,12 @@ struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv)
 /* Returns a pointer to the next available rx qpl in the list of qpls
  */
 static inline
-struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv)
+struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv, int rx_qid)
 {
-	int id = find_next_zero_bit(priv->qpl_cfg.qpl_id_map,
-				    priv->qpl_cfg.qpl_map_size,
-				    gve_num_tx_qpls(priv));
+	int id = gve_rx_qpl_id(priv, rx_qid);
 
-	/* we are out of rx qpls */
-	if (id == gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv))
+	/* QPL already in use */
+	if (test_bit(id, priv->qpl_cfg.qpl_id_map))
 		return NULL;
 
 	set_bit(id, priv->qpl_cfg.qpl_id_map);
@@ -843,7 +860,7 @@ static inline void gve_unassign_qpl(struct gve_priv *priv, int id)
 static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv,
 						      int id)
 {
-	if (id < gve_num_tx_qpls(priv))
+	if (id < gve_rx_start_qpl_id(priv))
 		return DMA_TO_DEVICE;
 	else
 		return DMA_FROM_DEVICE;
@@ -869,8 +886,8 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
 /* tx handling */
 netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
 bool gve_tx_poll(struct gve_notify_block *block, int budget);
-int gve_tx_alloc_rings(struct gve_priv *priv);
-void gve_tx_free_rings_gqi(struct gve_priv *priv);
+int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings);
+void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings);
 u32 gve_tx_load_event_counter(struct gve_priv *priv,
 			      struct gve_tx_ring *tx);
 bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 60061288ad9d..252974202a3f 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -516,12 +516,12 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
 	return gve_adminq_issue_cmd(priv, &cmd);
 }
 
-int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 num_queues)
+int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
 {
 	int err;
 	int i;
 
-	for (i = 0; i < num_queues; i++) {
+	for (i = start_id; i < start_id + num_queues; i++) {
 		err = gve_adminq_create_tx_queue(priv, i);
 		if (err)
 			return err;
@@ -604,12 +604,12 @@ static int gve_adminq_destroy_tx_queue(struct gve_priv *priv, u32 queue_index)
 	return 0;
 }
 
-int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 num_queues)
+int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
 {
 	int err;
 	int i;
 
-	for (i = 0; i < num_queues; i++) {
+	for (i = start_id; i < start_id + num_queues; i++) {
 		err = gve_adminq_destroy_tx_queue(priv, i);
 		if (err)
 			return err;
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index cf29662e6ad1..f894beb3deaf 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -410,8 +410,8 @@ int gve_adminq_configure_device_resources(struct gve_priv *priv,
 					  dma_addr_t db_array_bus_addr,
 					  u32 num_ntfy_blks);
 int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
-int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 num_queues);
-int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
+int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
 int gve_adminq_register_page_list(struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 3cfdeeb74f60..160ca77c2751 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -584,11 +584,26 @@ static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
 
 static int gve_register_qpls(struct gve_priv *priv)
 {
-	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+	int start_id;
 	int err;
 	int i;
 
-	for (i = 0; i < num_qpls; i++) {
+	start_id = gve_tx_start_qpl_id(priv);
+	for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
+		err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
+		if (err) {
+			netif_err(priv, drv, priv->dev,
+				  "failed to register queue page list %d\n",
+				  priv->qpls[i].id);
+			/* This failure will trigger a reset - no need to clean
+			 * up
+			 */
+			return err;
+		}
+	}
+
+	start_id = gve_rx_start_qpl_id(priv);
+	for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
 		err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
 		if (err) {
 			netif_err(priv, drv, priv->dev,
@@ -605,11 +620,24 @@ static int gve_register_qpls(struct gve_priv *priv)
 
 static int gve_unregister_qpls(struct gve_priv *priv)
 {
-	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+	int start_id;
 	int err;
 	int i;
 
-	for (i = 0; i < num_qpls; i++) {
+	start_id = gve_tx_start_qpl_id(priv);
+	for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
+		err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
+		/* This failure will trigger a reset - no need to clean up */
+		if (err) {
+			netif_err(priv, drv, priv->dev,
+				  "Failed to unregister queue page list %d\n",
+				  priv->qpls[i].id);
+			return err;
+		}
+	}
+
+	start_id = gve_rx_start_qpl_id(priv);
+	for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
 		err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
 		/* This failure will trigger a reset - no need to clean up */
 		if (err) {
@@ -628,7 +656,7 @@ static int gve_create_rings(struct gve_priv *priv)
 	int err;
 	int i;
 
-	err = gve_adminq_create_tx_queues(priv, num_tx_queues);
+	err = gve_adminq_create_tx_queues(priv, 0, num_tx_queues);
 	if (err) {
 		netif_err(priv, drv, priv->dev, "failed to create %d tx queues\n",
 			  num_tx_queues);
@@ -695,10 +723,10 @@ static void add_napi_init_sync_stats(struct gve_priv *priv,
 	}
 }
 
-static void gve_tx_free_rings(struct gve_priv *priv)
+static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings)
 {
 	if (gve_is_gqi(priv)) {
-		gve_tx_free_rings_gqi(priv);
+		gve_tx_free_rings_gqi(priv, start_id, num_rings);
 	} else {
 		gve_tx_free_rings_dqo(priv);
 	}
@@ -709,20 +737,20 @@ static int gve_alloc_rings(struct gve_priv *priv)
 	int err;
 
 	/* Setup tx rings */
-	priv->tx = kvcalloc(priv->tx_cfg.num_queues, sizeof(*priv->tx),
+	priv->tx = kvcalloc(priv->tx_cfg.max_queues, sizeof(*priv->tx),
 			    GFP_KERNEL);
 	if (!priv->tx)
 		return -ENOMEM;
 
 	if (gve_is_gqi(priv))
-		err = gve_tx_alloc_rings(priv);
+		err = gve_tx_alloc_rings(priv, 0, gve_num_tx_queues(priv));
 	else
 		err = gve_tx_alloc_rings_dqo(priv);
 	if (err)
 		goto free_tx;
 
 	/* Setup rx rings */
-	priv->rx = kvcalloc(priv->rx_cfg.num_queues, sizeof(*priv->rx),
+	priv->rx = kvcalloc(priv->rx_cfg.max_queues, sizeof(*priv->rx),
 			    GFP_KERNEL);
 	if (!priv->rx) {
 		err = -ENOMEM;
@@ -747,7 +775,7 @@ static int gve_alloc_rings(struct gve_priv *priv)
 	kvfree(priv->rx);
 	priv->rx = NULL;
 free_tx_queue:
-	gve_tx_free_rings(priv);
+	gve_tx_free_rings(priv, 0, gve_num_tx_queues(priv));
 free_tx:
 	kvfree(priv->tx);
 	priv->tx = NULL;
@@ -759,7 +787,7 @@ static int gve_destroy_rings(struct gve_priv *priv)
 	int num_tx_queues = gve_num_tx_queues(priv);
 	int err;
 
-	err = gve_adminq_destroy_tx_queues(priv, num_tx_queues);
+	err = gve_adminq_destroy_tx_queues(priv, 0, num_tx_queues);
 	if (err) {
 		netif_err(priv, drv, priv->dev,
 			  "failed to destroy tx queues\n");
@@ -797,7 +825,7 @@ static void gve_free_rings(struct gve_priv *priv)
 			ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
 			gve_remove_napi(priv, ntfy_idx);
 		}
-		gve_tx_free_rings(priv);
+		gve_tx_free_rings(priv, 0, num_tx_queues);
 		kvfree(priv->tx);
 		priv->tx = NULL;
 	}
@@ -894,40 +922,46 @@ static void gve_free_queue_page_list(struct gve_priv *priv, u32 id)
 			      qpl->page_buses[i], gve_qpl_dma_dir(priv, id));
 
 	kvfree(qpl->page_buses);
+	qpl->page_buses = NULL;
 free_pages:
 	kvfree(qpl->pages);
+	qpl->pages = NULL;
 	priv->num_registered_pages -= qpl->num_entries;
 }
 
 static int gve_alloc_qpls(struct gve_priv *priv)
 {
-	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
+	int start_id;
 	int i, j;
 	int err;
 
-	if (num_qpls == 0)
+	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
 		return 0;
 
-	priv->qpls = kvcalloc(num_qpls, sizeof(*priv->qpls), GFP_KERNEL);
+	priv->qpls = kvcalloc(max_queues, sizeof(*priv->qpls), GFP_KERNEL);
 	if (!priv->qpls)
 		return -ENOMEM;
 
-	for (i = 0; i < gve_num_tx_qpls(priv); i++) {
+	start_id = gve_tx_start_qpl_id(priv);
+	for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
 		err = gve_alloc_queue_page_list(priv, i,
 						priv->tx_pages_per_qpl);
 		if (err)
 			goto free_qpls;
 	}
-	for (; i < num_qpls; i++) {
+
+	start_id = gve_rx_start_qpl_id(priv);
+	for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
 		err = gve_alloc_queue_page_list(priv, i,
 						priv->rx_data_slot_cnt);
 		if (err)
 			goto free_qpls;
 	}
 
-	priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(num_qpls) *
+	priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(max_queues) *
 				     sizeof(unsigned long) * BITS_PER_BYTE;
-	priv->qpl_cfg.qpl_id_map = kvcalloc(BITS_TO_LONGS(num_qpls),
+	priv->qpl_cfg.qpl_id_map = kvcalloc(BITS_TO_LONGS(max_queues),
 					    sizeof(unsigned long), GFP_KERNEL);
 	if (!priv->qpl_cfg.qpl_id_map) {
 		err = -ENOMEM;
@@ -940,23 +974,26 @@ static int gve_alloc_qpls(struct gve_priv *priv)
 	for (j = 0; j <= i; j++)
 		gve_free_queue_page_list(priv, j);
 	kvfree(priv->qpls);
+	priv->qpls = NULL;
 	return err;
 }
 
 static void gve_free_qpls(struct gve_priv *priv)
 {
-	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
 	int i;
 
-	if (num_qpls == 0)
+	if (!priv->qpls)
 		return;
 
 	kvfree(priv->qpl_cfg.qpl_id_map);
+	priv->qpl_cfg.qpl_id_map = NULL;
 
-	for (i = 0; i < num_qpls; i++)
+	for (i = 0; i < max_queues; i++)
 		gve_free_queue_page_list(priv, i);
 
 	kvfree(priv->qpls);
+	priv->qpls = NULL;
 }
 
 /* Use this to schedule a reset when the device is capable of continuing
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index db1c74b1d7d3..051a15e4f1af 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -124,7 +124,7 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
 		return -ENOMEM;
 
 	if (!rx->data.raw_addressing) {
-		rx->data.qpl = gve_assign_rx_qpl(priv);
+		rx->data.qpl = gve_assign_rx_qpl(priv, rx->q_num);
 		if (!rx->data.qpl) {
 			kvfree(rx->data.page_info);
 			rx->data.page_info = NULL;
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 0fb052ce9e0b..e24e73e74e33 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -195,7 +195,7 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
 	tx->raw_addressing = priv->queue_format == GVE_GQI_RDA_FORMAT;
 	tx->dev = &priv->pdev->dev;
 	if (!tx->raw_addressing) {
-		tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
+		tx->tx_fifo.qpl = gve_assign_tx_qpl(priv, idx);
 		if (!tx->tx_fifo.qpl)
 			goto abort_with_desc;
 		/* map Tx FIFO */
@@ -233,12 +233,12 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
 	return -ENOMEM;
 }
 
-int gve_tx_alloc_rings(struct gve_priv *priv)
+int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings)
 {
 	int err = 0;
 	int i;
 
-	for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+	for (i = start_id; i < start_id + num_rings; i++) {
 		err = gve_tx_alloc_ring(priv, i);
 		if (err) {
 			netif_err(priv, drv, priv->dev,
@@ -251,17 +251,17 @@ int gve_tx_alloc_rings(struct gve_priv *priv)
 	if (err) {
 		int j;
 
-		for (j = 0; j < i; j++)
+		for (j = start_id; j < i; j++)
 			gve_tx_free_ring(priv, j);
 	}
 	return err;
 }
 
-void gve_tx_free_rings_gqi(struct gve_priv *priv)
+void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings)
 {
 	int i;
 
-	for (i = 0; i < priv->tx_cfg.num_queues; i++)
+	for (i = start_id; i < start_id + num_rings; i++)
 		gve_tx_free_ring(priv, i);
 }
 
-- 
2.40.0.rc1.284.g88254d51c5-goog
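The registration changes in the patch above replace one loop over all QPLs with separate TX and RX id ranges sized by the maximum (not configured) queue counts. As a rough sketch, the id layout implied by `gve_tx_start_qpl_id()`/`gve_rx_start_qpl_id()` can be modeled in a free-standing way; the struct and helper names below are illustrative assumptions, not the driver's:

```c
#include <assert.h>

/* Hypothetical model of the QPL id space used by the loops above:
 * TX QPL ids start at 0 and RX QPL ids start at tx_cfg.max_queues,
 * so priv->qpls is sized for tx_max + rx_max entries even when fewer
 * queues are configured. */
struct queue_cfg {
	int tx_max, rx_max; /* max_queues for each direction */
	int tx_num, rx_num; /* currently configured queues */
};

/* TX QPLs occupy the low end of the id space. */
static int tx_start_qpl_id(const struct queue_cfg *c)
{
	(void)c;
	return 0;
}

/* RX QPLs begin above the entire max-sized TX region, leaving ids
 * tx_num..tx_max-1 free for XDP TX queues added later. */
static int rx_start_qpl_id(const struct queue_cfg *c)
{
	return c->tx_max;
}
```

With this layout, registering only the configured TX and RX ranges (as the two loops above do) never touches the gap reserved for dedicated XDP queues.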



* [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format
From: Praveen Kaligineedi @ 2023-03-13 20:26 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, maciej.fijalkowski, Praveen Kaligineedi, Jeroen de Borst

Add support for XDP PASS, DROP and TX actions.

This patch contains the following changes:
1) Support for installing/uninstalling an XDP program
2) Dedicated XDP TX queues
3) Support for the XDP DROP action
4) Support for the XDP TX action
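The dedicated-queue split introduced here can be sketched as a standalone model of the mapping (mirroring `gve_xdp_tx_queue_id()` in the diff; this free-standing copy and its parameter names are for illustration only):

```c
#include <assert.h>

/* Model of the TX queue split when an XDP program is installed:
 * ids [0, num_queues) carry stack traffic and ids
 * [num_queues, 2 * num_queues) carry XDP traffic, so the XDP TX
 * queue serving RX queue q is num_queues + q. */
static unsigned int xdp_tx_queue_id(unsigned int stack_tx_queues,
				    unsigned int rx_qid)
{
	return stack_tx_queues + rx_qid;
}
```

For example, with 4 configured queues, XDP_TX traffic arriving on RX queue 3 egresses on TX queue 7.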

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>

---
Changed in v2:
- Removed gve_close/gve_open when adding dedicated XDP queues. Instead,
we add and register the additional TX queues when the XDP program is
installed. If the allocation/registration fails, we return an error and
do not install the XDP program.
- Removed the XDP TX spinlock from this patch. It is needed for
XDP_REDIRECT support, as XDP_REDIRECT and XDP_TX traffic share the
dedicated XDP queues. Moved the code that adds the XDP TX spinlock to
the subsequent patch that adds XDP_REDIRECT support.
- Added a netdev_err message when the user tries to set RX/TX queue
counts to values that are not supported while XDP is enabled.
- Removed the RCU annotation for xdp_prog. We disable NAPI prior to
adding/removing the xdp_prog and re-enable it after the program has
been installed for all the queues.
- Ring the TX doorbell once per NAPI poll instead of once per XDP TX
packet.
- Added a new helper function for freeing the FIFO buffer.
- Unregister the XDP RXQ info for all queues when registration fails
during XDP program installation.

Changed in v3:
- Padding bytes are used if the XDP TX packet headers do not fit at the
tail of the TX FIFO. These padding bytes are now taken into account
when checking whether enough space is available in the TX FIFO.
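The v3 accounting fix can be sketched as follows. This is a simplification under stated assumptions: the real driver tracks head/size in struct gve_tx_fifo and bounds contiguous headers by GVE_TX_MAX_HEADER_SIZE; the helper below is hypothetical:

```c
#include <assert.h>

/* Sketch of the v3 fix: if the packet's header bytes would run past
 * the end of the FIFO, the driver pads out the remaining tail and
 * restarts at offset 0, so the availability check must count those
 * padding bytes as consumed as well. */
static unsigned int fifo_bytes_required(unsigned int fifo_size,
					unsigned int head,
					unsigned int hdr_bytes)
{
	unsigned int pad = 0;

	if (head + hdr_bytes > fifo_size)  /* headers would wrap */
		pad = fifo_size - head;    /* tail becomes dead space */
	return hdr_bytes + pad;
}
```

For a 4096-byte FIFO with the head at offset 4000, a 182-byte header region costs 182 + 96 = 278 bytes; checking only for 182 free bytes could wrongly admit the packet.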
---
 drivers/net/ethernet/google/gve/gve.h         |  44 +-
 drivers/net/ethernet/google/gve/gve_ethtool.c |  37 +-
 drivers/net/ethernet/google/gve/gve_main.c    | 376 +++++++++++++++++-
 drivers/net/ethernet/google/gve/gve_rx.c      |  74 +++-
 drivers/net/ethernet/google/gve/gve_tx.c      | 149 ++++++-
 5 files changed, 658 insertions(+), 22 deletions(-)
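The RX/TX queue constraint enforced in both gve_set_channels() and verify_xdp_configuration() in this patch reduces to a single predicate; a standalone sketch with a hypothetical helper name:

```c
#include <assert.h>
#include <stdbool.h>

/* XDP requires the configured RX and TX queue counts to match, and
 * enough headroom to double the TX queues for the dedicated XDP set:
 * 2 * num_tx <= max_tx. */
static bool xdp_queue_config_ok(int num_rx, int num_tx, int max_tx)
{
	return num_rx == num_tx && 2 * num_tx <= max_tx;
}
```

Any channel change or XDP attach that fails this predicate is rejected with -EINVAL before queues are touched.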

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index f354a6448c25..8d5234d4ba67 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -47,6 +47,10 @@
 
 #define GVE_RX_BUFFER_SIZE_DQO 2048
 
+#define GVE_XDP_ACTIONS 5
+
+#define GVE_TX_MAX_HEADER_SIZE 182
+
 /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
 struct gve_rx_desc_queue {
 	struct gve_rx_desc *desc_ring; /* the descriptor ring */
@@ -230,7 +234,9 @@ struct gve_rx_ring {
 	u64 rx_frag_flip_cnt; /* free-running count of rx segments where page_flip was used */
 	u64 rx_frag_copy_cnt; /* free-running count of rx segments copied */
 	u64 rx_frag_alloc_cnt; /* free-running count of rx page allocations */
-
+	u64 xdp_tx_errors;
+	u64 xdp_redirect_errors;
+	u64 xdp_actions[GVE_XDP_ACTIONS];
 	u32 q_num; /* queue index */
 	u32 ntfy_id; /* notification block index */
 	struct gve_queue_resources *q_resources; /* head and tail pointer idx */
@@ -238,6 +244,9 @@ struct gve_rx_ring {
 	struct u64_stats_sync statss; /* sync stats for 32bit archs */
 
 	struct gve_rx_ctx ctx; /* Info for packet currently being processed in this ring. */
+
+	/* XDP stuff */
+	struct xdp_rxq_info xdp_rxq;
 };
 
 /* A TX desc ring entry */
@@ -259,6 +268,9 @@ struct gve_tx_iovec {
  */
 struct gve_tx_buffer_state {
 	struct sk_buff *skb; /* skb for this pkt */
+	struct {
+		u16 size; /* size of xmitted xdp pkt */
+	} xdp;
 	union {
 		struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
 		struct {
@@ -526,9 +538,11 @@ struct gve_priv {
 	u16 rx_data_slot_cnt; /* rx buffer length */
 	u64 max_registered_pages;
 	u64 num_registered_pages; /* num pages registered with NIC */
+	struct bpf_prog *xdp_prog; /* XDP BPF program */
 	u32 rx_copybreak; /* copy packets smaller than this */
 	u16 default_num_queues; /* default num queues to set up */
 
+	u16 num_xdp_queues;
 	struct gve_queue_config tx_cfg;
 	struct gve_queue_config rx_cfg;
 	struct gve_qpl_config qpl_cfg; /* map used QPL ids */
@@ -785,7 +799,17 @@ static inline u32 gve_num_tx_qpls(struct gve_priv *priv)
 	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
 		return 0;
 
-	return priv->tx_cfg.num_queues;
+	return priv->tx_cfg.num_queues + priv->num_xdp_queues;
+}
+
+/* Returns the number of XDP tx queue page lists
+ */
+static inline u32 gve_num_xdp_qpls(struct gve_priv *priv)
+{
+	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
+		return 0;
+
+	return priv->num_xdp_queues;
 }
 
 /* Returns the number of rx queue page lists
@@ -874,7 +898,17 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
 
 static inline u32 gve_num_tx_queues(struct gve_priv *priv)
 {
-	return priv->tx_cfg.num_queues;
+	return priv->tx_cfg.num_queues + priv->num_xdp_queues;
+}
+
+static inline u32 gve_xdp_tx_queue_id(struct gve_priv *priv, u32 queue_id)
+{
+	return priv->tx_cfg.num_queues + queue_id;
+}
+
+static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
+{
+	return gve_xdp_tx_queue_id(priv, 0);
 }
 
 /* buffers */
@@ -885,7 +919,11 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
 		   enum dma_data_direction);
 /* tx handling */
 netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
+int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
+		     void *data, int len);
+void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
 bool gve_tx_poll(struct gve_notify_block *block, int budget);
+bool gve_xdp_poll(struct gve_notify_block *block, int budget);
 int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings);
 void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings);
 u32 gve_tx_load_event_counter(struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index 5b6e31812fae..067b393ccf9d 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -34,6 +34,11 @@ static u32 gve_get_msglevel(struct net_device *netdev)
 	return priv->msg_enable;
 }
 
+/* For the following stats column string names, make sure the order
+ * matches how it is filled in the code. For xdp_aborted, xdp_drop,
+ * xdp_pass, xdp_tx, xdp_redirect, make sure it also matches the order
+ * as declared in enum xdp_action inside file uapi/linux/bpf.h .
+ */
 static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = {
 	"rx_packets", "tx_packets", "rx_bytes", "tx_bytes",
 	"rx_dropped", "tx_dropped", "tx_timeouts",
@@ -49,6 +54,9 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
 	"rx_dropped_pkt[%u]", "rx_copybreak_pkt[%u]", "rx_copied_pkt[%u]",
 	"rx_queue_drop_cnt[%u]", "rx_no_buffers_posted[%u]",
 	"rx_drops_packet_over_mru[%u]", "rx_drops_invalid_checksum[%u]",
+	"rx_xdp_aborted[%u]", "rx_xdp_drop[%u]", "rx_xdp_pass[%u]",
+	"rx_xdp_tx[%u]", "rx_xdp_redirect[%u]",
+	"rx_xdp_tx_errors[%u]", "rx_xdp_redirect_errors[%u]",
 };
 
 static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
@@ -289,14 +297,25 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			if (skip_nic_stats) {
 				/* skip NIC rx stats */
 				i += NIC_RX_STATS_REPORT_NUM;
-				continue;
-			}
-			for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
-				u64 value =
-				be64_to_cpu(report_stats[rx_qid_to_stats_idx[ring] + j].value);
+			} else {
+				stats_idx = rx_qid_to_stats_idx[ring];
+				for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
+					u64 value =
+						be64_to_cpu(report_stats[stats_idx + j].value);
 
-				data[i++] = value;
+					data[i++] = value;
+				}
 			}
+			/* XDP rx counters */
+			do {
+				start =	u64_stats_fetch_begin(&priv->rx[ring].statss);
+				for (j = 0; j < GVE_XDP_ACTIONS; j++)
+					data[i + j] = rx->xdp_actions[j];
+				data[i + j++] = rx->xdp_tx_errors;
+				data[i + j++] = rx->xdp_redirect_errors;
+			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+						       start));
+			i += GVE_XDP_ACTIONS + 2; /* XDP rx counters */
 		}
 	} else {
 		i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
@@ -418,6 +437,12 @@ static int gve_set_channels(struct net_device *netdev,
 	if (!new_rx || !new_tx)
 		return -EINVAL;
 
+	if (priv->num_xdp_queues &&
+	    (new_tx != new_rx || (2 * new_tx > priv->tx_cfg.max_queues))) {
+		dev_err(&priv->pdev->dev, "XDP load failed: The number of configured RX queues should be equal to the number of configured TX queues and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues");
+		return -EINVAL;
+	}
+
 	if (!netif_carrier_ok(netdev)) {
 		priv->tx_cfg.num_queues = new_tx;
 		priv->rx_cfg.num_queues = new_rx;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 160ca77c2751..7d3f15cf79ed 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -4,8 +4,10 @@
  * Copyright (C) 2015-2021 Google, Inc.
  */
 
+#include <linux/bpf.h>
 #include <linux/cpumask.h>
 #include <linux/etherdevice.h>
+#include <linux/filter.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/pci.h>
@@ -247,8 +249,13 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
 	block = container_of(napi, struct gve_notify_block, napi);
 	priv = block->priv;
 
-	if (block->tx)
-		reschedule |= gve_tx_poll(block, budget);
+	if (block->tx) {
+		if (block->tx->q_num < priv->tx_cfg.num_queues)
+			reschedule |= gve_tx_poll(block, budget);
+		else
+			reschedule |= gve_xdp_poll(block, budget);
+	}
+
 	if (block->rx) {
 		work_done = gve_rx_poll(block, budget);
 		reschedule |= work_done == budget;
@@ -582,6 +589,28 @@ static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
 	netif_napi_del(&block->napi);
 }
 
+static int gve_register_xdp_qpls(struct gve_priv *priv)
+{
+	int start_id;
+	int err;
+	int i;
+
+	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
+		err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
+		if (err) {
+			netif_err(priv, drv, priv->dev,
+				  "failed to register queue page list %d\n",
+				  priv->qpls[i].id);
+			/* This failure will trigger a reset - no need to clean
+			 * up
+			 */
+			return err;
+		}
+	}
+	return 0;
+}
+
 static int gve_register_qpls(struct gve_priv *priv)
 {
 	int start_id;
@@ -618,6 +647,26 @@ static int gve_register_qpls(struct gve_priv *priv)
 	return 0;
 }
 
+static int gve_unregister_xdp_qpls(struct gve_priv *priv)
+{
+	int start_id;
+	int err;
+	int i;
+
+	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
+		err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
+		/* This failure will trigger a reset - no need to clean up */
+		if (err) {
+			netif_err(priv, drv, priv->dev,
+				  "Failed to unregister queue page list %d\n",
+				  priv->qpls[i].id);
+			return err;
+		}
+	}
+	return 0;
+}
+
 static int gve_unregister_qpls(struct gve_priv *priv)
 {
 	int start_id;
@@ -650,6 +699,27 @@ static int gve_unregister_qpls(struct gve_priv *priv)
 	return 0;
 }
 
+static int gve_create_xdp_rings(struct gve_priv *priv)
+{
+	int err;
+
+	err = gve_adminq_create_tx_queues(priv,
+					  gve_xdp_tx_start_queue_id(priv),
+					  priv->num_xdp_queues);
+	if (err) {
+		netif_err(priv, drv, priv->dev, "failed to create %d XDP tx queues\n",
+			  priv->num_xdp_queues);
+		/* This failure will trigger a reset - no need to clean
+		 * up
+		 */
+		return err;
+	}
+	netif_dbg(priv, drv, priv->dev, "created %d XDP tx queues\n",
+		  priv->num_xdp_queues);
+
+	return 0;
+}
+
 static int gve_create_rings(struct gve_priv *priv)
 {
 	int num_tx_queues = gve_num_tx_queues(priv);
@@ -699,6 +769,23 @@ static int gve_create_rings(struct gve_priv *priv)
 	return 0;
 }
 
+static void add_napi_init_xdp_sync_stats(struct gve_priv *priv,
+					 int (*napi_poll)(struct napi_struct *napi,
+							  int budget))
+{
+	int start_id = gve_xdp_tx_start_queue_id(priv);
+	int i;
+
+	/* Add xdp tx napi & init sync stats */
+	for (i = start_id; i < start_id + priv->num_xdp_queues; i++) {
+		int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
+
+		u64_stats_init(&priv->tx[i].statss);
+		priv->tx[i].ntfy_id = ntfy_idx;
+		gve_add_napi(priv, ntfy_idx, napi_poll);
+	}
+}
+
 static void add_napi_init_sync_stats(struct gve_priv *priv,
 				     int (*napi_poll)(struct napi_struct *napi,
 						      int budget))
@@ -732,6 +819,23 @@ static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings
 	}
 }
 
+static int gve_alloc_xdp_rings(struct gve_priv *priv)
+{
+	int start_id;
+	int err = 0;
+
+	if (!priv->num_xdp_queues)
+		return 0;
+
+	start_id = gve_xdp_tx_start_queue_id(priv);
+	err = gve_tx_alloc_rings(priv, start_id, priv->num_xdp_queues);
+	if (err)
+		return err;
+	add_napi_init_xdp_sync_stats(priv, gve_napi_poll);
+
+	return 0;
+}
+
 static int gve_alloc_rings(struct gve_priv *priv)
 {
 	int err;
@@ -782,6 +886,26 @@ static int gve_alloc_rings(struct gve_priv *priv)
 	return err;
 }
 
+static int gve_destroy_xdp_rings(struct gve_priv *priv)
+{
+	int start_id;
+	int err;
+
+	start_id = gve_xdp_tx_start_queue_id(priv);
+	err = gve_adminq_destroy_tx_queues(priv,
+					   start_id,
+					   priv->num_xdp_queues);
+	if (err) {
+		netif_err(priv, drv, priv->dev,
+			  "failed to destroy XDP queues\n");
+		/* This failure will trigger a reset - no need to clean up */
+		return err;
+	}
+	netif_dbg(priv, drv, priv->dev, "destroyed XDP queues\n");
+
+	return 0;
+}
+
 static int gve_destroy_rings(struct gve_priv *priv)
 {
 	int num_tx_queues = gve_num_tx_queues(priv);
@@ -814,6 +938,21 @@ static void gve_rx_free_rings(struct gve_priv *priv)
 		gve_rx_free_rings_dqo(priv);
 }
 
+static void gve_free_xdp_rings(struct gve_priv *priv)
+{
+	int ntfy_idx, start_id;
+	int i;
+
+	start_id = gve_xdp_tx_start_queue_id(priv);
+	if (priv->tx) {
+		for (i = start_id; i < start_id + priv->num_xdp_queues; i++) {
+			ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
+			gve_remove_napi(priv, ntfy_idx);
+		}
+		gve_tx_free_rings(priv, start_id, priv->num_xdp_queues);
+	}
+}
+
 static void gve_free_rings(struct gve_priv *priv)
 {
 	int num_tx_queues = gve_num_tx_queues(priv);
@@ -929,6 +1068,28 @@ static void gve_free_queue_page_list(struct gve_priv *priv, u32 id)
 	priv->num_registered_pages -= qpl->num_entries;
 }
 
+static int gve_alloc_xdp_qpls(struct gve_priv *priv)
+{
+	int start_id;
+	int i, j;
+	int err;
+
+	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
+		err = gve_alloc_queue_page_list(priv, i,
+						priv->tx_pages_per_qpl);
+		if (err)
+			goto free_qpls;
+	}
+
+	return 0;
+
+free_qpls:
+	for (j = start_id; j <= i; j++)
+		gve_free_queue_page_list(priv, j);
+	return err;
+}
+
 static int gve_alloc_qpls(struct gve_priv *priv)
 {
 	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
@@ -978,6 +1139,16 @@ static int gve_alloc_qpls(struct gve_priv *priv)
 	return err;
 }
 
+static void gve_free_xdp_qpls(struct gve_priv *priv)
+{
+	int start_id;
+	int i;
+
+	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++)
+		gve_free_queue_page_list(priv, i);
+}
+
 static void gve_free_qpls(struct gve_priv *priv)
 {
 	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
@@ -1011,11 +1182,64 @@ static int gve_reset_recovery(struct gve_priv *priv, bool was_up);
 static void gve_turndown(struct gve_priv *priv);
 static void gve_turnup(struct gve_priv *priv);
 
+static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
+{
+	struct napi_struct *napi;
+	struct gve_rx_ring *rx;
+	int err = 0;
+	int i, j;
+
+	if (!priv->num_xdp_queues)
+		return 0;
+
+	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+		rx = &priv->rx[i];
+		napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
+
+		err = xdp_rxq_info_reg(&rx->xdp_rxq, dev, i,
+				       napi->napi_id);
+		if (err)
+			goto err;
+		err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq,
+						 MEM_TYPE_PAGE_SHARED, NULL);
+		if (err)
+			goto err;
+	}
+	return 0;
+
+err:
+	for (j = i; j >= 0; j--) {
+		rx = &priv->rx[j];
+		if (xdp_rxq_info_is_reg(&rx->xdp_rxq))
+			xdp_rxq_info_unreg(&rx->xdp_rxq);
+	}
+	return err;
+}
+
+static void gve_unreg_xdp_info(struct gve_priv *priv)
+{
+	int i;
+
+	if (!priv->num_xdp_queues)
+		return;
+
+	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+		struct gve_rx_ring *rx = &priv->rx[i];
+
+		xdp_rxq_info_unreg(&rx->xdp_rxq);
+	}
+}
+
 static int gve_open(struct net_device *dev)
 {
 	struct gve_priv *priv = netdev_priv(dev);
 	int err;
 
+	if (priv->xdp_prog)
+		priv->num_xdp_queues = priv->tx_cfg.num_queues;
+	else
+		priv->num_xdp_queues = 0;
+
 	err = gve_alloc_qpls(priv);
 	if (err)
 		return err;
@@ -1031,6 +1255,10 @@ static int gve_open(struct net_device *dev)
 	if (err)
 		goto free_rings;
 
+	err = gve_reg_xdp_info(priv, dev);
+	if (err)
+		goto free_rings;
+
 	err = gve_register_qpls(priv);
 	if (err)
 		goto reset;
@@ -1095,6 +1323,7 @@ static int gve_close(struct net_device *dev)
 	}
 	del_timer_sync(&priv->stats_report_timer);
 
+	gve_unreg_xdp_info(priv);
 	gve_free_rings(priv);
 	gve_free_qpls(priv);
 	priv->interface_down_cnt++;
@@ -1111,6 +1340,148 @@ static int gve_close(struct net_device *dev)
 	return gve_reset_recovery(priv, false);
 }
 
+static int gve_remove_xdp_queues(struct gve_priv *priv)
+{
+	int err;
+
+	err = gve_destroy_xdp_rings(priv);
+	if (err)
+		return err;
+
+	err = gve_unregister_xdp_qpls(priv);
+	if (err)
+		return err;
+
+	gve_unreg_xdp_info(priv);
+	gve_free_xdp_rings(priv);
+	gve_free_xdp_qpls(priv);
+	priv->num_xdp_queues = 0;
+	return 0;
+}
+
+static int gve_add_xdp_queues(struct gve_priv *priv)
+{
+	int err;
+
+	priv->num_xdp_queues = priv->tx_cfg.num_queues;
+
+	err = gve_alloc_xdp_qpls(priv);
+	if (err)
+		goto err;
+
+	err = gve_alloc_xdp_rings(priv);
+	if (err)
+		goto free_xdp_qpls;
+
+	err = gve_reg_xdp_info(priv, priv->dev);
+	if (err)
+		goto free_xdp_rings;
+
+	err = gve_register_xdp_qpls(priv);
+	if (err)
+		goto free_xdp_rings;
+
+	err = gve_create_xdp_rings(priv);
+	if (err)
+		goto free_xdp_rings;
+
+	return 0;
+
+free_xdp_rings:
+	gve_free_xdp_rings(priv);
+free_xdp_qpls:
+	gve_free_xdp_qpls(priv);
+err:
+	priv->num_xdp_queues = 0;
+	return err;
+}
+
+static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
+		       struct netlink_ext_ack *extack)
+{
+	struct bpf_prog *old_prog;
+	int err = 0;
+
+	old_prog = READ_ONCE(priv->xdp_prog);
+	if (!netif_carrier_ok(priv->dev)) {
+		WRITE_ONCE(priv->xdp_prog, prog);
+		if (old_prog)
+			bpf_prog_put(old_prog);
+		return 0;
+	}
+
+	gve_turndown(priv);
+	if (!old_prog && prog) {
+		/* Allocate XDP TX queues if an XDP program is
+		 * being installed */
+		err = gve_add_xdp_queues(priv);
+		if (err)
+			goto out;
+	} else if (old_prog && !prog) {
+		/* Remove XDP TX queues if an XDP program is
+		 * being uninstalled */
+		err = gve_remove_xdp_queues(priv);
+		if (err)
+			goto out;
+	}
+	WRITE_ONCE(priv->xdp_prog, prog);
+	if (old_prog)
+		bpf_prog_put(old_prog);
+
+out:
+	gve_turnup(priv);
+	queue_work(priv->gve_wq, &priv->service_task);
+	return err;
+}
+
+static int verify_xdp_configuration(struct net_device *dev)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+
+	if (dev->features & NETIF_F_LRO) {
+		netdev_warn(dev, "XDP is not supported when LRO is on.\n");
+		return -EOPNOTSUPP;
+	}
+
+	if (priv->queue_format != GVE_GQI_QPL_FORMAT) {
+		netdev_warn(dev, "XDP is not supported in mode %d.\n",
+			    priv->queue_format);
+		return -EOPNOTSUPP;
+	}
+
+	if (dev->mtu > (PAGE_SIZE / 2) - sizeof(struct ethhdr) - GVE_RX_PAD) {
+		netdev_warn(dev, "XDP is not supported for mtu %d.\n",
+			    dev->mtu);
+		return -EOPNOTSUPP;
+	}
+
+	if (priv->rx_cfg.num_queues != priv->tx_cfg.num_queues ||
+	    (2 * priv->tx_cfg.num_queues > priv->tx_cfg.max_queues)) {
+		netdev_warn(dev, "XDP load failed: The number of configured RX queues %d should be equal to the number of configured TX queues %d and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues %d",
+			    priv->rx_cfg.num_queues,
+			    priv->tx_cfg.num_queues,
+			    priv->tx_cfg.max_queues);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	int err;
+
+	err = verify_xdp_configuration(dev);
+	if (err)
+		return err;
+	switch (xdp->command) {
+	case XDP_SETUP_PROG:
+		return gve_set_xdp(priv, xdp->prog, xdp->extack);
+	default:
+		return -EINVAL;
+	}
+}
+
 int gve_adjust_queues(struct gve_priv *priv,
 		      struct gve_queue_config new_rx_config,
 		      struct gve_queue_config new_tx_config)
@@ -1305,6 +1676,7 @@ static const struct net_device_ops gve_netdev_ops = {
 	.ndo_get_stats64	=	gve_get_stats,
 	.ndo_tx_timeout         =       gve_tx_timeout,
 	.ndo_set_features	=	gve_set_features,
+	.ndo_bpf		=	gve_xdp,
 };
 
 static void gve_handle_status(struct gve_priv *priv, u32 status)
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 051a15e4f1af..3241f6ea29be 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -8,6 +8,8 @@
 #include "gve_adminq.h"
 #include "gve_utils.h"
 #include <linux/etherdevice.h>
+#include <linux/filter.h>
+#include <net/xdp.h>
 
 static void gve_rx_free_buffer(struct device *dev,
 			       struct gve_rx_slot_page_info *page_info,
@@ -591,6 +593,43 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
 	return skb;
 }
 
+static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
+			 struct xdp_buff *xdp, struct bpf_prog *xprog,
+			 int xdp_act)
+{
+	struct gve_tx_ring *tx;
+	int tx_qid;
+	int err;
+
+	switch (xdp_act) {
+	case XDP_ABORTED:
+	case XDP_DROP:
+	default:
+		break;
+	case XDP_TX:
+		tx_qid = gve_xdp_tx_queue_id(priv, rx->q_num);
+		tx = &priv->tx[tx_qid];
+		err = gve_xdp_xmit_one(priv, tx, xdp->data,
+				       xdp->data_end - xdp->data);
+
+		if (unlikely(err)) {
+			u64_stats_update_begin(&rx->statss);
+			rx->xdp_tx_errors++;
+			u64_stats_update_end(&rx->statss);
+		}
+		break;
+	case XDP_REDIRECT:
+		u64_stats_update_begin(&rx->statss);
+		rx->xdp_redirect_errors++;
+		u64_stats_update_end(&rx->statss);
+		break;
+	}
+	u64_stats_update_begin(&rx->statss);
+	if ((u32)xdp_act < GVE_XDP_ACTIONS)
+		rx->xdp_actions[xdp_act]++;
+	u64_stats_update_end(&rx->statss);
+}
+
 #define GVE_PKTCONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x))
 static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
 		   struct gve_rx_desc *desc, u32 idx,
@@ -603,9 +642,12 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
 	union gve_rx_data_slot *data_slot;
 	struct gve_priv *priv = rx->gve;
 	struct sk_buff *skb = NULL;
+	struct bpf_prog *xprog;
+	struct xdp_buff xdp;
 	dma_addr_t page_bus;
 	void *va;
 
+	u16 len = frag_size;
 	struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
 	bool is_first_frag = ctx->frag_cnt == 0;
 
@@ -645,9 +687,35 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
 	dma_sync_single_for_cpu(&priv->pdev->dev, page_bus,
 				PAGE_SIZE, DMA_FROM_DEVICE);
 	page_info->pad = is_first_frag ? GVE_RX_PAD : 0;
+	len -= page_info->pad;
 	frag_size -= page_info->pad;
 
-	skb = gve_rx_skb(priv, rx, page_info, napi, frag_size,
+	xprog = READ_ONCE(priv->xdp_prog);
+	if (xprog && is_only_frag) {
+		void *old_data;
+		int xdp_act;
+
+		xdp_init_buff(&xdp, rx->packet_buffer_size, &rx->xdp_rxq);
+		xdp_prepare_buff(&xdp, page_info->page_address +
+				 page_info->page_offset, GVE_RX_PAD,
+				 len, false);
+		old_data = xdp.data;
+		xdp_act = bpf_prog_run_xdp(xprog, &xdp);
+		if (xdp_act != XDP_PASS) {
+			gve_xdp_done(priv, rx, &xdp, xprog, xdp_act);
+			ctx->total_size += frag_size;
+			goto finish_ok_pkt;
+		}
+
+		page_info->pad += xdp.data - old_data;
+		len = xdp.data_end - xdp.data;
+
+		u64_stats_update_begin(&rx->statss);
+		rx->xdp_actions[XDP_PASS]++;
+		u64_stats_update_end(&rx->statss);
+	}
+
+	skb = gve_rx_skb(priv, rx, page_info, napi, len,
 			 data_slot, is_only_frag);
 	if (!skb) {
 		u64_stats_update_begin(&rx->statss);
@@ -773,6 +841,7 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
 static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 			     netdev_features_t feat)
 {
+	u64 xdp_txs = rx->xdp_actions[XDP_TX];
 	struct gve_rx_ctx *ctx = &rx->ctx;
 	struct gve_priv *priv = rx->gve;
 	struct gve_rx_cnts cnts = {0};
@@ -820,6 +889,9 @@ static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 		u64_stats_update_end(&rx->statss);
 	}
 
+	if (xdp_txs != rx->xdp_actions[XDP_TX])
+		gve_xdp_tx_flush(priv, rx->q_num);
+
 	/* restock ring slots */
 	if (!rx->data.raw_addressing) {
 		/* In QPL mode buffs are refilled as the desc are processed */
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index e24e73e74e33..d37515e6c10c 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -19,6 +19,14 @@ static inline void gve_tx_put_doorbell(struct gve_priv *priv,
 	iowrite32be(val, &priv->db_bar2[be32_to_cpu(q_resources->db_index)]);
 }
 
+void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid)
+{
+	u32 tx_qid = gve_xdp_tx_queue_id(priv, xdp_qid);
+	struct gve_tx_ring *tx = &priv->tx[tx_qid];
+
+	gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+}
+
 /* gvnic can only transmit from a Registered Segment.
  * We copy skb payloads into the registered segment before writing Tx
  * descriptors and ringing the Tx doorbell.
@@ -132,6 +140,50 @@ static void gve_tx_free_fifo(struct gve_tx_fifo *fifo, size_t bytes)
 	atomic_add(bytes, &fifo->available);
 }
 
+static size_t gve_tx_clear_buffer_state(struct gve_tx_buffer_state *info)
+{
+	size_t space_freed = 0;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
+		space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
+		info->iov[i].iov_len = 0;
+		info->iov[i].iov_padding = 0;
+	}
+	return space_freed;
+}
+
+static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
+			      u32 to_do)
+{
+	struct gve_tx_buffer_state *info;
+	u32 clean_end = tx->done + to_do;
+	u64 pkts = 0, bytes = 0;
+	size_t space_freed = 0;
+	u32 idx;
+
+	for (; tx->done < clean_end; tx->done++) {
+		idx = tx->done & tx->mask;
+		info = &tx->info[idx];
+
+		if (unlikely(!info->xdp.size))
+			continue;
+
+		bytes += info->xdp.size;
+		pkts++;
+
+		info->xdp.size = 0;
+		space_freed += gve_tx_clear_buffer_state(info);
+	}
+
+	gve_tx_free_fifo(&tx->tx_fifo, space_freed);
+	u64_stats_update_begin(&tx->statss);
+	tx->bytes_done += bytes;
+	tx->pkt_done += pkts;
+	u64_stats_update_end(&tx->statss);
+	return pkts;
+}
+
 static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
 			     u32 to_do, bool try_to_wake);
 
@@ -144,8 +196,12 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx)
 
 	gve_tx_remove_from_block(priv, idx);
 	slots = tx->mask + 1;
-	gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
-	netdev_tx_reset_queue(tx->netdev_txq);
+	if (tx->q_num < priv->tx_cfg.num_queues) {
+		gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
+		netdev_tx_reset_queue(tx->netdev_txq);
+	} else {
+		gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
+	}
 
 	dma_free_coherent(hdev, sizeof(*tx->q_resources),
 			  tx->q_resources, tx->q_resources_bus);
@@ -213,7 +269,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
 
 	netif_dbg(priv, drv, priv->dev, "tx[%d]->bus=%lx\n", idx,
 		  (unsigned long)tx->bus);
-	tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
+	if (idx < priv->tx_cfg.num_queues)
+		tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
 	gve_tx_add_to_block(priv, idx);
 
 	return 0;
@@ -657,6 +714,65 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;
 }
 
+static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
+			   void *data, int len)
+{
+	int pad, nfrags, ndescs, iovi, offset;
+	struct gve_tx_buffer_state *info;
+	u32 reqi = tx->req;
+
+	pad = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo, len);
+	if (pad >= GVE_TX_MAX_HEADER_SIZE)
+		pad = 0;
+	info = &tx->info[reqi & tx->mask];
+	info->xdp.size = len;
+
+	nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, pad + len,
+				   &info->iov[0]);
+	iovi = pad > 0;
+	ndescs = nfrags - iovi;
+	offset = 0;
+
+	while (iovi < nfrags) {
+		if (!offset)
+			gve_tx_fill_pkt_desc(&tx->desc[reqi & tx->mask], 0,
+					     CHECKSUM_NONE, false, 0, ndescs,
+					     info->iov[iovi].iov_len,
+					     info->iov[iovi].iov_offset, len);
+		else
+			gve_tx_fill_seg_desc(&tx->desc[reqi & tx->mask],
+					     0, 0, false, false,
+					     info->iov[iovi].iov_len,
+					     info->iov[iovi].iov_offset);
+
+		memcpy(tx->tx_fifo.base + info->iov[iovi].iov_offset,
+		       data + offset, info->iov[iovi].iov_len);
+		gve_dma_sync_for_device(&priv->pdev->dev,
+					tx->tx_fifo.qpl->page_buses,
+					info->iov[iovi].iov_offset,
+					info->iov[iovi].iov_len);
+		offset += info->iov[iovi].iov_len;
+		iovi++;
+		reqi++;
+	}
+
+	return ndescs;
+}
+
+int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
+		     void *data, int len)
+{
+	int nsegs;
+
+	if (!gve_can_tx(tx, len + GVE_TX_MAX_HEADER_SIZE - 1))
+		return -EBUSY;
+
+	nsegs = gve_tx_fill_xdp(priv, tx, data, len);
+	tx->req += nsegs;
+
+	return 0;
+}
+
 #define GVE_TX_START_THRESH	PAGE_SIZE
 
 static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
@@ -666,7 +782,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
 	u64 pkts = 0, bytes = 0;
 	size_t space_freed = 0;
 	struct sk_buff *skb;
-	int i, j;
+	int j;
 	u32 idx;
 
 	for (j = 0; j < to_do; j++) {
@@ -689,12 +805,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
 			dev_consume_skb_any(skb);
 			if (tx->raw_addressing)
 				continue;
-			/* FIFO free */
-			for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
-				space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
-				info->iov[i].iov_len = 0;
-				info->iov[i].iov_padding = 0;
-			}
+			space_freed += gve_tx_clear_buffer_state(info);
 		}
 	}
 
@@ -729,6 +840,24 @@ u32 gve_tx_load_event_counter(struct gve_priv *priv,
 	return be32_to_cpu(counter);
 }
 
+bool gve_xdp_poll(struct gve_notify_block *block, int budget)
+{
+	struct gve_priv *priv = block->priv;
+	struct gve_tx_ring *tx = block->tx;
+	u32 nic_done;
+	u32 to_do;
+
+	/* If budget is 0, do all the work */
+	if (budget == 0)
+		budget = INT_MAX;
+
+	/* Find out how much work there is to be done */
+	nic_done = gve_tx_load_event_counter(priv, tx);
+	to_do = min_t(u32, (nic_done - tx->done), budget);
+	gve_clean_xdp_done(priv, tx, to_do);
+	return nic_done != tx->done;
+}
+
 bool gve_tx_poll(struct gve_notify_block *block, int budget)
 {
 	struct gve_priv *priv = block->priv;
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH net-next v3 4/5] gve: Add XDP REDIRECT support for GQI-QPL format
  2023-03-13 20:26 [PATCH net-next v3 0/5] gve: Add XDP support for GQI-QPL format Praveen Kaligineedi
                   ` (2 preceding siblings ...)
  2023-03-13 20:26 ` [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format Praveen Kaligineedi
@ 2023-03-13 20:26 ` Praveen Kaligineedi
  2023-03-13 20:26 ` [PATCH net-next v3 5/5] gve: Add AF_XDP zero-copy " Praveen Kaligineedi
  4 siblings, 0 replies; 11+ messages in thread
From: Praveen Kaligineedi @ 2023-03-13 20:26 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, maciej.fijalkowski, Praveen Kaligineedi, Jeroen de Borst

This patch contains the following changes:
1) Support for the XDP REDIRECT action on RX
2) ndo_xdp_xmit callback support

In the GQI-QPL queue format, the driver must allocate a fixed-size
memory region, whose size is specified by the vNIC device, for RX/TX
and register this memory as a bounce buffer with the vNIC device when
a queue is created. The number of pages in the bounce buffer is
limited, and the pages need to be made available to the vNIC again by
copying the RX data out, to prevent head-of-line blocking. XDP_REDIRECT
packets are therefore immediately copied to a newly allocated page.

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>

---
Changed in v2:
- Moved xdp tx spin lock code from the patch adding XDP_TX support to this
  patch.
- Provide explanation on why packets are copied to a different page on
  XDP_REDIRECT.

Changed in v3:
- no changes
---
 drivers/net/ethernet/google/gve/gve.h         | 15 +++++-
 drivers/net/ethernet/google/gve/gve_ethtool.c | 26 ++++++----
 drivers/net/ethernet/google/gve/gve_main.c    | 17 +++++++
 drivers/net/ethernet/google/gve/gve_rx.c      | 47 ++++++++++++++++--
 drivers/net/ethernet/google/gve/gve_tx.c      | 48 +++++++++++++++++--
 5 files changed, 136 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 8d5234d4ba67..a3b2aec2c575 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -236,6 +236,7 @@ struct gve_rx_ring {
 	u64 rx_frag_alloc_cnt; /* free-running count of rx page allocations */
 	u64 xdp_tx_errors;
 	u64 xdp_redirect_errors;
+	u64 xdp_alloc_fails;
 	u64 xdp_actions[GVE_XDP_ACTIONS];
 	u32 q_num; /* queue index */
 	u32 ntfy_id; /* notification block index */
@@ -247,6 +248,7 @@ struct gve_rx_ring {
 
 	/* XDP stuff */
 	struct xdp_rxq_info xdp_rxq;
+	struct page_frag_cache page_cache; /* Page cache to allocate XDP frames */
 };
 
 /* A TX desc ring entry */
@@ -267,7 +269,10 @@ struct gve_tx_iovec {
  * ring entry but only used for a pkt_desc not a seg_desc
  */
 struct gve_tx_buffer_state {
-	struct sk_buff *skb; /* skb for this pkt */
+	union {
+		struct sk_buff *skb; /* skb for this pkt */
+		struct xdp_frame *xdp_frame; /* xdp_frame */
+	};
 	struct {
 		u16 size; /* size of xmitted xdp pkt */
 	} xdp;
@@ -385,6 +390,8 @@ struct gve_tx_ring {
 		struct {
 			/* Spinlock for when cleanup in progress */
 			spinlock_t clean_lock;
+			/* Spinlock for XDP tx traffic */
+			spinlock_t xdp_lock;
 		};
 
 		/* DQO fields. */
@@ -462,6 +469,8 @@ struct gve_tx_ring {
 	dma_addr_t q_resources_bus; /* dma address of the queue resources */
 	dma_addr_t complq_bus_dqo; /* dma address of the dqo.compl_ring */
 	struct u64_stats_sync statss; /* sync stats for 32bit archs */
+	u64 xdp_xmit;
+	u64 xdp_xmit_errors;
 } ____cacheline_aligned;
 
 /* Wraps the info for one irq including the napi struct and the queues
@@ -919,8 +928,10 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
 		   enum dma_data_direction);
 /* tx handling */
 netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
+int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+		 u32 flags);
 int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
-		     void *data, int len);
+		     void *data, int len, void *frame_p);
 void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
 bool gve_tx_poll(struct gve_notify_block *block, int budget);
 bool gve_xdp_poll(struct gve_notify_block *block, int budget);
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index 067b393ccf9d..23db0f3534a8 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -56,13 +56,14 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
 	"rx_drops_packet_over_mru[%u]", "rx_drops_invalid_checksum[%u]",
 	"rx_xdp_aborted[%u]", "rx_xdp_drop[%u]", "rx_xdp_pass[%u]",
 	"rx_xdp_tx[%u]", "rx_xdp_redirect[%u]",
-	"rx_xdp_tx_errors[%u]", "rx_xdp_redirect_errors[%u]",
+	"rx_xdp_tx_errors[%u]", "rx_xdp_redirect_errors[%u]", "rx_xdp_alloc_fails[%u]",
 };
 
 static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
 	"tx_posted_desc[%u]", "tx_completed_desc[%u]", "tx_consumed_desc[%u]", "tx_bytes[%u]",
 	"tx_wake[%u]", "tx_stop[%u]", "tx_event_counter[%u]",
 	"tx_dma_mapping_error[%u]",
+	"tx_xdp_xmit[%u]", "tx_xdp_xmit_errors[%u]"
 };
 
 static const char gve_gstrings_adminq_stats[][ETH_GSTRING_LEN] = {
@@ -313,9 +314,10 @@ gve_get_ethtool_stats(struct net_device *netdev,
 					data[i + j] = rx->xdp_actions[j];
 				data[i + j++] = rx->xdp_tx_errors;
 				data[i + j++] = rx->xdp_redirect_errors;
+				data[i + j++] = rx->xdp_alloc_fails;
 			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
 						       start));
-			i += GVE_XDP_ACTIONS + 2; /* XDP rx counters */
+			i += GVE_XDP_ACTIONS + 3; /* XDP rx counters */
 		}
 	} else {
 		i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
@@ -371,13 +373,21 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			if (skip_nic_stats) {
 				/* skip NIC tx stats */
 				i += NIC_TX_STATS_REPORT_NUM;
-				continue;
-			}
-			for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) {
-				u64 value =
-				be64_to_cpu(report_stats[tx_qid_to_stats_idx[ring] + j].value);
-				data[i++] = value;
+			} else {
+				stats_idx = tx_qid_to_stats_idx[ring];
+				for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) {
+					u64 value =
+						be64_to_cpu(report_stats[stats_idx + j].value);
+					data[i++] = value;
+				}
 			}
+			do {
+				start = u64_stats_fetch_begin(&priv->tx[ring].statss);
+				data[i] = tx->xdp_xmit;
+				data[i + 1] = tx->xdp_xmit_errors;
+			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+						       start));
+			i += 2; /* XDP tx counters */
 		}
 	} else {
 		i += num_tx_queues * NUM_GVE_TX_CNTS;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 7d3f15cf79ed..f0fc3b2a91ee 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1230,6 +1230,21 @@ static void gve_unreg_xdp_info(struct gve_priv *priv)
 	}
 }
 
+static void gve_drain_page_cache(struct gve_priv *priv)
+{
+	struct page_frag_cache *nc;
+	int i;
+
+	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+		nc = &priv->rx[i].page_cache;
+		if (nc->va) {
+			__page_frag_cache_drain(virt_to_page(nc->va),
+						nc->pagecnt_bias);
+			nc->va = NULL;
+		}
+	}
+}
+
 static int gve_open(struct net_device *dev)
 {
 	struct gve_priv *priv = netdev_priv(dev);
@@ -1313,6 +1328,7 @@ static int gve_close(struct net_device *dev)
 	netif_carrier_off(dev);
 	if (gve_get_device_rings_ok(priv)) {
 		gve_turndown(priv);
+		gve_drain_page_cache(priv);
 		err = gve_destroy_rings(priv);
 		if (err)
 			goto err;
@@ -1677,6 +1693,7 @@ static const struct net_device_ops gve_netdev_ops = {
 	.ndo_tx_timeout         =       gve_tx_timeout,
 	.ndo_set_features	=	gve_set_features,
 	.ndo_bpf		=	gve_xdp,
+	.ndo_xdp_xmit		=	gve_xdp_xmit,
 };
 
 static void gve_handle_status(struct gve_priv *priv, u32 status)
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 3241f6ea29be..ed4b5a540e6d 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -593,6 +593,35 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
 	return skb;
 }
 
+static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
+			    struct xdp_buff *orig, struct bpf_prog *xdp_prog)
+{
+	int total_len, len = orig->data_end - orig->data;
+	int headroom = XDP_PACKET_HEADROOM;
+	struct xdp_buff new;
+	void *frame;
+	int err;
+
+	total_len = headroom + SKB_DATA_ALIGN(len) +
+		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC);
+	if (!frame) {
+		u64_stats_update_begin(&rx->statss);
+		rx->xdp_alloc_fails++;
+		u64_stats_update_end(&rx->statss);
+		return -ENOMEM;
+	}
+	xdp_init_buff(&new, total_len, &rx->xdp_rxq);
+	xdp_prepare_buff(&new, frame, headroom, len, false);
+	memcpy(new.data, orig->data, len);
+
+	err = xdp_do_redirect(dev, &new, xdp_prog);
+	if (err)
+		page_frag_free(frame);
+
+	return err;
+}
+
 static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
 			 struct xdp_buff *xdp, struct bpf_prog *xprog,
 			 int xdp_act)
@@ -609,8 +638,10 @@ static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
 	case XDP_TX:
 		tx_qid = gve_xdp_tx_queue_id(priv, rx->q_num);
 		tx = &priv->tx[tx_qid];
+		spin_lock(&tx->xdp_lock);
 		err = gve_xdp_xmit_one(priv, tx, xdp->data,
-				       xdp->data_end - xdp->data);
+				       xdp->data_end - xdp->data, NULL);
+		spin_unlock(&tx->xdp_lock);
 
 		if (unlikely(err)) {
 			u64_stats_update_begin(&rx->statss);
@@ -619,9 +650,13 @@ static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
 		}
 		break;
 	case XDP_REDIRECT:
-		u64_stats_update_begin(&rx->statss);
-		rx->xdp_redirect_errors++;
-		u64_stats_update_end(&rx->statss);
+		err = gve_xdp_redirect(priv->dev, rx, xdp, xprog);
+
+		if (unlikely(err)) {
+			u64_stats_update_begin(&rx->statss);
+			rx->xdp_redirect_errors++;
+			u64_stats_update_end(&rx->statss);
+		}
 		break;
 	}
 	u64_stats_update_begin(&rx->statss);
@@ -841,6 +876,7 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
 static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 			     netdev_features_t feat)
 {
+	u64 xdp_redirects = rx->xdp_actions[XDP_REDIRECT];
 	u64 xdp_txs = rx->xdp_actions[XDP_TX];
 	struct gve_rx_ctx *ctx = &rx->ctx;
 	struct gve_priv *priv = rx->gve;
@@ -892,6 +928,9 @@ static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 	if (xdp_txs != rx->xdp_actions[XDP_TX])
 		gve_xdp_tx_flush(priv, rx->q_num);
 
+	if (xdp_redirects != rx->xdp_actions[XDP_REDIRECT])
+		xdp_do_flush();
+
 	/* restock ring slots */
 	if (!rx->data.raw_addressing) {
 		/* In QPL mode buffs are refilled as the desc are processed */
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index d37515e6c10c..f047ca0c29d9 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -173,6 +173,10 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
 		pkts++;
 
 		info->xdp.size = 0;
+		if (info->xdp_frame) {
+			xdp_return_frame(info->xdp_frame);
+			info->xdp_frame = NULL;
+		}
 		space_freed += gve_tx_clear_buffer_state(info);
 	}
 
@@ -233,6 +237,7 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
 	/* Make sure everything is zeroed to start */
 	memset(tx, 0, sizeof(*tx));
 	spin_lock_init(&tx->clean_lock);
+	spin_lock_init(&tx->xdp_lock);
 	tx->q_num = idx;
 
 	tx->mask = slots - 1;
@@ -715,7 +720,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
 }
 
 static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
-			   void *data, int len)
+			   void *data, int len, void *frame_p)
 {
 	int pad, nfrags, ndescs, iovi, offset;
 	struct gve_tx_buffer_state *info;
@@ -725,6 +730,7 @@ static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
 	if (pad >= GVE_TX_MAX_HEADER_SIZE)
 		pad = 0;
 	info = &tx->info[reqi & tx->mask];
+	info->xdp_frame = frame_p;
 	info->xdp.size = len;
 
 	nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, pad + len,
@@ -759,15 +765,51 @@ static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
 	return ndescs;
 }
 
+int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+		 u32 flags)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_tx_ring *tx;
+	int i, err = 0, qid;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	qid = gve_xdp_tx_queue_id(priv,
+				  smp_processor_id() % priv->num_xdp_queues);
+
+	tx = &priv->tx[qid];
+
+	spin_lock(&tx->xdp_lock);
+	for (i = 0; i < n; i++) {
+		err = gve_xdp_xmit_one(priv, tx, frames[i]->data,
+				       frames[i]->len, frames[i]);
+		if (err)
+			break;
+	}
+
+	if (flags & XDP_XMIT_FLUSH)
+		gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+
+	spin_unlock(&tx->xdp_lock);
+
+	u64_stats_update_begin(&tx->statss);
+	tx->xdp_xmit += n;
+	tx->xdp_xmit_errors += n - i;
+	u64_stats_update_end(&tx->statss);
+
+	return i ? i : err;
+}
+
 int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
-		     void *data, int len)
+		     void *data, int len, void *frame_p)
 {
 	int nsegs;
 
 	if (!gve_can_tx(tx, len + GVE_TX_MAX_HEADER_SIZE - 1))
 		return -EBUSY;
 
-	nsegs = gve_tx_fill_xdp(priv, tx, data, len);
+	nsegs = gve_tx_fill_xdp(priv, tx, data, len, frame_p);
 	tx->req += nsegs;
 
 	return 0;
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH net-next v3 5/5] gve: Add AF_XDP zero-copy support for GQI-QPL format
  2023-03-13 20:26 [PATCH net-next v3 0/5] gve: Add XDP support for GQI-QPL format Praveen Kaligineedi
                   ` (3 preceding siblings ...)
  2023-03-13 20:26 ` [PATCH net-next v3 4/5] gve: Add XDP REDIRECT " Praveen Kaligineedi
@ 2023-03-13 20:26 ` Praveen Kaligineedi
  4 siblings, 0 replies; 11+ messages in thread
From: Praveen Kaligineedi @ 2023-03-13 20:26 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, maciej.fijalkowski, Praveen Kaligineedi, Jeroen de Borst

Adding AF_XDP zero-copy support.

Note: Although these changes support AF_XDP sockets in zero-copy
mode, there is still a copy happening within the driver between the
XSK buffer pool and the QPL bounce buffers in GQI-QPL format.
In the GQI-QPL queue format, the driver must allocate a fixed-size
memory region, whose size is specified by the vNIC device, for RX/TX
and register this memory as a bounce buffer with the vNIC device when
a queue is created. The number of pages in the bounce buffer is
limited, and the pages need to be made available to the vNIC again by
copying the RX data out, to prevent head-of-line blocking. Therefore,
we cannot pass the XSK buffer pool to the vNIC.

The number of copies on the RX path from the bounce buffer to the XSK
buffer is 2 for AF_XDP copy mode (bounce buffer -> allocated page frag
-> XSK buffer) and 1 for AF_XDP zero-copy mode (bounce buffer -> XSK
buffer).

This patch contains the following changes:
1) Enable and disable the XSK buffer pool
2) Copy XDP packets from the QPL bounce buffers to the XSK buffer on RX
3) Copy XDP packets from the XSK buffer to the QPL bounce buffers and
   ring the doorbell as part of the XDP TX NAPI poll
4) ndo_xsk_wakeup callback support

Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>

---
Changed in v2:
- Register xsk rxq only when XSK buff pool is enabled
- Removed code accessing internal xsk_buff_pool fields
- Removed sleep-driven code when disabling the XSK buff pool. Disable
napi and re-enable it after disabling the XSK pool.
- Make sure that we clean up dma mappings on XSK pool disable
- Use napi_if_scheduled_mark_missed to avoid unnecessary napi move
to the CPU calling ndo_xsk_wakeup()
- Provide an explanation for why the XSK buff pool cannot be passed to
  the NIC.

Changed in v3:
- no changes
---
 drivers/net/ethernet/google/gve/gve.h         |   7 +
 drivers/net/ethernet/google/gve/gve_ethtool.c |  14 +-
 drivers/net/ethernet/google/gve/gve_main.c    | 173 +++++++++++++++++-
 drivers/net/ethernet/google/gve/gve_rx.c      |  30 +++
 drivers/net/ethernet/google/gve/gve_tx.c      |  58 +++++-
 5 files changed, 273 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index a3b2aec2c575..e214b51d3c8b 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -248,6 +248,8 @@ struct gve_rx_ring {
 
 	/* XDP stuff */
 	struct xdp_rxq_info xdp_rxq;
+	struct xdp_rxq_info xsk_rxq;
+	struct xsk_buff_pool *xsk_pool;
 	struct page_frag_cache page_cache; /* Page cache to allocate XDP frames */
 };
 
@@ -275,6 +277,7 @@ struct gve_tx_buffer_state {
 	};
 	struct {
 		u16 size; /* size of xmitted xdp pkt */
+		u8 is_xsk; /* xsk buff */
 	} xdp;
 	union {
 		struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
@@ -469,6 +472,10 @@ struct gve_tx_ring {
 	dma_addr_t q_resources_bus; /* dma address of the queue resources */
 	dma_addr_t complq_bus_dqo; /* dma address of the dqo.compl_ring */
 	struct u64_stats_sync statss; /* sync stats for 32bit archs */
+	struct xsk_buff_pool *xsk_pool;
+	u32 xdp_xsk_wakeup;
+	u32 xdp_xsk_done;
+	u64 xdp_xsk_sent;
 	u64 xdp_xmit;
 	u64 xdp_xmit_errors;
 } ____cacheline_aligned;
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index 23db0f3534a8..b18804e934d3 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -62,8 +62,8 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
 static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
 	"tx_posted_desc[%u]", "tx_completed_desc[%u]", "tx_consumed_desc[%u]", "tx_bytes[%u]",
 	"tx_wake[%u]", "tx_stop[%u]", "tx_event_counter[%u]",
-	"tx_dma_mapping_error[%u]",
-	"tx_xdp_xmit[%u]", "tx_xdp_xmit_errors[%u]"
+	"tx_dma_mapping_error[%u]", "tx_xsk_wakeup[%u]",
+	"tx_xsk_done[%u]", "tx_xsk_sent[%u]", "tx_xdp_xmit[%u]", "tx_xdp_xmit_errors[%u]"
 };
 
 static const char gve_gstrings_adminq_stats[][ETH_GSTRING_LEN] = {
@@ -381,13 +381,17 @@ gve_get_ethtool_stats(struct net_device *netdev,
 					data[i++] = value;
 				}
 			}
+			/* XDP xsk counters */
+			data[i++] = tx->xdp_xsk_wakeup;
+			data[i++] = tx->xdp_xsk_done;
 			do {
 				start = u64_stats_fetch_begin(&priv->tx[ring].statss);
-				data[i] = tx->xdp_xmit;
-				data[i + 1] = tx->xdp_xmit_errors;
+				data[i] = tx->xdp_xsk_sent;
+				data[i + 1] = tx->xdp_xmit;
+				data[i + 2] = tx->xdp_xmit_errors;
 			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
 						       start));
-			i += 2; /* XDP tx counters */
+			i += 3; /* XDP tx counters */
 		}
 	} else {
 		i += num_tx_queues * NUM_GVE_TX_CNTS;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index f0fc3b2a91ee..1dd248a5b555 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -17,6 +17,7 @@
 #include <linux/utsname.h>
 #include <linux/version.h>
 #include <net/sch_generic.h>
+#include <net/xdp_sock_drv.h>
 #include "gve.h"
 #include "gve_dqo.h"
 #include "gve_adminq.h"
@@ -1188,6 +1189,7 @@ static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
 	struct gve_rx_ring *rx;
 	int err = 0;
 	int i, j;
+	u32 tx_qid;
 
 	if (!priv->num_xdp_queues)
 		return 0;
@@ -1204,6 +1206,24 @@ static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
 						 MEM_TYPE_PAGE_SHARED, NULL);
 		if (err)
 			goto err;
+		rx->xsk_pool = xsk_get_pool_from_qid(dev, i);
+		if (rx->xsk_pool) {
+			err = xdp_rxq_info_reg(&rx->xsk_rxq, dev, i,
+					       napi->napi_id);
+			if (err)
+				goto err;
+			err = xdp_rxq_info_reg_mem_model(&rx->xsk_rxq,
+							 MEM_TYPE_XSK_BUFF_POOL, NULL);
+			if (err)
+				goto err;
+			xsk_pool_set_rxq_info(rx->xsk_pool,
+					      &rx->xsk_rxq);
+		}
+	}
+
+	for (i = 0; i < priv->num_xdp_queues; i++) {
+		tx_qid = gve_xdp_tx_queue_id(priv, i);
+		priv->tx[tx_qid].xsk_pool = xsk_get_pool_from_qid(dev, i);
 	}
 	return 0;
 
@@ -1212,13 +1232,15 @@ static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
 		rx = &priv->rx[j];
 		if (xdp_rxq_info_is_reg(&rx->xdp_rxq))
 			xdp_rxq_info_unreg(&rx->xdp_rxq);
+		if (xdp_rxq_info_is_reg(&rx->xsk_rxq))
+			xdp_rxq_info_unreg(&rx->xsk_rxq);
 	}
 	return err;
 }
 
 static void gve_unreg_xdp_info(struct gve_priv *priv)
 {
-	int i;
+	int i, tx_qid;
 
 	if (!priv->num_xdp_queues)
 		return;
@@ -1227,6 +1249,15 @@ static void gve_unreg_xdp_info(struct gve_priv *priv)
 		struct gve_rx_ring *rx = &priv->rx[i];
 
 		xdp_rxq_info_unreg(&rx->xdp_rxq);
+		if (rx->xsk_pool) {
+			xdp_rxq_info_unreg(&rx->xsk_rxq);
+			rx->xsk_pool = NULL;
+		}
+	}
+
+	for (i = 0; i < priv->num_xdp_queues; i++) {
+		tx_qid = gve_xdp_tx_queue_id(priv, i);
+		priv->tx[tx_qid].xsk_pool = NULL;
 	}
 }
 
@@ -1450,6 +1481,140 @@ static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
 	return err;
 }
 
+static int gve_xsk_pool_enable(struct net_device *dev,
+			       struct xsk_buff_pool *pool,
+			       u16 qid)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct napi_struct *napi;
+	struct gve_rx_ring *rx;
+	int tx_qid;
+	int err;
+
+	if (qid >= priv->rx_cfg.num_queues) {
+		dev_err(&priv->pdev->dev, "xsk pool invalid qid %d", qid);
+		return -EINVAL;
+	}
+	if (xsk_pool_get_rx_frame_size(pool) <
+	     priv->dev->max_mtu + sizeof(struct ethhdr)) {
+		dev_err(&priv->pdev->dev, "xsk pool frame_len too small");
+		return -EINVAL;
+	}
+
+	err = xsk_pool_dma_map(pool, &priv->pdev->dev,
+			       DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+	if (err)
+		return err;
+
+	/* If XDP prog is not installed, return */
+	if (!priv->xdp_prog)
+		return 0;
+
+	rx = &priv->rx[qid];
+	napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
+	err = xdp_rxq_info_reg(&rx->xsk_rxq, dev, qid, napi->napi_id);
+	if (err)
+		goto err;
+
+	err = xdp_rxq_info_reg_mem_model(&rx->xsk_rxq,
+					 MEM_TYPE_XSK_BUFF_POOL, NULL);
+	if (err)
+		goto err;
+
+	xsk_pool_set_rxq_info(pool, &rx->xsk_rxq);
+	rx->xsk_pool = pool;
+
+	tx_qid = gve_xdp_tx_queue_id(priv, qid);
+	priv->tx[tx_qid].xsk_pool = pool;
+
+	return 0;
+err:
+	if (xdp_rxq_info_is_reg(&rx->xsk_rxq))
+		xdp_rxq_info_unreg(&rx->xsk_rxq);
+
+	xsk_pool_dma_unmap(pool,
+			   DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+	return err;
+}
+
+static int gve_xsk_pool_disable(struct net_device *dev,
+				u16 qid)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct napi_struct *napi_rx;
+	struct napi_struct *napi_tx;
+	struct xsk_buff_pool *pool;
+	int tx_qid;
+
+	pool = xsk_get_pool_from_qid(dev, qid);
+	if (!pool)
+		return -EINVAL;
+	if (qid >= priv->rx_cfg.num_queues)
+		return -EINVAL;
+
+	/* If XDP prog is not installed, unmap DMA and return */
+	if (!priv->xdp_prog)
+		goto done;
+
+	tx_qid = gve_xdp_tx_queue_id(priv, qid);
+	if (!netif_running(dev)) {
+		priv->rx[qid].xsk_pool = NULL;
+		xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
+		priv->tx[tx_qid].xsk_pool = NULL;
+		goto done;
+	}
+
+	napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi;
+	napi_disable(napi_rx); /* make sure current rx poll is done */
+
+	napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi;
+	napi_disable(napi_tx); /* make sure current tx poll is done */
+
+	priv->rx[qid].xsk_pool = NULL;
+	xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
+	priv->tx[tx_qid].xsk_pool = NULL;
+	smp_mb(); /* Make sure it is visible to the workers on datapath */
+
+	napi_enable(napi_rx);
+	if (gve_rx_work_pending(&priv->rx[qid]))
+		napi_schedule(napi_rx);
+
+	napi_enable(napi_tx);
+	if (gve_tx_clean_pending(priv, &priv->tx[tx_qid]))
+		napi_schedule(napi_tx);
+
+done:
+	xsk_pool_dma_unmap(pool,
+			   DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+	return 0;
+}
+
+static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id);
+
+	if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog)
+		return -EINVAL;
+
+	if (flags & XDP_WAKEUP_TX) {
+		struct gve_tx_ring *tx = &priv->tx[tx_queue_id];
+		struct napi_struct *napi =
+			&priv->ntfy_blocks[tx->ntfy_id].napi;
+
+		if (!napi_if_scheduled_mark_missed(napi)) {
+			/* Call local_bh_enable to trigger SoftIRQ processing */
+			local_bh_disable();
+			napi_schedule(napi);
+			local_bh_enable();
+		}
+
+		tx->xdp_xsk_wakeup++;
+	}
+
+	return 0;
+}
+
 static int verify_xdp_configuration(struct net_device *dev)
 {
 	struct gve_priv *priv = netdev_priv(dev);
@@ -1493,6 +1658,11 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return gve_set_xdp(priv, xdp->prog, xdp->extack);
+	case XDP_SETUP_XSK_POOL:
+		if (xdp->xsk.pool)
+			return gve_xsk_pool_enable(dev, xdp->xsk.pool, xdp->xsk.queue_id);
+		else
+			return gve_xsk_pool_disable(dev, xdp->xsk.queue_id);
 	default:
 		return -EINVAL;
 	}
@@ -1694,6 +1864,7 @@ static const struct net_device_ops gve_netdev_ops = {
 	.ndo_set_features	=	gve_set_features,
 	.ndo_bpf		=	gve_xdp,
 	.ndo_xdp_xmit		=	gve_xdp_xmit,
+	.ndo_xsk_wakeup		=	gve_xsk_wakeup,
 };
 
 static void gve_handle_status(struct gve_priv *priv, u32 status)
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index ed4b5a540e6d..d1da7413dc4d 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -10,6 +10,7 @@
 #include <linux/etherdevice.h>
 #include <linux/filter.h>
 #include <net/xdp.h>
+#include <net/xdp_sock_drv.h>
 
 static void gve_rx_free_buffer(struct device *dev,
 			       struct gve_rx_slot_page_info *page_info,
@@ -593,6 +594,31 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
 	return skb;
 }
 
+static int gve_xsk_pool_redirect(struct net_device *dev,
+				 struct gve_rx_ring *rx,
+				 void *data, int len,
+				 struct bpf_prog *xdp_prog)
+{
+	struct xdp_buff *xdp;
+	int err;
+
+	if (rx->xsk_pool->frame_len < len)
+		return -E2BIG;
+	xdp = xsk_buff_alloc(rx->xsk_pool);
+	if (!xdp) {
+		u64_stats_update_begin(&rx->statss);
+		rx->xdp_alloc_fails++;
+		u64_stats_update_end(&rx->statss);
+		return -ENOMEM;
+	}
+	xdp->data_end = xdp->data + len;
+	memcpy(xdp->data, data, len);
+	err = xdp_do_redirect(dev, xdp, xdp_prog);
+	if (err)
+		xsk_buff_free(xdp);
+	return err;
+}
+
 static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
 			    struct xdp_buff *orig, struct bpf_prog *xdp_prog)
 {
@@ -602,6 +628,10 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
 	void *frame;
 	int err;
 
+	if (rx->xsk_pool)
+		return gve_xsk_pool_redirect(dev, rx, orig->data,
+					     len, xdp_prog);
+
 	total_len = headroom + SKB_DATA_ALIGN(len) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC);
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index f047ca0c29d9..508c2121ec58 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -11,6 +11,7 @@
 #include <linux/tcp.h>
 #include <linux/vmalloc.h>
 #include <linux/skbuff.h>
+#include <net/xdp_sock_drv.h>
 
 static inline void gve_tx_put_doorbell(struct gve_priv *priv,
 				       struct gve_queue_resources *q_resources,
@@ -160,6 +161,7 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
 	u32 clean_end = tx->done + to_do;
 	u64 pkts = 0, bytes = 0;
 	size_t space_freed = 0;
+	u32 xsk_complete = 0;
 	u32 idx;
 
 	for (; tx->done < clean_end; tx->done++) {
@@ -171,6 +173,7 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
 
 		bytes += info->xdp.size;
 		pkts++;
+		xsk_complete += info->xdp.is_xsk;
 
 		info->xdp.size = 0;
 		if (info->xdp_frame) {
@@ -181,6 +184,8 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
 	}
 
 	gve_tx_free_fifo(&tx->tx_fifo, space_freed);
+	if (xsk_complete > 0 && tx->xsk_pool)
+		xsk_tx_completed(tx->xsk_pool, xsk_complete);
 	u64_stats_update_begin(&tx->statss);
 	tx->bytes_done += bytes;
 	tx->pkt_done += pkts;
@@ -720,7 +725,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
 }
 
 static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
-			   void *data, int len, void *frame_p)
+			   void *data, int len, void *frame_p, bool is_xsk)
 {
 	int pad, nfrags, ndescs, iovi, offset;
 	struct gve_tx_buffer_state *info;
@@ -732,6 +737,7 @@ static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
 	info = &tx->info[reqi & tx->mask];
 	info->xdp_frame = frame_p;
 	info->xdp.size = len;
+	info->xdp.is_xsk = is_xsk;
 
 	nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, pad + len,
 				   &info->iov[0]);
@@ -809,7 +815,7 @@ int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
 	if (!gve_can_tx(tx, len + GVE_TX_MAX_HEADER_SIZE - 1))
 		return -EBUSY;
 
-	nsegs = gve_tx_fill_xdp(priv, tx, data, len, frame_p);
+	nsegs = gve_tx_fill_xdp(priv, tx, data, len, frame_p, false);
 	tx->req += nsegs;
 
 	return 0;
@@ -882,11 +888,43 @@ u32 gve_tx_load_event_counter(struct gve_priv *priv,
 	return be32_to_cpu(counter);
 }
 
+static int gve_xsk_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
+		      int budget)
+{
+	struct xdp_desc desc;
+	int sent = 0, nsegs;
+	void *data;
+
+	spin_lock(&tx->xdp_lock);
+	while (sent < budget) {
+		if (!gve_can_tx(tx, GVE_TX_START_THRESH))
+			goto out;
+
+		if (!xsk_tx_peek_desc(tx->xsk_pool, &desc)) {
+			tx->xdp_xsk_done = tx->xdp_xsk_wakeup;
+			goto out;
+		}
+
+		data = xsk_buff_raw_get_data(tx->xsk_pool, desc.addr);
+		nsegs = gve_tx_fill_xdp(priv, tx, data, desc.len, NULL, true);
+		tx->req += nsegs;
+		sent++;
+	}
+out:
+	if (sent > 0) {
+		gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+		xsk_tx_release(tx->xsk_pool);
+	}
+	spin_unlock(&tx->xdp_lock);
+	return sent;
+}
+
 bool gve_xdp_poll(struct gve_notify_block *block, int budget)
 {
 	struct gve_priv *priv = block->priv;
 	struct gve_tx_ring *tx = block->tx;
 	u32 nic_done;
+	bool repoll;
 	u32 to_do;
 
 	/* If budget is 0, do all the work */
@@ -897,7 +935,21 @@ bool gve_xdp_poll(struct gve_notify_block *block, int budget)
 	nic_done = gve_tx_load_event_counter(priv, tx);
 	to_do = min_t(u32, (nic_done - tx->done), budget);
 	gve_clean_xdp_done(priv, tx, to_do);
-	return nic_done != tx->done;
+	repoll = nic_done != tx->done;
+
+	if (tx->xsk_pool) {
+		int sent = gve_xsk_tx(priv, tx, budget);
+
+		u64_stats_update_begin(&tx->statss);
+		tx->xdp_xsk_sent += sent;
+		u64_stats_update_end(&tx->statss);
+		repoll |= (sent == budget);
+		if (xsk_uses_need_wakeup(tx->xsk_pool))
+			xsk_set_tx_need_wakeup(tx->xsk_pool);
+	}
+
+	/* If we still have work we want to repoll */
+	return repoll;
 }
 
 bool gve_tx_poll(struct gve_notify_block *block, int budget)
-- 
2.40.0.rc1.284.g88254d51c5-goog



* Re: [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format
  2023-03-13 20:26 ` [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format Praveen Kaligineedi
@ 2023-03-15 16:53   ` Michal Kubiak
  2023-03-15 21:10     ` Praveen Kaligineedi
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Kubiak @ 2023-03-15 16:53 UTC (permalink / raw)
  To: Praveen Kaligineedi
  Cc: netdev, davem, kuba, maciej.fijalkowski, Jeroen de Borst

On Mon, Mar 13, 2023 at 01:26:38PM -0700, Praveen Kaligineedi wrote:
> Add support for XDP PASS, DROP and TX actions.
> 
> This patch contains the following changes:
> 1) Support installing/uninstalling XDP program
> 2) Add dedicated XDP TX queues
> 3) Add support for XDP DROP action
> 4) Add support for XDP TX action
> 
> Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
> Reviewed-by: Jeroen de Borst <jeroendb@google.com>
> 
> ---
> Changed in v2:
> - Removed gve_close/gve_open when adding XDP dedicated queues. Instead
> we add and register additional TX queues when the XDP program is
> installed. If the allocation/registration fails we return error and do
> not install the XDP program.
> - Removed xdp tx spin lock from this patch. It is needed for XDP_REDIRECT
> support as both XDP_REDIRECT and XDP_TX traffic share the dedicated XDP
> queues. Moved the code to add xdp tx spinlock to the subsequent patch
> that adds XDP_REDIRECT support.
> - Added netdev_err when the user tries to set rx/tx queues to the values
> not supported when XDP is enabled.
> - Removed rcu annotation for xdp_prog. We disable the napi prior to
> adding/removing the xdp_prog and reenable it after the program has
> been installed for all the queues.
> - Ring the tx doorbell once for napi instead of every XDP TX packet.
> - Added a new helper function for freeing the FIFO buffer
> - Unregister xdp rxq for all the queues when the registration
> fails during XDP program installation
> 
> Changed in v3:
> - Padding bytes are used if the XDP TX packet headers do not
> fit at tail of TX FIFO. Taking these padding bytes into account
> while checking if enough space is available in TX FIFO.
> ---

Hi Praveen,

Please find my comments inline.
Also, I have a general comment regarding the newest XDP netdev API. As
far as I can tell, you do not use "net_device.xdp_features". That member
was added to "struct net_device" in kernel 6.3-rc1, and those features
are now checked before .ndo_bpf is called, so you should set all the
supported flags in that structure member.
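
For illustration only (a fragment, not a drop-in patch -- the exact
call site depends on your probe path), advertising the features this
series implements might look roughly like:

```c
/* Illustrative sketch: advertise the XDP features implemented by this
 * series via the net_device member added in 6.3-rc1. Flag names are
 * from include/uapi/linux/netdev.h; placement in the driver is up to
 * you.
 */
dev->xdp_features = NETDEV_XDP_ACT_BASIC |        /* DROP/PASS/TX */
		    NETDEV_XDP_ACT_REDIRECT |     /* XDP_REDIRECT */
		    NETDEV_XDP_ACT_NDO_XMIT |     /* ndo_xdp_xmit */
		    NETDEV_XDP_ACT_XSK_ZEROCOPY;  /* AF_XDP zero-copy */
```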

Thanks,
Michal

>  drivers/net/ethernet/google/gve/gve.h         |  44 +-
>  drivers/net/ethernet/google/gve/gve_ethtool.c |  37 +-
>  drivers/net/ethernet/google/gve/gve_main.c    | 376 +++++++++++++++++-
>  drivers/net/ethernet/google/gve/gve_rx.c      |  74 +++-
>  drivers/net/ethernet/google/gve/gve_tx.c      | 149 ++++++-
>  5 files changed, 658 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> index f354a6448c25..8d5234d4ba67 100644
> --- a/drivers/net/ethernet/google/gve/gve.h
> +++ b/drivers/net/ethernet/google/gve/gve.h
> @@ -47,6 +47,10 @@
>  
>  #define GVE_RX_BUFFER_SIZE_DQO 2048
>  
> +#define GVE_XDP_ACTIONS 5
> +
> +#define GVE_TX_MAX_HEADER_SIZE 182
> +
>  /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
>  struct gve_rx_desc_queue {
>  	struct gve_rx_desc *desc_ring; /* the descriptor ring */
> @@ -230,7 +234,9 @@ struct gve_rx_ring {
>  	u64 rx_frag_flip_cnt; /* free-running count of rx segments where page_flip was used */
>  	u64 rx_frag_copy_cnt; /* free-running count of rx segments copied */
>  	u64 rx_frag_alloc_cnt; /* free-running count of rx page allocations */
> -
> +	u64 xdp_tx_errors;
> +	u64 xdp_redirect_errors;
> +	u64 xdp_actions[GVE_XDP_ACTIONS];
>  	u32 q_num; /* queue index */
>  	u32 ntfy_id; /* notification block index */
>  	struct gve_queue_resources *q_resources; /* head and tail pointer idx */
> @@ -238,6 +244,9 @@ struct gve_rx_ring {
>  	struct u64_stats_sync statss; /* sync stats for 32bit archs */
>  
>  	struct gve_rx_ctx ctx; /* Info for packet currently being processed in this ring. */
> +
> +	/* XDP stuff */
> +	struct xdp_rxq_info xdp_rxq;
>  };
>  
>  /* A TX desc ring entry */
> @@ -259,6 +268,9 @@ struct gve_tx_iovec {
>   */
>  struct gve_tx_buffer_state {
>  	struct sk_buff *skb; /* skb for this pkt */
> +	struct {
> +		u16 size; /* size of xmitted xdp pkt */
> +	} xdp;
>  	union {
>  		struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
>  		struct {
> @@ -526,9 +538,11 @@ struct gve_priv {
>  	u16 rx_data_slot_cnt; /* rx buffer length */
>  	u64 max_registered_pages;
>  	u64 num_registered_pages; /* num pages registered with NIC */
> +	struct bpf_prog *xdp_prog; /* XDP BPF program */
>  	u32 rx_copybreak; /* copy packets smaller than this */
>  	u16 default_num_queues; /* default num queues to set up */
>  
> +	u16 num_xdp_queues;
>  	struct gve_queue_config tx_cfg;
>  	struct gve_queue_config rx_cfg;
>  	struct gve_qpl_config qpl_cfg; /* map used QPL ids */
> @@ -785,7 +799,17 @@ static inline u32 gve_num_tx_qpls(struct gve_priv *priv)
>  	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
>  		return 0;
>  
> -	return priv->tx_cfg.num_queues;
> +	return priv->tx_cfg.num_queues + priv->num_xdp_queues;
> +}
> +
> +/* Returns the number of XDP tx queue page lists
> + */
> +static inline u32 gve_num_xdp_qpls(struct gve_priv *priv)
> +{
> +	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
> +		return 0;
> +
> +	return priv->num_xdp_queues;
>  }
>  
>  /* Returns the number of rx queue page lists
> @@ -874,7 +898,17 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
>  
>  static inline u32 gve_num_tx_queues(struct gve_priv *priv)
>  {
> -	return priv->tx_cfg.num_queues;
> +	return priv->tx_cfg.num_queues + priv->num_xdp_queues;
> +}
> +
> +static inline u32 gve_xdp_tx_queue_id(struct gve_priv *priv, u32 queue_id)
> +{
> +	return priv->tx_cfg.num_queues + queue_id;
> +}
> +
> +static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
> +{
> +	return gve_xdp_tx_queue_id(priv, 0);
>  }
>  
>  /* buffers */
> @@ -885,7 +919,11 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
>  		   enum dma_data_direction);
>  /* tx handling */
>  netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
> +int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
> +		     void *data, int len);
> +void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
>  bool gve_tx_poll(struct gve_notify_block *block, int budget);
> +bool gve_xdp_poll(struct gve_notify_block *block, int budget);
>  int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings);
>  void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings);
>  u32 gve_tx_load_event_counter(struct gve_priv *priv,
> diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
> index 5b6e31812fae..067b393ccf9d 100644
> --- a/drivers/net/ethernet/google/gve/gve_ethtool.c
> +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
> @@ -34,6 +34,11 @@ static u32 gve_get_msglevel(struct net_device *netdev)
>  	return priv->msg_enable;
>  }
>  
> +/* For the following stats column string names, make sure the order
> + * matches how it is filled in the code. For xdp_aborted, xdp_drop,
> + * xdp_pass, xdp_tx, xdp_redirect, make sure it also matches the order
> + * as declared in enum xdp_action inside file uapi/linux/bpf.h .
> + */
>  static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = {
>  	"rx_packets", "tx_packets", "rx_bytes", "tx_bytes",
>  	"rx_dropped", "tx_dropped", "tx_timeouts",
> @@ -49,6 +54,9 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
>  	"rx_dropped_pkt[%u]", "rx_copybreak_pkt[%u]", "rx_copied_pkt[%u]",
>  	"rx_queue_drop_cnt[%u]", "rx_no_buffers_posted[%u]",
>  	"rx_drops_packet_over_mru[%u]", "rx_drops_invalid_checksum[%u]",
> +	"rx_xdp_aborted[%u]", "rx_xdp_drop[%u]", "rx_xdp_pass[%u]",
> +	"rx_xdp_tx[%u]", "rx_xdp_redirect[%u]",
> +	"rx_xdp_tx_errors[%u]", "rx_xdp_redirect_errors[%u]",
>  };
>  
>  static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
> @@ -289,14 +297,25 @@ gve_get_ethtool_stats(struct net_device *netdev,
>  			if (skip_nic_stats) {
>  				/* skip NIC rx stats */
>  				i += NIC_RX_STATS_REPORT_NUM;
> -				continue;
> -			}
> -			for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
> -				u64 value =
> -				be64_to_cpu(report_stats[rx_qid_to_stats_idx[ring] + j].value);
> +			} else {
> +				stats_idx = rx_qid_to_stats_idx[ring];
> +				for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
> +					u64 value =
> +						be64_to_cpu(report_stats[stats_idx + j].value);
>  
> -				data[i++] = value;
> +					data[i++] = value;
> +				}
>  			}
> +			/* XDP rx counters */
> +			do {
> +				start =	u64_stats_fetch_begin(&priv->rx[ring].statss);
> +				for (j = 0; j < GVE_XDP_ACTIONS; j++)
> +					data[i + j] = rx->xdp_actions[j];
> +				data[i + j++] = rx->xdp_tx_errors;
> +				data[i + j++] = rx->xdp_redirect_errors;
> +			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
> +						       start));
> +			i += GVE_XDP_ACTIONS + 2; /* XDP rx counters */
>  		}
>  	} else {
>  		i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
> @@ -418,6 +437,12 @@ static int gve_set_channels(struct net_device *netdev,
>  	if (!new_rx || !new_tx)
>  		return -EINVAL;
>  
> +	if (priv->num_xdp_queues &&
> +	    (new_tx != new_rx || (2 * new_tx > priv->tx_cfg.max_queues))) {
> +		dev_err(&priv->pdev->dev, "XDP load failed: The number of configured RX queues should be equal to the number of configured TX queues and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues");

Could you explain why the number of RX and TX queues cannot be
asymmetric while an XDP program is loaded?
Regarding the second condition: shouldn't it be:
	"2 * new_tx > priv->rx_cfg.max_queues" ?

Please take a look at my other comments regarding XDP queues number.

> +		return -EINVAL;
> +	}
> +
>  	if (!netif_carrier_ok(netdev)) {
>  		priv->tx_cfg.num_queues = new_tx;
>  		priv->rx_cfg.num_queues = new_rx;
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> index 160ca77c2751..7d3f15cf79ed 100644
> --- a/drivers/net/ethernet/google/gve/gve_main.c
> +++ b/drivers/net/ethernet/google/gve/gve_main.c
> @@ -4,8 +4,10 @@
>   * Copyright (C) 2015-2021 Google, Inc.
>   */
>  
> +#include <linux/bpf.h>
>  #include <linux/cpumask.h>
>  #include <linux/etherdevice.h>
> +#include <linux/filter.h>
>  #include <linux/interrupt.h>
>  #include <linux/module.h>
>  #include <linux/pci.h>
> @@ -247,8 +249,13 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
>  	block = container_of(napi, struct gve_notify_block, napi);
>  	priv = block->priv;
>  
> -	if (block->tx)
> -		reschedule |= gve_tx_poll(block, budget);
> +	if (block->tx) {
> +		if (block->tx->q_num < priv->tx_cfg.num_queues)
> +			reschedule |= gve_tx_poll(block, budget);
> +		else
> +			reschedule |= gve_xdp_poll(block, budget);
> +	}
> +
>  	if (block->rx) {
>  		work_done = gve_rx_poll(block, budget);
>  		reschedule |= work_done == budget;
> @@ -582,6 +589,28 @@ static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
>  	netif_napi_del(&block->napi);
>  }
>  
> +static int gve_register_xdp_qpls(struct gve_priv *priv)
> +{
> +	int start_id;
> +	int err;
> +	int i;
> +
> +	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> +	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> +		err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
> +		if (err) {
> +			netif_err(priv, drv, priv->dev,
> +				  "failed to register queue page list %d\n",
> +				  priv->qpls[i].id);
> +			/* This failure will trigger a reset - no need to clean
> +			 * up
> +			 */
> +			return err;
> +		}
> +	}
> +	return 0;
> +}
> +
>  static int gve_register_qpls(struct gve_priv *priv)
>  {
>  	int start_id;
> @@ -618,6 +647,26 @@ static int gve_register_qpls(struct gve_priv *priv)
>  	return 0;
>  }
>  
> +static int gve_unregister_xdp_qpls(struct gve_priv *priv)
> +{
> +	int start_id;
> +	int err;
> +	int i;
> +
> +	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> +	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> +		err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
> +		/* This failure will trigger a reset - no need to clean up */
> +		if (err) {
> +			netif_err(priv, drv, priv->dev,
> +				  "Failed to unregister queue page list %d\n",
> +				  priv->qpls[i].id);
> +			return err;
> +		}
> +	}
> +	return 0;
> +}
> +
>  static int gve_unregister_qpls(struct gve_priv *priv)
>  {
>  	int start_id;
> @@ -650,6 +699,27 @@ static int gve_unregister_qpls(struct gve_priv *priv)
>  	return 0;
>  }
>  
> +static int gve_create_xdp_rings(struct gve_priv *priv)
> +{
> +	int err;
> +
> +	err = gve_adminq_create_tx_queues(priv,
> +					  gve_xdp_tx_start_queue_id(priv),
> +					  priv->num_xdp_queues);
> +	if (err) {
> +		netif_err(priv, drv, priv->dev, "failed to create %d XDP tx queues\n",
> +			  priv->num_xdp_queues);
> +		/* This failure will trigger a reset - no need to clean
> +		 * up
> +		 */
> +		return err;
> +	}
> +	netif_dbg(priv, drv, priv->dev, "created %d XDP tx queues\n",
> +		  priv->num_xdp_queues);
> +
> +	return 0;
> +}
> +
>  static int gve_create_rings(struct gve_priv *priv)
>  {
>  	int num_tx_queues = gve_num_tx_queues(priv);
> @@ -699,6 +769,23 @@ static int gve_create_rings(struct gve_priv *priv)
>  	return 0;
>  }
>  
> +static void add_napi_init_xdp_sync_stats(struct gve_priv *priv,
> +					 int (*napi_poll)(struct napi_struct *napi,
> +							  int budget))
> +{
> +	int start_id = gve_xdp_tx_start_queue_id(priv);
> +	int i;
> +
> +	/* Add xdp tx napi & init sync stats*/
> +	for (i = start_id; i < start_id + priv->num_xdp_queues; i++) {
> +		int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
> +
> +		u64_stats_init(&priv->tx[i].statss);
> +		priv->tx[i].ntfy_id = ntfy_idx;
> +		gve_add_napi(priv, ntfy_idx, napi_poll);
> +	}
> +}
> +
>  static void add_napi_init_sync_stats(struct gve_priv *priv,
>  				     int (*napi_poll)(struct napi_struct *napi,
>  						      int budget))
> @@ -732,6 +819,23 @@ static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings
>  	}
>  }
>  
> +static int gve_alloc_xdp_rings(struct gve_priv *priv)
> +{
> +	int start_id;
> +	int err = 0;
> +
> +	if (!priv->num_xdp_queues)
> +		return 0;
> +
> +	start_id = gve_xdp_tx_start_queue_id(priv);
> +	err = gve_tx_alloc_rings(priv, start_id, priv->num_xdp_queues);
> +	if (err)
> +		return err;
> +	add_napi_init_xdp_sync_stats(priv, gve_napi_poll);
> +
> +	return 0;
> +}
> +
>  static int gve_alloc_rings(struct gve_priv *priv)
>  {
>  	int err;
> @@ -782,6 +886,26 @@ static int gve_alloc_rings(struct gve_priv *priv)
>  	return err;
>  }
>  
> +static int gve_destroy_xdp_rings(struct gve_priv *priv)
> +{
> +	int start_id;
> +	int err;
> +
> +	start_id = gve_xdp_tx_start_queue_id(priv);
> +	err = gve_adminq_destroy_tx_queues(priv,
> +					   start_id,
> +					   priv->num_xdp_queues);
> +	if (err) {
> +		netif_err(priv, drv, priv->dev,
> +			  "failed to destroy XDP queues\n");
> +		/* This failure will trigger a reset - no need to clean up */
> +		return err;
> +	}
> +	netif_dbg(priv, drv, priv->dev, "destroyed XDP queues\n");
> +
> +	return 0;
> +}
> +
>  static int gve_destroy_rings(struct gve_priv *priv)
>  {
>  	int num_tx_queues = gve_num_tx_queues(priv);
> @@ -814,6 +938,21 @@ static void gve_rx_free_rings(struct gve_priv *priv)
>  		gve_rx_free_rings_dqo(priv);
>  }
>  
> +static void gve_free_xdp_rings(struct gve_priv *priv)
> +{
> +	int ntfy_idx, start_id;
> +	int i;
> +
> +	start_id = gve_xdp_tx_start_queue_id(priv);
> +	if (priv->tx) {
> +		for (i = start_id; i <  start_id + priv->num_xdp_queues; i++) {
> +			ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
> +			gve_remove_napi(priv, ntfy_idx);
> +		}
> +		gve_tx_free_rings(priv, start_id, priv->num_xdp_queues);
> +	}
> +}
> +
>  static void gve_free_rings(struct gve_priv *priv)
>  {
>  	int num_tx_queues = gve_num_tx_queues(priv);
> @@ -929,6 +1068,28 @@ static void gve_free_queue_page_list(struct gve_priv *priv, u32 id)
>  	priv->num_registered_pages -= qpl->num_entries;
>  }
>  
> +static int gve_alloc_xdp_qpls(struct gve_priv *priv)
> +{
> +	int start_id;
> +	int i, j;
> +	int err;
> +
> +	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> +	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> +		err = gve_alloc_queue_page_list(priv, i,
> +						priv->tx_pages_per_qpl);
> +		if (err)
> +			goto free_qpls;
> +	}
> +
> +	return 0;
> +
> +free_qpls:
> +	for (j = start_id; j <= i; j++)
> +		gve_free_queue_page_list(priv, j);
> +	return err;
> +}
> +
>  static int gve_alloc_qpls(struct gve_priv *priv)
>  {
>  	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
> @@ -978,6 +1139,16 @@ static int gve_alloc_qpls(struct gve_priv *priv)
>  	return err;
>  }
>  
> +static void gve_free_xdp_qpls(struct gve_priv *priv)
> +{
> +	int start_id;
> +	int i;
> +
> +	start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> +	for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++)
> +		gve_free_queue_page_list(priv, i);
> +}
> +
>  static void gve_free_qpls(struct gve_priv *priv)
>  {
>  	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
> @@ -1011,11 +1182,64 @@ static int gve_reset_recovery(struct gve_priv *priv, bool was_up);
>  static void gve_turndown(struct gve_priv *priv);
>  static void gve_turnup(struct gve_priv *priv);
>  
> +static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
> +{
> +	struct napi_struct *napi;
> +	struct gve_rx_ring *rx;
> +	int err = 0;
> +	int i, j;
> +
> +	if (!priv->num_xdp_queues)
> +		return 0;
> +
> +	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
> +		rx = &priv->rx[i];
> +		napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
> +
> +		err = xdp_rxq_info_reg(&rx->xdp_rxq, dev, i,
> +				       napi->napi_id);
> +		if (err)
> +			goto err;
> +		err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq,
> +						 MEM_TYPE_PAGE_SHARED, NULL);
> +		if (err)
> +			goto err;
> +	}
> +	return 0;
> +
> +err:
> +	for (j = i; j >= 0; j--) {
> +		rx = &priv->rx[j];
> +		if (xdp_rxq_info_is_reg(&rx->xdp_rxq))
> +			xdp_rxq_info_unreg(&rx->xdp_rxq);
> +	}
> +	return err;
> +}
> +
> +static void gve_unreg_xdp_info(struct gve_priv *priv)
> +{
> +	int i;
> +
> +	if (!priv->num_xdp_queues)
> +		return;
> +
> +	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
> +		struct gve_rx_ring *rx = &priv->rx[i];
> +
> +		xdp_rxq_info_unreg(&rx->xdp_rxq);
> +	}
> +}
> +
>  static int gve_open(struct net_device *dev)
>  {
>  	struct gve_priv *priv = netdev_priv(dev);
>  	int err;
>  
> +	if (priv->xdp_prog)
> +		priv->num_xdp_queues = priv->tx_cfg.num_queues;

Why is the number of XDP queues initialized to the number of TX queues?
Shouldn't it rather be the number of RX queues?

For example, if you have:
 - asymmetric number of RX and TX queues,
 - tx_cfg.num_queues < rx_cfg.num_queues.

I believe that in such a scenario you won't be able to handle the
XDP_TX action for all RX queues.
Please take a look at your implementation of the "XDP_TX" case in
"gve_xdp_done()".

Is this a mistake or an intentional design?

> +	else
> +		priv->num_xdp_queues = 0;
> +
>  	err = gve_alloc_qpls(priv);
>  	if (err)
>  		return err;
> @@ -1031,6 +1255,10 @@ static int gve_open(struct net_device *dev)
>  	if (err)
>  		goto free_rings;
>  
> +	err = gve_reg_xdp_info(priv, dev);
> +	if (err)
> +		goto free_rings;
> +
>  	err = gve_register_qpls(priv);
>  	if (err)
>  		goto reset;
> @@ -1095,6 +1323,7 @@ static int gve_close(struct net_device *dev)
>  	}
>  	del_timer_sync(&priv->stats_report_timer);
>  
> +	gve_unreg_xdp_info(priv);
>  	gve_free_rings(priv);
>  	gve_free_qpls(priv);
>  	priv->interface_down_cnt++;
> @@ -1111,6 +1340,148 @@ static int gve_close(struct net_device *dev)
>  	return gve_reset_recovery(priv, false);
>  }
>  
> +static int gve_remove_xdp_queues(struct gve_priv *priv)
> +{
> +	int err;
> +
> +	err = gve_destroy_xdp_rings(priv);
> +	if (err)
> +		return err;
> +
> +	err = gve_unregister_xdp_qpls(priv);
> +	if (err)
> +		return err;
> +
> +	gve_unreg_xdp_info(priv);
> +	gve_free_xdp_rings(priv);
> +	gve_free_xdp_qpls(priv);
> +	priv->num_xdp_queues = 0;
> +	return 0;
> +}
> +
> +static int gve_add_xdp_queues(struct gve_priv *priv)
> +{
> +	int err;
> +
> +	priv->num_xdp_queues = priv->tx_cfg.num_queues;

The same question here: shouldn't it be equal to
"priv->rx_cfg.num_queues"?

> +
> +	err = gve_alloc_xdp_qpls(priv);
> +	if (err)
> +		goto err;
> +
> +	err = gve_alloc_xdp_rings(priv);
> +	if (err)
> +		goto free_xdp_qpls;
> +
> +	err = gve_reg_xdp_info(priv, priv->dev);
> +	if (err)
> +		goto free_xdp_rings;
> +
> +	err = gve_register_xdp_qpls(priv);
> +	if (err)
> +		goto free_xdp_rings;
> +
> +	err = gve_create_xdp_rings(priv);
> +	if (err)
> +		goto free_xdp_rings;
> +
> +	return 0;
> +
> +free_xdp_rings:
> +	gve_free_xdp_rings(priv);
> +free_xdp_qpls:
> +	gve_free_xdp_qpls(priv);
> +err:
> +	priv->num_xdp_queues = 0;
> +	return err;
> +}
> +
> +static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
> +		       struct netlink_ext_ack *extack)
> +{
> +	struct bpf_prog *old_prog;
> +	int err = 0;
> +
> +	old_prog = READ_ONCE(priv->xdp_prog);
> +	if (!netif_carrier_ok(priv->dev)) {
> +		WRITE_ONCE(priv->xdp_prog, prog);
> +		if (old_prog)
> +			bpf_prog_put(old_prog);
> +		return 0;
> +	}
> +
> +	gve_turndown(priv);
> +	if (!old_prog && prog) {
> +		// Allocate XDP TX queues if an XDP program is
> +		// being installed
> +		err = gve_add_xdp_queues(priv);
> +		if (err)
> +			goto out;
> +	} else if (old_prog && !prog) {
> +		// Remove XDP TX queues if an XDP program is
> +		// being uninstalled
> +		err = gve_remove_xdp_queues(priv);
> +		if (err)
> +			goto out;
> +	}
> +	WRITE_ONCE(priv->xdp_prog, prog);
> +	if (old_prog)
> +		bpf_prog_put(old_prog);
> +
> +out:
> +	gve_turnup(priv);
> +	queue_work(priv->gve_wq, &priv->service_task);

As far as I understand, you start some work asynchronously
(service_task) but never wait for its result.
So even if err == 0, the "service_task" may still fail, and you will
report success to the kernel anyway.
Is that OK? Can this return an inconsistent result to the kernel?

> +	return err;
> +}
> +
> +static int verify_xdp_configuration(struct net_device *dev)
> +{
> +	struct gve_priv *priv = netdev_priv(dev);
> +
> +	if (dev->features & NETIF_F_LRO) {
> +		netdev_warn(dev, "XDP is not supported when LRO is on.\n");
> +		return -EOPNOTSUPP;
> +	}
> +
> +	if (priv->queue_format != GVE_GQI_QPL_FORMAT) {
> +		netdev_warn(dev, "XDP is not supported in mode %d.\n",
> +			    priv->queue_format);
> +		return -EOPNOTSUPP;
> +	}
> +
> +	if (dev->mtu > (PAGE_SIZE / 2) - sizeof(struct ethhdr) - GVE_RX_PAD) {
> +		netdev_warn(dev, "XDP is not supported for mtu %d.\n",
> +			    dev->mtu);
> +		return -EOPNOTSUPP;
> +	}
> +
> +	if (priv->rx_cfg.num_queues != priv->tx_cfg.num_queues ||
> +	    (2 * priv->tx_cfg.num_queues > priv->tx_cfg.max_queues)) {
> +		netdev_warn(dev, "XDP load failed: The number of configured RX queues %d should be equal to the number of configured TX queues %d and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues %d",
> +			    priv->rx_cfg.num_queues,
> +			    priv->tx_cfg.num_queues,
> +			    priv->tx_cfg.max_queues);
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +
> +static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
> +{
> +	struct gve_priv *priv = netdev_priv(dev);
> +	int err;
> +
> +	err = verify_xdp_configuration(dev);
> +	if (err)
> +		return err;
> +	switch (xdp->command) {
> +	case XDP_SETUP_PROG:
> +		return gve_set_xdp(priv, xdp->prog, xdp->extack);
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
>  int gve_adjust_queues(struct gve_priv *priv,
>  		      struct gve_queue_config new_rx_config,
>  		      struct gve_queue_config new_tx_config)
> @@ -1305,6 +1676,7 @@ static const struct net_device_ops gve_netdev_ops = {
>  	.ndo_get_stats64	=	gve_get_stats,
>  	.ndo_tx_timeout         =       gve_tx_timeout,
>  	.ndo_set_features	=	gve_set_features,
> +	.ndo_bpf		=	gve_xdp,
>  };
>  
>  static void gve_handle_status(struct gve_priv *priv, u32 status)
> diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
> index 051a15e4f1af..3241f6ea29be 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx.c
> @@ -8,6 +8,8 @@
>  #include "gve_adminq.h"
>  #include "gve_utils.h"
>  #include <linux/etherdevice.h>
> +#include <linux/filter.h>
> +#include <net/xdp.h>
>  
>  static void gve_rx_free_buffer(struct device *dev,
>  			       struct gve_rx_slot_page_info *page_info,
> @@ -591,6 +593,43 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
>  	return skb;
>  }
>  
> +static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
> +			 struct xdp_buff *xdp, struct bpf_prog *xprog,
> +			 int xdp_act)
> +{
> +	struct gve_tx_ring *tx;
> +	int tx_qid;
> +	int err;
> +
> +	switch (xdp_act) {
> +	case XDP_ABORTED:
> +	case XDP_DROP:
> +	default:
> +		break;
> +	case XDP_TX:
> +		tx_qid = gve_xdp_tx_queue_id(priv, rx->q_num);
> +		tx = &priv->tx[tx_qid];

As I have already mentioned: if num_rx_queues > num_tx_queues, this can
select an uninitialized XDP queue (because num_tx_queues ==
num_xdp_queues).

Please check whether the number of XDP queues should instead be equal
to the number of RX queues.
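
The concern can be illustrated with a small standalone sketch. The
helper names below are hypothetical stand-ins for the driver's
gve_xdp_tx_queue_id() mapping; the assumption (taken from the cover
letter) is that XDP TX queues occupy ids num_tx_queues..2*num_tx_queues-1:

```c
#include <assert.h>

/* Hypothetical mirror of the driver's queue configuration. */
struct cfg {
	int num_tx_queues;	/* stack TX queues; XDP queues equal this count */
	int num_rx_queues;
};

/* Sketch of the <Egress TX queue id> = N + <Ingress RX queue id> mapping. */
static int xdp_tx_queue_id(const struct cfg *c, int rx_qid)
{
	return c->num_tx_queues + rx_qid;
}

/* Only 2 * num_tx_queues rings exist in total (stack + XDP), so any
 * rx_qid >= num_tx_queues maps past the allocated XDP rings. */
static int xdp_queue_is_valid(const struct cfg *c, int rx_qid)
{
	return xdp_tx_queue_id(c, rx_qid) < 2 * c->num_tx_queues;
}
```

With num_tx_queues = 2 and num_rx_queues = 4, RX queues 2 and 3 map to
ring ids 4 and 5, which were never initialized.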

> +		err = gve_xdp_xmit_one(priv, tx, xdp->data,
> +				       xdp->data_end - xdp->data);
> +
> +		if (unlikely(err)) {
> +			u64_stats_update_begin(&rx->statss);
> +			rx->xdp_tx_errors++;
> +			u64_stats_update_end(&rx->statss);
> +		}
> +		break;
> +	case XDP_REDIRECT:
> +		u64_stats_update_begin(&rx->statss);
> +		rx->xdp_redirect_errors++;
> +		u64_stats_update_end(&rx->statss);
> +		break;
> +	}
> +	u64_stats_update_begin(&rx->statss);
> +	if ((u32)xdp_act < GVE_XDP_ACTIONS)
> +		rx->xdp_actions[xdp_act]++;
> +	u64_stats_update_end(&rx->statss);
> +}
> +
>  #define GVE_PKTCONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x))
>  static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
>  		   struct gve_rx_desc *desc, u32 idx,
> @@ -603,9 +642,12 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
>  	union gve_rx_data_slot *data_slot;
>  	struct gve_priv *priv = rx->gve;
>  	struct sk_buff *skb = NULL;
> +	struct bpf_prog *xprog;
> +	struct xdp_buff xdp;
>  	dma_addr_t page_bus;
>  	void *va;
>  
> +	u16 len = frag_size;
>  	struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
>  	bool is_first_frag = ctx->frag_cnt == 0;
>  
> @@ -645,9 +687,35 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
>  	dma_sync_single_for_cpu(&priv->pdev->dev, page_bus,
>  				PAGE_SIZE, DMA_FROM_DEVICE);
>  	page_info->pad = is_first_frag ? GVE_RX_PAD : 0;
> +	len -= page_info->pad;
>  	frag_size -= page_info->pad;
>  
> -	skb = gve_rx_skb(priv, rx, page_info, napi, frag_size,
> +	xprog = READ_ONCE(priv->xdp_prog);
> +	if (xprog && is_only_frag) {
> +		void *old_data;
> +		int xdp_act;
> +
> +		xdp_init_buff(&xdp, rx->packet_buffer_size, &rx->xdp_rxq);
> +		xdp_prepare_buff(&xdp, page_info->page_address +
> +				 page_info->page_offset, GVE_RX_PAD,
> +				 len, false);
> +		old_data = xdp.data;
> +		xdp_act = bpf_prog_run_xdp(xprog, &xdp);
> +		if (xdp_act != XDP_PASS) {
> +			gve_xdp_done(priv, rx, &xdp, xprog, xdp_act);
> +			ctx->total_size += frag_size;
> +			goto finish_ok_pkt;
> +		}
> +
> +		page_info->pad += xdp.data - old_data;
> +		len = xdp.data_end - xdp.data;
> +
> +		u64_stats_update_begin(&rx->statss);
> +		rx->xdp_actions[XDP_PASS]++;
> +		u64_stats_update_end(&rx->statss);
> +	}
> +
> +	skb = gve_rx_skb(priv, rx, page_info, napi, len,
>  			 data_slot, is_only_frag);
>  	if (!skb) {
>  		u64_stats_update_begin(&rx->statss);
> @@ -773,6 +841,7 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
>  static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
>  			     netdev_features_t feat)
>  {
> +	u64 xdp_txs = rx->xdp_actions[XDP_TX];
>  	struct gve_rx_ctx *ctx = &rx->ctx;
>  	struct gve_priv *priv = rx->gve;
>  	struct gve_rx_cnts cnts = {0};
> @@ -820,6 +889,9 @@ static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
>  		u64_stats_update_end(&rx->statss);
>  	}
>  
> +	if (xdp_txs != rx->xdp_actions[XDP_TX])
> +		gve_xdp_tx_flush(priv, rx->q_num);
> +
>  	/* restock ring slots */
>  	if (!rx->data.raw_addressing) {
>  		/* In QPL mode buffs are refilled as the desc are processed */
> diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
> index e24e73e74e33..d37515e6c10c 100644
> --- a/drivers/net/ethernet/google/gve/gve_tx.c
> +++ b/drivers/net/ethernet/google/gve/gve_tx.c
> @@ -19,6 +19,14 @@ static inline void gve_tx_put_doorbell(struct gve_priv *priv,
>  	iowrite32be(val, &priv->db_bar2[be32_to_cpu(q_resources->db_index)]);
>  }
>  
> +void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid)
> +{
> +	u32 tx_qid = gve_xdp_tx_queue_id(priv, xdp_qid);
> +	struct gve_tx_ring *tx = &priv->tx[tx_qid];
> +
> +	gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
> +}
> +
>  /* gvnic can only transmit from a Registered Segment.
>   * We copy skb payloads into the registered segment before writing Tx
>   * descriptors and ringing the Tx doorbell.
> @@ -132,6 +140,50 @@ static void gve_tx_free_fifo(struct gve_tx_fifo *fifo, size_t bytes)
>  	atomic_add(bytes, &fifo->available);
>  }
>  
> +static size_t gve_tx_clear_buffer_state(struct gve_tx_buffer_state *info)
> +{
> +	size_t space_freed = 0;
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
> +		space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
> +		info->iov[i].iov_len = 0;
> +		info->iov[i].iov_padding = 0;
> +	}
> +	return space_freed;
> +}
> +
> +static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> +			      u32 to_do)
> +{
> +	struct gve_tx_buffer_state *info;
> +	u32 clean_end = tx->done + to_do;
> +	u64 pkts = 0, bytes = 0;
> +	size_t space_freed = 0;
> +	u32 idx;
> +
> +	for (; tx->done < clean_end; tx->done++) {
> +		idx = tx->done & tx->mask;
> +		info = &tx->info[idx];
> +
> +		if (unlikely(!info->xdp.size))
> +			continue;
> +
> +		bytes += info->xdp.size;
> +		pkts++;
> +
> +		info->xdp.size = 0;
> +		space_freed += gve_tx_clear_buffer_state(info);
> +	}
> +
> +	gve_tx_free_fifo(&tx->tx_fifo, space_freed);
> +	u64_stats_update_begin(&tx->statss);
> +	tx->bytes_done += bytes;
> +	tx->pkt_done += pkts;
> +	u64_stats_update_end(&tx->statss);
> +	return pkts;
> +}
> +
>  static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
>  			     u32 to_do, bool try_to_wake);
>  
> @@ -144,8 +196,12 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx)
>  
>  	gve_tx_remove_from_block(priv, idx);
>  	slots = tx->mask + 1;
> -	gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
> -	netdev_tx_reset_queue(tx->netdev_txq);
> +	if (tx->q_num < priv->tx_cfg.num_queues) {
> +		gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
> +		netdev_tx_reset_queue(tx->netdev_txq);
> +	} else {
> +		gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
> +	}
>  
>  	dma_free_coherent(hdev, sizeof(*tx->q_resources),
>  			  tx->q_resources, tx->q_resources_bus);
> @@ -213,7 +269,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
>  
>  	netif_dbg(priv, drv, priv->dev, "tx[%d]->bus=%lx\n", idx,
>  		  (unsigned long)tx->bus);
> -	tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
> +	if (idx < priv->tx_cfg.num_queues)
> +		tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
>  	gve_tx_add_to_block(priv, idx);
>  
>  	return 0;
> @@ -657,6 +714,65 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
>  	return NETDEV_TX_OK;
>  }
>  
> +static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
> +			   void *data, int len)
> +{
> +	int pad, nfrags, ndescs, iovi, offset;
> +	struct gve_tx_buffer_state *info;
> +	u32 reqi = tx->req;
> +
> +	pad = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo, len);
> +	if (pad >= GVE_TX_MAX_HEADER_SIZE)
> +		pad = 0;
> +	info = &tx->info[reqi & tx->mask];
> +	info->xdp.size = len;
> +
> +	nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, pad + len,
> +				   &info->iov[0]);
> +	iovi = pad > 0;
> +	ndescs = nfrags - iovi;
> +	offset = 0;
> +
> +	while (iovi < nfrags) {
> +		if (!offset)
> +			gve_tx_fill_pkt_desc(&tx->desc[reqi & tx->mask], 0,
> +					     CHECKSUM_NONE, false, 0, ndescs,
> +					     info->iov[iovi].iov_len,
> +					     info->iov[iovi].iov_offset, len);
> +		else
> +			gve_tx_fill_seg_desc(&tx->desc[reqi & tx->mask],
> +					     0, 0, false, false,
> +					     info->iov[iovi].iov_len,
> +					     info->iov[iovi].iov_offset);
> +
> +		memcpy(tx->tx_fifo.base + info->iov[iovi].iov_offset,
> +		       data + offset, info->iov[iovi].iov_len);
> +		gve_dma_sync_for_device(&priv->pdev->dev,
> +					tx->tx_fifo.qpl->page_buses,
> +					info->iov[iovi].iov_offset,
> +					info->iov[iovi].iov_len);
> +		offset += info->iov[iovi].iov_len;
> +		iovi++;
> +		reqi++;
> +	}

Could you please explain the logic above in a bit more detail?
How is it possible to transmit more than one frag for a single XDP
packet? I believe your XDP implementation supports only one descriptor
per packet, so won't this "while" loop always run exactly one
iteration?
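
One plausible answer, sketched under the assumption that the FIFO
allocator (gve_tx_alloc_fifo() in this driver) splits a request that
crosses the end of the circular bounce-buffer FIFO into two iovs. The
code below is a simplified illustration, not the driver's actual
implementation:

```c
#include <assert.h>

struct iov {
	int offset;
	int len;
};

/* Minimal sketch of a circular-FIFO allocator: a request that would
 * run past the end of the FIFO is split into a tail piece and a piece
 * that continues at offset 0, so one packet can occupy two fragments
 * and the copy loop would iterate twice. */
static int fifo_alloc(int fifo_size, int head, int len, struct iov iov[2])
{
	iov[0].offset = head;
	iov[0].len = len;
	if (head + len <= fifo_size)
		return 1;		/* fits contiguously */

	iov[0].len = fifo_size - head;	/* tail piece up to the end */
	iov[1].offset = 0;		/* remainder wraps to the start */
	iov[1].len = len - iov[0].len;
	return 2;
}
```

A 200-byte packet placed at offset 4000 of a 4096-byte FIFO yields two
fragments (96 + 104 bytes), which is the case the while loop handles.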

> +
> +	return ndescs;
> +}
> +
> +int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
> +		     void *data, int len)
> +{
> +	int nsegs;
> +
> +	if (!gve_can_tx(tx, len + GVE_TX_MAX_HEADER_SIZE - 1))
> +		return -EBUSY;
> +
> +	nsegs = gve_tx_fill_xdp(priv, tx, data, len);
> +	tx->req += nsegs;
> +
> +	return 0;
> +}
> +
>  #define GVE_TX_START_THRESH	PAGE_SIZE
>  
>  static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> @@ -666,7 +782,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
>  	u64 pkts = 0, bytes = 0;
>  	size_t space_freed = 0;
>  	struct sk_buff *skb;
> -	int i, j;
> +	int j;
>  	u32 idx;

RCT (please keep reverse Christmas tree ordering for the local
variable declarations)

>  
>  	for (j = 0; j < to_do; j++) {
> @@ -689,12 +805,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
>  			dev_consume_skb_any(skb);
>  			if (tx->raw_addressing)
>  				continue;
> -			/* FIFO free */
> -			for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
> -				space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
> -				info->iov[i].iov_len = 0;
> -				info->iov[i].iov_padding = 0;
> -			}
> +			space_freed += gve_tx_clear_buffer_state(info);
>  		}
>  	}
>  
> @@ -729,6 +840,24 @@ u32 gve_tx_load_event_counter(struct gve_priv *priv,
>  	return be32_to_cpu(counter);
>  }
>  
> +bool gve_xdp_poll(struct gve_notify_block *block, int budget)
> +{
> +	struct gve_priv *priv = block->priv;
> +	struct gve_tx_ring *tx = block->tx;
> +	u32 nic_done;
> +	u32 to_do;
> +
> +	/* If budget is 0, do all the work */
> +	if (budget == 0)
> +		budget = INT_MAX;
> +
> +	/* Find out how much work there is to be done */
> +	nic_done = gve_tx_load_event_counter(priv, tx);
> +	to_do = min_t(u32, (nic_done - tx->done), budget);
> +	gve_clean_xdp_done(priv, tx, to_do);
> +	return nic_done != tx->done;
> +}
> +
>  bool gve_tx_poll(struct gve_notify_block *block, int budget)
>  {
>  	struct gve_priv *priv = block->priv;
> -- 
> 2.40.0.rc1.284.g88254d51c5-goog
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next v3 1/5] gve: XDP support GQI-QPL: helper function changes
  2023-03-13 20:26 ` [PATCH net-next v3 1/5] gve: XDP support GQI-QPL: helper function changes Praveen Kaligineedi
@ 2023-03-15 17:13   ` Michal Kubiak
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Kubiak @ 2023-03-15 17:13 UTC (permalink / raw)
  To: Praveen Kaligineedi
  Cc: netdev, davem, kuba, maciej.fijalkowski, Jeroen de Borst

On Mon, Mar 13, 2023 at 01:26:36PM -0700, Praveen Kaligineedi wrote:
> This patch adds/modifies helper functions needed to add XDP
> support.
> 
> Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
> Reviewed-by: Jeroen de Borst <jeroendb@google.com>
> 

Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>

Thanks,
Michal

> ---
> Changed in v2:
> - No changes
> 
> Changed in v3:
> - No changes
> ---
> 
>  drivers/net/ethernet/google/gve/gve.h         |  5 +++
>  drivers/net/ethernet/google/gve/gve_ethtool.c | 26 +++++++----
>  drivers/net/ethernet/google/gve/gve_main.c    | 27 +++++++-----
>  drivers/net/ethernet/google/gve/gve_rx.c      |  2 +-
>  drivers/net/ethernet/google/gve/gve_rx_dqo.c  |  2 +-
>  drivers/net/ethernet/google/gve/gve_tx.c      | 43 +++++++++++--------
>  drivers/net/ethernet/google/gve/gve_utils.c   |  6 +--
>  drivers/net/ethernet/google/gve/gve_utils.h   |  3 +-
>  8 files changed, 70 insertions(+), 44 deletions(-)
> 
> diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> index 64eb0442c82f..f52f23198278 100644
> --- a/drivers/net/ethernet/google/gve/gve.h
> +++ b/drivers/net/ethernet/google/gve/gve.h
> @@ -855,6 +855,11 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
>  		priv->queue_format == GVE_GQI_QPL_FORMAT;
>  }
>  
> +static inline u32 gve_num_tx_queues(struct gve_priv *priv)
> +{
> +	return priv->tx_cfg.num_queues;
> +}
> +
>  /* buffers */
>  int gve_alloc_page(struct gve_priv *priv, struct device *dev,
>  		   struct page **page, dma_addr_t *dma,
> diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
> index ce574d097e28..5b6e31812fae 100644
> --- a/drivers/net/ethernet/google/gve/gve_ethtool.c
> +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
> @@ -81,8 +81,10 @@ static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
>  {
>  	struct gve_priv *priv = netdev_priv(netdev);
>  	char *s = (char *)data;
> +	int num_tx_queues;
>  	int i, j;
>  
> +	num_tx_queues = gve_num_tx_queues(priv);
>  	switch (stringset) {
>  	case ETH_SS_STATS:
>  		memcpy(s, *gve_gstrings_main_stats,
> @@ -97,7 +99,7 @@ static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
>  			}
>  		}
>  
> -		for (i = 0; i < priv->tx_cfg.num_queues; i++) {
> +		for (i = 0; i < num_tx_queues; i++) {
>  			for (j = 0; j < NUM_GVE_TX_CNTS; j++) {
>  				snprintf(s, ETH_GSTRING_LEN,
>  					 gve_gstrings_tx_stats[j], i);
> @@ -124,12 +126,14 @@ static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
>  static int gve_get_sset_count(struct net_device *netdev, int sset)
>  {
>  	struct gve_priv *priv = netdev_priv(netdev);
> +	int num_tx_queues;
>  
> +	num_tx_queues = gve_num_tx_queues(priv);
>  	switch (sset) {
>  	case ETH_SS_STATS:
>  		return GVE_MAIN_STATS_LEN + GVE_ADMINQ_STATS_LEN +
>  		       (priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS) +
> -		       (priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS);
> +		       (num_tx_queues * NUM_GVE_TX_CNTS);
>  	case ETH_SS_PRIV_FLAGS:
>  		return GVE_PRIV_FLAGS_STR_LEN;
>  	default:
> @@ -153,18 +157,20 @@ gve_get_ethtool_stats(struct net_device *netdev,
>  	struct gve_priv *priv;
>  	bool skip_nic_stats;
>  	unsigned int start;
> +	int num_tx_queues;
>  	int ring;
>  	int i, j;
>  
>  	ASSERT_RTNL();
>  
>  	priv = netdev_priv(netdev);
> +	num_tx_queues = gve_num_tx_queues(priv);
>  	report_stats = priv->stats_report->stats;
>  	rx_qid_to_stats_idx = kmalloc_array(priv->rx_cfg.num_queues,
>  					    sizeof(int), GFP_KERNEL);
>  	if (!rx_qid_to_stats_idx)
>  		return;
> -	tx_qid_to_stats_idx = kmalloc_array(priv->tx_cfg.num_queues,
> +	tx_qid_to_stats_idx = kmalloc_array(num_tx_queues,
>  					    sizeof(int), GFP_KERNEL);
>  	if (!tx_qid_to_stats_idx) {
>  		kfree(rx_qid_to_stats_idx);
> @@ -195,7 +201,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
>  		}
>  	}
>  	for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0;
> -	     ring < priv->tx_cfg.num_queues; ring++) {
> +	     ring < num_tx_queues; ring++) {
>  		if (priv->tx) {
>  			do {
>  				start =
> @@ -232,7 +238,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
>  	i = GVE_MAIN_STATS_LEN;
>  
>  	/* For rx cross-reporting stats, start from nic rx stats in report */
> -	base_stats_idx = GVE_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues +
> +	base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues +
>  		GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues;
>  	max_stats_idx = NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues +
>  		base_stats_idx;
> @@ -298,7 +304,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
>  
>  	/* For tx cross-reporting stats, start from nic tx stats in report */
>  	base_stats_idx = max_stats_idx;
> -	max_stats_idx = NIC_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues +
> +	max_stats_idx = NIC_TX_STATS_REPORT_NUM * num_tx_queues +
>  		max_stats_idx;
>  	/* Preprocess the stats report for tx, map queue id to start index */
>  	skip_nic_stats = false;
> @@ -316,7 +322,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
>  	}
>  	/* walk TX rings */
>  	if (priv->tx) {
> -		for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
> +		for (ring = 0; ring < num_tx_queues; ring++) {
>  			struct gve_tx_ring *tx = &priv->tx[ring];
>  
>  			if (gve_is_gqi(priv)) {
> @@ -355,7 +361,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
>  			}
>  		}
>  	} else {
> -		i += priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS;
> +		i += num_tx_queues * NUM_GVE_TX_CNTS;
>  	}
>  
>  	kfree(rx_qid_to_stats_idx);
> @@ -502,7 +508,9 @@ static int gve_set_priv_flags(struct net_device *netdev, u32 flags)
>  {
>  	struct gve_priv *priv = netdev_priv(netdev);
>  	u64 ori_flags, new_flags;
> +	int num_tx_queues;
>  
> +	num_tx_queues = gve_num_tx_queues(priv);
>  	ori_flags = READ_ONCE(priv->ethtool_flags);
>  	new_flags = ori_flags;
>  
> @@ -522,7 +530,7 @@ static int gve_set_priv_flags(struct net_device *netdev, u32 flags)
>  	/* delete report stats timer. */
>  	if (!(flags & BIT(0)) && (ori_flags & BIT(0))) {
>  		int tx_stats_num = GVE_TX_STATS_REPORT_NUM *
> -			priv->tx_cfg.num_queues;
> +			num_tx_queues;
>  		int rx_stats_num = GVE_RX_STATS_REPORT_NUM *
>  			priv->rx_cfg.num_queues;
>  
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> index 07111c241e0e..3cfdeeb74f60 100644
> --- a/drivers/net/ethernet/google/gve/gve_main.c
> +++ b/drivers/net/ethernet/google/gve/gve_main.c
> @@ -90,8 +90,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
>  	struct gve_priv *priv = netdev_priv(dev);
>  	unsigned int start;
>  	u64 packets, bytes;
> +	int num_tx_queues;
>  	int ring;
>  
> +	num_tx_queues = gve_num_tx_queues(priv);
>  	if (priv->rx) {
>  		for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
>  			do {
> @@ -106,7 +108,7 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
>  		}
>  	}
>  	if (priv->tx) {
> -		for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
> +		for (ring = 0; ring < num_tx_queues; ring++) {
>  			do {
>  				start =
>  				  u64_stats_fetch_begin(&priv->tx[ring].statss);
> @@ -180,7 +182,7 @@ static int gve_alloc_stats_report(struct gve_priv *priv)
>  	int tx_stats_num, rx_stats_num;
>  
>  	tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) *
> -		       priv->tx_cfg.num_queues;
> +		       gve_num_tx_queues(priv);
>  	rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) *
>  		       priv->rx_cfg.num_queues;
>  	priv->stats_report_len = struct_size(priv->stats_report, stats,
> @@ -622,20 +624,21 @@ static int gve_unregister_qpls(struct gve_priv *priv)
>  
>  static int gve_create_rings(struct gve_priv *priv)
>  {
> +	int num_tx_queues = gve_num_tx_queues(priv);
>  	int err;
>  	int i;
>  
> -	err = gve_adminq_create_tx_queues(priv, priv->tx_cfg.num_queues);
> +	err = gve_adminq_create_tx_queues(priv, num_tx_queues);
>  	if (err) {
>  		netif_err(priv, drv, priv->dev, "failed to create %d tx queues\n",
> -			  priv->tx_cfg.num_queues);
> +			  num_tx_queues);
>  		/* This failure will trigger a reset - no need to clean
>  		 * up
>  		 */
>  		return err;
>  	}
>  	netif_dbg(priv, drv, priv->dev, "created %d tx queues\n",
> -		  priv->tx_cfg.num_queues);
> +		  num_tx_queues);
>  
>  	err = gve_adminq_create_rx_queues(priv, priv->rx_cfg.num_queues);
>  	if (err) {
> @@ -675,7 +678,7 @@ static void add_napi_init_sync_stats(struct gve_priv *priv,
>  	int i;
>  
>  	/* Add tx napi & init sync stats*/
> -	for (i = 0; i < priv->tx_cfg.num_queues; i++) {
> +	for (i = 0; i < gve_num_tx_queues(priv); i++) {
>  		int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
>  
>  		u64_stats_init(&priv->tx[i].statss);
> @@ -753,9 +756,10 @@ static int gve_alloc_rings(struct gve_priv *priv)
>  
>  static int gve_destroy_rings(struct gve_priv *priv)
>  {
> +	int num_tx_queues = gve_num_tx_queues(priv);
>  	int err;
>  
> -	err = gve_adminq_destroy_tx_queues(priv, priv->tx_cfg.num_queues);
> +	err = gve_adminq_destroy_tx_queues(priv, num_tx_queues);
>  	if (err) {
>  		netif_err(priv, drv, priv->dev,
>  			  "failed to destroy tx queues\n");
> @@ -784,11 +788,12 @@ static void gve_rx_free_rings(struct gve_priv *priv)
>  
>  static void gve_free_rings(struct gve_priv *priv)
>  {
> +	int num_tx_queues = gve_num_tx_queues(priv);
>  	int ntfy_idx;
>  	int i;
>  
>  	if (priv->tx) {
> -		for (i = 0; i < priv->tx_cfg.num_queues; i++) {
> +		for (i = 0; i < num_tx_queues; i++) {
>  			ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
>  			gve_remove_napi(priv, ntfy_idx);
>  		}
> @@ -1118,7 +1123,7 @@ static void gve_turndown(struct gve_priv *priv)
>  		return;
>  
>  	/* Disable napi to prevent more work from coming in */
> -	for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
> +	for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
>  		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
>  		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
>  
> @@ -1146,7 +1151,7 @@ static void gve_turnup(struct gve_priv *priv)
>  	netif_tx_start_all_queues(priv->dev);
>  
>  	/* Enable napi and unmask interrupts for all queues */
> -	for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
> +	for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
>  		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
>  		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
>  
> @@ -1306,7 +1311,7 @@ void gve_handle_report_stats(struct gve_priv *priv)
>  	be64_add_cpu(&priv->stats_report->written_count, 1);
>  	/* tx stats */
>  	if (priv->tx) {
> -		for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
> +		for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
>  			u32 last_completion = 0;
>  			u32 tx_frames = 0;
>  
> diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
> index 1f55137722b0..db1c74b1d7d3 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx.c
> @@ -556,7 +556,7 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
>  
>  	if (len <= priv->rx_copybreak && is_only_frag)  {
>  		/* Just copy small packets */
> -		skb = gve_rx_copy(netdev, napi, page_info, len, GVE_RX_PAD);
> +		skb = gve_rx_copy(netdev, napi, page_info, len);
>  		if (skb) {
>  			u64_stats_update_begin(&rx->statss);
>  			rx->rx_copied_pkt++;
> diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> index 630f42a3037b..e57b73eb70f6 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
> @@ -568,7 +568,7 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
>  
>  	if (eop && buf_len <= priv->rx_copybreak) {
>  		rx->ctx.skb_head = gve_rx_copy(priv->dev, napi,
> -					       &buf_state->page_info, buf_len, 0);
> +					       &buf_state->page_info, buf_len);
>  		if (unlikely(!rx->ctx.skb_head))
>  			goto error;
>  		rx->ctx.skb_tail = rx->ctx.skb_head;
> diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
> index 4888bf05fbed..0fb052ce9e0b 100644
> --- a/drivers/net/ethernet/google/gve/gve_tx.c
> +++ b/drivers/net/ethernet/google/gve/gve_tx.c
> @@ -374,18 +374,18 @@ static int gve_maybe_stop_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
>  }
>  
>  static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
> -				 struct sk_buff *skb, bool is_gso,
> +				 u16 csum_offset, u8 ip_summed, bool is_gso,
>  				 int l4_hdr_offset, u32 desc_cnt,
> -				 u16 hlen, u64 addr)
> +				 u16 hlen, u64 addr, u16 pkt_len)
>  {
>  	/* l4_hdr_offset and csum_offset are in units of 16-bit words */
>  	if (is_gso) {
>  		pkt_desc->pkt.type_flags = GVE_TXD_TSO | GVE_TXF_L4CSUM;
> -		pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
> +		pkt_desc->pkt.l4_csum_offset = csum_offset >> 1;
>  		pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
> -	} else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
> +	} else if (likely(ip_summed == CHECKSUM_PARTIAL)) {
>  		pkt_desc->pkt.type_flags = GVE_TXD_STD | GVE_TXF_L4CSUM;
> -		pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
> +		pkt_desc->pkt.l4_csum_offset = csum_offset >> 1;
>  		pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
>  	} else {
>  		pkt_desc->pkt.type_flags = GVE_TXD_STD;
> @@ -393,7 +393,7 @@ static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
>  		pkt_desc->pkt.l4_hdr_offset = 0;
>  	}
>  	pkt_desc->pkt.desc_cnt = desc_cnt;
> -	pkt_desc->pkt.len = cpu_to_be16(skb->len);
> +	pkt_desc->pkt.len = cpu_to_be16(pkt_len);
>  	pkt_desc->pkt.seg_len = cpu_to_be16(hlen);
>  	pkt_desc->pkt.seg_addr = cpu_to_be64(addr);
>  }
> @@ -412,15 +412,16 @@ static void gve_tx_fill_mtd_desc(union gve_tx_desc *mtd_desc,
>  }
>  
>  static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
> -				 struct sk_buff *skb, bool is_gso,
> +				 u16 l3_offset, u16 gso_size,
> +				 bool is_gso_v6, bool is_gso,
>  				 u16 len, u64 addr)
>  {
>  	seg_desc->seg.type_flags = GVE_TXD_SEG;
>  	if (is_gso) {
> -		if (skb_is_gso_v6(skb))
> +		if (is_gso_v6)
>  			seg_desc->seg.type_flags |= GVE_TXSF_IPV6;
> -		seg_desc->seg.l3_offset = skb_network_offset(skb) >> 1;
> -		seg_desc->seg.mss = cpu_to_be16(skb_shinfo(skb)->gso_size);
> +		seg_desc->seg.l3_offset = l3_offset >> 1;
> +		seg_desc->seg.mss = cpu_to_be16(gso_size);
>  	}
>  	seg_desc->seg.seg_len = cpu_to_be16(len);
>  	seg_desc->seg.seg_addr = cpu_to_be64(addr);
> @@ -473,9 +474,10 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
>  	payload_nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, skb->len - hlen,
>  					   &info->iov[payload_iov]);
>  
> -	gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
> +	gve_tx_fill_pkt_desc(pkt_desc, skb->csum_offset, skb->ip_summed,
> +			     is_gso, l4_hdr_offset,
>  			     1 + mtd_desc_nr + payload_nfrags, hlen,
> -			     info->iov[hdr_nfrags - 1].iov_offset);
> +			     info->iov[hdr_nfrags - 1].iov_offset, skb->len);
>  
>  	skb_copy_bits(skb, 0,
>  		      tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset,
> @@ -494,7 +496,9 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
>  		next_idx = (tx->req + 1 + mtd_desc_nr + i - payload_iov) & tx->mask;
>  		seg_desc = &tx->desc[next_idx];
>  
> -		gve_tx_fill_seg_desc(seg_desc, skb, is_gso,
> +		gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
> +				     skb_shinfo(skb)->gso_size,
> +				     skb_is_gso_v6(skb), is_gso,
>  				     info->iov[i].iov_len,
>  				     info->iov[i].iov_offset);
>  
> @@ -552,8 +556,9 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
>  	if (mtd_desc_nr)
>  		num_descriptors++;
>  
> -	gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
> -			     num_descriptors, hlen, addr);
> +	gve_tx_fill_pkt_desc(pkt_desc, skb->csum_offset, skb->ip_summed,
> +			     is_gso, l4_hdr_offset,
> +			     num_descriptors, hlen, addr, skb->len);
>  
>  	if (mtd_desc_nr) {
>  		idx = (idx + 1) & tx->mask;
> @@ -569,7 +574,9 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
>  		addr += hlen;
>  		idx = (idx + 1) & tx->mask;
>  		seg_desc = &tx->desc[idx];
> -		gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
> +		gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
> +				     skb_shinfo(skb)->gso_size,
> +				     skb_is_gso_v6(skb), is_gso, len, addr);
>  	}
>  
>  	for (i = 0; i < shinfo->nr_frags; i++) {
> @@ -587,7 +594,9 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
>  		dma_unmap_len_set(&tx->info[idx], len, len);
>  		dma_unmap_addr_set(&tx->info[idx], dma, addr);
>  
> -		gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
> +		gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
> +				     skb_shinfo(skb)->gso_size,
> +				     skb_is_gso_v6(skb), is_gso, len, addr);
>  	}
>  
>  	return num_descriptors;
> diff --git a/drivers/net/ethernet/google/gve/gve_utils.c b/drivers/net/ethernet/google/gve/gve_utils.c
> index 6ba46adaaee3..26e08d753270 100644
> --- a/drivers/net/ethernet/google/gve/gve_utils.c
> +++ b/drivers/net/ethernet/google/gve/gve_utils.c
> @@ -49,10 +49,10 @@ void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx)
>  }
>  
>  struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
> -			    struct gve_rx_slot_page_info *page_info, u16 len,
> -			    u16 padding)
> +			    struct gve_rx_slot_page_info *page_info, u16 len)
>  {
> -	void *va = page_info->page_address + padding + page_info->page_offset;
> +	void *va = page_info->page_address + page_info->page_offset +
> +		page_info->pad;
>  	struct sk_buff *skb;
>  
>  	skb = napi_alloc_skb(napi, len);
> diff --git a/drivers/net/ethernet/google/gve/gve_utils.h b/drivers/net/ethernet/google/gve/gve_utils.h
> index 79595940b351..324fd98a6112 100644
> --- a/drivers/net/ethernet/google/gve/gve_utils.h
> +++ b/drivers/net/ethernet/google/gve/gve_utils.h
> @@ -18,8 +18,7 @@ void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx);
>  void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx);
>  
>  struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
> -			    struct gve_rx_slot_page_info *page_info, u16 len,
> -			    u16 pad);
> +			    struct gve_rx_slot_page_info *page_info, u16 len);
>  
>  /* Decrement pagecnt_bias. Set it back to INT_MAX if it reached zero. */
>  void gve_dec_pagecnt_bias(struct gve_rx_slot_page_info *page_info);
> -- 
> 2.40.0.rc1.284.g88254d51c5-goog
> 


* Re: [PATCH net-next v3 2/5] gve: Changes to add new TX queues
  2023-03-13 20:26 ` [PATCH net-next v3 2/5] gve: Changes to add new TX queues Praveen Kaligineedi
@ 2023-03-15 17:14   ` Michal Kubiak
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Kubiak @ 2023-03-15 17:14 UTC (permalink / raw)
  To: Praveen Kaligineedi
  Cc: netdev, davem, kuba, maciej.fijalkowski, Jeroen de Borst

On Mon, Mar 13, 2023 at 01:26:37PM -0700, Praveen Kaligineedi wrote:
> Changes to enable adding and removing TX queues without calling
> gve_close() and gve_open().
> 
> Made the following changes:
> 1) priv->tx, priv->rx and priv->qpls arrays are allocated based on
>    max tx queues and max rx queues
> 2) Changed gve_adminq_create_tx_queues(), gve_adminq_destroy_tx_queues(),
> gve_tx_alloc_rings() and gve_tx_free_rings() functions to add/remove a
> subset of TX queues rather than all the TX queues.
> 
> Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
> Reviewed-by: Jeroen de Borst <jeroendb@google.com>
> 

Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>

> ---
> Changed in v2:
> - Added this patch to address the issue raised by Jakub Kicinski about
>   implications of resource allocation failing after reconfig.
> 
> Changed in v3:
> - No changes
> ---
>  drivers/net/ethernet/google/gve/gve.h        | 45 +++++++----
>  drivers/net/ethernet/google/gve/gve_adminq.c |  8 +-
>  drivers/net/ethernet/google/gve/gve_adminq.h |  4 +-
>  drivers/net/ethernet/google/gve/gve_main.c   | 83 ++++++++++++++------
>  drivers/net/ethernet/google/gve/gve_rx.c     |  2 +-
>  drivers/net/ethernet/google/gve/gve_tx.c     | 12 +--
>  6 files changed, 104 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> index f52f23198278..f354a6448c25 100644
> --- a/drivers/net/ethernet/google/gve/gve.h
> +++ b/drivers/net/ethernet/google/gve/gve.h
> @@ -798,16 +798,35 @@ static inline u32 gve_num_rx_qpls(struct gve_priv *priv)
>  	return priv->rx_cfg.num_queues;
>  }
>  
> +static inline u32 gve_tx_qpl_id(struct gve_priv *priv, int tx_qid)
> +{
> +	return tx_qid;
> +}
> +
> +static inline u32 gve_rx_qpl_id(struct gve_priv *priv, int rx_qid)
> +{
> +	return priv->tx_cfg.max_queues + rx_qid;
> +}
> +
> +static inline u32 gve_tx_start_qpl_id(struct gve_priv *priv)
> +{
> +	return gve_tx_qpl_id(priv, 0);
> +}
> +
> +static inline u32 gve_rx_start_qpl_id(struct gve_priv *priv)
> +{
> +	return gve_rx_qpl_id(priv, 0);
> +}
> +
>  /* Returns a pointer to the next available tx qpl in the list of qpls
>   */
>  static inline
> -struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv)
> +struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv, int tx_qid)
>  {
> -	int id = find_first_zero_bit(priv->qpl_cfg.qpl_id_map,
> -				     priv->qpl_cfg.qpl_map_size);
> +	int id = gve_tx_qpl_id(priv, tx_qid);
>  
> -	/* we are out of tx qpls */
> -	if (id >= gve_num_tx_qpls(priv))
> +	/* QPL already in use */
> +	if (test_bit(id, priv->qpl_cfg.qpl_id_map))
>  		return NULL;
>  
>  	set_bit(id, priv->qpl_cfg.qpl_id_map);
> @@ -817,14 +836,12 @@ struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv)
>  /* Returns a pointer to the next available rx qpl in the list of qpls
>   */
>  static inline
> -struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv)
> +struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv, int rx_qid)
>  {
> -	int id = find_next_zero_bit(priv->qpl_cfg.qpl_id_map,
> -				    priv->qpl_cfg.qpl_map_size,
> -				    gve_num_tx_qpls(priv));
> +	int id = gve_rx_qpl_id(priv, rx_qid);
>  
> -	/* we are out of rx qpls */
> -	if (id == gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv))
> +	/* QPL already in use */
> +	if (test_bit(id, priv->qpl_cfg.qpl_id_map))
>  		return NULL;
>  
>  	set_bit(id, priv->qpl_cfg.qpl_id_map);
> @@ -843,7 +860,7 @@ static inline void gve_unassign_qpl(struct gve_priv *priv, int id)
>  static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv,
>  						      int id)
>  {
> -	if (id < gve_num_tx_qpls(priv))
> +	if (id < gve_rx_start_qpl_id(priv))
>  		return DMA_TO_DEVICE;
>  	else
>  		return DMA_FROM_DEVICE;
> @@ -869,8 +886,8 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
>  /* tx handling */
>  netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
>  bool gve_tx_poll(struct gve_notify_block *block, int budget);
> -int gve_tx_alloc_rings(struct gve_priv *priv);
> -void gve_tx_free_rings_gqi(struct gve_priv *priv);
> +int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings);
> +void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings);
>  u32 gve_tx_load_event_counter(struct gve_priv *priv,
>  			      struct gve_tx_ring *tx);
>  bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
> diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
> index 60061288ad9d..252974202a3f 100644
> --- a/drivers/net/ethernet/google/gve/gve_adminq.c
> +++ b/drivers/net/ethernet/google/gve/gve_adminq.c
> @@ -516,12 +516,12 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
>  	return gve_adminq_issue_cmd(priv, &cmd);
>  }
>  
> -int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 num_queues)
> +int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
>  {
>  	int err;
>  	int i;
>  
> -	for (i = 0; i < num_queues; i++) {
> +	for (i = start_id; i < start_id + num_queues; i++) {
>  		err = gve_adminq_create_tx_queue(priv, i);
>  		if (err)
>  			return err;
> @@ -604,12 +604,12 @@ static int gve_adminq_destroy_tx_queue(struct gve_priv *priv, u32 queue_index)
>  	return 0;
>  }
>  
> -int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 num_queues)
> +int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
>  {
>  	int err;
>  	int i;
>  
> -	for (i = 0; i < num_queues; i++) {
> +	for (i = start_id; i < start_id + num_queues; i++) {
>  		err = gve_adminq_destroy_tx_queue(priv, i);
>  		if (err)
>  			return err;
> diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
> index cf29662e6ad1..f894beb3deaf 100644
> --- a/drivers/net/ethernet/google/gve/gve_adminq.h
> +++ b/drivers/net/ethernet/google/gve/gve_adminq.h
> @@ -410,8 +410,8 @@ int gve_adminq_configure_device_resources(struct gve_priv *priv,
>  					  dma_addr_t db_array_bus_addr,
>  					  u32 num_ntfy_blks);
>  int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
> -int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 num_queues);
> -int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 queue_id);
> +int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
> +int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
>  int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
>  int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
>  int gve_adminq_register_page_list(struct gve_priv *priv,
> diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> index 3cfdeeb74f60..160ca77c2751 100644
> --- a/drivers/net/ethernet/google/gve/gve_main.c
> +++ b/drivers/net/ethernet/google/gve/gve_main.c
> @@ -584,11 +584,26 @@ static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
>  
>  static int gve_register_qpls(struct gve_priv *priv)
>  {
> -	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
> +	int start_id;
>  	int err;
>  	int i;
>  
> -	for (i = 0; i < num_qpls; i++) {
> +	start_id = gve_tx_start_qpl_id(priv);
> +	for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
> +		err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
> +		if (err) {
> +			netif_err(priv, drv, priv->dev,
> +				  "failed to register queue page list %d\n",
> +				  priv->qpls[i].id);
> +			/* This failure will trigger a reset - no need to clean
> +			 * up
> +			 */
> +			return err;
> +		}
> +	}
> +
> +	start_id = gve_rx_start_qpl_id(priv);
> +	for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
>  		err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
>  		if (err) {
>  			netif_err(priv, drv, priv->dev,
> @@ -605,11 +620,24 @@ static int gve_register_qpls(struct gve_priv *priv)
>  
>  static int gve_unregister_qpls(struct gve_priv *priv)
>  {
> -	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
> +	int start_id;
>  	int err;
>  	int i;
>  
> -	for (i = 0; i < num_qpls; i++) {
> +	start_id = gve_tx_start_qpl_id(priv);
> +	for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
> +		err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
> +		/* This failure will trigger a reset - no need to clean up */
> +		if (err) {
> +			netif_err(priv, drv, priv->dev,
> +				  "Failed to unregister queue page list %d\n",
> +				  priv->qpls[i].id);
> +			return err;
> +		}
> +	}
> +
> +	start_id = gve_rx_start_qpl_id(priv);
> +	for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
>  		err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
>  		/* This failure will trigger a reset - no need to clean up */
>  		if (err) {
> @@ -628,7 +656,7 @@ static int gve_create_rings(struct gve_priv *priv)
>  	int err;
>  	int i;
>  
> -	err = gve_adminq_create_tx_queues(priv, num_tx_queues);
> +	err = gve_adminq_create_tx_queues(priv, 0, num_tx_queues);
>  	if (err) {
>  		netif_err(priv, drv, priv->dev, "failed to create %d tx queues\n",
>  			  num_tx_queues);
> @@ -695,10 +723,10 @@ static void add_napi_init_sync_stats(struct gve_priv *priv,
>  	}
>  }
>  
> -static void gve_tx_free_rings(struct gve_priv *priv)
> +static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings)
>  {
>  	if (gve_is_gqi(priv)) {
> -		gve_tx_free_rings_gqi(priv);
> +		gve_tx_free_rings_gqi(priv, start_id, num_rings);
>  	} else {
>  		gve_tx_free_rings_dqo(priv);
>  	}
> @@ -709,20 +737,20 @@ static int gve_alloc_rings(struct gve_priv *priv)
>  	int err;
>  
>  	/* Setup tx rings */
> -	priv->tx = kvcalloc(priv->tx_cfg.num_queues, sizeof(*priv->tx),
> +	priv->tx = kvcalloc(priv->tx_cfg.max_queues, sizeof(*priv->tx),
>  			    GFP_KERNEL);
>  	if (!priv->tx)
>  		return -ENOMEM;
>  
>  	if (gve_is_gqi(priv))
> -		err = gve_tx_alloc_rings(priv);
> +		err = gve_tx_alloc_rings(priv, 0, gve_num_tx_queues(priv));
>  	else
>  		err = gve_tx_alloc_rings_dqo(priv);
>  	if (err)
>  		goto free_tx;
>  
>  	/* Setup rx rings */
> -	priv->rx = kvcalloc(priv->rx_cfg.num_queues, sizeof(*priv->rx),
> +	priv->rx = kvcalloc(priv->rx_cfg.max_queues, sizeof(*priv->rx),
>  			    GFP_KERNEL);
>  	if (!priv->rx) {
>  		err = -ENOMEM;
> @@ -747,7 +775,7 @@ static int gve_alloc_rings(struct gve_priv *priv)
>  	kvfree(priv->rx);
>  	priv->rx = NULL;
>  free_tx_queue:
> -	gve_tx_free_rings(priv);
> +	gve_tx_free_rings(priv, 0, gve_num_tx_queues(priv));
>  free_tx:
>  	kvfree(priv->tx);
>  	priv->tx = NULL;
> @@ -759,7 +787,7 @@ static int gve_destroy_rings(struct gve_priv *priv)
>  	int num_tx_queues = gve_num_tx_queues(priv);
>  	int err;
>  
> -	err = gve_adminq_destroy_tx_queues(priv, num_tx_queues);
> +	err = gve_adminq_destroy_tx_queues(priv, 0, num_tx_queues);
>  	if (err) {
>  		netif_err(priv, drv, priv->dev,
>  			  "failed to destroy tx queues\n");
> @@ -797,7 +825,7 @@ static void gve_free_rings(struct gve_priv *priv)
>  			ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
>  			gve_remove_napi(priv, ntfy_idx);
>  		}
> -		gve_tx_free_rings(priv);
> +		gve_tx_free_rings(priv, 0, num_tx_queues);
>  		kvfree(priv->tx);
>  		priv->tx = NULL;
>  	}
> @@ -894,40 +922,46 @@ static void gve_free_queue_page_list(struct gve_priv *priv, u32 id)
>  			      qpl->page_buses[i], gve_qpl_dma_dir(priv, id));
>  
>  	kvfree(qpl->page_buses);
> +	qpl->page_buses = NULL;
>  free_pages:
>  	kvfree(qpl->pages);
> +	qpl->pages = NULL;
>  	priv->num_registered_pages -= qpl->num_entries;
>  }
>  
>  static int gve_alloc_qpls(struct gve_priv *priv)
>  {
> -	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
> +	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
> +	int start_id;
>  	int i, j;
>  	int err;
>  
> -	if (num_qpls == 0)
> +	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
>  		return 0;
>  
> -	priv->qpls = kvcalloc(num_qpls, sizeof(*priv->qpls), GFP_KERNEL);
> +	priv->qpls = kvcalloc(max_queues, sizeof(*priv->qpls), GFP_KERNEL);
>  	if (!priv->qpls)
>  		return -ENOMEM;
>  
> -	for (i = 0; i < gve_num_tx_qpls(priv); i++) {
> +	start_id = gve_tx_start_qpl_id(priv);
> +	for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
>  		err = gve_alloc_queue_page_list(priv, i,
>  						priv->tx_pages_per_qpl);
>  		if (err)
>  			goto free_qpls;
>  	}
> -	for (; i < num_qpls; i++) {
> +
> +	start_id = gve_rx_start_qpl_id(priv);
> +	for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
>  		err = gve_alloc_queue_page_list(priv, i,
>  						priv->rx_data_slot_cnt);
>  		if (err)
>  			goto free_qpls;
>  	}
>  
> -	priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(num_qpls) *
> +	priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(max_queues) *
>  				     sizeof(unsigned long) * BITS_PER_BYTE;
> -	priv->qpl_cfg.qpl_id_map = kvcalloc(BITS_TO_LONGS(num_qpls),
> +	priv->qpl_cfg.qpl_id_map = kvcalloc(BITS_TO_LONGS(max_queues),
>  					    sizeof(unsigned long), GFP_KERNEL);
>  	if (!priv->qpl_cfg.qpl_id_map) {
>  		err = -ENOMEM;
> @@ -940,23 +974,26 @@ static int gve_alloc_qpls(struct gve_priv *priv)
>  	for (j = 0; j <= i; j++)
>  		gve_free_queue_page_list(priv, j);
>  	kvfree(priv->qpls);
> +	priv->qpls = NULL;
>  	return err;
>  }
>  
>  static void gve_free_qpls(struct gve_priv *priv)
>  {
> -	int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
> +	int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
>  	int i;
>  
> -	if (num_qpls == 0)
> +	if (!priv->qpls)
>  		return;
>  
>  	kvfree(priv->qpl_cfg.qpl_id_map);
> +	priv->qpl_cfg.qpl_id_map = NULL;
>  
> -	for (i = 0; i < num_qpls; i++)
> +	for (i = 0; i < max_queues; i++)
>  		gve_free_queue_page_list(priv, i);
>  
>  	kvfree(priv->qpls);
> +	priv->qpls = NULL;
>  }
>  
>  /* Use this to schedule a reset when the device is capable of continuing
> diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
> index db1c74b1d7d3..051a15e4f1af 100644
> --- a/drivers/net/ethernet/google/gve/gve_rx.c
> +++ b/drivers/net/ethernet/google/gve/gve_rx.c
> @@ -124,7 +124,7 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
>  		return -ENOMEM;
>  
>  	if (!rx->data.raw_addressing) {
> -		rx->data.qpl = gve_assign_rx_qpl(priv);
> +		rx->data.qpl = gve_assign_rx_qpl(priv, rx->q_num);
>  		if (!rx->data.qpl) {
>  			kvfree(rx->data.page_info);
>  			rx->data.page_info = NULL;
> diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
> index 0fb052ce9e0b..e24e73e74e33 100644
> --- a/drivers/net/ethernet/google/gve/gve_tx.c
> +++ b/drivers/net/ethernet/google/gve/gve_tx.c
> @@ -195,7 +195,7 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
>  	tx->raw_addressing = priv->queue_format == GVE_GQI_RDA_FORMAT;
>  	tx->dev = &priv->pdev->dev;
>  	if (!tx->raw_addressing) {
> -		tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
> +		tx->tx_fifo.qpl = gve_assign_tx_qpl(priv, idx);
>  		if (!tx->tx_fifo.qpl)
>  			goto abort_with_desc;
>  		/* map Tx FIFO */
> @@ -233,12 +233,12 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
>  	return -ENOMEM;
>  }
>  
> -int gve_tx_alloc_rings(struct gve_priv *priv)
> +int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings)
>  {
>  	int err = 0;
>  	int i;
>  
> -	for (i = 0; i < priv->tx_cfg.num_queues; i++) {
> +	for (i = start_id; i < start_id + num_rings; i++) {
>  		err = gve_tx_alloc_ring(priv, i);
>  		if (err) {
>  			netif_err(priv, drv, priv->dev,
> @@ -251,17 +251,17 @@ int gve_tx_alloc_rings(struct gve_priv *priv)
>  	if (err) {
>  		int j;
>  
> -		for (j = 0; j < i; j++)
> +		for (j = start_id; j < i; j++)
>  			gve_tx_free_ring(priv, j);
>  	}
>  	return err;
>  }
>  
> -void gve_tx_free_rings_gqi(struct gve_priv *priv)
> +void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings)
>  {
>  	int i;
>  
> -	for (i = 0; i < priv->tx_cfg.num_queues; i++)
> +	for (i = start_id; i < start_id + num_rings; i++)
>  		gve_tx_free_ring(priv, i);
>  }
>  
> -- 
> 2.40.0.rc1.284.g88254d51c5-goog
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format
  2023-03-15 16:53   ` Michal Kubiak
@ 2023-03-15 21:10     ` Praveen Kaligineedi
  2023-03-16 22:07       ` Michal Kubiak
  0 siblings, 1 reply; 11+ messages in thread
From: Praveen Kaligineedi @ 2023-03-15 21:10 UTC (permalink / raw)
  To: Michal Kubiak; +Cc: netdev, davem, kuba, maciej.fijalkowski, Jeroen de Borst

Hi Michal,

Thanks for your feedback. We will add xdp_features
in v4 of this patch. Please find my responses to your other
comments inline:

On Wed, Mar 15, 2023 at 9:53 AM Michal Kubiak <michal.kubiak@intel.com> wrote:
>
> On Mon, Mar 13, 2023 at 01:26:38PM -0700, Praveen Kaligineedi wrote:
> > Add support for XDP PASS, DROP and TX actions.
> >
> > This patch contains the following changes:
> > 1) Support installing/uninstalling XDP program
> > 2) Add dedicated XDP TX queues
> > 3) Add support for XDP DROP action
> > 4) Add support for XDP TX action
> >
> > Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
> > Reviewed-by: Jeroen de Borst <jeroendb@google.com>
> >
> > ---
> > Changed in v2:
> > - Removed gve_close/gve_open when adding XDP dedicated queues. Instead
> > we add and register additional TX queues when the XDP program is
> > installed. If the allocation/registration fails we return error and do
> > not install the XDP program.
> > - Removed xdp tx spin lock from this patch. It is needed for XDP_REDIRECT
> > support as both XDP_REDIRECT and XDP_TX traffic share the dedicated XDP
> > queues. Moved the code to add xdp tx spinlock to the subsequent patch
> > that adds XDP_REDIRECT support.
> > - Added netdev_err when the user tries to set rx/tx queues to the values
> > not supported when XDP is enabled.
> > - Removed rcu annotation for xdp_prog. We disable the napi prior to
> > adding/removing the xdp_prog and reenable it after the program has
> > been installed for all the queues.
> > - Ring the tx doorbell once per napi poll instead of once per XDP TX packet.
> > - Added a new helper function for freeing the FIFO buffer
> > - Unregister xdp rxq for all the queues when the registration
> > fails during XDP program installation
> >
> > Changed in v3:
> > - Padding bytes are used if the XDP TX packet headers do not
> > fit at the tail of the TX FIFO. These padding bytes are now taken
> > into account when checking whether enough space is available in the
> > TX FIFO.
> > ---
>
> Hi Praveen,
>
> Please find my comments inline.
> Also, I have a general comment regarding the newest XDP netdev API. As far
> as I checked, you do not use "net_device.xdp_features". That member has
> been part of "struct net_device" since kernel 6.3-rc1, and those features
> are now checked before .ndo_bpf is called, so you should set all
> supported flags in that structure member.
>
> Thanks,
> Michal
>
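As a concrete illustration of this suggestion (a rough sketch only, not
the actual v4 change), advertising the supported features on a 6.3+
kernel could look something like the following, e.g. before
register_netdev(); the flag names are the real ones from
uapi/linux/netdev.h, but the exact set belongs in the v4 patch:

```c
/* Illustrative sketch only -- advertise the XDP actions this series
 * implements so the core xdp_features check permits .ndo_bpf.
 */
dev->xdp_features = NETDEV_XDP_ACT_BASIC |
		    NETDEV_XDP_ACT_REDIRECT |
		    NETDEV_XDP_ACT_NDO_XMIT |
		    NETDEV_XDP_ACT_XSK_ZEROCOPY;
```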
> >  drivers/net/ethernet/google/gve/gve.h         |  44 +-
> >  drivers/net/ethernet/google/gve/gve_ethtool.c |  37 +-
> >  drivers/net/ethernet/google/gve/gve_main.c    | 376 +++++++++++++++++-
> >  drivers/net/ethernet/google/gve/gve_rx.c      |  74 +++-
> >  drivers/net/ethernet/google/gve/gve_tx.c      | 149 ++++++-
> >  5 files changed, 658 insertions(+), 22 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> > index f354a6448c25..8d5234d4ba67 100644
> > --- a/drivers/net/ethernet/google/gve/gve.h
> > +++ b/drivers/net/ethernet/google/gve/gve.h
> > @@ -47,6 +47,10 @@
> >
> >  #define GVE_RX_BUFFER_SIZE_DQO 2048
> >
> > +#define GVE_XDP_ACTIONS 5
> > +
> > +#define GVE_TX_MAX_HEADER_SIZE 182
> > +
> >  /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
> >  struct gve_rx_desc_queue {
> >       struct gve_rx_desc *desc_ring; /* the descriptor ring */
> > @@ -230,7 +234,9 @@ struct gve_rx_ring {
> >       u64 rx_frag_flip_cnt; /* free-running count of rx segments where page_flip was used */
> >       u64 rx_frag_copy_cnt; /* free-running count of rx segments copied */
> >       u64 rx_frag_alloc_cnt; /* free-running count of rx page allocations */
> > -
> > +     u64 xdp_tx_errors;
> > +     u64 xdp_redirect_errors;
> > +     u64 xdp_actions[GVE_XDP_ACTIONS];
> >       u32 q_num; /* queue index */
> >       u32 ntfy_id; /* notification block index */
> >       struct gve_queue_resources *q_resources; /* head and tail pointer idx */
> > @@ -238,6 +244,9 @@ struct gve_rx_ring {
> >       struct u64_stats_sync statss; /* sync stats for 32bit archs */
> >
> >       struct gve_rx_ctx ctx; /* Info for packet currently being processed in this ring. */
> > +
> > +     /* XDP stuff */
> > +     struct xdp_rxq_info xdp_rxq;
> >  };
> >
> >  /* A TX desc ring entry */
> > @@ -259,6 +268,9 @@ struct gve_tx_iovec {
> >   */
> >  struct gve_tx_buffer_state {
> >       struct sk_buff *skb; /* skb for this pkt */
> > +     struct {
> > +             u16 size; /* size of xmitted xdp pkt */
> > +     } xdp;
> >       union {
> >               struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
> >               struct {
> > @@ -526,9 +538,11 @@ struct gve_priv {
> >       u16 rx_data_slot_cnt; /* rx buffer length */
> >       u64 max_registered_pages;
> >       u64 num_registered_pages; /* num pages registered with NIC */
> > +     struct bpf_prog *xdp_prog; /* XDP BPF program */
> >       u32 rx_copybreak; /* copy packets smaller than this */
> >       u16 default_num_queues; /* default num queues to set up */
> >
> > +     u16 num_xdp_queues;
> >       struct gve_queue_config tx_cfg;
> >       struct gve_queue_config rx_cfg;
> >       struct gve_qpl_config qpl_cfg; /* map used QPL ids */
> > @@ -785,7 +799,17 @@ static inline u32 gve_num_tx_qpls(struct gve_priv *priv)
> >       if (priv->queue_format != GVE_GQI_QPL_FORMAT)
> >               return 0;
> >
> > -     return priv->tx_cfg.num_queues;
> > +     return priv->tx_cfg.num_queues + priv->num_xdp_queues;
> > +}
> > +
> > +/* Returns the number of XDP tx queue page lists
> > + */
> > +static inline u32 gve_num_xdp_qpls(struct gve_priv *priv)
> > +{
> > +     if (priv->queue_format != GVE_GQI_QPL_FORMAT)
> > +             return 0;
> > +
> > +     return priv->num_xdp_queues;
> >  }
> >
> >  /* Returns the number of rx queue page lists
> > @@ -874,7 +898,17 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
> >
> >  static inline u32 gve_num_tx_queues(struct gve_priv *priv)
> >  {
> > -     return priv->tx_cfg.num_queues;
> > +     return priv->tx_cfg.num_queues + priv->num_xdp_queues;
> > +}
> > +
> > +static inline u32 gve_xdp_tx_queue_id(struct gve_priv *priv, u32 queue_id)
> > +{
> > +     return priv->tx_cfg.num_queues + queue_id;
> > +}
> > +
> > +static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
> > +{
> > +     return gve_xdp_tx_queue_id(priv, 0);
> >  }
> >
> >  /* buffers */
> > @@ -885,7 +919,11 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
> >                  enum dma_data_direction);
> >  /* tx handling */
> >  netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
> > +int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
> > +                  void *data, int len);
> > +void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
> >  bool gve_tx_poll(struct gve_notify_block *block, int budget);
> > +bool gve_xdp_poll(struct gve_notify_block *block, int budget);
> >  int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings);
> >  void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings);
> >  u32 gve_tx_load_event_counter(struct gve_priv *priv,
> > diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
> > index 5b6e31812fae..067b393ccf9d 100644
> > --- a/drivers/net/ethernet/google/gve/gve_ethtool.c
> > +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
> > @@ -34,6 +34,11 @@ static u32 gve_get_msglevel(struct net_device *netdev)
> >       return priv->msg_enable;
> >  }
> >
> > +/* For the following stats column string names, make sure the order
> > + * matches how it is filled in the code. For xdp_aborted, xdp_drop,
> > + * xdp_pass, xdp_tx, xdp_redirect, make sure it also matches the order
> > + * as declared in enum xdp_action inside file uapi/linux/bpf.h .
> > + */
> >  static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = {
> >       "rx_packets", "tx_packets", "rx_bytes", "tx_bytes",
> >       "rx_dropped", "tx_dropped", "tx_timeouts",
> > @@ -49,6 +54,9 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
> >       "rx_dropped_pkt[%u]", "rx_copybreak_pkt[%u]", "rx_copied_pkt[%u]",
> >       "rx_queue_drop_cnt[%u]", "rx_no_buffers_posted[%u]",
> >       "rx_drops_packet_over_mru[%u]", "rx_drops_invalid_checksum[%u]",
> > +     "rx_xdp_aborted[%u]", "rx_xdp_drop[%u]", "rx_xdp_pass[%u]",
> > +     "rx_xdp_tx[%u]", "rx_xdp_redirect[%u]",
> > +     "rx_xdp_tx_errors[%u]", "rx_xdp_redirect_errors[%u]",
> >  };
> >
> >  static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
> > @@ -289,14 +297,25 @@ gve_get_ethtool_stats(struct net_device *netdev,
> >                       if (skip_nic_stats) {
> >                               /* skip NIC rx stats */
> >                               i += NIC_RX_STATS_REPORT_NUM;
> > -                             continue;
> > -                     }
> > -                     for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
> > -                             u64 value =
> > -                             be64_to_cpu(report_stats[rx_qid_to_stats_idx[ring] + j].value);
> > +                     } else {
> > +                             stats_idx = rx_qid_to_stats_idx[ring];
> > +                             for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
> > +                                     u64 value =
> > +                                             be64_to_cpu(report_stats[stats_idx + j].value);
> >
> > -                             data[i++] = value;
> > +                                     data[i++] = value;
> > +                             }
> >                       }
> > +                     /* XDP rx counters */
> > +                     do {
> > +                             start = u64_stats_fetch_begin(&priv->rx[ring].statss);
> > +                             for (j = 0; j < GVE_XDP_ACTIONS; j++)
> > +                                     data[i + j] = rx->xdp_actions[j];
> > +                             data[i + j++] = rx->xdp_tx_errors;
> > +                             data[i + j++] = rx->xdp_redirect_errors;
> > +                     } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
> > +                                                    start));
> > +                     i += GVE_XDP_ACTIONS + 2; /* XDP rx counters */
> >               }
> >       } else {
> >               i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
> > @@ -418,6 +437,12 @@ static int gve_set_channels(struct net_device *netdev,
> >       if (!new_rx || !new_tx)
> >               return -EINVAL;
> >
> > +     if (priv->num_xdp_queues &&
> > +         (new_tx != new_rx || (2 * new_tx > priv->tx_cfg.max_queues))) {
> > +             dev_err(&priv->pdev->dev, "XDP load failed: The number of configured RX queues should be equal to the number of configured TX queues and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues");
>
> Please explain why the number of RX and TX queues cannot be asymmetric
> while the XDP program is loaded?
> Regarding the second condition: shouldn't it look like:
>         "2 * new_tx > priv->rx_cfg.max_queues" ?
>
> Please take a look at my other comments regarding XDP queues number.
>

For this first iteration of XDP support, we chose to require the number
of configured RX queues to equal the number of configured TX queues.
This simplifies how the dedicated XDP transmit queues are assigned for
XDP_TX traffic as well as for AF_XDP zero-copy TX traffic. We only load
an XDP program when the two counts are equal, so in effect: number of
XDP TX queues = number of configured TX queues = number of configured
RX queues.

The check 2 * new_tx > priv->tx_cfg.max_queues ensures that the stack
TX queues plus their dedicated XDP counterparts do not exceed the
maximum number of TX queues supported by the device.

> > +             return -EINVAL;
> > +     }
> > +
> >       if (!netif_carrier_ok(netdev)) {
> >               priv->tx_cfg.num_queues = new_tx;
> >               priv->rx_cfg.num_queues = new_rx;
> > diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> > index 160ca77c2751..7d3f15cf79ed 100644
> > --- a/drivers/net/ethernet/google/gve/gve_main.c
> > +++ b/drivers/net/ethernet/google/gve/gve_main.c
> > @@ -4,8 +4,10 @@
> >   * Copyright (C) 2015-2021 Google, Inc.
> >   */
> >
> > +#include <linux/bpf.h>
> >  #include <linux/cpumask.h>
> >  #include <linux/etherdevice.h>
> > +#include <linux/filter.h>
> >  #include <linux/interrupt.h>
> >  #include <linux/module.h>
> >  #include <linux/pci.h>
> > @@ -247,8 +249,13 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
> >       block = container_of(napi, struct gve_notify_block, napi);
> >       priv = block->priv;
> >
> > -     if (block->tx)
> > -             reschedule |= gve_tx_poll(block, budget);
> > +     if (block->tx) {
> > +             if (block->tx->q_num < priv->tx_cfg.num_queues)
> > +                     reschedule |= gve_tx_poll(block, budget);
> > +             else
> > +                     reschedule |= gve_xdp_poll(block, budget);
> > +     }
> > +
> >       if (block->rx) {
> >               work_done = gve_rx_poll(block, budget);
> >               reschedule |= work_done == budget;
> > @@ -582,6 +589,28 @@ static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
> >       netif_napi_del(&block->napi);
> >  }
> >
> > +static int gve_register_xdp_qpls(struct gve_priv *priv)
> > +{
> > +     int start_id;
> > +     int err;
> > +     int i;
> > +
> > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> > +             err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
> > +             if (err) {
> > +                     netif_err(priv, drv, priv->dev,
> > +                               "failed to register queue page list %d\n",
> > +                               priv->qpls[i].id);
> > +                     /* This failure will trigger a reset - no need to clean
> > +                      * up
> > +                      */
> > +                     return err;
> > +             }
> > +     }
> > +     return 0;
> > +}
> > +
> >  static int gve_register_qpls(struct gve_priv *priv)
> >  {
> >       int start_id;
> > @@ -618,6 +647,26 @@ static int gve_register_qpls(struct gve_priv *priv)
> >       return 0;
> >  }
> >
> > +static int gve_unregister_xdp_qpls(struct gve_priv *priv)
> > +{
> > +     int start_id;
> > +     int err;
> > +     int i;
> > +
> > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> > +             err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
> > +             /* This failure will trigger a reset - no need to clean up */
> > +             if (err) {
> > +                     netif_err(priv, drv, priv->dev,
> > +                               "Failed to unregister queue page list %d\n",
> > +                               priv->qpls[i].id);
> > +                     return err;
> > +             }
> > +     }
> > +     return 0;
> > +}
> > +
> >  static int gve_unregister_qpls(struct gve_priv *priv)
> >  {
> >       int start_id;
> > @@ -650,6 +699,27 @@ static int gve_unregister_qpls(struct gve_priv *priv)
> >       return 0;
> >  }
> >
> > +static int gve_create_xdp_rings(struct gve_priv *priv)
> > +{
> > +     int err;
> > +
> > +     err = gve_adminq_create_tx_queues(priv,
> > +                                       gve_xdp_tx_start_queue_id(priv),
> > +                                       priv->num_xdp_queues);
> > +     if (err) {
> > +             netif_err(priv, drv, priv->dev, "failed to create %d XDP tx queues\n",
> > +                       priv->num_xdp_queues);
> > +             /* This failure will trigger a reset - no need to clean
> > +              * up
> > +              */
> > +             return err;
> > +     }
> > +     netif_dbg(priv, drv, priv->dev, "created %d XDP tx queues\n",
> > +               priv->num_xdp_queues);
> > +
> > +     return 0;
> > +}
> > +
> >  static int gve_create_rings(struct gve_priv *priv)
> >  {
> >       int num_tx_queues = gve_num_tx_queues(priv);
> > @@ -699,6 +769,23 @@ static int gve_create_rings(struct gve_priv *priv)
> >       return 0;
> >  }
> >
> > +static void add_napi_init_xdp_sync_stats(struct gve_priv *priv,
> > +                                      int (*napi_poll)(struct napi_struct *napi,
> > +                                                       int budget))
> > +{
> > +     int start_id = gve_xdp_tx_start_queue_id(priv);
> > +     int i;
> > +
> > +     /* Add xdp tx napi & init sync stats*/
> > +     for (i = start_id; i < start_id + priv->num_xdp_queues; i++) {
> > +             int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
> > +
> > +             u64_stats_init(&priv->tx[i].statss);
> > +             priv->tx[i].ntfy_id = ntfy_idx;
> > +             gve_add_napi(priv, ntfy_idx, napi_poll);
> > +     }
> > +}
> > +
> >  static void add_napi_init_sync_stats(struct gve_priv *priv,
> >                                    int (*napi_poll)(struct napi_struct *napi,
> >                                                     int budget))
> > @@ -732,6 +819,23 @@ static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings
> >       }
> >  }
> >
> > +static int gve_alloc_xdp_rings(struct gve_priv *priv)
> > +{
> > +     int start_id;
> > +     int err = 0;
> > +
> > +     if (!priv->num_xdp_queues)
> > +             return 0;
> > +
> > +     start_id = gve_xdp_tx_start_queue_id(priv);
> > +     err = gve_tx_alloc_rings(priv, start_id, priv->num_xdp_queues);
> > +     if (err)
> > +             return err;
> > +     add_napi_init_xdp_sync_stats(priv, gve_napi_poll);
> > +
> > +     return 0;
> > +}
> > +
> >  static int gve_alloc_rings(struct gve_priv *priv)
> >  {
> >       int err;
> > @@ -782,6 +886,26 @@ static int gve_alloc_rings(struct gve_priv *priv)
> >       return err;
> >  }
> >
> > +static int gve_destroy_xdp_rings(struct gve_priv *priv)
> > +{
> > +     int start_id;
> > +     int err;
> > +
> > +     start_id = gve_xdp_tx_start_queue_id(priv);
> > +     err = gve_adminq_destroy_tx_queues(priv,
> > +                                        start_id,
> > +                                        priv->num_xdp_queues);
> > +     if (err) {
> > +             netif_err(priv, drv, priv->dev,
> > +                       "failed to destroy XDP queues\n");
> > +             /* This failure will trigger a reset - no need to clean up */
> > +             return err;
> > +     }
> > +     netif_dbg(priv, drv, priv->dev, "destroyed XDP queues\n");
> > +
> > +     return 0;
> > +}
> > +
> >  static int gve_destroy_rings(struct gve_priv *priv)
> >  {
> >       int num_tx_queues = gve_num_tx_queues(priv);
> > @@ -814,6 +938,21 @@ static void gve_rx_free_rings(struct gve_priv *priv)
> >               gve_rx_free_rings_dqo(priv);
> >  }
> >
> > +static void gve_free_xdp_rings(struct gve_priv *priv)
> > +{
> > +     int ntfy_idx, start_id;
> > +     int i;
> > +
> > +     start_id = gve_xdp_tx_start_queue_id(priv);
> > +     if (priv->tx) {
> > +             for (i = start_id; i <  start_id + priv->num_xdp_queues; i++) {
> > +                     ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
> > +                     gve_remove_napi(priv, ntfy_idx);
> > +             }
> > +             gve_tx_free_rings(priv, start_id, priv->num_xdp_queues);
> > +     }
> > +}
> > +
> >  static void gve_free_rings(struct gve_priv *priv)
> >  {
> >       int num_tx_queues = gve_num_tx_queues(priv);
> > @@ -929,6 +1068,28 @@ static void gve_free_queue_page_list(struct gve_priv *priv, u32 id)
> >       priv->num_registered_pages -= qpl->num_entries;
> >  }
> >
> > +static int gve_alloc_xdp_qpls(struct gve_priv *priv)
> > +{
> > +     int start_id;
> > +     int i, j;
> > +     int err;
> > +
> > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> > +             err = gve_alloc_queue_page_list(priv, i,
> > +                                             priv->tx_pages_per_qpl);
> > +             if (err)
> > +                     goto free_qpls;
> > +     }
> > +
> > +     return 0;
> > +
> > +free_qpls:
> > +     for (j = start_id; j <= i; j++)
> > +             gve_free_queue_page_list(priv, j);
> > +     return err;
> > +}
> > +
> >  static int gve_alloc_qpls(struct gve_priv *priv)
> >  {
> >       int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
> > @@ -978,6 +1139,16 @@ static int gve_alloc_qpls(struct gve_priv *priv)
> >       return err;
> >  }
> >
> > +static void gve_free_xdp_qpls(struct gve_priv *priv)
> > +{
> > +     int start_id;
> > +     int i;
> > +
> > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++)
> > +             gve_free_queue_page_list(priv, i);
> > +}
> > +
> >  static void gve_free_qpls(struct gve_priv *priv)
> >  {
> >       int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
> > @@ -1011,11 +1182,64 @@ static int gve_reset_recovery(struct gve_priv *priv, bool was_up);
> >  static void gve_turndown(struct gve_priv *priv);
> >  static void gve_turnup(struct gve_priv *priv);
> >
> > +static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
> > +{
> > +     struct napi_struct *napi;
> > +     struct gve_rx_ring *rx;
> > +     int err = 0;
> > +     int i, j;
> > +
> > +     if (!priv->num_xdp_queues)
> > +             return 0;
> > +
> > +     for (i = 0; i < priv->rx_cfg.num_queues; i++) {
> > +             rx = &priv->rx[i];
> > +             napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
> > +
> > +             err = xdp_rxq_info_reg(&rx->xdp_rxq, dev, i,
> > +                                    napi->napi_id);
> > +             if (err)
> > +                     goto err;
> > +             err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq,
> > +                                              MEM_TYPE_PAGE_SHARED, NULL);
> > +             if (err)
> > +                     goto err;
> > +     }
> > +     return 0;
> > +
> > +err:
> > +     for (j = i; j >= 0; j--) {
> > +             rx = &priv->rx[j];
> > +             if (xdp_rxq_info_is_reg(&rx->xdp_rxq))
> > +                     xdp_rxq_info_unreg(&rx->xdp_rxq);
> > +     }
> > +     return err;
> > +}
> > +
> > +static void gve_unreg_xdp_info(struct gve_priv *priv)
> > +{
> > +     int i;
> > +
> > +     if (!priv->num_xdp_queues)
> > +             return;
> > +
> > +     for (i = 0; i < priv->rx_cfg.num_queues; i++) {
> > +             struct gve_rx_ring *rx = &priv->rx[i];
> > +
> > +             xdp_rxq_info_unreg(&rx->xdp_rxq);
> > +     }
> > +}
> > +
> >  static int gve_open(struct net_device *dev)
> >  {
> >       struct gve_priv *priv = netdev_priv(dev);
> >       int err;
> >
> > +     if (priv->xdp_prog)
> > +             priv->num_xdp_queues = priv->tx_cfg.num_queues;
>
> Why the number of XDP queues is initialized to TX queues number?
> Shouldn't it be rather RX queues number?
>
> For example, if you have:
>  - asymmetric number of RX and TX queues,
>  - tx_cfg.num_queues < rx_cfg.num_queues.
>
> I believe in such a scenario you won't be able to handle XDP_TX action
> for all RX queues.
> Please take a look at your implementation of "XDP_TX" case in "gve_xdp_done()".
>
> Is it any mistake or an intentional design?
>

As mentioned above, we enforce that the number of RX queues == the number
of TX queues == the number of XDP TX queues.
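For reference, the queue-id mapping described in the cover letter can be
sketched as a couple of small helpers. This is a hedged illustration with
made-up names (the driver's actual helpers are gve_xdp_tx_start_queue_id()
and gve_xdp_tx_queue_id()), assuming N stack TX queues and 2*N <= max TX
queues:

```c
#include <assert.h>

/* Illustrative sketch of the XDP TX queue-id mapping from the cover
 * letter. Stack TX queues occupy ids 0..N-1; dedicated XDP TX queues
 * occupy ids N..2N-1. Names are hypothetical, not the driver's.
 */
int xdp_tx_start_queue_id(int num_stack_tx_queues)
{
	return num_stack_tx_queues;
}

/* XDP_TX action: <egress TX queue id> = N + <ingress RX queue id> */
int xdp_tx_queue_for_rx(int num_stack_tx_queues, int rx_qid)
{
	return xdp_tx_start_queue_id(num_stack_tx_queues) + rx_qid;
}

/* XDP_REDIRECT from another NIC: <egress TX queue id> = N + (cpu % N) */
int xdp_tx_queue_for_redirect(int num_stack_tx_queues, int cpu)
{
	return xdp_tx_start_queue_id(num_stack_tx_queues) +
	       (cpu % num_stack_tx_queues);
}
```

Because the driver enforces num RX queues == num TX queues, every RX queue
has a matching dedicated XDP TX queue, so XDP_TX never indexes past the
allocated rings.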

> > +     else
> > +             priv->num_xdp_queues = 0;
> > +
> >       err = gve_alloc_qpls(priv);
> >       if (err)
> >               return err;
> > @@ -1031,6 +1255,10 @@ static int gve_open(struct net_device *dev)
> >       if (err)
> >               goto free_rings;
> >
> > +     err = gve_reg_xdp_info(priv, dev);
> > +     if (err)
> > +             goto free_rings;
> > +
> >       err = gve_register_qpls(priv);
> >       if (err)
> >               goto reset;
> > @@ -1095,6 +1323,7 @@ static int gve_close(struct net_device *dev)
> >       }
> >       del_timer_sync(&priv->stats_report_timer);
> >
> > +     gve_unreg_xdp_info(priv);
> >       gve_free_rings(priv);
> >       gve_free_qpls(priv);
> >       priv->interface_down_cnt++;
> > @@ -1111,6 +1340,148 @@ static int gve_close(struct net_device *dev)
> >       return gve_reset_recovery(priv, false);
> >  }
> >
> > +static int gve_remove_xdp_queues(struct gve_priv *priv)
> > +{
> > +     int err;
> > +
> > +     err = gve_destroy_xdp_rings(priv);
> > +     if (err)
> > +             return err;
> > +
> > +     err = gve_unregister_xdp_qpls(priv);
> > +     if (err)
> > +             return err;
> > +
> > +     gve_unreg_xdp_info(priv);
> > +     gve_free_xdp_rings(priv);
> > +     gve_free_xdp_qpls(priv);
> > +     priv->num_xdp_queues = 0;
> > +     return 0;
> > +}
> > +
> > +static int gve_add_xdp_queues(struct gve_priv *priv)
> > +{
> > +     int err;
> > +
> > +     priv->num_xdp_queues = priv->tx_cfg.num_queues;
>
> The same question here: shouldn't it be equal to
> "priv->rx_cfg.num_queues"?
>

We enforce priv->tx_cfg.num_queues == priv->rx_cfg.num_queues
when XDP is enabled. We will change this to priv->rx_cfg.num_queues
in v4 just to be clearer.

> > +
> > +     err = gve_alloc_xdp_qpls(priv);
> > +     if (err)
> > +             goto err;
> > +
> > +     err = gve_alloc_xdp_rings(priv);
> > +     if (err)
> > +             goto free_xdp_qpls;
> > +
> > +     err = gve_reg_xdp_info(priv, priv->dev);
> > +     if (err)
> > +             goto free_xdp_rings;
> > +
> > +     err = gve_register_xdp_qpls(priv);
> > +     if (err)
> > +             goto free_xdp_rings;
> > +
> > +     err = gve_create_xdp_rings(priv);
> > +     if (err)
> > +             goto free_xdp_rings;
> > +
> > +     return 0;
> > +
> > +free_xdp_rings:
> > +     gve_free_xdp_rings(priv);
> > +free_xdp_qpls:
> > +     gve_free_xdp_qpls(priv);
> > +err:
> > +     priv->num_xdp_queues = 0;
> > +     return err;
> > +}
> > +
> > +static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
> > +                    struct netlink_ext_ack *extack)
> > +{
> > +     struct bpf_prog *old_prog;
> > +     int err = 0;
> > +
> > +     old_prog = READ_ONCE(priv->xdp_prog);
> > +     if (!netif_carrier_ok(priv->dev)) {
> > +             WRITE_ONCE(priv->xdp_prog, prog);
> > +             if (old_prog)
> > +                     bpf_prog_put(old_prog);
> > +             return 0;
> > +     }
> > +
> > +     gve_turndown(priv);
> > +     if (!old_prog && prog) {
> > +             // Allocate XDP TX queues if an XDP program is
> > +             // being installed
> > +             err = gve_add_xdp_queues(priv);
> > +             if (err)
> > +                     goto out;
> > +     } else if (old_prog && !prog) {
> > +             // Remove XDP TX queues if an XDP program is
> > +             // being uninstalled
> > +             err = gve_remove_xdp_queues(priv);
> > +             if (err)
> > +                     goto out;
> > +     }
> > +     WRITE_ONCE(priv->xdp_prog, prog);
> > +     if (old_prog)
> > +             bpf_prog_put(old_prog);
> > +
> > +out:
> > +     gve_turnup(priv);
> > +     queue_work(priv->gve_wq, &priv->service_task);
>
> As far as I understand, you are starting some stuff asynchronously
> (service_task), but you never wait for the result of that stuff.
> So, if err == 0, but the "service_task" fails, you will still return
> a success to the kernel.
> Is it OK? Is it possible to return an inconsistent result to the kernel?
>
The service_task was being scheduled to turn on the carrier based
on the link status. In the v4 version of this patch, we will change
this to update the carrier state synchronously rather than
asynchronously.
> > +     return err;
> > +}
> > +
> > +static int verify_xdp_configuration(struct net_device *dev)
> > +{
> > +     struct gve_priv *priv = netdev_priv(dev);
> > +
> > +     if (dev->features & NETIF_F_LRO) {
> > +             netdev_warn(dev, "XDP is not supported when LRO is on.\n");
> > +             return -EOPNOTSUPP;
> > +     }
> > +
> > +     if (priv->queue_format != GVE_GQI_QPL_FORMAT) {
> > +             netdev_warn(dev, "XDP is not supported in mode %d.\n",
> > +                         priv->queue_format);
> > +             return -EOPNOTSUPP;
> > +     }
> > +
> > +     if (dev->mtu > (PAGE_SIZE / 2) - sizeof(struct ethhdr) - GVE_RX_PAD) {
> > +             netdev_warn(dev, "XDP is not supported for mtu %d.\n",
> > +                         dev->mtu);
> > +             return -EOPNOTSUPP;
> > +     }
> > +
> > +     if (priv->rx_cfg.num_queues != priv->tx_cfg.num_queues ||
> > +         (2 * priv->tx_cfg.num_queues > priv->tx_cfg.max_queues)) {
> > +             netdev_warn(dev, "XDP load failed: The number of configured RX queues %d should be equal to the number of configured TX queues %d and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues %d",
> > +                         priv->rx_cfg.num_queues,
> > +                         priv->tx_cfg.num_queues,
> > +                         priv->tx_cfg.max_queues);
> > +             return -EINVAL;
> > +     }
> > +     return 0;
> > +}
> > +
> > +static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
> > +{
> > +     struct gve_priv *priv = netdev_priv(dev);
> > +     int err;
> > +
> > +     err = verify_xdp_configuration(dev);
> > +     if (err)
> > +             return err;
> > +     switch (xdp->command) {
> > +     case XDP_SETUP_PROG:
> > +             return gve_set_xdp(priv, xdp->prog, xdp->extack);
> > +     default:
> > +             return -EINVAL;
> > +     }
> > +}
> > +
> >  int gve_adjust_queues(struct gve_priv *priv,
> >                     struct gve_queue_config new_rx_config,
> >                     struct gve_queue_config new_tx_config)
> > @@ -1305,6 +1676,7 @@ static const struct net_device_ops gve_netdev_ops = {
> >       .ndo_get_stats64        =       gve_get_stats,
> >       .ndo_tx_timeout         =       gve_tx_timeout,
> >       .ndo_set_features       =       gve_set_features,
> > +     .ndo_bpf                =       gve_xdp,
> >  };
> >
> >  static void gve_handle_status(struct gve_priv *priv, u32 status)
> > diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
> > index 051a15e4f1af..3241f6ea29be 100644
> > --- a/drivers/net/ethernet/google/gve/gve_rx.c
> > +++ b/drivers/net/ethernet/google/gve/gve_rx.c
> > @@ -8,6 +8,8 @@
> >  #include "gve_adminq.h"
> >  #include "gve_utils.h"
> >  #include <linux/etherdevice.h>
> > +#include <linux/filter.h>
> > +#include <net/xdp.h>
> >
> >  static void gve_rx_free_buffer(struct device *dev,
> >                              struct gve_rx_slot_page_info *page_info,
> > @@ -591,6 +593,43 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
> >       return skb;
> >  }
> >
> > +static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
> > +                      struct xdp_buff *xdp, struct bpf_prog *xprog,
> > +                      int xdp_act)
> > +{
> > +     struct gve_tx_ring *tx;
> > +     int tx_qid;
> > +     int err;
> > +
> > +     switch (xdp_act) {
> > +     case XDP_ABORTED:
> > +     case XDP_DROP:
> > +     default:
> > +             break;
> > +     case XDP_TX:
> > +             tx_qid = gve_xdp_tx_queue_id(priv, rx->q_num);
> > +             tx = &priv->tx[tx_qid];
>
> As I have already mentioned: if num_rx_queues > num_tx_queues, you can
> select uninitialized xdp queue (because num_tx_queues ==
> num_xdp_queues).
>
> Please check if your number of XDP queues should be equal to the number
> of RX queues.
>
> > +             err = gve_xdp_xmit_one(priv, tx, xdp->data,
> > +                                    xdp->data_end - xdp->data);
> > +
> > +             if (unlikely(err)) {
> > +                     u64_stats_update_begin(&rx->statss);
> > +                     rx->xdp_tx_errors++;
> > +                     u64_stats_update_end(&rx->statss);
> > +             }
> > +             break;
> > +     case XDP_REDIRECT:
> > +             u64_stats_update_begin(&rx->statss);
> > +             rx->xdp_redirect_errors++;
> > +             u64_stats_update_end(&rx->statss);
> > +             break;
> > +     }
> > +     u64_stats_update_begin(&rx->statss);
> > +     if ((u32)xdp_act < GVE_XDP_ACTIONS)
> > +             rx->xdp_actions[xdp_act]++;
> > +     u64_stats_update_end(&rx->statss);
> > +}
> > +
> >  #define GVE_PKTCONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x))
> >  static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
> >                  struct gve_rx_desc *desc, u32 idx,
> > @@ -603,9 +642,12 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
> >       union gve_rx_data_slot *data_slot;
> >       struct gve_priv *priv = rx->gve;
> >       struct sk_buff *skb = NULL;
> > +     struct bpf_prog *xprog;
> > +     struct xdp_buff xdp;
> >       dma_addr_t page_bus;
> >       void *va;
> >
> > +     u16 len = frag_size;
> >       struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
> >       bool is_first_frag = ctx->frag_cnt == 0;
> >
> > @@ -645,9 +687,35 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
> >       dma_sync_single_for_cpu(&priv->pdev->dev, page_bus,
> >                               PAGE_SIZE, DMA_FROM_DEVICE);
> >       page_info->pad = is_first_frag ? GVE_RX_PAD : 0;
> > +     len -= page_info->pad;
> >       frag_size -= page_info->pad;
> >
> > -     skb = gve_rx_skb(priv, rx, page_info, napi, frag_size,
> > +     xprog = READ_ONCE(priv->xdp_prog);
> > +     if (xprog && is_only_frag) {
> > +             void *old_data;
> > +             int xdp_act;
> > +
> > +             xdp_init_buff(&xdp, rx->packet_buffer_size, &rx->xdp_rxq);
> > +             xdp_prepare_buff(&xdp, page_info->page_address +
> > +                              page_info->page_offset, GVE_RX_PAD,
> > +                              len, false);
> > +             old_data = xdp.data;
> > +             xdp_act = bpf_prog_run_xdp(xprog, &xdp);
> > +             if (xdp_act != XDP_PASS) {
> > +                     gve_xdp_done(priv, rx, &xdp, xprog, xdp_act);
> > +                     ctx->total_size += frag_size;
> > +                     goto finish_ok_pkt;
> > +             }
> > +
> > +             page_info->pad += xdp.data - old_data;
> > +             len = xdp.data_end - xdp.data;
> > +
> > +             u64_stats_update_begin(&rx->statss);
> > +             rx->xdp_actions[XDP_PASS]++;
> > +             u64_stats_update_end(&rx->statss);
> > +     }
> > +
> > +     skb = gve_rx_skb(priv, rx, page_info, napi, len,
> >                        data_slot, is_only_frag);
> >       if (!skb) {
> >               u64_stats_update_begin(&rx->statss);
> > @@ -773,6 +841,7 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
> >  static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
> >                            netdev_features_t feat)
> >  {
> > +     u64 xdp_txs = rx->xdp_actions[XDP_TX];
> >       struct gve_rx_ctx *ctx = &rx->ctx;
> >       struct gve_priv *priv = rx->gve;
> >       struct gve_rx_cnts cnts = {0};
> > @@ -820,6 +889,9 @@ static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
> >               u64_stats_update_end(&rx->statss);
> >       }
> >
> > +     if (xdp_txs != rx->xdp_actions[XDP_TX])
> > +             gve_xdp_tx_flush(priv, rx->q_num);
> > +
> >       /* restock ring slots */
> >       if (!rx->data.raw_addressing) {
> >               /* In QPL mode buffs are refilled as the desc are processed */
> > diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
> > index e24e73e74e33..d37515e6c10c 100644
> > --- a/drivers/net/ethernet/google/gve/gve_tx.c
> > +++ b/drivers/net/ethernet/google/gve/gve_tx.c
> > @@ -19,6 +19,14 @@ static inline void gve_tx_put_doorbell(struct gve_priv *priv,
> >       iowrite32be(val, &priv->db_bar2[be32_to_cpu(q_resources->db_index)]);
> >  }
> >
> > +void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid)
> > +{
> > +     u32 tx_qid = gve_xdp_tx_queue_id(priv, xdp_qid);
> > +     struct gve_tx_ring *tx = &priv->tx[tx_qid];
> > +
> > +     gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
> > +}
> > +
> >  /* gvnic can only transmit from a Registered Segment.
> >   * We copy skb payloads into the registered segment before writing Tx
> >   * descriptors and ringing the Tx doorbell.
> > @@ -132,6 +140,50 @@ static void gve_tx_free_fifo(struct gve_tx_fifo *fifo, size_t bytes)
> >       atomic_add(bytes, &fifo->available);
> >  }
> >
> > +static size_t gve_tx_clear_buffer_state(struct gve_tx_buffer_state *info)
> > +{
> > +     size_t space_freed = 0;
> > +     int i;
> > +
> > +     for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
> > +             space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
> > +             info->iov[i].iov_len = 0;
> > +             info->iov[i].iov_padding = 0;
> > +     }
> > +     return space_freed;
> > +}
> > +
> > +static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> > +                           u32 to_do)
> > +{
> > +     struct gve_tx_buffer_state *info;
> > +     u32 clean_end = tx->done + to_do;
> > +     u64 pkts = 0, bytes = 0;
> > +     size_t space_freed = 0;
> > +     u32 idx;
> > +
> > +     for (; tx->done < clean_end; tx->done++) {
> > +             idx = tx->done & tx->mask;
> > +             info = &tx->info[idx];
> > +
> > +             if (unlikely(!info->xdp.size))
> > +                     continue;
> > +
> > +             bytes += info->xdp.size;
> > +             pkts++;
> > +
> > +             info->xdp.size = 0;
> > +             space_freed += gve_tx_clear_buffer_state(info);
> > +     }
> > +
> > +     gve_tx_free_fifo(&tx->tx_fifo, space_freed);
> > +     u64_stats_update_begin(&tx->statss);
> > +     tx->bytes_done += bytes;
> > +     tx->pkt_done += pkts;
> > +     u64_stats_update_end(&tx->statss);
> > +     return pkts;
> > +}
> > +
> >  static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> >                            u32 to_do, bool try_to_wake);
> >
> > @@ -144,8 +196,12 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx)
> >
> >       gve_tx_remove_from_block(priv, idx);
> >       slots = tx->mask + 1;
> > -     gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
> > -     netdev_tx_reset_queue(tx->netdev_txq);
> > +     if (tx->q_num < priv->tx_cfg.num_queues) {
> > +             gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
> > +             netdev_tx_reset_queue(tx->netdev_txq);
> > +     } else {
> > +             gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
> > +     }
> >
> >       dma_free_coherent(hdev, sizeof(*tx->q_resources),
> >                         tx->q_resources, tx->q_resources_bus);
> > @@ -213,7 +269,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
> >
> >       netif_dbg(priv, drv, priv->dev, "tx[%d]->bus=%lx\n", idx,
> >                 (unsigned long)tx->bus);
> > -     tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
> > +     if (idx < priv->tx_cfg.num_queues)
> > +             tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
> >       gve_tx_add_to_block(priv, idx);
> >
> >       return 0;
> > @@ -657,6 +714,65 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
> >       return NETDEV_TX_OK;
> >  }
> >
> > +static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
> > +                        void *data, int len)
> > +{
> > +     int pad, nfrags, ndescs, iovi, offset;
> > +     struct gve_tx_buffer_state *info;
> > +     u32 reqi = tx->req;
> > +
> > +     pad = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo, len);
> > +     if (pad >= GVE_TX_MAX_HEADER_SIZE)
> > +             pad = 0;
> > +     info = &tx->info[reqi & tx->mask];
> > +     info->xdp.size = len;
> > +
> > +     nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, pad + len,
> > +                                &info->iov[0]);
> > +     iovi = pad > 0;
> > +     ndescs = nfrags - iovi;
> > +     offset = 0;
> > +
> > +     while (iovi < nfrags) {
> > +             if (!offset)
> > +                     gve_tx_fill_pkt_desc(&tx->desc[reqi & tx->mask], 0,
> > +                                          CHECKSUM_NONE, false, 0, ndescs,
> > +                                          info->iov[iovi].iov_len,
> > +                                          info->iov[iovi].iov_offset, len);
> > +             else
> > +                     gve_tx_fill_seg_desc(&tx->desc[reqi & tx->mask],
> > +                                          0, 0, false, false,
> > +                                          info->iov[iovi].iov_len,
> > +                                          info->iov[iovi].iov_offset);
> > +
> > +             memcpy(tx->tx_fifo.base + info->iov[iovi].iov_offset,
> > +                    data + offset, info->iov[iovi].iov_len);
> > +             gve_dma_sync_for_device(&priv->pdev->dev,
> > +                                     tx->tx_fifo.qpl->page_buses,
> > +                                     info->iov[iovi].iov_offset,
> > +                                     info->iov[iovi].iov_len);
> > +             offset += info->iov[iovi].iov_len;
> > +             iovi++;
> > +             reqi++;
> > +     }
>
> Could you please explain the logic above a little bit more?
> How is it possible to xmit more than one frag per one XDP packet?
> I believe in your implementation the XDP supports only one descriptor
> per packet. So, I think this "while" loop will always have only one
> iteration?
>
In GQI-QPL mode, XDP packets are first copied into the TX FIFO buffer (which
is preregistered with the vNIC). The packets are then DMAed by the vNIC from
the TX FIFO. When an entire packet does not fit in the space remaining at the
end of the TX FIFO, we split it into two chunks, one at the tail of the FIFO
and one at the head, and use two descriptors to transmit the packet to the vNIC.
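The wrap-around case can be illustrated with a simplified circular-FIFO
allocator. This is only a sketch with hypothetical names (the real driver
uses gve_tx_alloc_fifo() and struct gve_tx_iovec, and also handles header
padding), showing why the while loop above can iterate more than once:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch: allocate 'len' bytes from a circular FIFO of
 * 'size' bytes whose next free byte is at 'head'. When the request does
 * not fit in the space left at the tail, it is split into two iovecs,
 * one at the tail and one at the head of the FIFO, which is the
 * two-descriptor case described above. Names are not the driver's.
 */
struct iov {
	size_t offset;	/* byte offset of this chunk within the FIFO */
	size_t len;	/* length of this chunk */
};

int fifo_alloc(size_t size, size_t head, size_t len, struct iov iov[2])
{
	size_t tail_space = size - head;

	if (len <= tail_space) {
		/* Entire packet fits contiguously: one descriptor. */
		iov[0].offset = head;
		iov[0].len = len;
		return 1;
	}
	/* Split: first chunk fills the tail of the FIFO... */
	iov[0].offset = head;
	iov[0].len = tail_space;
	/* ...and the remainder wraps to the head: second descriptor. */
	iov[1].offset = 0;
	iov[1].len = len - tail_space;
	return 2;
}
```

The number of chunks returned corresponds to nfrags in gve_tx_fill_xdp(),
so one XDP packet can legitimately consume two TX descriptors.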


> > +
> > +     return ndescs;
> > +}
> > +
> > +int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
> > +                  void *data, int len)
> > +{
> > +     int nsegs;
> > +
> > +     if (!gve_can_tx(tx, len + GVE_TX_MAX_HEADER_SIZE - 1))
> > +             return -EBUSY;
> > +
> > +     nsegs = gve_tx_fill_xdp(priv, tx, data, len);
> > +     tx->req += nsegs;
> > +
> > +     return 0;
> > +}
> > +
> >  #define GVE_TX_START_THRESH  PAGE_SIZE
> >
> >  static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> > @@ -666,7 +782,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> >       u64 pkts = 0, bytes = 0;
> >       size_t space_freed = 0;
> >       struct sk_buff *skb;
> > -     int i, j;
> > +     int j;
> >       u32 idx;
>
> RCT
>
> >
> >       for (j = 0; j < to_do; j++) {
> > @@ -689,12 +805,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> >                       dev_consume_skb_any(skb);
> >                       if (tx->raw_addressing)
> >                               continue;
> > -                     /* FIFO free */
> > -                     for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
> > -                             space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
> > -                             info->iov[i].iov_len = 0;
> > -                             info->iov[i].iov_padding = 0;
> > -                     }
> > +                     space_freed += gve_tx_clear_buffer_state(info);
> >               }
> >       }
> >
> > @@ -729,6 +840,24 @@ u32 gve_tx_load_event_counter(struct gve_priv *priv,
> >       return be32_to_cpu(counter);
> >  }
> >
> > +bool gve_xdp_poll(struct gve_notify_block *block, int budget)
> > +{
> > +     struct gve_priv *priv = block->priv;
> > +     struct gve_tx_ring *tx = block->tx;
> > +     u32 nic_done;
> > +     u32 to_do;
> > +
> > +     /* If budget is 0, do all the work */
> > +     if (budget == 0)
> > +             budget = INT_MAX;
> > +
> > +     /* Find out how much work there is to be done */
> > +     nic_done = gve_tx_load_event_counter(priv, tx);
> > +     to_do = min_t(u32, (nic_done - tx->done), budget);
> > +     gve_clean_xdp_done(priv, tx, to_do);
> > +     return nic_done != tx->done;
> > +}
> > +
> >  bool gve_tx_poll(struct gve_notify_block *block, int budget)
> >  {
> >       struct gve_priv *priv = block->priv;
> > --
> > 2.40.0.rc1.284.g88254d51c5-goog
> >


* Re: [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format
  2023-03-15 21:10     ` Praveen Kaligineedi
@ 2023-03-16 22:07       ` Michal Kubiak
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Kubiak @ 2023-03-16 22:07 UTC (permalink / raw)
  To: Praveen Kaligineedi
  Cc: netdev, davem, kuba, maciej.fijalkowski, Jeroen de Borst

On Wed, Mar 15, 2023 at 02:10:10PM -0700, Praveen Kaligineedi wrote:
> Hi Michal,
> 
> Thanks for your feedback. We will add xdp_features in the v4 version
> of this patch. Please find my responses to the other comments inline:

Hi Praveen,

Thanks for your answers and explanations. I will take a look at the v4
version of your series. The rest of my answers are inline.

Thanks,
Michal

> 
> On Wed, Mar 15, 2023 at 9:53 AM Michal Kubiak <michal.kubiak@intel.com> wrote:
> >
> > On Mon, Mar 13, 2023 at 01:26:38PM -0700, Praveen Kaligineedi wrote:
> > > Add support for XDP PASS, DROP and TX actions.
> > >
> > > This patch contains the following changes:
> > > 1) Support installing/uninstalling XDP program
> > > 2) Add dedicated XDP TX queues
> > > 3) Add support for XDP DROP action
> > > 4) Add support for XDP TX action
> > >
> > > Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
> > > Reviewed-by: Jeroen de Borst <jeroendb@google.com>
> > >
> > > ---
> > > Changed in v2:
> > > - Removed gve_close/gve_open when adding XDP dedicated queues. Instead
> > > we add and register additional TX queues when the XDP program is
> > > installed. If the allocation/registration fails we return error and do
> > > not install the XDP program.
> > > - Removed xdp tx spin lock from this patch. It is needed for XDP_REDIRECT
> > > support as both XDP_REDIRECT and XDP_TX traffic share the dedicated XDP
> > > queues. Moved the code to add xdp tx spinlock to the subsequent patch
> > > that adds XDP_REDIRECT support.
> > > - Added netdev_err when the user tries to set rx/tx queues to the values
> > > not supported when XDP is enabled.
> > > - Removed rcu annotation for xdp_prog. We disable the napi prior to
> > > adding/removing the xdp_prog and reenable it after the program has
> > > been installed for all the queues.
> > > - Ring the tx doorbell once for napi instead of every XDP TX packet.
> > > - Added a new helper function for freeing the FIFO buffer
> > > - Unregister xdp rxq for all the queues when the registration
> > > fails during XDP program installation
> > >
> > > Changed in v3:
> > > - Padding bytes are used if the XDP TX packet headers do not
> > > fit at tail of TX FIFO. Taking these padding bytes into account
> > > while checking if enough space is available in TX FIFO.
> > > ---
> >
> > Hi Praveen,
> >
> > Please find my comments inline.
> > Also, I have a general comment regarding newest XDP netdev API. As far
> > as I checked you do not use "net_device.xdp_features". That member has
> > been added to "struct net_device" since 6.3-rc1 kernel version and that
> > feature are checked now before .ndo_bpf is called, so you should set all
> > supported flags in that structure member.
> >
> > Thanks,
> > Michal
> >
> > >  drivers/net/ethernet/google/gve/gve.h         |  44 +-
> > >  drivers/net/ethernet/google/gve/gve_ethtool.c |  37 +-
> > >  drivers/net/ethernet/google/gve/gve_main.c    | 376 +++++++++++++++++-
> > >  drivers/net/ethernet/google/gve/gve_rx.c      |  74 +++-
> > >  drivers/net/ethernet/google/gve/gve_tx.c      | 149 ++++++-
> > >  5 files changed, 658 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> > > index f354a6448c25..8d5234d4ba67 100644
> > > --- a/drivers/net/ethernet/google/gve/gve.h
> > > +++ b/drivers/net/ethernet/google/gve/gve.h
> > > @@ -47,6 +47,10 @@
> > >
> > >  #define GVE_RX_BUFFER_SIZE_DQO 2048
> > >
> > > +#define GVE_XDP_ACTIONS 5
> > > +
> > > +#define GVE_TX_MAX_HEADER_SIZE 182
> > > +
> > >  /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
> > >  struct gve_rx_desc_queue {
> > >       struct gve_rx_desc *desc_ring; /* the descriptor ring */
> > > @@ -230,7 +234,9 @@ struct gve_rx_ring {
> > >       u64 rx_frag_flip_cnt; /* free-running count of rx segments where page_flip was used */
> > >       u64 rx_frag_copy_cnt; /* free-running count of rx segments copied */
> > >       u64 rx_frag_alloc_cnt; /* free-running count of rx page allocations */
> > > -
> > > +     u64 xdp_tx_errors;
> > > +     u64 xdp_redirect_errors;
> > > +     u64 xdp_actions[GVE_XDP_ACTIONS];
> > >       u32 q_num; /* queue index */
> > >       u32 ntfy_id; /* notification block index */
> > >       struct gve_queue_resources *q_resources; /* head and tail pointer idx */
> > > @@ -238,6 +244,9 @@ struct gve_rx_ring {
> > >       struct u64_stats_sync statss; /* sync stats for 32bit archs */
> > >
> > >       struct gve_rx_ctx ctx; /* Info for packet currently being processed in this ring. */
> > > +
> > > +     /* XDP stuff */
> > > +     struct xdp_rxq_info xdp_rxq;
> > >  };
> > >
> > >  /* A TX desc ring entry */
> > > @@ -259,6 +268,9 @@ struct gve_tx_iovec {
> > >   */
> > >  struct gve_tx_buffer_state {
> > >       struct sk_buff *skb; /* skb for this pkt */
> > > +     struct {
> > > +             u16 size; /* size of xmitted xdp pkt */
> > > +     } xdp;
> > >       union {
> > >               struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
> > >               struct {
> > > @@ -526,9 +538,11 @@ struct gve_priv {
> > >       u16 rx_data_slot_cnt; /* rx buffer length */
> > >       u64 max_registered_pages;
> > >       u64 num_registered_pages; /* num pages registered with NIC */
> > > +     struct bpf_prog *xdp_prog; /* XDP BPF program */
> > >       u32 rx_copybreak; /* copy packets smaller than this */
> > >       u16 default_num_queues; /* default num queues to set up */
> > >
> > > +     u16 num_xdp_queues;
> > >       struct gve_queue_config tx_cfg;
> > >       struct gve_queue_config rx_cfg;
> > >       struct gve_qpl_config qpl_cfg; /* map used QPL ids */
> > > @@ -785,7 +799,17 @@ static inline u32 gve_num_tx_qpls(struct gve_priv *priv)
> > >       if (priv->queue_format != GVE_GQI_QPL_FORMAT)
> > >               return 0;
> > >
> > > -     return priv->tx_cfg.num_queues;
> > > +     return priv->tx_cfg.num_queues + priv->num_xdp_queues;
> > > +}
> > > +
> > > +/* Returns the number of XDP tx queue page lists
> > > + */
> > > +static inline u32 gve_num_xdp_qpls(struct gve_priv *priv)
> > > +{
> > > +     if (priv->queue_format != GVE_GQI_QPL_FORMAT)
> > > +             return 0;
> > > +
> > > +     return priv->num_xdp_queues;
> > >  }
> > >
> > >  /* Returns the number of rx queue page lists
> > > @@ -874,7 +898,17 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
> > >
> > >  static inline u32 gve_num_tx_queues(struct gve_priv *priv)
> > >  {
> > > -     return priv->tx_cfg.num_queues;
> > > +     return priv->tx_cfg.num_queues + priv->num_xdp_queues;
> > > +}
> > > +
> > > +static inline u32 gve_xdp_tx_queue_id(struct gve_priv *priv, u32 queue_id)
> > > +{
> > > +     return priv->tx_cfg.num_queues + queue_id;
> > > +}
> > > +
> > > +static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
> > > +{
> > > +     return gve_xdp_tx_queue_id(priv, 0);
> > >  }
> > >
> > >  /* buffers */
> > > @@ -885,7 +919,11 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
> > >                  enum dma_data_direction);
> > >  /* tx handling */
> > >  netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
> > > +int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
> > > +                  void *data, int len);
> > > +void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
> > >  bool gve_tx_poll(struct gve_notify_block *block, int budget);
> > > +bool gve_xdp_poll(struct gve_notify_block *block, int budget);
> > >  int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings);
> > >  void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings);
> > >  u32 gve_tx_load_event_counter(struct gve_priv *priv,
> > > diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
> > > index 5b6e31812fae..067b393ccf9d 100644
> > > --- a/drivers/net/ethernet/google/gve/gve_ethtool.c
> > > +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
> > > @@ -34,6 +34,11 @@ static u32 gve_get_msglevel(struct net_device *netdev)
> > >       return priv->msg_enable;
> > >  }
> > >
> > > +/* For the following stats column string names, make sure the order
> > > + * matches how it is filled in the code. For xdp_aborted, xdp_drop,
> > > + * xdp_pass, xdp_tx, xdp_redirect, make sure it also matches the order
> > > + * as declared in enum xdp_action inside file uapi/linux/bpf.h .
> > > + */
> > >  static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = {
> > >       "rx_packets", "tx_packets", "rx_bytes", "tx_bytes",
> > >       "rx_dropped", "tx_dropped", "tx_timeouts",
> > > @@ -49,6 +54,9 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
> > >       "rx_dropped_pkt[%u]", "rx_copybreak_pkt[%u]", "rx_copied_pkt[%u]",
> > >       "rx_queue_drop_cnt[%u]", "rx_no_buffers_posted[%u]",
> > >       "rx_drops_packet_over_mru[%u]", "rx_drops_invalid_checksum[%u]",
> > > +     "rx_xdp_aborted[%u]", "rx_xdp_drop[%u]", "rx_xdp_pass[%u]",
> > > +     "rx_xdp_tx[%u]", "rx_xdp_redirect[%u]",
> > > +     "rx_xdp_tx_errors[%u]", "rx_xdp_redirect_errors[%u]",
> > >  };
> > >
> > >  static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
> > > @@ -289,14 +297,25 @@ gve_get_ethtool_stats(struct net_device *netdev,
> > >                       if (skip_nic_stats) {
> > >                               /* skip NIC rx stats */
> > >                               i += NIC_RX_STATS_REPORT_NUM;
> > > -                             continue;
> > > -                     }
> > > -                     for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
> > > -                             u64 value =
> > > -                             be64_to_cpu(report_stats[rx_qid_to_stats_idx[ring] + j].value);
> > > +                     } else {
> > > +                             stats_idx = rx_qid_to_stats_idx[ring];
> > > +                             for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
> > > +                                     u64 value =
> > > +                                             be64_to_cpu(report_stats[stats_idx + j].value);
> > >
> > > -                             data[i++] = value;
> > > +                                     data[i++] = value;
> > > +                             }
> > >                       }
> > > +                     /* XDP rx counters */
> > > +                     do {
> > > +                             start = u64_stats_fetch_begin(&priv->rx[ring].statss);
> > > +                             for (j = 0; j < GVE_XDP_ACTIONS; j++)
> > > +                                     data[i + j] = rx->xdp_actions[j];
> > > +                             data[i + j++] = rx->xdp_tx_errors;
> > > +                             data[i + j++] = rx->xdp_redirect_errors;
> > > +                     } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
> > > +                                                    start));
> > > +                     i += GVE_XDP_ACTIONS + 2; /* XDP rx counters */
> > >               }
> > >       } else {
> > >               i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
> > > @@ -418,6 +437,12 @@ static int gve_set_channels(struct net_device *netdev,
> > >       if (!new_rx || !new_tx)
> > >               return -EINVAL;
> > >
> > > +     if (priv->num_xdp_queues &&
> > > +         (new_tx != new_rx || (2 * new_tx > priv->tx_cfg.max_queues))) {
> > > +             dev_err(&priv->pdev->dev, "XDP load failed: The number of configured RX queues should be equal to the number of configured TX queues and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues");
> >
> > Could you explain why the number of RX and TX queues cannot be asymmetric
> > while the XDP program is loaded?
> > Regarding the second condition: shouldn't it look like:
> >         "2 * new_tx > priv->rx_cfg.max_queues" ?
> >
> > Please take a look at my other comments regarding XDP queues number.
> >
> 
> For this first iteration of XDP support, we chose to require that the
> number of configured RX queues equal the number of configured TX queues,
> to simplify how the dedicated XDP transmit queues are assigned for
> XDP_TX traffic as well as for AF_XDP ZC TX traffic.  We only load XDP
> when the number of RX queues is equal to the number of TX queues.
> In effect, the number of XDP TX queues = number of configured TX queues =
> number of configured RX queues.
> 
> The check 2 * new_tx > priv->tx_cfg.max_queues ensures that we do not
> create more TX queues than the maximum allowed number of TX queues.
>

OK, got it.
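Under the symmetric-queue constraint explained above, the TX queue mapping from the cover letter reduces to simple arithmetic; a small sketch (helper names are illustrative, not the driver's):

```c
/* With N == num_tx_queues == num_rx_queues == num_xdp_queues, the
 * dedicated XDP TX queues occupy ids [N, 2N).  Mirrors the mapping
 * described in the cover letter.
 */
static int xdp_tx_queue_id(int num_tx_queues, int queue_id)
{
	return num_tx_queues + queue_id;
}

/* XDP_TX: egress on the XDP queue paired with the ingress RX queue. */
static int xdp_tx_egress_for_rx(int n, int rx_qid)
{
	return xdp_tx_queue_id(n, rx_qid);
}

/* XDP_REDIRECT from another NIC: spread by the executing CPU id. */
static int xdp_tx_egress_for_redirect(int n, int cpu)
{
	return xdp_tx_queue_id(n, cpu % n);
}
```

For example, with N = 4 configured queues, a packet received on RX queue 1 and retransmitted via XDP_TX egresses on TX queue 5.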

> > > +             return -EINVAL;
> > > +     }
> > > +
> > >       if (!netif_carrier_ok(netdev)) {
> > >               priv->tx_cfg.num_queues = new_tx;
> > >               priv->rx_cfg.num_queues = new_rx;
> > > diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
> > > index 160ca77c2751..7d3f15cf79ed 100644
> > > --- a/drivers/net/ethernet/google/gve/gve_main.c
> > > +++ b/drivers/net/ethernet/google/gve/gve_main.c
> > > @@ -4,8 +4,10 @@
> > >   * Copyright (C) 2015-2021 Google, Inc.
> > >   */
> > >
> > > +#include <linux/bpf.h>
> > >  #include <linux/cpumask.h>
> > >  #include <linux/etherdevice.h>
> > > +#include <linux/filter.h>
> > >  #include <linux/interrupt.h>
> > >  #include <linux/module.h>
> > >  #include <linux/pci.h>
> > > @@ -247,8 +249,13 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
> > >       block = container_of(napi, struct gve_notify_block, napi);
> > >       priv = block->priv;
> > >
> > > -     if (block->tx)
> > > -             reschedule |= gve_tx_poll(block, budget);
> > > +     if (block->tx) {
> > > +             if (block->tx->q_num < priv->tx_cfg.num_queues)
> > > +                     reschedule |= gve_tx_poll(block, budget);
> > > +             else
> > > +                     reschedule |= gve_xdp_poll(block, budget);
> > > +     }
> > > +
> > >       if (block->rx) {
> > >               work_done = gve_rx_poll(block, budget);
> > >               reschedule |= work_done == budget;
> > > @@ -582,6 +589,28 @@ static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
> > >       netif_napi_del(&block->napi);
> > >  }
> > >
> > > +static int gve_register_xdp_qpls(struct gve_priv *priv)
> > > +{
> > > +     int start_id;
> > > +     int err;
> > > +     int i;
> > > +
> > > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> > > +             err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
> > > +             if (err) {
> > > +                     netif_err(priv, drv, priv->dev,
> > > +                               "failed to register queue page list %d\n",
> > > +                               priv->qpls[i].id);
> > > +                     /* This failure will trigger a reset - no need to clean
> > > +                      * up
> > > +                      */
> > > +                     return err;
> > > +             }
> > > +     }
> > > +     return 0;
> > > +}
> > > +
> > >  static int gve_register_qpls(struct gve_priv *priv)
> > >  {
> > >       int start_id;
> > > @@ -618,6 +647,26 @@ static int gve_register_qpls(struct gve_priv *priv)
> > >       return 0;
> > >  }
> > >
> > > +static int gve_unregister_xdp_qpls(struct gve_priv *priv)
> > > +{
> > > +     int start_id;
> > > +     int err;
> > > +     int i;
> > > +
> > > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> > > +             err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
> > > +             /* This failure will trigger a reset - no need to clean up */
> > > +             if (err) {
> > > +                     netif_err(priv, drv, priv->dev,
> > > +                               "Failed to unregister queue page list %d\n",
> > > +                               priv->qpls[i].id);
> > > +                     return err;
> > > +             }
> > > +     }
> > > +     return 0;
> > > +}
> > > +
> > >  static int gve_unregister_qpls(struct gve_priv *priv)
> > >  {
> > >       int start_id;
> > > @@ -650,6 +699,27 @@ static int gve_unregister_qpls(struct gve_priv *priv)
> > >       return 0;
> > >  }
> > >
> > > +static int gve_create_xdp_rings(struct gve_priv *priv)
> > > +{
> > > +     int err;
> > > +
> > > +     err = gve_adminq_create_tx_queues(priv,
> > > +                                       gve_xdp_tx_start_queue_id(priv),
> > > +                                       priv->num_xdp_queues);
> > > +     if (err) {
> > > +             netif_err(priv, drv, priv->dev, "failed to create %d XDP tx queues\n",
> > > +                       priv->num_xdp_queues);
> > > +             /* This failure will trigger a reset - no need to clean
> > > +              * up
> > > +              */
> > > +             return err;
> > > +     }
> > > +     netif_dbg(priv, drv, priv->dev, "created %d XDP tx queues\n",
> > > +               priv->num_xdp_queues);
> > > +
> > > +     return 0;
> > > +}
> > > +
> > >  static int gve_create_rings(struct gve_priv *priv)
> > >  {
> > >       int num_tx_queues = gve_num_tx_queues(priv);
> > > @@ -699,6 +769,23 @@ static int gve_create_rings(struct gve_priv *priv)
> > >       return 0;
> > >  }
> > >
> > > +static void add_napi_init_xdp_sync_stats(struct gve_priv *priv,
> > > +                                      int (*napi_poll)(struct napi_struct *napi,
> > > +                                                       int budget))
> > > +{
> > > +     int start_id = gve_xdp_tx_start_queue_id(priv);
> > > +     int i;
> > > +
> > > +     /* Add xdp tx napi & init sync stats*/
> > > +     for (i = start_id; i < start_id + priv->num_xdp_queues; i++) {
> > > +             int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
> > > +
> > > +             u64_stats_init(&priv->tx[i].statss);
> > > +             priv->tx[i].ntfy_id = ntfy_idx;
> > > +             gve_add_napi(priv, ntfy_idx, napi_poll);
> > > +     }
> > > +}
> > > +
> > >  static void add_napi_init_sync_stats(struct gve_priv *priv,
> > >                                    int (*napi_poll)(struct napi_struct *napi,
> > >                                                     int budget))
> > > @@ -732,6 +819,23 @@ static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings
> > >       }
> > >  }
> > >
> > > +static int gve_alloc_xdp_rings(struct gve_priv *priv)
> > > +{
> > > +     int start_id;
> > > +     int err = 0;
> > > +
> > > +     if (!priv->num_xdp_queues)
> > > +             return 0;
> > > +
> > > +     start_id = gve_xdp_tx_start_queue_id(priv);
> > > +     err = gve_tx_alloc_rings(priv, start_id, priv->num_xdp_queues);
> > > +     if (err)
> > > +             return err;
> > > +     add_napi_init_xdp_sync_stats(priv, gve_napi_poll);
> > > +
> > > +     return 0;
> > > +}
> > > +
> > >  static int gve_alloc_rings(struct gve_priv *priv)
> > >  {
> > >       int err;
> > > @@ -782,6 +886,26 @@ static int gve_alloc_rings(struct gve_priv *priv)
> > >       return err;
> > >  }
> > >
> > > +static int gve_destroy_xdp_rings(struct gve_priv *priv)
> > > +{
> > > +     int start_id;
> > > +     int err;
> > > +
> > > +     start_id = gve_xdp_tx_start_queue_id(priv);
> > > +     err = gve_adminq_destroy_tx_queues(priv,
> > > +                                        start_id,
> > > +                                        priv->num_xdp_queues);
> > > +     if (err) {
> > > +             netif_err(priv, drv, priv->dev,
> > > +                       "failed to destroy XDP queues\n");
> > > +             /* This failure will trigger a reset - no need to clean up */
> > > +             return err;
> > > +     }
> > > +     netif_dbg(priv, drv, priv->dev, "destroyed XDP queues\n");
> > > +
> > > +     return 0;
> > > +}
> > > +
> > >  static int gve_destroy_rings(struct gve_priv *priv)
> > >  {
> > >       int num_tx_queues = gve_num_tx_queues(priv);
> > > @@ -814,6 +938,21 @@ static void gve_rx_free_rings(struct gve_priv *priv)
> > >               gve_rx_free_rings_dqo(priv);
> > >  }
> > >
> > > +static void gve_free_xdp_rings(struct gve_priv *priv)
> > > +{
> > > +     int ntfy_idx, start_id;
> > > +     int i;
> > > +
> > > +     start_id = gve_xdp_tx_start_queue_id(priv);
> > > +     if (priv->tx) {
> > > +             for (i = start_id; i <  start_id + priv->num_xdp_queues; i++) {
> > > +                     ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
> > > +                     gve_remove_napi(priv, ntfy_idx);
> > > +             }
> > > +             gve_tx_free_rings(priv, start_id, priv->num_xdp_queues);
> > > +     }
> > > +}
> > > +
> > >  static void gve_free_rings(struct gve_priv *priv)
> > >  {
> > >       int num_tx_queues = gve_num_tx_queues(priv);
> > > @@ -929,6 +1068,28 @@ static void gve_free_queue_page_list(struct gve_priv *priv, u32 id)
> > >       priv->num_registered_pages -= qpl->num_entries;
> > >  }
> > >
> > > +static int gve_alloc_xdp_qpls(struct gve_priv *priv)
> > > +{
> > > +     int start_id;
> > > +     int i, j;
> > > +     int err;
> > > +
> > > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
> > > +             err = gve_alloc_queue_page_list(priv, i,
> > > +                                             priv->tx_pages_per_qpl);
> > > +             if (err)
> > > +                     goto free_qpls;
> > > +     }
> > > +
> > > +     return 0;
> > > +
> > > +free_qpls:
> > > +     for (j = start_id; j <= i; j++)
> > > +             gve_free_queue_page_list(priv, j);
> > > +     return err;
> > > +}
> > > +
> > >  static int gve_alloc_qpls(struct gve_priv *priv)
> > >  {
> > >       int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
> > > @@ -978,6 +1139,16 @@ static int gve_alloc_qpls(struct gve_priv *priv)
> > >       return err;
> > >  }
> > >
> > > +static void gve_free_xdp_qpls(struct gve_priv *priv)
> > > +{
> > > +     int start_id;
> > > +     int i;
> > > +
> > > +     start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
> > > +     for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++)
> > > +             gve_free_queue_page_list(priv, i);
> > > +}
> > > +
> > >  static void gve_free_qpls(struct gve_priv *priv)
> > >  {
> > >       int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
> > > @@ -1011,11 +1182,64 @@ static int gve_reset_recovery(struct gve_priv *priv, bool was_up);
> > >  static void gve_turndown(struct gve_priv *priv);
> > >  static void gve_turnup(struct gve_priv *priv);
> > >
> > > +static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
> > > +{
> > > +     struct napi_struct *napi;
> > > +     struct gve_rx_ring *rx;
> > > +     int err = 0;
> > > +     int i, j;
> > > +
> > > +     if (!priv->num_xdp_queues)
> > > +             return 0;
> > > +
> > > +     for (i = 0; i < priv->rx_cfg.num_queues; i++) {
> > > +             rx = &priv->rx[i];
> > > +             napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
> > > +
> > > +             err = xdp_rxq_info_reg(&rx->xdp_rxq, dev, i,
> > > +                                    napi->napi_id);
> > > +             if (err)
> > > +                     goto err;
> > > +             err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq,
> > > +                                              MEM_TYPE_PAGE_SHARED, NULL);
> > > +             if (err)
> > > +                     goto err;
> > > +     }
> > > +     return 0;
> > > +
> > > +err:
> > > +     for (j = i; j >= 0; j--) {
> > > +             rx = &priv->rx[j];
> > > +             if (xdp_rxq_info_is_reg(&rx->xdp_rxq))
> > > +                     xdp_rxq_info_unreg(&rx->xdp_rxq);
> > > +     }
> > > +     return err;
> > > +}
> > > +
> > > +static void gve_unreg_xdp_info(struct gve_priv *priv)
> > > +{
> > > +     int i;
> > > +
> > > +     if (!priv->num_xdp_queues)
> > > +             return;
> > > +
> > > +     for (i = 0; i < priv->rx_cfg.num_queues; i++) {
> > > +             struct gve_rx_ring *rx = &priv->rx[i];
> > > +
> > > +             xdp_rxq_info_unreg(&rx->xdp_rxq);
> > > +     }
> > > +}
> > > +
> > >  static int gve_open(struct net_device *dev)
> > >  {
> > >       struct gve_priv *priv = netdev_priv(dev);
> > >       int err;
> > >
> > > +     if (priv->xdp_prog)
> > > +             priv->num_xdp_queues = priv->tx_cfg.num_queues;
> >
> > Why is the number of XDP queues initialized to the number of TX queues?
> > Shouldn't it rather be the number of RX queues?
> >
> > For example, if you have:
> >  - asymmetric number of RX and TX queues,
> >  - tx_cfg.num_queues < rx_cfg.num_queues.
> >
> > I believe that in such a scenario you won't be able to handle the XDP_TX
> > action for all RX queues.
> > Please take a look at your implementation of the "XDP_TX" case in "gve_xdp_done()".
> >
> > Is this a mistake or an intentional design?
> >
> 
> As mentioned above, we enforce num of rx queues == num of tx queues
> == num of XDP tx queues.
> 

OK, I got it - it is just a first iteration of your implementation.
However, I would still recommend using the assignment:
	priv->num_xdp_queues = priv->rx_cfg.num_queues;

I think it would be less prone to bugs once you add support for an
asymmetric number of queues.
Also, creating an XDP queue for each Rx queue is logical from the perspective
of handling the XDP_TX action. You always want to be able to perform the TX
action regardless of which Rx queue you received the packet on.
If num_xdp_queues == num_rx_queues, you can always perform concurrent
XDP_TX actions. The number of your regular Tx queues does not matter
in such a scenario.

Anyway, after writing that comment I read your answer below, where you
explain that you will change it to the Rx count :-)
Thanks!
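To make the hazard discussed above concrete: XDP_TX indexes the dedicated queues by ingress RX queue id, so if num_xdp_queues tracks the TX count, an asymmetric configuration can index past the allocated XDP queues. A small sketch with illustrative numbers (not driver code):

```c
#include <stdbool.h>

/* XDP_TX selects queue id num_tx + rx_qid; with num_xdp XDP queues
 * allocated after the regular TX queues, ids [num_tx, num_tx + num_xdp)
 * are valid, i.e. the selection is safe only if rx_qid < num_xdp.
 */
static bool xdp_tx_qid_valid(int num_tx, int num_xdp, int rx_qid)
{
	int qid = num_tx + rx_qid;

	return qid < num_tx + num_xdp;
}
```

With 2 TX and 4 RX queues, num_xdp_queues = num_tx_queues leaves RX queues 2 and 3 pointing at unallocated XDP queues, while num_xdp_queues = num_rx_queues covers every RX queue.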

> > > +     else
> > > +             priv->num_xdp_queues = 0;
> > > +
> > >       err = gve_alloc_qpls(priv);
> > >       if (err)
> > >               return err;
> > > @@ -1031,6 +1255,10 @@ static int gve_open(struct net_device *dev)
> > >       if (err)
> > >               goto free_rings;
> > >
> > > +     err = gve_reg_xdp_info(priv, dev);
> > > +     if (err)
> > > +             goto free_rings;
> > > +
> > >       err = gve_register_qpls(priv);
> > >       if (err)
> > >               goto reset;
> > > @@ -1095,6 +1323,7 @@ static int gve_close(struct net_device *dev)
> > >       }
> > >       del_timer_sync(&priv->stats_report_timer);
> > >
> > > +     gve_unreg_xdp_info(priv);
> > >       gve_free_rings(priv);
> > >       gve_free_qpls(priv);
> > >       priv->interface_down_cnt++;
> > > @@ -1111,6 +1340,148 @@ static int gve_close(struct net_device *dev)
> > >       return gve_reset_recovery(priv, false);
> > >  }
> > >
> > > +static int gve_remove_xdp_queues(struct gve_priv *priv)
> > > +{
> > > +     int err;
> > > +
> > > +     err = gve_destroy_xdp_rings(priv);
> > > +     if (err)
> > > +             return err;
> > > +
> > > +     err = gve_unregister_xdp_qpls(priv);
> > > +     if (err)
> > > +             return err;
> > > +
> > > +     gve_unreg_xdp_info(priv);
> > > +     gve_free_xdp_rings(priv);
> > > +     gve_free_xdp_qpls(priv);
> > > +     priv->num_xdp_queues = 0;
> > > +     return 0;
> > > +}
> > > +
> > > +static int gve_add_xdp_queues(struct gve_priv *priv)
> > > +{
> > > +     int err;
> > > +
> > > +     priv->num_xdp_queues = priv->tx_cfg.num_queues;
> >
> > The same question here: shouldn't it be equal to
> > "priv->rx_cfg.num_queues"?
> >
> 
> We enforce priv->tx_cfg.num_queues == priv->rx_cfg.num_queues
> when XDP is enabled. We will change this to priv->rx_cfg.num_queues
> to make it clearer.
>

OK, thanks!

> > > +
> > > +     err = gve_alloc_xdp_qpls(priv);
> > > +     if (err)
> > > +             goto err;
> > > +
> > > +     err = gve_alloc_xdp_rings(priv);
> > > +     if (err)
> > > +             goto free_xdp_qpls;
> > > +
> > > +     err = gve_reg_xdp_info(priv, priv->dev);
> > > +     if (err)
> > > +             goto free_xdp_rings;
> > > +
> > > +     err = gve_register_xdp_qpls(priv);
> > > +     if (err)
> > > +             goto free_xdp_rings;
> > > +
> > > +     err = gve_create_xdp_rings(priv);
> > > +     if (err)
> > > +             goto free_xdp_rings;
> > > +
> > > +     return 0;
> > > +
> > > +free_xdp_rings:
> > > +     gve_free_xdp_rings(priv);
> > > +free_xdp_qpls:
> > > +     gve_free_xdp_qpls(priv);
> > > +err:
> > > +     priv->num_xdp_queues = 0;
> > > +     return err;
> > > +}
> > > +
> > > +static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
> > > +                    struct netlink_ext_ack *extack)
> > > +{
> > > +     struct bpf_prog *old_prog;
> > > +     int err = 0;
> > > +
> > > +     old_prog = READ_ONCE(priv->xdp_prog);
> > > +     if (!netif_carrier_ok(priv->dev)) {
> > > +             WRITE_ONCE(priv->xdp_prog, prog);
> > > +             if (old_prog)
> > > +                     bpf_prog_put(old_prog);
> > > +             return 0;
> > > +     }
> > > +
> > > +     gve_turndown(priv);
> > > +     if (!old_prog && prog) {
> > > +             // Allocate XDP TX queues if an XDP program is
> > > +             // being installed
> > > +             err = gve_add_xdp_queues(priv);
> > > +             if (err)
> > > +                     goto out;
> > > +     } else if (old_prog && !prog) {
> > > +             // Remove XDP TX queues if an XDP program is
> > > +             // being uninstalled
> > > +             err = gve_remove_xdp_queues(priv);
> > > +             if (err)
> > > +                     goto out;
> > > +     }
> > > +     WRITE_ONCE(priv->xdp_prog, prog);
> > > +     if (old_prog)
> > > +             bpf_prog_put(old_prog);
> > > +
> > > +out:
> > > +     gve_turnup(priv);
> > > +     queue_work(priv->gve_wq, &priv->service_task);
> >
> > As far as I understand, you start some work asynchronously
> > (service_task) but never wait for its result.
> > So, if err == 0 but the "service_task" fails, you will still return
> > success to the kernel.
> > Is that OK? Is it possible to return an inconsistent result to the kernel?
> >
> The service_task was scheduled to turn the carrier on based
> on the link status. In the v4 version of this patch we will turn
> the carrier on based on the link status synchronously rather
> than asynchronously.

OK, thanks!

> > > +     return err;
> > > +}
> > > +
> > > +static int verify_xdp_configuration(struct net_device *dev)
> > > +{
> > > +     struct gve_priv *priv = netdev_priv(dev);
> > > +
> > > +     if (dev->features & NETIF_F_LRO) {
> > > +             netdev_warn(dev, "XDP is not supported when LRO is on.\n");
> > > +             return -EOPNOTSUPP;
> > > +     }
> > > +
> > > +     if (priv->queue_format != GVE_GQI_QPL_FORMAT) {
> > > +             netdev_warn(dev, "XDP is not supported in mode %d.\n",
> > > +                         priv->queue_format);
> > > +             return -EOPNOTSUPP;
> > > +     }
> > > +
> > > +     if (dev->mtu > (PAGE_SIZE / 2) - sizeof(struct ethhdr) - GVE_RX_PAD) {
> > > +             netdev_warn(dev, "XDP is not supported for mtu %d.\n",
> > > +                         dev->mtu);
> > > +             return -EOPNOTSUPP;
> > > +     }
> > > +
> > > +     if (priv->rx_cfg.num_queues != priv->tx_cfg.num_queues ||
> > > +         (2 * priv->tx_cfg.num_queues > priv->tx_cfg.max_queues)) {
> > > +             netdev_warn(dev, "XDP load failed: The number of configured RX queues %d should be equal to the number of configured TX queues %d and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues %d",
> > > +                         priv->rx_cfg.num_queues,
> > > +                         priv->tx_cfg.num_queues,
> > > +                         priv->tx_cfg.max_queues);
> > > +             return -EINVAL;
> > > +     }
> > > +     return 0;
> > > +}
> > > +
> > > +static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
> > > +{
> > > +     struct gve_priv *priv = netdev_priv(dev);
> > > +     int err;
> > > +
> > > +     err = verify_xdp_configuration(dev);
> > > +     if (err)
> > > +             return err;
> > > +     switch (xdp->command) {
> > > +     case XDP_SETUP_PROG:
> > > +             return gve_set_xdp(priv, xdp->prog, xdp->extack);
> > > +     default:
> > > +             return -EINVAL;
> > > +     }
> > > +}
> > > +
> > >  int gve_adjust_queues(struct gve_priv *priv,
> > >                     struct gve_queue_config new_rx_config,
> > >                     struct gve_queue_config new_tx_config)
> > > @@ -1305,6 +1676,7 @@ static const struct net_device_ops gve_netdev_ops = {
> > >       .ndo_get_stats64        =       gve_get_stats,
> > >       .ndo_tx_timeout         =       gve_tx_timeout,
> > >       .ndo_set_features       =       gve_set_features,
> > > +     .ndo_bpf                =       gve_xdp,
> > >  };
> > >
> > >  static void gve_handle_status(struct gve_priv *priv, u32 status)
> > > diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
> > > index 051a15e4f1af..3241f6ea29be 100644
> > > --- a/drivers/net/ethernet/google/gve/gve_rx.c
> > > +++ b/drivers/net/ethernet/google/gve/gve_rx.c
> > > @@ -8,6 +8,8 @@
> > >  #include "gve_adminq.h"
> > >  #include "gve_utils.h"
> > >  #include <linux/etherdevice.h>
> > > +#include <linux/filter.h>
> > > +#include <net/xdp.h>
> > >
> > >  static void gve_rx_free_buffer(struct device *dev,
> > >                              struct gve_rx_slot_page_info *page_info,
> > > @@ -591,6 +593,43 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
> > >       return skb;
> > >  }
> > >
> > > +static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
> > > +                      struct xdp_buff *xdp, struct bpf_prog *xprog,
> > > +                      int xdp_act)
> > > +{
> > > +     struct gve_tx_ring *tx;
> > > +     int tx_qid;
> > > +     int err;
> > > +
> > > +     switch (xdp_act) {
> > > +     case XDP_ABORTED:
> > > +     case XDP_DROP:
> > > +     default:
> > > +             break;
> > > +     case XDP_TX:
> > > +             tx_qid = gve_xdp_tx_queue_id(priv, rx->q_num);
> > > +             tx = &priv->tx[tx_qid];
> >
> > As I have already mentioned: if num_rx_queues > num_tx_queues, you can
> > select an uninitialized XDP queue (because num_tx_queues ==
> > num_xdp_queues).
> >
> > Please check if your number of XDP queues should be equal to the number
> > of RX queues.
> >
> > > +             err = gve_xdp_xmit_one(priv, tx, xdp->data,
> > > +                                    xdp->data_end - xdp->data);
> > > +
> > > +             if (unlikely(err)) {
> > > +                     u64_stats_update_begin(&rx->statss);
> > > +                     rx->xdp_tx_errors++;
> > > +                     u64_stats_update_end(&rx->statss);
> > > +             }
> > > +             break;
> > > +     case XDP_REDIRECT:
> > > +             u64_stats_update_begin(&rx->statss);
> > > +             rx->xdp_redirect_errors++;
> > > +             u64_stats_update_end(&rx->statss);
> > > +             break;
> > > +     }
> > > +     u64_stats_update_begin(&rx->statss);
> > > +     if ((u32)xdp_act < GVE_XDP_ACTIONS)
> > > +             rx->xdp_actions[xdp_act]++;
> > > +     u64_stats_update_end(&rx->statss);
> > > +}
> > > +
> > >  #define GVE_PKTCONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x))
> > >  static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
> > >                  struct gve_rx_desc *desc, u32 idx,
> > > @@ -603,9 +642,12 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
> > >       union gve_rx_data_slot *data_slot;
> > >       struct gve_priv *priv = rx->gve;
> > >       struct sk_buff *skb = NULL;
> > > +     struct bpf_prog *xprog;
> > > +     struct xdp_buff xdp;
> > >       dma_addr_t page_bus;
> > >       void *va;
> > >
> > > +     u16 len = frag_size;
> > >       struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
> > >       bool is_first_frag = ctx->frag_cnt == 0;
> > >
> > > @@ -645,9 +687,35 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
> > >       dma_sync_single_for_cpu(&priv->pdev->dev, page_bus,
> > >                               PAGE_SIZE, DMA_FROM_DEVICE);
> > >       page_info->pad = is_first_frag ? GVE_RX_PAD : 0;
> > > +     len -= page_info->pad;
> > >       frag_size -= page_info->pad;
> > >
> > > -     skb = gve_rx_skb(priv, rx, page_info, napi, frag_size,
> > > +     xprog = READ_ONCE(priv->xdp_prog);
> > > +     if (xprog && is_only_frag) {
> > > +             void *old_data;
> > > +             int xdp_act;
> > > +
> > > +             xdp_init_buff(&xdp, rx->packet_buffer_size, &rx->xdp_rxq);
> > > +             xdp_prepare_buff(&xdp, page_info->page_address +
> > > +                              page_info->page_offset, GVE_RX_PAD,
> > > +                              len, false);
> > > +             old_data = xdp.data;
> > > +             xdp_act = bpf_prog_run_xdp(xprog, &xdp);
> > > +             if (xdp_act != XDP_PASS) {
> > > +                     gve_xdp_done(priv, rx, &xdp, xprog, xdp_act);
> > > +                     ctx->total_size += frag_size;
> > > +                     goto finish_ok_pkt;
> > > +             }
> > > +
> > > +             page_info->pad += xdp.data - old_data;
> > > +             len = xdp.data_end - xdp.data;
> > > +
> > > +             u64_stats_update_begin(&rx->statss);
> > > +             rx->xdp_actions[XDP_PASS]++;
> > > +             u64_stats_update_end(&rx->statss);
> > > +     }
> > > +
> > > +     skb = gve_rx_skb(priv, rx, page_info, napi, len,
> > >                        data_slot, is_only_frag);
> > >       if (!skb) {
> > >               u64_stats_update_begin(&rx->statss);
> > > @@ -773,6 +841,7 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
> > >  static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
> > >                            netdev_features_t feat)
> > >  {
> > > +     u64 xdp_txs = rx->xdp_actions[XDP_TX];
> > >       struct gve_rx_ctx *ctx = &rx->ctx;
> > >       struct gve_priv *priv = rx->gve;
> > >       struct gve_rx_cnts cnts = {0};
> > > @@ -820,6 +889,9 @@ static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
> > >               u64_stats_update_end(&rx->statss);
> > >       }
> > >
> > > +     if (xdp_txs != rx->xdp_actions[XDP_TX])
> > > +             gve_xdp_tx_flush(priv, rx->q_num);
> > > +
> > >       /* restock ring slots */
> > >       if (!rx->data.raw_addressing) {
> > >               /* In QPL mode buffs are refilled as the desc are processed */
> > > diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
> > > index e24e73e74e33..d37515e6c10c 100644
> > > --- a/drivers/net/ethernet/google/gve/gve_tx.c
> > > +++ b/drivers/net/ethernet/google/gve/gve_tx.c
> > > @@ -19,6 +19,14 @@ static inline void gve_tx_put_doorbell(struct gve_priv *priv,
> > >       iowrite32be(val, &priv->db_bar2[be32_to_cpu(q_resources->db_index)]);
> > >  }
> > >
> > > +void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid)
> > > +{
> > > +     u32 tx_qid = gve_xdp_tx_queue_id(priv, xdp_qid);
> > > +     struct gve_tx_ring *tx = &priv->tx[tx_qid];
> > > +
> > > +     gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
> > > +}
> > > +
> > >  /* gvnic can only transmit from a Registered Segment.
> > >   * We copy skb payloads into the registered segment before writing Tx
> > >   * descriptors and ringing the Tx doorbell.
> > > @@ -132,6 +140,50 @@ static void gve_tx_free_fifo(struct gve_tx_fifo *fifo, size_t bytes)
> > >       atomic_add(bytes, &fifo->available);
> > >  }
> > >
> > > +static size_t gve_tx_clear_buffer_state(struct gve_tx_buffer_state *info)
> > > +{
> > > +     size_t space_freed = 0;
> > > +     int i;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
> > > +             space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
> > > +             info->iov[i].iov_len = 0;
> > > +             info->iov[i].iov_padding = 0;
> > > +     }
> > > +     return space_freed;
> > > +}
> > > +
> > > +static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> > > +                           u32 to_do)
> > > +{
> > > +     struct gve_tx_buffer_state *info;
> > > +     u32 clean_end = tx->done + to_do;
> > > +     u64 pkts = 0, bytes = 0;
> > > +     size_t space_freed = 0;
> > > +     u32 idx;
> > > +
> > > +     for (; tx->done < clean_end; tx->done++) {
> > > +             idx = tx->done & tx->mask;
> > > +             info = &tx->info[idx];
> > > +
> > > +             if (unlikely(!info->xdp.size))
> > > +                     continue;
> > > +
> > > +             bytes += info->xdp.size;
> > > +             pkts++;
> > > +
> > > +             info->xdp.size = 0;
> > > +             space_freed += gve_tx_clear_buffer_state(info);
> > > +     }
> > > +
> > > +     gve_tx_free_fifo(&tx->tx_fifo, space_freed);
> > > +     u64_stats_update_begin(&tx->statss);
> > > +     tx->bytes_done += bytes;
> > > +     tx->pkt_done += pkts;
> > > +     u64_stats_update_end(&tx->statss);
> > > +     return pkts;
> > > +}
> > > +
> > >  static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> > >                            u32 to_do, bool try_to_wake);
> > >
> > > @@ -144,8 +196,12 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx)
> > >
> > >       gve_tx_remove_from_block(priv, idx);
> > >       slots = tx->mask + 1;
> > > -     gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
> > > -     netdev_tx_reset_queue(tx->netdev_txq);
> > > +     if (tx->q_num < priv->tx_cfg.num_queues) {
> > > +             gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
> > > +             netdev_tx_reset_queue(tx->netdev_txq);
> > > +     } else {
> > > +             gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
> > > +     }
> > >
> > >       dma_free_coherent(hdev, sizeof(*tx->q_resources),
> > >                         tx->q_resources, tx->q_resources_bus);
> > > @@ -213,7 +269,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
> > >
> > >       netif_dbg(priv, drv, priv->dev, "tx[%d]->bus=%lx\n", idx,
> > >                 (unsigned long)tx->bus);
> > > -     tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
> > > +     if (idx < priv->tx_cfg.num_queues)
> > > +             tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
> > >       gve_tx_add_to_block(priv, idx);
> > >
> > >       return 0;
> > > @@ -657,6 +714,65 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
> > >       return NETDEV_TX_OK;
> > >  }
> > >
> > > +static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
> > > +                        void *data, int len)
> > > +{
> > > +     int pad, nfrags, ndescs, iovi, offset;
> > > +     struct gve_tx_buffer_state *info;
> > > +     u32 reqi = tx->req;
> > > +
> > > +     pad = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo, len);
> > > +     if (pad >= GVE_TX_MAX_HEADER_SIZE)
> > > +             pad = 0;
> > > +     info = &tx->info[reqi & tx->mask];
> > > +     info->xdp.size = len;
> > > +
> > > +     nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, pad + len,
> > > +                                &info->iov[0]);
> > > +     iovi = pad > 0;
> > > +     ndescs = nfrags - iovi;
> > > +     offset = 0;
> > > +
> > > +     while (iovi < nfrags) {
> > > +             if (!offset)
> > > +                     gve_tx_fill_pkt_desc(&tx->desc[reqi & tx->mask], 0,
> > > +                                          CHECKSUM_NONE, false, 0, ndescs,
> > > +                                          info->iov[iovi].iov_len,
> > > +                                          info->iov[iovi].iov_offset, len);
> > > +             else
> > > +                     gve_tx_fill_seg_desc(&tx->desc[reqi & tx->mask],
> > > +                                          0, 0, false, false,
> > > +                                          info->iov[iovi].iov_len,
> > > +                                          info->iov[iovi].iov_offset);
> > > +
> > > +             memcpy(tx->tx_fifo.base + info->iov[iovi].iov_offset,
> > > +                    data + offset, info->iov[iovi].iov_len);
> > > +             gve_dma_sync_for_device(&priv->pdev->dev,
> > > +                                     tx->tx_fifo.qpl->page_buses,
> > > +                                     info->iov[iovi].iov_offset,
> > > +                                     info->iov[iovi].iov_len);
> > > +             offset += info->iov[iovi].iov_len;
> > > +             iovi++;
> > > +             reqi++;
> > > +     }
> >
> > Could you please explain the logic above in a little more detail?
> > How is it possible to transmit more than one frag per XDP packet?
> > I believe your implementation supports only one descriptor per XDP
> > packet, so I think this "while" loop will always run for only one
> > iteration?
> >
> In GQI-QPL mode, XDP packets are first copied into the TX FIFO buffer
> (which is preregistered with the vNIC). The packets are then DMAed by the
> vNIC from the TX FIFO buffer. When the entire packet does not fit at the
> end of the TX FIFO, we split the packet into two chunks - one at the tail
> of the TX FIFO and one at its head - and use two descriptors to transmit
> the packet to the vNIC.
> 

Thank you for the explanation!
It seems I confused the number of Tx frags with the number of Rx frags
(where you always check the "is_only_frag" parameter for XDP).
So I thought you could never have more than one frag in Tx, which it
seems is not true.

So, although you can have only one frag in Rx for XDP, you can still
end up with more than one frag in Tx, correct?

> 
> > > +
> > > +     return ndescs;
> > > +}
> > > +
> > > +int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
> > > +                  void *data, int len)
> > > +{
> > > +     int nsegs;
> > > +
> > > +     if (!gve_can_tx(tx, len + GVE_TX_MAX_HEADER_SIZE - 1))
> > > +             return -EBUSY;
> > > +
> > > +     nsegs = gve_tx_fill_xdp(priv, tx, data, len);
> > > +     tx->req += nsegs;
> > > +
> > > +     return 0;
> > > +}
> > > +
> > >  #define GVE_TX_START_THRESH  PAGE_SIZE
> > >
> > >  static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> > > @@ -666,7 +782,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> > >       u64 pkts = 0, bytes = 0;
> > >       size_t space_freed = 0;
> > >       struct sk_buff *skb;
> > > -     int i, j;
> > > +     int j;
> > >       u32 idx;
> >
> > RCT
> >
> > >
> > >       for (j = 0; j < to_do; j++) {
> > > @@ -689,12 +805,7 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> > >                       dev_consume_skb_any(skb);
> > >                       if (tx->raw_addressing)
> > >                               continue;
> > > -                     /* FIFO free */
> > > -                     for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
> > > -                             space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
> > > -                             info->iov[i].iov_len = 0;
> > > -                             info->iov[i].iov_padding = 0;
> > > -                     }
> > > +                     space_freed += gve_tx_clear_buffer_state(info);
> > >               }
> > >       }
> > >
> > > @@ -729,6 +840,24 @@ u32 gve_tx_load_event_counter(struct gve_priv *priv,
> > >       return be32_to_cpu(counter);
> > >  }
> > >
> > > +bool gve_xdp_poll(struct gve_notify_block *block, int budget)
> > > +{
> > > +     struct gve_priv *priv = block->priv;
> > > +     struct gve_tx_ring *tx = block->tx;
> > > +     u32 nic_done;
> > > +     u32 to_do;
> > > +
> > > +     /* If budget is 0, do all the work */
> > > +     if (budget == 0)
> > > +             budget = INT_MAX;
> > > +
> > > +     /* Find out how much work there is to be done */
> > > +     nic_done = gve_tx_load_event_counter(priv, tx);
> > > +     to_do = min_t(u32, (nic_done - tx->done), budget);
> > > +     gve_clean_xdp_done(priv, tx, to_do);
> > > +     return nic_done != tx->done;
> > > +}
> > > +
> > >  bool gve_tx_poll(struct gve_notify_block *block, int budget)
> > >  {
> > >       struct gve_priv *priv = block->priv;
> > > --
> > > 2.40.0.rc1.284.g88254d51c5-goog
> > >


Thread overview: 11+ messages
2023-03-13 20:26 [PATCH net-next v3 0/5] gve: Add XDP support for GQI-QPL format Praveen Kaligineedi
2023-03-13 20:26 ` [PATCH net-next v3 1/5] gve: XDP support GQI-QPL: helper function changes Praveen Kaligineedi
2023-03-15 17:13   ` Michal Kubiak
2023-03-13 20:26 ` [PATCH net-next v3 2/5] gve: Changes to add new TX queues Praveen Kaligineedi
2023-03-15 17:14   ` Michal Kubiak
2023-03-13 20:26 ` [PATCH net-next v3 3/5] gve: Add XDP DROP and TX support for GQI-QPL format Praveen Kaligineedi
2023-03-15 16:53   ` Michal Kubiak
2023-03-15 21:10     ` Praveen Kaligineedi
2023-03-16 22:07       ` Michal Kubiak
2023-03-13 20:26 ` [PATCH net-next v3 4/5] gve: Add XDP REDIRECT " Praveen Kaligineedi
2023-03-13 20:26 ` [PATCH net-next v3 5/5] gve: Add AF_XDP zero-copy " Praveen Kaligineedi
