* [PATCH net v2 0/4] bnxt: Tx NAPI disabling resiliency improvements
@ 2021-08-11 21:37 Jakub Kicinski
  2021-08-11 21:37 ` [PATCH net v2 1/4] bnxt: don't lock the tx queue from napi poll Jakub Kicinski
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Jakub Kicinski @ 2021-08-11 21:37 UTC (permalink / raw)
  To: davem
  Cc: michael.chan, huangjw, eddie.wai, prashant, gospo, netdev,
	edwin.peer, Jakub Kicinski

A lockdep warning was triggered by netpoll because napi poll
was taking the xmit lock. Fix that and a couple more issues
noticed while reading the code.

Jakub Kicinski (4):
  bnxt: don't lock the tx queue from napi poll
  bnxt: disable napi before canceling DIM
  bnxt: make sure xmit_more + errors does not miss doorbells
  bnxt: count Tx drops

 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 60 ++++++++++++++---------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
 2 files changed, 37 insertions(+), 24 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH net v2 1/4] bnxt: don't lock the tx queue from napi poll
  2021-08-11 21:37 [PATCH net v2 0/4] bnxt: Tx NAPI disabling resiliency improvements Jakub Kicinski
@ 2021-08-11 21:37 ` Jakub Kicinski
  2021-08-11 21:37 ` [PATCH net v2 2/4] bnxt: disable napi before canceling DIM Jakub Kicinski
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2021-08-11 21:37 UTC (permalink / raw)
  To: davem
  Cc: michael.chan, huangjw, eddie.wai, prashant, gospo, netdev,
	edwin.peer, Jakub Kicinski

We can't take the tx lock from the napi poll routine, because
netpoll can poll napi at any moment, including with the tx lock
already held.

It seems that the tx lock is only protecting against the disable
path; appropriate barriers are already in place to make sure
cleanup can safely run concurrently with start_xmit. I don't see
any other reason why 'stopped && avail > thresh' needs to be
re-checked under the lock.

Remove the tx lock and use synchronize_net() to make sure
closing the device does not race with restarting the queues.
Annotate accesses to dev_state against data races.
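
The lockless wake/disable pattern described above can be illustrated
with a small user-space model. All names here are stand-ins for the
real bnxt structures, and the _Atomic accesses model the kernel's
READ_ONCE()/WRITE_ONCE() annotations; in the kernel, a
synchronize_net() after publishing the state takes the place of the
grace period:

```c
#include <assert.h>
#include <stdatomic.h>

enum { DEV_STATE_OPEN = 0, DEV_STATE_CLOSING = 1 };

struct tx_ring {
	_Atomic int dev_state;   /* models READ_ONCE/WRITE_ONCE access */
	int queue_stopped;
	int avail, wake_thresh;
	int woken;
};

/* completion path (bnxt_tx_int in the patch): wake the queue only if
 * it is stopped, there is room, and the ring is not closing -- no
 * tx lock taken */
static void tx_int(struct tx_ring *txr)
{
	if (txr->queue_stopped &&
	    txr->avail > txr->wake_thresh &&
	    atomic_load_explicit(&txr->dev_state,
				 memory_order_relaxed) != DEV_STATE_CLOSING) {
		txr->queue_stopped = 0;
		txr->woken = 1;
	}
}

/* disable path: publish CLOSING first; the kernel then calls
 * synchronize_net() so in-flight pollers drain before the queues
 * are actually stopped */
static void tx_disable(struct tx_ring *txr)
{
	atomic_store_explicit(&txr->dev_state, DEV_STATE_CLOSING,
			      memory_order_relaxed);
}
```

A poller that runs after tx_disable() sees CLOSING and refuses to
wake the queue, which is the property the lock used to provide.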

Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
v2: keep the unlikely in bnxt_tx_int() [Edwin]
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 865fcb8cf29f..365f8ae91acb 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -731,14 +731,9 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
 	smp_mb();
 
 	if (unlikely(netif_tx_queue_stopped(txq)) &&
-	    (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh)) {
-		__netif_tx_lock(txq, smp_processor_id());
-		if (netif_tx_queue_stopped(txq) &&
-		    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
-		    txr->dev_state != BNXT_DEV_STATE_CLOSING)
-			netif_tx_wake_queue(txq);
-		__netif_tx_unlock(txq);
-	}
+	    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
+	    READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
+		netif_tx_wake_queue(txq);
 }
 
 static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
@@ -9264,9 +9259,11 @@ void bnxt_tx_disable(struct bnxt *bp)
 	if (bp->tx_ring) {
 		for (i = 0; i < bp->tx_nr_rings; i++) {
 			txr = &bp->tx_ring[i];
-			txr->dev_state = BNXT_DEV_STATE_CLOSING;
+			WRITE_ONCE(txr->dev_state, BNXT_DEV_STATE_CLOSING);
 		}
 	}
+	/* Make sure napi polls see @dev_state change */
+	synchronize_net();
 	/* Drop carrier first to prevent TX timeout */
 	netif_carrier_off(bp->dev);
 	/* Stop all TX queues */
@@ -9280,8 +9277,10 @@ void bnxt_tx_enable(struct bnxt *bp)
 
 	for (i = 0; i < bp->tx_nr_rings; i++) {
 		txr = &bp->tx_ring[i];
-		txr->dev_state = 0;
+		WRITE_ONCE(txr->dev_state, 0);
 	}
+	/* Make sure napi polls see @dev_state change */
+	synchronize_net();
 	netif_tx_wake_all_queues(bp->dev);
 	if (bp->link_info.link_up)
 		netif_carrier_on(bp->dev);
-- 
2.31.1



* [PATCH net v2 2/4] bnxt: disable napi before canceling DIM
  2021-08-11 21:37 [PATCH net v2 0/4] bnxt: Tx NAPI disabling resiliency improvements Jakub Kicinski
  2021-08-11 21:37 ` [PATCH net v2 1/4] bnxt: don't lock the tx queue from napi poll Jakub Kicinski
@ 2021-08-11 21:37 ` Jakub Kicinski
  2021-08-11 21:37 ` [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells Jakub Kicinski
  2021-08-11 21:37 ` [PATCH net v2 4/4] bnxt: count Tx drops Jakub Kicinski
  3 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2021-08-11 21:37 UTC (permalink / raw)
  To: davem
  Cc: michael.chan, huangjw, eddie.wai, prashant, gospo, netdev,
	edwin.peer, Jakub Kicinski

Since napi schedules the DIM work, napi has to be disabled
first, then the DIM work canceled.

Noticed while reading the code.
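
The ordering bug can be sketched with a toy model (hypothetical names,
not the real driver code): as long as napi is live it can re-schedule
the DIM work item, so canceling the work before disabling napi can
leave the work pending after shutdown:

```c
#include <assert.h>

/* toy model of the napi/DIM interaction in the patch */
struct toy {
	int napi_enabled;
	int dim_work_pending;
};

/* napi poll may run at any point while napi is enabled and
 * (re-)schedules the DIM work */
static void napi_poll(struct toy *t)
{
	if (t->napi_enabled)
		t->dim_work_pending = 1;
}

/* pre-patch order: cancel first, napi re-arms the work afterwards */
static void shutdown_wrong(struct toy *t)
{
	t->dim_work_pending = 0;   /* cancel_work_sync() */
	napi_poll(t);              /* napi still live -> work re-armed */
	t->napi_enabled = 0;       /* napi_disable() */
}

/* patched order: disable napi first, then cancel */
static void shutdown_right(struct toy *t)
{
	t->napi_enabled = 0;       /* napi_disable() */
	napi_poll(t);              /* can no longer schedule the work */
	t->dim_work_pending = 0;   /* cancel_work_sync() */
}
```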

Fixes: 0bc0b97fca73 ("bnxt_en: cleanup DIM work on device shutdown")
Fixes: 6a8788f25625 ("bnxt_en: add support for software dynamic interrupt moderation")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 365f8ae91acb..52f5c8405e76 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -9222,10 +9222,9 @@ static void bnxt_disable_napi(struct bnxt *bp)
 	for (i = 0; i < bp->cp_nr_rings; i++) {
 		struct bnxt_cp_ring_info *cpr = &bp->bnapi[i]->cp_ring;
 
+		napi_disable(&bp->bnapi[i]->napi);
 		if (bp->bnapi[i]->rx_ring)
 			cancel_work_sync(&cpr->dim.work);
-
-		napi_disable(&bp->bnapi[i]->napi);
 	}
 }
 
-- 
2.31.1



* [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 21:37 [PATCH net v2 0/4] bnxt: Tx NAPI disabling resiliency improvements Jakub Kicinski
  2021-08-11 21:37 ` [PATCH net v2 1/4] bnxt: don't lock the tx queue from napi poll Jakub Kicinski
  2021-08-11 21:37 ` [PATCH net v2 2/4] bnxt: disable napi before canceling DIM Jakub Kicinski
@ 2021-08-11 21:37 ` Jakub Kicinski
  2021-08-11 22:36   ` Michael Chan
                     ` (2 more replies)
  2021-08-11 21:37 ` [PATCH net v2 4/4] bnxt: count Tx drops Jakub Kicinski
  3 siblings, 3 replies; 12+ messages in thread
From: Jakub Kicinski @ 2021-08-11 21:37 UTC (permalink / raw)
  To: davem
  Cc: michael.chan, huangjw, eddie.wai, prashant, gospo, netdev,
	edwin.peer, Jakub Kicinski

skbs are freed on error and not put on the ring. We may, however,
be in a situation where we're freeing the last skb of a batch,
and there is a doorbell ring pending because xmit_more() was
true earlier. Make sure we ring the doorbell in such situations.

Since errors are rare don't pay attention to xmit_more() and just
always flush the pending frames.

The busy case should be safe to be left alone because it can
only happen if start_xmit races with completions and they
both enable the queue. In that case the kick can't be pending.

Noticed while reading the code.
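
A minimal sketch of the deferred-doorbell bookkeeping this patch adds
(stand-in names; the real driver tracks this via txr->kick_pending and
bnxt_txr_db_kick()): xmit_more batches skip the doorbell and mark a
kick pending, and any error exit must flush that pending kick or the
earlier packets of the batch are never submitted:

```c
#include <assert.h>

struct ring {
	int prod;          /* producer index */
	int db_prod;       /* last producer index written to the doorbell */
	int kick_pending;  /* a batched doorbell write is outstanding */
};

static void db_kick(struct ring *r)
{
	r->db_prod = r->prod;
	r->kick_pending = 0;
}

/* successful transmit: defer the doorbell while xmit_more is set */
static void xmit(struct ring *r, int xmit_more)
{
	r->prod++;
	if (!xmit_more)
		db_kick(r);
	else
		r->kick_pending = 1;
}

/* error path: the skb is freed and not placed on the ring, but a
 * kick deferred by earlier packets must still be flushed */
static void xmit_error(struct ring *r)
{
	if (r->kick_pending)
		db_kick(r);
}
```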

Fixes: 4d172f21cefe ("bnxt_en: Implement xmit_more.")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
v2: - netdev_warn() -> netif_warn() [Edwin]
    - use correct prod value [Michael]
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 36 +++++++++++++++--------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
 2 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 52f5c8405e76..79bbd6ec7ef7 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -72,7 +72,8 @@
 #include "bnxt_debugfs.h"
 
 #define BNXT_TX_TIMEOUT		(5 * HZ)
-#define BNXT_DEF_MSG_ENABLE	(NETIF_MSG_DRV | NETIF_MSG_HW)
+#define BNXT_DEF_MSG_ENABLE	(NETIF_MSG_DRV | NETIF_MSG_HW | \
+				 NETIF_MSG_TX_ERR)
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Broadcom BCM573xx network driver");
@@ -367,6 +368,13 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
 	return md_dst->u.port_info.port_id;
 }
 
+static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
+			     u16 prod)
+{
+	bnxt_db_write(bp, &txr->tx_db, prod);
+	txr->kick_pending = 0;
+}
+
 static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct bnxt *bp = netdev_priv(dev);
@@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	free_size = bnxt_tx_avail(bp, txr);
 	if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
 		netif_tx_stop_queue(txq);
+		if (net_ratelimit() && txr->kick_pending)
+			netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");
 		return NETDEV_TX_BUSY;
 	}
 
@@ -516,21 +526,16 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 normal_tx:
 	if (length < BNXT_MIN_PKT_SIZE) {
 		pad = BNXT_MIN_PKT_SIZE - length;
-		if (skb_pad(skb, pad)) {
+		if (skb_pad(skb, pad))
 			/* SKB already freed. */
-			tx_buf->skb = NULL;
-			return NETDEV_TX_OK;
-		}
+			goto tx_kick_pending;
 		length = BNXT_MIN_PKT_SIZE;
 	}
 
 	mapping = dma_map_single(&pdev->dev, skb->data, len, DMA_TO_DEVICE);
 
-	if (unlikely(dma_mapping_error(&pdev->dev, mapping))) {
-		dev_kfree_skb_any(skb);
-		tx_buf->skb = NULL;
-		return NETDEV_TX_OK;
-	}
+	if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
+		goto tx_free;
 
 	dma_unmap_addr_set(tx_buf, mapping, mapping);
 	flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
@@ -617,13 +622,15 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	txr->tx_prod = prod;
 
 	if (!netdev_xmit_more() || netif_xmit_stopped(txq))
-		bnxt_db_write(bp, &txr->tx_db, prod);
+		bnxt_txr_db_kick(bp, txr, prod);
+	else
+		txr->kick_pending = 1;
 
 tx_done:
 
 	if (unlikely(bnxt_tx_avail(bp, txr) <= MAX_SKB_FRAGS + 1)) {
 		if (netdev_xmit_more() && !tx_buf->is_push)
-			bnxt_db_write(bp, &txr->tx_db, prod);
+			bnxt_txr_db_kick(bp, txr, prod);
 
 		netif_tx_stop_queue(txq);
 
@@ -661,7 +668,12 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			       PCI_DMA_TODEVICE);
 	}
 
+tx_free:
 	dev_kfree_skb_any(skb);
+tx_kick_pending:
+	tx_buf->skb = NULL;
+	if (txr->kick_pending)
+		bnxt_txr_db_kick(bp, txr, txr->tx_prod);
 	return NETDEV_TX_OK;
 }
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 9c3324e76ff7..7b989b6e4f6e 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -799,6 +799,7 @@ struct bnxt_tx_ring_info {
 	u16			tx_prod;
 	u16			tx_cons;
 	u16			txq_index;
+	u8			kick_pending;
 	struct bnxt_db_info	tx_db;
 
 	struct tx_bd		*tx_desc_ring[MAX_TX_PAGES];
-- 
2.31.1



* [PATCH net v2 4/4] bnxt: count Tx drops
  2021-08-11 21:37 [PATCH net v2 0/4] bnxt: Tx NAPI disabling resiliency improvements Jakub Kicinski
                   ` (2 preceding siblings ...)
  2021-08-11 21:37 ` [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells Jakub Kicinski
@ 2021-08-11 21:37 ` Jakub Kicinski
  3 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2021-08-11 21:37 UTC (permalink / raw)
  To: davem
  Cc: michael.chan, huangjw, eddie.wai, prashant, gospo, netdev,
	edwin.peer, Jakub Kicinski

Drivers should count packets they are dropping.

Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 79bbd6ec7ef7..642d7aa4676e 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -394,6 +394,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	i = skb_get_queue_mapping(skb);
 	if (unlikely(i >= bp->tx_nr_rings)) {
 		dev_kfree_skb_any(skb);
+		atomic_long_inc(&dev->tx_dropped);
 		return NETDEV_TX_OK;
 	}
 
@@ -674,6 +675,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	tx_buf->skb = NULL;
 	if (txr->kick_pending)
 		bnxt_txr_db_kick(bp, txr, txr->tx_prod);
+	atomic_long_inc(&dev->tx_dropped);
 	return NETDEV_TX_OK;
 }
 
-- 
2.31.1



* Re: [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 21:37 ` [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells Jakub Kicinski
@ 2021-08-11 22:36   ` Michael Chan
  2021-08-11 22:44     ` Jakub Kicinski
  2021-08-12  6:51   ` Michael Chan
  2021-08-13  8:35   ` David Laight
  2 siblings, 1 reply; 12+ messages in thread
From: Michael Chan @ 2021-08-11 22:36 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: David Miller, Jeffrey Huang, Eddie Wai, Prashant Sreedharan,
	Andrew Gospodarek, Netdev, Edwin Peer


On Wed, Aug 11, 2021 at 2:38 PM Jakub Kicinski <kuba@kernel.org> wrote:

> ---
> v2: - netdev_warn() -> netif_warn() [Edwin]
>     - use correct prod value [Michael]
> ---
>  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 36 +++++++++++++++--------
>  drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
>  2 files changed, 25 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 52f5c8405e76..79bbd6ec7ef7 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -72,7 +72,8 @@
>  #include "bnxt_debugfs.h"
>
>  #define BNXT_TX_TIMEOUT                (5 * HZ)
> -#define BNXT_DEF_MSG_ENABLE    (NETIF_MSG_DRV | NETIF_MSG_HW)
> +#define BNXT_DEF_MSG_ENABLE    (NETIF_MSG_DRV | NETIF_MSG_HW | \
> +                                NETIF_MSG_TX_ERR)
>
>  MODULE_LICENSE("GPL");
>  MODULE_DESCRIPTION("Broadcom BCM573xx network driver");
> @@ -367,6 +368,13 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
>         return md_dst->u.port_info.port_id;
>  }
>
> +static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
> +                            u16 prod)
> +{
> +       bnxt_db_write(bp, &txr->tx_db, prod);
> +       txr->kick_pending = 0;
> +}
> +
>  static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>         struct bnxt *bp = netdev_priv(dev);
> @@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
>         free_size = bnxt_tx_avail(bp, txr);
>         if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
>                 netif_tx_stop_queue(txq);
> +               if (net_ratelimit() && txr->kick_pending)
> +                       netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");

You forgot to remove this.

>                 return NETDEV_TX_BUSY;
>         }
>
> @@ -516,21 +526,16 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  normal_tx:
>         if (length < BNXT_MIN_PKT_SIZE) {
>                 pad = BNXT_MIN_PKT_SIZE - length;
> -               if (skb_pad(skb, pad)) {
> +               if (skb_pad(skb, pad))
>                         /* SKB already freed. */
> -                       tx_buf->skb = NULL;
> -                       return NETDEV_TX_OK;
> -               }
> +                       goto tx_kick_pending;
>                 length = BNXT_MIN_PKT_SIZE;
>         }
>
>         mapping = dma_map_single(&pdev->dev, skb->data, len, DMA_TO_DEVICE);
>
> -       if (unlikely(dma_mapping_error(&pdev->dev, mapping))) {
> -               dev_kfree_skb_any(skb);
> -               tx_buf->skb = NULL;
> -               return NETDEV_TX_OK;
> -       }
> +       if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
> +               goto tx_free;
>
>         dma_unmap_addr_set(tx_buf, mapping, mapping);
>         flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
> @@ -617,13 +622,15 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
>         txr->tx_prod = prod;
>
>         if (!netdev_xmit_more() || netif_xmit_stopped(txq))
> -               bnxt_db_write(bp, &txr->tx_db, prod);
> +               bnxt_txr_db_kick(bp, txr, prod);
> +       else
> +               txr->kick_pending = 1;
>
>  tx_done:
>
>         if (unlikely(bnxt_tx_avail(bp, txr) <= MAX_SKB_FRAGS + 1)) {
>                 if (netdev_xmit_more() && !tx_buf->is_push)
> -                       bnxt_db_write(bp, &txr->tx_db, prod);
> +                       bnxt_txr_db_kick(bp, txr, prod);
>
>                 netif_tx_stop_queue(txq);
>
> @@ -661,7 +668,12 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
>                                PCI_DMA_TODEVICE);
>         }
>
> +tx_free:
>         dev_kfree_skb_any(skb);
> +tx_kick_pending:
> +       tx_buf->skb = NULL;

I think we should remove the setting of tx_buf->skb to NULL in the
tx_dma_error path since we are setting it here now.

> +       if (txr->kick_pending)
> +               bnxt_txr_db_kick(bp, txr, txr->tx_prod);
>         return NETDEV_TX_OK;
>  }
>


* Re: [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 22:36   ` Michael Chan
@ 2021-08-11 22:44     ` Jakub Kicinski
  2021-08-11 23:00       ` Michael Chan
  0 siblings, 1 reply; 12+ messages in thread
From: Jakub Kicinski @ 2021-08-11 22:44 UTC (permalink / raw)
  To: Michael Chan
  Cc: David Miller, Jeffrey Huang, Eddie Wai, Prashant Sreedharan,
	Andrew Gospodarek, Netdev, Edwin Peer

On Wed, 11 Aug 2021 15:36:34 -0700 Michael Chan wrote:
> On Wed, Aug 11, 2021 at 2:38 PM Jakub Kicinski <kuba@kernel.org> wrote:
> 
> > ---
> > v2: - netdev_warn() -> netif_warn() [Edwin]
> >     - use correct prod value [Michael]
> > ---
> >  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 36 +++++++++++++++--------
> >  drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
> >  2 files changed, 25 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > index 52f5c8405e76..79bbd6ec7ef7 100644
> > --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > @@ -72,7 +72,8 @@
> >  #include "bnxt_debugfs.h"
> >
> >  #define BNXT_TX_TIMEOUT                (5 * HZ)
> > -#define BNXT_DEF_MSG_ENABLE    (NETIF_MSG_DRV | NETIF_MSG_HW)
> > +#define BNXT_DEF_MSG_ENABLE    (NETIF_MSG_DRV | NETIF_MSG_HW | \
> > +                                NETIF_MSG_TX_ERR)
> >
> >  MODULE_LICENSE("GPL");
> >  MODULE_DESCRIPTION("Broadcom BCM573xx network driver");
> > @@ -367,6 +368,13 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
> >         return md_dst->u.port_info.port_id;
> >  }
> >
> > +static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
> > +                            u16 prod)
> > +{
> > +       bnxt_db_write(bp, &txr->tx_db, prod);
> > +       txr->kick_pending = 0;
> > +}
> > +
> >  static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >  {
> >         struct bnxt *bp = netdev_priv(dev);
> > @@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >         free_size = bnxt_tx_avail(bp, txr);
> >         if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
> >                 netif_tx_stop_queue(txq);
> > +               if (net_ratelimit() && txr->kick_pending)
> > +                       netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");  
> 
> You forgot to remove this.

I changed my mind. I added the && txr->kick_pending to the condition,
if there is a race and napi starts the queue unnecessarily the kick
can't be pending.

> >                 return NETDEV_TX_BUSY;
> >         }
> >
> > @@ -516,21 +526,16 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >  normal_tx:
> >         if (length < BNXT_MIN_PKT_SIZE) {
> >                 pad = BNXT_MIN_PKT_SIZE - length;
> > -               if (skb_pad(skb, pad)) {
> > +               if (skb_pad(skb, pad))
> >                         /* SKB already freed. */
> > -                       tx_buf->skb = NULL;
> > -                       return NETDEV_TX_OK;
> > -               }
> > +                       goto tx_kick_pending;
> >                 length = BNXT_MIN_PKT_SIZE;
> >         }
> >
> >         mapping = dma_map_single(&pdev->dev, skb->data, len, DMA_TO_DEVICE);
> >
> > -       if (unlikely(dma_mapping_error(&pdev->dev, mapping))) {
> > -               dev_kfree_skb_any(skb);
> > -               tx_buf->skb = NULL;
> > -               return NETDEV_TX_OK;
> > -       }
> > +       if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
> > +               goto tx_free;
> >
> >         dma_unmap_addr_set(tx_buf, mapping, mapping);
> >         flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
> > @@ -617,13 +622,15 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >         txr->tx_prod = prod;
> >
> >         if (!netdev_xmit_more() || netif_xmit_stopped(txq))
> > -               bnxt_db_write(bp, &txr->tx_db, prod);
> > +               bnxt_txr_db_kick(bp, txr, prod);
> > +       else
> > +               txr->kick_pending = 1;
> >
> >  tx_done:
> >
> >         if (unlikely(bnxt_tx_avail(bp, txr) <= MAX_SKB_FRAGS + 1)) {
> >                 if (netdev_xmit_more() && !tx_buf->is_push)
> > -                       bnxt_db_write(bp, &txr->tx_db, prod);
> > +                       bnxt_txr_db_kick(bp, txr, prod);
> >
> >                 netif_tx_stop_queue(txq);
> >
> > @@ -661,7 +668,12 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >                                PCI_DMA_TODEVICE);
> >         }
> >
> > +tx_free:
> >         dev_kfree_skb_any(skb);
> > +tx_kick_pending:
> > +       tx_buf->skb = NULL;  
> 
> I think we should remove the setting of tx_buf->skb to NULL in the
> tx_dma_error path since we are setting it here now.

But tx_buf gets moved IIRC - if we hit tx_dma_error tx_buf will be one
of the fragment bufs at this point. It should be legal to clear the skb
pointer on those AFAICT.

Are you suggesting to do something along the lines of:

	txr->tx_buf_ring[txr->tx_prod].skb = NULL;

?

> > +       if (txr->kick_pending)
> > +               bnxt_txr_db_kick(bp, txr, txr->tx_prod);
> >         return NETDEV_TX_OK;
> >  }
> >  



* Re: [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 22:44     ` Jakub Kicinski
@ 2021-08-11 23:00       ` Michael Chan
  2021-08-11 23:16         ` Jakub Kicinski
  0 siblings, 1 reply; 12+ messages in thread
From: Michael Chan @ 2021-08-11 23:00 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: David Miller, Jeffrey Huang, Eddie Wai, Prashant Sreedharan,
	Andrew Gospodarek, Netdev, Edwin Peer


On Wed, Aug 11, 2021 at 3:44 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 11 Aug 2021 15:36:34 -0700 Michael Chan wrote:
> > On Wed, Aug 11, 2021 at 2:38 PM Jakub Kicinski <kuba@kernel.org> wrote:
> > > @@ -367,6 +368,13 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
> > >         return md_dst->u.port_info.port_id;
> > >  }
> > >
> > > +static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
> > > +                            u16 prod)
> > > +{
> > > +       bnxt_db_write(bp, &txr->tx_db, prod);
> > > +       txr->kick_pending = 0;
> > > +}
> > > +
> > >  static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >  {
> > >         struct bnxt *bp = netdev_priv(dev);
> > > @@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >         free_size = bnxt_tx_avail(bp, txr);
> > >         if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
> > >                 netif_tx_stop_queue(txq);
> > > +               if (net_ratelimit() && txr->kick_pending)
> > > +                       netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");
> >
> > You forgot to remove this.
>
> I changed my mind. I added the && txr->kick_pending to the condition,
> if there is a race and napi starts the queue unnecessarily the kick
> can't be pending.

I don't understand.  The queue should be stopped if we have <=
MAX_SKB_FRAGS + 1 descriptors left.  If there is a race and the queue
is awake, the first TX packet may slip through if
skb_shinfo(skb)->nr_frags is small and we have enough descriptors for
it.  Let's say xmit_more is set for this packet and so kick is
pending.  The next packet may not fit anymore and it will hit this
check here.

>
> > >                 return NETDEV_TX_BUSY;
> > >         }
> > >
> > > @@ -516,21 +526,16 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >  normal_tx:
> > >         if (length < BNXT_MIN_PKT_SIZE) {
> > >                 pad = BNXT_MIN_PKT_SIZE - length;
> > > -               if (skb_pad(skb, pad)) {
> > > +               if (skb_pad(skb, pad))
> > >                         /* SKB already freed. */
> > > -                       tx_buf->skb = NULL;
> > > -                       return NETDEV_TX_OK;
> > > -               }
> > > +                       goto tx_kick_pending;
> > >                 length = BNXT_MIN_PKT_SIZE;
> > >         }
> > >
> > >         mapping = dma_map_single(&pdev->dev, skb->data, len, DMA_TO_DEVICE);
> > >
> > > -       if (unlikely(dma_mapping_error(&pdev->dev, mapping))) {
> > > -               dev_kfree_skb_any(skb);
> > > -               tx_buf->skb = NULL;
> > > -               return NETDEV_TX_OK;
> > > -       }
> > > +       if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
> > > +               goto tx_free;
> > >
> > >         dma_unmap_addr_set(tx_buf, mapping, mapping);
> > >         flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
> > > @@ -617,13 +622,15 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >         txr->tx_prod = prod;
> > >
> > >         if (!netdev_xmit_more() || netif_xmit_stopped(txq))
> > > -               bnxt_db_write(bp, &txr->tx_db, prod);
> > > +               bnxt_txr_db_kick(bp, txr, prod);
> > > +       else
> > > +               txr->kick_pending = 1;
> > >
> > >  tx_done:
> > >
> > >         if (unlikely(bnxt_tx_avail(bp, txr) <= MAX_SKB_FRAGS + 1)) {
> > >                 if (netdev_xmit_more() && !tx_buf->is_push)
> > > -                       bnxt_db_write(bp, &txr->tx_db, prod);
> > > +                       bnxt_txr_db_kick(bp, txr, prod);
> > >
> > >                 netif_tx_stop_queue(txq);
> > >
> > > @@ -661,7 +668,12 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >                                PCI_DMA_TODEVICE);
> > >         }
> > >
> > > +tx_free:
> > >         dev_kfree_skb_any(skb);
> > > +tx_kick_pending:
> > > +       tx_buf->skb = NULL;
> >
> > I think we should remove the setting of tx_buf->skb to NULL in the
> > tx_dma_error path since we are setting it here now.
>
> But tx_buf gets moved IIRC - if we hit tx_dma_error tx_buf will be one
> of the fragment bufs at this point. It should be legal to clear the skb
> pointer on those AFAICT.

Ah, you're right.

>
> Are you suggesting to do something along the lines of:
>
>         txr->tx_buf_ring[txr->tx_prod].skb = NULL;

Yeah, I like this the best.

>
> ?
>
> > > +       if (txr->kick_pending)
> > > +               bnxt_txr_db_kick(bp, txr, txr->tx_prod);
> > >         return NETDEV_TX_OK;
> > >  }
> > >
>


* Re: [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 23:00       ` Michael Chan
@ 2021-08-11 23:16         ` Jakub Kicinski
  2021-08-11 23:38           ` Michael Chan
  0 siblings, 1 reply; 12+ messages in thread
From: Jakub Kicinski @ 2021-08-11 23:16 UTC (permalink / raw)
  To: Michael Chan
  Cc: David Miller, Jeffrey Huang, Eddie Wai, Prashant Sreedharan,
	Andrew Gospodarek, Netdev, Edwin Peer

On Wed, 11 Aug 2021 16:00:52 -0700 Michael Chan wrote:
> On Wed, Aug 11, 2021 at 3:44 PM Jakub Kicinski <kuba@kernel.org> wrote:
> > On Wed, 11 Aug 2021 15:36:34 -0700 Michael Chan wrote:  
> > > On Wed, Aug 11, 2021 at 2:38 PM Jakub Kicinski <kuba@kernel.org> wrote:  
> > > > @@ -367,6 +368,13 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
> > > >         return md_dst->u.port_info.port_id;
> > > >  }
> > > >
> > > > +static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
> > > > +                            u16 prod)
> > > > +{
> > > > +       bnxt_db_write(bp, &txr->tx_db, prod);
> > > > +       txr->kick_pending = 0;
> > > > +}
> > > > +
> > > >  static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > > >  {
> > > >         struct bnxt *bp = netdev_priv(dev);
> > > > @@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > > >         free_size = bnxt_tx_avail(bp, txr);
> > > >         if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
> > > >                 netif_tx_stop_queue(txq);
> > > > +               if (net_ratelimit() && txr->kick_pending)
> > > > +                       netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");  
> > >
> > > You forgot to remove this.  
> >
> > I changed my mind. I added the && txr->kick_pending to the condition,
> > if there is a race and napi starts the queue unnecessarily the kick
> > can't be pending.  
> 
> I don't understand.  The queue should be stopped if we have <=
> MAX_SKB_FRAGS + 1 descriptors left.  If there is a race and the queue
> is awake, the first TX packet may slip through if
> skb_shinfo(skb)->nr_frags is small and we have enough descriptors for
> it.  Let's say xmit_more is set for this packet and so kick is
> pending.  The next packet may not fit anymore and it will hit this
> check here.

But even if we slip past this check we can only do it once; the check
at the end of start_xmit() will see we have fewer slots than MAX_FRAGS
+ 2, ring the doorbell and stop.

> > > > @@ -661,7 +668,12 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > > >                                PCI_DMA_TODEVICE);
> > > >         }
> > > >
> > > > +tx_free:
> > > >         dev_kfree_skb_any(skb);
> > > > +tx_kick_pending:
> > > > +       tx_buf->skb = NULL;  
> > >
> > > I think we should remove the setting of tx_buf->skb to NULL in the
> > > tx_dma_error path since we are setting it here now.  
> >
> > Are you suggesting to do something along the lines of:
> >
> >         txr->tx_buf_ring[txr->tx_prod].skb = NULL;  
> 
> Yeah, I like this the best.

Roger that, I'll send v3 tomorrow, I've run out of day.


* Re: [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 23:16         ` Jakub Kicinski
@ 2021-08-11 23:38           ` Michael Chan
  0 siblings, 0 replies; 12+ messages in thread
From: Michael Chan @ 2021-08-11 23:38 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: David Miller, Jeffrey Huang, Eddie Wai, Prashant Sreedharan,
	Andrew Gospodarek, Netdev, Edwin Peer

On Wed, Aug 11, 2021 at 4:17 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 11 Aug 2021 16:00:52 -0700 Michael Chan wrote:
> > On Wed, Aug 11, 2021 at 3:44 PM Jakub Kicinski <kuba@kernel.org> wrote:
> > > On Wed, 11 Aug 2021 15:36:34 -0700 Michael Chan wrote:
> > > > On Wed, Aug 11, 2021 at 2:38 PM Jakub Kicinski <kuba@kernel.org> wrote:
> > > > > @@ -367,6 +368,13 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
> > > > >         return md_dst->u.port_info.port_id;
> > > > >  }
> > > > >
> > > > > +static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
> > > > > +                            u16 prod)
> > > > > +{
> > > > > +       bnxt_db_write(bp, &txr->tx_db, prod);
> > > > > +       txr->kick_pending = 0;
> > > > > +}
> > > > > +
> > > > >  static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > > > >  {
> > > > >         struct bnxt *bp = netdev_priv(dev);
> > > > > @@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > > > >         free_size = bnxt_tx_avail(bp, txr);
> > > > >         if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
> > > > >                 netif_tx_stop_queue(txq);
> > > > > +               if (net_ratelimit() && txr->kick_pending)
> > > > > +                       netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");
> > > >
> > > > You forgot to remove this.
> > >
> > > I changed my mind. I added the && txr->kick_pending to the condition,
> > > if there is a race and napi starts the queue unnecessarily the kick
> > > can't be pending.
> >
> > I don't understand.  The queue should be stopped if we have <=
> > MAX_SKB_FRAGS + 1 descriptors left.  If there is a race and the queue
> > is awake, the first TX packet may slip through if
> > skb_shinfo(skb)->nr_frags is small and we have enough descriptors for
> > it.  Let's say xmit_more is set for this packet and so kick is
> > pending.  The next packet may not fit anymore and it will hit this
> > check here.
>
> But even if we slip past this check we can only do it once, the check
> at the end of start_xmit() will see we have fewer slots than MAX_FRAGS
> + 2, ring the doorbell and stop.

Yeah, I think you're right.
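
The two availability checks discussed in this exchange can be modeled with a small standalone sketch (hypothetical names and ring sizes; a simplified model, not the driver code):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sizes only; the real MAX_SKB_FRAGS is larger. */
#define RING_SIZE     16
#define MAX_SKB_FRAGS 4

struct ring {
	int  prod, cons;	/* free-running producer/consumer indices */
	bool stopped;		/* models netif_tx_stop_queue() */
	bool kick_pending;	/* doorbell deferred because of xmit_more */
	int  doorbells;		/* count of doorbell writes */
};

static int tx_avail(const struct ring *r)
{
	return RING_SIZE - (r->prod - r->cons);
}

static void db_kick(struct ring *r)
{
	r->doorbells++;
	r->kick_pending = false;
}

/* Returns false for NETDEV_TX_BUSY. */
static bool start_xmit(struct ring *r, int nr_frags, bool xmit_more)
{
	/* Entry check: this packet needs nr_frags + 2 descriptors. */
	if (tx_avail(r) < nr_frags + 2) {
		r->stopped = true;
		return false;
	}
	r->prod += nr_frags + 1;	/* place head + frag descriptors */

	/* Exit check: if a worst-case packet may no longer fit, stop
	 * the queue and ring the doorbell regardless of xmit_more, so
	 * a deferred kick can never be left behind. */
	if (tx_avail(r) < MAX_SKB_FRAGS + 2) {
		r->stopped = true;
		db_kick(r);
	} else if (xmit_more) {
		r->kick_pending = true;	/* defer the doorbell */
	} else {
		db_kick(r);
	}
	return true;
}
```

In this model a deferred kick only survives while a worst-case packet still fits; once the exit check fires, the doorbell is rung and kick_pending cleared, which is why slipping past the entry check can happen at most once.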


* Re: [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 21:37 ` [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells Jakub Kicinski
  2021-08-11 22:36   ` Michael Chan
@ 2021-08-12  6:51   ` Michael Chan
  2021-08-13  8:35   ` David Laight
  2 siblings, 0 replies; 12+ messages in thread
From: Michael Chan @ 2021-08-12  6:51 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: David Miller, Jeffrey Huang, Eddie Wai, Prashant Sreedharan,
	Andrew Gospodarek, Netdev, Edwin Peer

On Wed, Aug 11, 2021 at 2:38 PM Jakub Kicinski <kuba@kernel.org> wrote:

> @@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
>         free_size = bnxt_tx_avail(bp, txr);
>         if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
>                 netif_tx_stop_queue(txq);
> +               if (net_ratelimit() && txr->kick_pending)
> +                       netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");

I think there is one more problem here.  Now that it is possible to
get here, we can race with bnxt_tx_int() again here.  We can call
netif_tx_stop_queue() here after bnxt_tx_int() has already cleaned the
entire TX ring.  So I think we need to call bnxt_tx_wake_queue() here
again if descriptors have become available.

>                 return NETDEV_TX_BUSY;
>         }
>
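
The stop/re-check requirement can be illustrated with a deterministic single-threaded model (illustrative names and thresholds, not the driver code; real SMP code would also need a memory barrier between the stop and the re-check):

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE   16
#define WAKE_THRESH 6

struct txq {
	int  prod, cons;
	bool stopped;
};

static int tx_avail(const struct txq *q)
{
	return RING_SIZE - (q->prod - q->cons);
}

/* Completion path (models bnxt_tx_int()): clean everything, wake the
 * queue only if it was already marked stopped. */
static void tx_int(struct txq *q)
{
	q->cons = q->prod;
	if (q->stopped && tx_avail(q) >= WAKE_THRESH)
		q->stopped = false;	/* models bnxt_tx_wake_queue() */
}

/* Xmit path after a failed availability check.  Without the re-check,
 * a completion that ran just before the stop leaves the queue stopped
 * forever; with it, the queue is woken and the packet retried. */
static bool xmit_stop_path(struct txq *q)
{
	q->stopped = true;		/* netif_tx_stop_queue() */
	/* barrier would go here in real SMP code */
	if (tx_avail(q) >= WAKE_THRESH) {
		q->stopped = false;	/* wake again */
		return true;		/* retry the packet */
	}
	return false;			/* NETDEV_TX_BUSY */
}
```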


* RE: [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells
  2021-08-11 21:37 ` [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells Jakub Kicinski
  2021-08-11 22:36   ` Michael Chan
  2021-08-12  6:51   ` Michael Chan
@ 2021-08-13  8:35   ` David Laight
  2 siblings, 0 replies; 12+ messages in thread
From: David Laight @ 2021-08-13  8:35 UTC (permalink / raw)
  To: 'Jakub Kicinski', davem
  Cc: michael.chan, huangjw, eddie.wai, prashant, gospo, netdev, edwin.peer

From: Jakub Kicinski
> Sent: 11 August 2021 22:38
> 
> skbs are freed on error and not put on the ring. We may, however,
> be in a situation where we're freeing the last skb of a batch,
> and there is a doorbell ring pending because of xmit_more() being
> true earlier. Make sure we ring the door bell in such situations.
> 
> Since errors are rare don't pay attention to xmit_more() and just
> always flush the pending frames.
> 
...
> +tx_free:
>  	dev_kfree_skb_any(skb);
> +tx_kick_pending:
> +	tx_buf->skb = NULL;
> +	if (txr->kick_pending)
> +		bnxt_txr_db_kick(bp, txr, txr->tx_prod);
>  	return NETDEV_TX_OK;

Is this case actually so unlikely that the 'kick' can be
done unconditionally?
Then all the conditionals can be removed from the hot path.

	David
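
The trade-off raised here can be sketched as two variants of the error path (illustrative names, not the driver code):

```c
#include <assert.h>
#include <stdbool.h>

struct ring {
	bool kick_pending;
	int  doorbells;
};

static void db_kick(struct ring *r)
{
	r->doorbells++;
	r->kick_pending = false;
}

/* Variant A (what the patch does): ring only if a kick was deferred. */
static void tx_error_conditional(struct ring *r)
{
	if (r->kick_pending)
		db_kick(r);
}

/* Variant B (the suggestion above): always ring on error.  Both end
 * in the same state; B trades a possibly redundant doorbell write on
 * the rare error path for one less flag test. */
static void tx_error_unconditional(struct ring *r)
{
	db_kick(r);
}
```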

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



end of thread, other threads:[~2021-08-13  8:35 UTC | newest]

Thread overview: 12+ messages
2021-08-11 21:37 [PATCH net v2 0/4] bnxt: Tx NAPI disabling resiliency improvements Jakub Kicinski
2021-08-11 21:37 ` [PATCH net v2 1/4] bnxt: don't lock the tx queue from napi poll Jakub Kicinski
2021-08-11 21:37 ` [PATCH net v2 2/4] bnxt: disable napi before canceling DIM Jakub Kicinski
2021-08-11 21:37 ` [PATCH net v2 3/4] bnxt: make sure xmit_more + errors does not miss doorbells Jakub Kicinski
2021-08-11 22:36   ` Michael Chan
2021-08-11 22:44     ` Jakub Kicinski
2021-08-11 23:00       ` Michael Chan
2021-08-11 23:16         ` Jakub Kicinski
2021-08-11 23:38           ` Michael Chan
2021-08-12  6:51   ` Michael Chan
2021-08-13  8:35   ` David Laight
2021-08-11 21:37 ` [PATCH net v2 4/4] bnxt: count Tx drops Jakub Kicinski
