On Wed, Aug 11, 2021 at 2:38 PM Jakub Kicinski wrote:

> @@ -396,6 +404,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> 	free_size = bnxt_tx_avail(bp, txr);
> 	if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
> 		netif_tx_stop_queue(txq);
> +		if (net_ratelimit() && txr->kick_pending)
> +			netif_warn(bp, tx_err, dev, "bnxt: ring busy!\n");

I think there is one more problem here. Now that it is possible to get
here, we can race with bnxt_tx_int() again. We can call
netif_tx_stop_queue() here after bnxt_tx_int() has already cleaned the
entire TX ring. So I think we need to call bnxt_tx_wake_queue() here
again if descriptors have become available.

> 		return NETDEV_TX_BUSY;
> 	}
>