From: John Crispin <john@phrozen.org>
To: "David S. Miller"
Cc: Felix Fietkau, Sean Wang, netdev@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org,
	John Crispin <john@phrozen.org>
Subject: [PATCH V2 11/11] net: mediatek: remove superfluous queue wake up call
Date: Fri, 10 Jun 2016 13:28:08 +0200
Message-Id: <1465558088-15265-12-git-send-email-john@phrozen.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1465558088-15265-1-git-send-email-john@phrozen.org>
References: <1465558088-15265-1-git-send-email-john@phrozen.org>

The code stops the queue when the number of free descriptors drops to the
threshold, and then immediately re-checks whether it should be woken up
again. If we do end up at the threshold limit, it makes more sense to just
stop the queue and wait for the next IRQ to trigger the TX housekeeping,
which wakes the queue once descriptors have been reclaimed. There is no
rush to enqueue the next packet; it has to wait for all the others already
in the queue to be dispatched first anyway.

Signed-off-by: John Crispin <john@phrozen.org>
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 40d3cfd..ebfca7d 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -764,12 +764,9 @@ static int mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (mtk_tx_map(skb, dev, tx_num, ring, gso) < 0)
 		goto drop;
 
-	if (unlikely(atomic_read(&ring->free_count) <= ring->thresh)) {
+	if (unlikely(atomic_read(&ring->free_count) <= ring->thresh))
 		mtk_stop_queue(eth);
-		if (unlikely(atomic_read(&ring->free_count) >
-			     ring->thresh))
-			mtk_wake_queue(eth);
-	}
+
 	spin_unlock_irqrestore(&eth->page_lock, flags);
 
 	return NETDEV_TX_OK;
-- 
1.7.10.4
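
For context, after this change the IRQ-driven TX housekeeping becomes the
only place a stopped queue is woken again. Below is a minimal sketch of
what that wake path looks like; the function name mtk_tx_housekeeping_sketch
and the elided reclaim loop are illustrative, not verbatim driver code,
while ring->free_count, ring->thresh, mtk_wake_queue() and the struct names
follow the patch context above:

/*
 * Illustrative sketch, not verbatim driver code: the TX housekeeping
 * runs from the IRQ path, reclaims completed descriptors, and is now
 * the single place where a stopped queue gets woken again.
 */
static void mtk_tx_housekeeping_sketch(struct mtk_eth *eth,
				       struct mtk_tx_ring *ring)
{
	/* ... walk the ring, unmap and free completed skbs, bumping
	 * ring->free_count for every reclaimed descriptor ... */

	/*
	 * Wake the queue only once we are back above the threshold;
	 * mtk_start_xmit() will stop it again on the next shortage.
	 */
	if (atomic_read(&ring->free_count) > ring->thresh)
		mtk_wake_queue(eth);
}

Since any packet stopped behind the threshold has to wait for this
housekeeping to run anyway, waking the queue from the xmit path as the
removed code did buys nothing.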