From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755277AbcIMHBk (ORCPT );
	Tue, 13 Sep 2016 03:01:40 -0400
Received: from mail-lf0-f49.google.com ([209.85.215.49]:33751 "EHLO
	mail-lf0-f49.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753442AbcIMHBJ (ORCPT );
	Tue, 13 Sep 2016 03:01:09 -0400
From: Marcin Wojtas
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	netdev@vger.kernel.org
Cc: davem@davemloft.net, linux@arm.linux.org.uk,
	sebastian.hesselbarth@gmail.com, andrew@lunn.ch, jason@lakedaemon.net,
	thomas.petazzoni@free-electrons.com, gregory.clement@free-electrons.com,
	nadavh@marvell.com, alior@marvell.com, simon.guinot@sequanux.org,
	nitroshift@yahoo.com, mw@semihalf.com, jaz@semihalf.com
Subject: [PATCH net-next 1/2] net: mvneta: add xmit_more support
Date: Tue, 13 Sep 2016 09:00:05 +0200
Message-Id: <1473750006-21199-2-git-send-email-mw@semihalf.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1473750006-21199-1-git-send-email-mw@semihalf.com>
References: <1473750006-21199-1-git-send-email-mw@semihalf.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Simon Guinot

Based on the xmit_more flag of the skb, TX descriptors can be
concatenated before flushing. This commit delays the Tx descriptor
flush if the queue is running and if there are more skbs to send.

Signed-off-by: Simon Guinot
---
 drivers/net/ethernet/marvell/mvneta.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index d41c28d..b9dccea 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -512,6 +512,7 @@ struct mvneta_tx_queue {
 	 * descriptor ring
 	 */
 	int count;
+	int pending;
 	int tx_stop_threshold;
 	int tx_wake_threshold;
 
@@ -802,8 +803,9 @@ static void mvneta_txq_pend_desc_add(struct mvneta_port *pp,
 	/* Only 255 descriptors can be added at once ; Assume caller
 	 * process TX desriptors in quanta less than 256
 	 */
-	val = pend_desc;
+	val = pend_desc + txq->pending;
 	mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val);
+	txq->pending = 0;
 }
 
 /* Get pointer to next TX descriptor to be processed (send) by HW */
@@ -2357,11 +2359,14 @@ out:
 		struct netdev_queue *nq = netdev_get_tx_queue(dev, txq_id);
 
 		txq->count += frags;
-		mvneta_txq_pend_desc_add(pp, txq, frags);
-
 		if (txq->count >= txq->tx_stop_threshold)
 			netif_tx_stop_queue(nq);
 
+		if (!skb->xmit_more || netif_xmit_stopped(nq))
+			mvneta_txq_pend_desc_add(pp, txq, frags);
+		else
+			txq->pending += frags;
+
 		u64_stats_update_begin(&stats->syncp);
 		stats->tx_packets++;
 		stats->tx_bytes += len;
-- 
1.8.3.1
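
For readers less familiar with xmit_more, below is a minimal, self-contained
sketch of the batching pattern this patch applies. It uses hypothetical
stand-in types and helpers (struct tx_queue, flush_doorbell(), xmit_one()),
not the real mvneta register interface: the doorbell write is deferred while
skb->xmit_more says more packets are coming and the queue has not been
stopped, so one register write can cover a whole batch of descriptors.

/*
 * Hypothetical illustration only; compiles as plain user-space C.
 */
#include <stdbool.h>
#include <stdio.h>

struct tx_queue {
	int count;		/* descriptors handed to HW or still pending */
	int pending;		/* descriptors queued but not yet flushed */
	int stop_threshold;	/* stop the queue above this level */
	bool stopped;
};

/* Stand-in for the doorbell write (mvreg_write() in the real driver). */
static void flush_doorbell(struct tx_queue *txq, int frags)
{
	int val = frags + txq->pending;

	printf("doorbell: telling HW about %d new descriptor(s)\n", val);
	txq->pending = 0;
}

/*
 * Called once per transmitted skb. 'xmit_more' mirrors skb->xmit_more:
 * true means the stack will hand over another packet right away, so the
 * doorbell write can be deferred and amortized over the batch.
 */
static void xmit_one(struct tx_queue *txq, int frags, bool xmit_more)
{
	txq->count += frags;

	if (txq->count >= txq->stop_threshold)
		txq->stopped = true;

	/* Flush on the last packet of the batch or when the queue just
	 * stopped; otherwise only remember how much is outstanding.
	 */
	if (!xmit_more || txq->stopped)
		flush_doorbell(txq, frags);
	else
		txq->pending += frags;
}

int main(void)
{
	struct tx_queue txq = { .stop_threshold = 100 };

	/* A batch of three packets: only the last one rings the doorbell. */
	xmit_one(&txq, 2, true);
	xmit_one(&txq, 1, true);
	xmit_one(&txq, 3, false);
	return 0;
}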