* [PATCHv3 1/1] net: forcedeth: add xmit_more support
@ 2019-10-29  3:30 Zhu Yanjun
  2019-10-29 17:32 ` Jakub Kicinski
  0 siblings, 1 reply; 5+ messages in thread
From: Zhu Yanjun @ 2019-10-29  3:30 UTC (permalink / raw)
  To: rain.1986.08.12, yanjun.zhu, davem, netdev

This change adds xmit_more support, following igb commit 6f19e12f6230
("igb: flush when in xmit_more mode and under descriptor pressure") and
commit 6b16f9ee89b8 ("net: move skb->xmit_more hint to softnet data").
The function netif_xmit_stopped is called to check whether the transmit
queue on the device is currently unable to send; if so, the tail must be
written because no further buffers can be added.
When normal and/or xmit_more packets fill up the tx descriptors, it is
still necessary to kick the NIC tx register.

Tested:
  - pktgen (xmit_more packets) SMP x86_64 ->
    Test command:
    ./pktgen_sample03_burst_single_flow.sh ... -b 8 -n 1000000
    Test results:
    Params:
    ...
    burst: 8
    ...
    Result: OK: 12194004(c12188996+d5007) usec, 1000001 (1500byte,0frags)
    82007pps 984Mb/sec (984084000bps) errors: 0

  - iperf (normal packets) SMP x86_64 ->
    Test command:
    Server: iperf -s
    Client: iperf -c serverip
    Result:
    TCP window size: 85.0 KByte (default)
    ------------------------------------------------------------
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec

CC: Joe Jin <joe.jin@oracle.com>
CC: JUNXIAO_BI <junxiao.bi@oracle.com>
Reported-and-tested-by: Nan san <nan.1986san@gmail.com>
Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Acked-by: Rain River <rain.1986.08.12@gmail.com>
---
V2->V3: fix typos.
V1->V2: use the lower case label.
---
 drivers/net/ethernet/nvidia/forcedeth.c | 37 +++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 05d2b47..e2bb0cd 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -2225,6 +2225,7 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct nv_skb_map *prev_tx_ctx;
 	struct nv_skb_map *tmp_tx_ctx = NULL, *start_tx_ctx = NULL;
 	unsigned long flags;
+	netdev_tx_t ret = NETDEV_TX_OK;
 
 	/* add fragments to entries count */
 	for (i = 0; i < fragments; i++) {
@@ -2240,7 +2241,12 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		netif_stop_queue(dev);
 		np->tx_stop = 1;
 		spin_unlock_irqrestore(&np->lock, flags);
-		return NETDEV_TX_BUSY;
+
+		/* When normal packets and/or xmit_more packets fill up
+		 * tx_desc, it is necessary to trigger NIC tx reg.
+		 */
+		ret = NETDEV_TX_BUSY;
+		goto txkick;
 	}
 	spin_unlock_irqrestore(&np->lock, flags);
 
@@ -2357,8 +2363,14 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	spin_unlock_irqrestore(&np->lock, flags);
 
-	writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
-	return NETDEV_TX_OK;
+txkick:
+	if (netif_queue_stopped(dev) || !netdev_xmit_more()) {
+		u32 txrxctl_kick = NVREG_TXRXCTL_KICK | np->txrxctl_bits;
+
+		writel(txrxctl_kick, get_hwbase(dev) + NvRegTxRxControl);
+	}
+
+	return ret;
 }
 
 static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
@@ -2381,6 +2393,7 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 	struct nv_skb_map *start_tx_ctx = NULL;
 	struct nv_skb_map *tmp_tx_ctx = NULL;
 	unsigned long flags;
+	netdev_tx_t ret = NETDEV_TX_OK;
 
 	/* add fragments to entries count */
 	for (i = 0; i < fragments; i++) {
@@ -2396,7 +2409,13 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 		netif_stop_queue(dev);
 		np->tx_stop = 1;
 		spin_unlock_irqrestore(&np->lock, flags);
-		return NETDEV_TX_BUSY;
+
+		/* When normal packets and/or xmit_more packets fill up
+		 * tx_desc, it is necessary to trigger NIC tx reg.
+		 */
+		ret = NETDEV_TX_BUSY;
+
+		goto txkick;
 	}
 	spin_unlock_irqrestore(&np->lock, flags);
 
@@ -2542,8 +2561,14 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 
 	spin_unlock_irqrestore(&np->lock, flags);
 
-	writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
-	return NETDEV_TX_OK;
+txkick:
+	if (netif_queue_stopped(dev) || !netdev_xmit_more()) {
+		u32 txrxctl_kick = NVREG_TXRXCTL_KICK | np->txrxctl_bits;
+
+		writel(txrxctl_kick, get_hwbase(dev) + NvRegTxRxControl);
+	}
+
+	return ret;
 }
 
 static inline void nv_tx_flip_ownership(struct net_device *dev)
-- 
2.7.4



* Re: [PATCHv3 1/1] net: forcedeth: add xmit_more support
  2019-10-29  3:30 [PATCHv3 1/1] net: forcedeth: add xmit_more support Zhu Yanjun
@ 2019-10-29 17:32 ` Jakub Kicinski
  2019-10-30  4:18   ` Zhu Yanjun
  0 siblings, 1 reply; 5+ messages in thread
From: Jakub Kicinski @ 2019-10-29 17:32 UTC (permalink / raw)
  To: Zhu Yanjun; +Cc: rain.1986.08.12, davem, netdev

On Mon, 28 Oct 2019 23:30:12 -0400, Zhu Yanjun wrote:
> This change adds xmit_more support, following igb commit 6f19e12f6230
> ("igb: flush when in xmit_more mode and under descriptor pressure") and
> commit 6b16f9ee89b8 ("net: move skb->xmit_more hint to softnet data").
> The function netif_xmit_stopped is called to check whether the transmit
> queue on the device is currently unable to send; if so, the tail must be
> written because no further buffers can be added.
> When normal and/or xmit_more packets fill up the tx descriptors, it is
> still necessary to kick the NIC tx register.
> 

> CC: Joe Jin <joe.jin@oracle.com>
> CC: JUNXIAO_BI <junxiao.bi@oracle.com>
> Reported-and-tested-by: Nan san <nan.1986san@gmail.com>
> Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
> Acked-by: Rain River <rain.1986.08.12@gmail.com>

I explained to you nicely that you have to kick on the DMA error cases.

Please stop wasting everyone's time by reposting this.


* Re: [PATCHv3 1/1] net: forcedeth: add xmit_more support
  2019-10-30  4:18   ` Zhu Yanjun
@ 2019-10-30  4:14     ` David Miller
  2019-10-30  4:30       ` Zhu Yanjun
  0 siblings, 1 reply; 5+ messages in thread
From: David Miller @ 2019-10-30  4:14 UTC (permalink / raw)
  To: yanjun.zhu; +Cc: jakub.kicinski, rain.1986.08.12, netdev

From: Zhu Yanjun <yanjun.zhu@oracle.com>
Date: Wed, 30 Oct 2019 12:18:43 +0800

> If igb does not handle DMA error, it is not appropriate for us to
> handle DMA error.
> 
> After igb fixes this DMA error, I will follow.;-)

Sorry, this is an invalid and unacceptable argument.

Just because a bug exists in another driver, does not mean you
can copy that bug into your driver.

Fix the check, do things properly, and resubmit your patch only after
you've done that.

Thank you.


* Re: [PATCHv3 1/1] net: forcedeth: add xmit_more support
  2019-10-29 17:32 ` Jakub Kicinski
@ 2019-10-30  4:18   ` Zhu Yanjun
  2019-10-30  4:14     ` David Miller
  0 siblings, 1 reply; 5+ messages in thread
From: Zhu Yanjun @ 2019-10-30  4:18 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: rain.1986.08.12, davem, netdev


On 2019/10/30 1:32, Jakub Kicinski wrote:
> On Mon, 28 Oct 2019 23:30:12 -0400, Zhu Yanjun wrote:
>> This change adds xmit_more support, following igb commit 6f19e12f6230
>> ("igb: flush when in xmit_more mode and under descriptor pressure") and
>> commit 6b16f9ee89b8 ("net: move skb->xmit_more hint to softnet data").
>> The function netif_xmit_stopped is called to check whether the transmit
>> queue on the device is currently unable to send; if so, the tail must be
>> written because no further buffers can be added.
>> When normal and/or xmit_more packets fill up the tx descriptors, it is
>> still necessary to kick the NIC tx register.
>>
>> CC: Joe Jin <joe.jin@oracle.com>
>> CC: JUNXIAO_BI <junxiao.bi@oracle.com>
>> Reported-and-tested-by: Nan san <nan.1986san@gmail.com>
>> Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
>> Acked-by: Rain River <rain.1986.08.12@gmail.com>
> I explained to you nicely that you have to kick on the DMA error cases.
As I mentioned, this commit is based on the igb commit 6f19e12f6230
("igb: flush when in xmit_more mode and under descriptor pressure") and

commit 6b16f9ee89b8 ("net: move skb->xmit_more hint to softnet data").

If igb does not handle DMA error, it is not appropriate for us to handle 
DMA error.

After igb fixes this DMA error, I will follow.;-)

Sorry.

Zhu Yanjun

>
> Please stop wasting everyone's time by reposting this.
>


* Re: [PATCHv3 1/1] net: forcedeth: add xmit_more support
  2019-10-30  4:14     ` David Miller
@ 2019-10-30  4:30       ` Zhu Yanjun
  0 siblings, 0 replies; 5+ messages in thread
From: Zhu Yanjun @ 2019-10-30  4:30 UTC (permalink / raw)
  To: David Miller; +Cc: jakub.kicinski, rain.1986.08.12, netdev


On 2019/10/30 12:14, David Miller wrote:
> From: Zhu Yanjun <yanjun.zhu@oracle.com>
> Date: Wed, 30 Oct 2019 12:18:43 +0800
>
>> If igb does not handle DMA error, it is not appropriate for us to
>> handle DMA error.
>>
>> After igb fixes this DMA error, I will follow.;-)
> Sorry, this is an invalid and unacceptable argument.
>
> Just because a bug exists in another driver, does not mean you
> can copy that bug into your driver.
>
> Fix the check, do things properly, and resubmit your patch only after

OK. I will follow the advice from you and Jakub.

After this DMA error is handled, I will resubmit the commit again.

Thanks,

Zhu Yanjun

> you've done that.
>
> Thank you.
>

