netdev.vger.kernel.org archive mirror
* [net-next PATCH 0/2] CPSW interrupt handling cleanup and performance improvement
@ 2015-07-27 11:18 Mugunthan V N
  2015-07-27 11:19 ` [net-next PATCH 1/2] drivers: net: cpsw: remove disable_irq/enable_irq as irq can be masked from cpsw itself Mugunthan V N
  2015-07-27 11:19 ` [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment Mugunthan V N
  0 siblings, 2 replies; 10+ messages in thread
From: Mugunthan V N @ 2015-07-27 11:18 UTC (permalink / raw)
  To: netdev; +Cc: davem, Mugunthan V N

This patch series removes the disable_irq/enable_irq calls done at the
irq controller level and adds a separate napi for tx event handling,
which improves performance by 180 Mbps on dra7-evm:

[  5] local 192.168.10.116 port 5001 connected with 192.168.10.125 port 44174
[  5]  0.0-60.0 sec  1.46 GBytes   209 Mbits/sec
[  4] local 192.168.10.116 port 5001 connected with 192.168.10.125 port 33954
[  4]  0.0-60.0 sec  2.72 GBytes   390 Mbits/sec

Mugunthan V N (2):
  drivers: net: cpsw: remove  disable_irq/enable_irq as irq can be
    masked from cpsw itself
  drivers: net: cpsw: add separate napi for tx packet handling for
    performance improvment

 drivers/net/ethernet/ti/cpsw.c | 88 +++++++++++++++++++++++-------------------
 1 file changed, 49 insertions(+), 39 deletions(-)

-- 
2.5.0.rc3.2.g6f9504c

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [net-next PATCH 1/2] drivers: net: cpsw: remove  disable_irq/enable_irq as irq can be masked from cpsw itself
  2015-07-27 11:18 [net-next PATCH 0/2] CPSW interrupt handling cleanup and performance improvement Mugunthan V N
@ 2015-07-27 11:19 ` Mugunthan V N
  2015-07-27 11:19 ` [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment Mugunthan V N
  1 sibling, 0 replies; 10+ messages in thread
From: Mugunthan V N @ 2015-07-27 11:19 UTC (permalink / raw)
  To: netdev; +Cc: davem, Mugunthan V N

CPSW interrupts can be disabled by masking them in the CPSW wrapper
registers and cleared by writing the appropriate EOI, so there is no
need to disable them at the irq controller level. Remove all
disable_irq/enable_irq usage, as discussed in [1].

[1] http://patchwork.ozlabs.org/patch/492741/
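The idea behind the patch can be sketched in a small userspace simulation. This is a hypothetical illustration, not the driver itself: the register layout, the `writel` helper and the function names merely mirror the patch, and the point is only that masking/unmasking happens through the device's own rx_en register rather than through a disable_irq/enable_irq pair and an irq_enabled flag.

```c
/* Hypothetical sketch of patch 1/2: mask the interrupt source in the
 * CPSW wrapper's rx_en register instead of calling
 * disable_irq_nosync()/enable_irq() and tracking an irq_enabled flag.
 * Names mirror the patch; the simulation itself is an assumption. */
#include <assert.h>
#include <stdint.h>

struct wr_regs { uint32_t rx_en; };   /* simulated CPSW wrapper regs */

static void writel(uint32_t val, uint32_t *reg) { *reg = val; }

/* irq handler: mask further rx interrupts at the device, defer to NAPI */
static void rx_interrupt(struct wr_regs *wr)
{
    writel(0, &wr->rx_en);            /* mask at cpsw, not the irq chip */
    /* napi_schedule() would go here */
}

/* napi poll completion: let the device raise rx interrupts again */
static void rx_poll_done(struct wr_regs *wr)
{
    writel(0xff, &wr->rx_en);
}
```

The state that used to live in `priv->irq_enabled` is now implicit in the rx_en register itself, which is why the flag can be deleted.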

Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
---
 drivers/net/ethernet/ti/cpsw.c | 27 ++-------------------------
 1 file changed, 2 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index d155bf2..d68d759 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -389,7 +389,6 @@ struct cpsw_priv {
 	/* snapshot of IRQ numbers */
 	u32 irqs_table[4];
 	u32 num_irqs;
-	bool irq_enabled;
 	struct cpts *cpts;
 	u32 emac_port;
 };
@@ -767,12 +766,7 @@ static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
 	struct cpsw_priv *priv = dev_id;
 
 	cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);
-
-	cpsw_intr_disable(priv);
-	if (priv->irq_enabled == true) {
-		disable_irq_nosync(priv->irqs_table[0]);
-		priv->irq_enabled = false;
-	}
+	writel(0, &priv->wr_regs->rx_en);
 
 	if (netif_running(priv->ndev)) {
 		napi_schedule(&priv->napi);
@@ -797,15 +791,8 @@ static int cpsw_poll(struct napi_struct *napi, int budget)
 
 	num_rx = cpdma_chan_process(priv->rxch, budget);
 	if (num_rx < budget) {
-		struct cpsw_priv *prim_cpsw;
-
 		napi_complete(napi);
-		cpsw_intr_enable(priv);
-		prim_cpsw = cpsw_get_slave_priv(priv, 0);
-		if (prim_cpsw->irq_enabled == false) {
-			prim_cpsw->irq_enabled = true;
-			enable_irq(priv->irqs_table[0]);
-		}
+		writel(0xff, &priv->wr_regs->rx_en);
 	}
 
 	if (num_rx)
@@ -1230,7 +1217,6 @@ static void cpsw_slave_stop(struct cpsw_slave *slave, struct cpsw_priv *priv)
 static int cpsw_ndo_open(struct net_device *ndev)
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
-	struct cpsw_priv *prim_cpsw;
 	int i, ret;
 	u32 reg;
 
@@ -1315,14 +1301,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
 	cpdma_ctlr_start(priv->dma);
 	cpsw_intr_enable(priv);
 
-	prim_cpsw = cpsw_get_slave_priv(priv, 0);
-	if (prim_cpsw->irq_enabled == false) {
-		if ((priv == prim_cpsw) || !netif_running(prim_cpsw->ndev)) {
-			prim_cpsw->irq_enabled = true;
-			enable_irq(prim_cpsw->irqs_table[0]);
-		}
-	}
-
 	if (priv->data.dual_emac)
 		priv->slaves[priv->emac_port].open_stat = true;
 	return 0;
@@ -2169,7 +2147,6 @@ static int cpsw_probe(struct platform_device *pdev)
 	priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
 	priv->rx_packet_max = max(rx_packet_max, 128);
 	priv->cpts = devm_kzalloc(&pdev->dev, sizeof(struct cpts), GFP_KERNEL);
-	priv->irq_enabled = true;
 	if (!priv->cpts) {
 		dev_err(&pdev->dev, "error allocating cpts\n");
 		ret = -ENOMEM;
-- 
2.5.0.rc3.2.g6f9504c


* [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-27 11:18 [net-next PATCH 0/2] CPSW interrupt handling cleanup and performance improvement Mugunthan V N
  2015-07-27 11:19 ` [net-next PATCH 1/2] drivers: net: cpsw: remove disable_irq/enable_irq as irq can be masked from cpsw itself Mugunthan V N
@ 2015-07-27 11:19 ` Mugunthan V N
  2015-07-27 21:22   ` Francois Romieu
  1 sibling, 1 reply; 10+ messages in thread
From: Mugunthan V N @ 2015-07-27 11:19 UTC (permalink / raw)
  To: netdev; +Cc: davem, Mugunthan V N

Instead of processing tx events in the ISR itself, move tx event
processing to a separate napi; this improves tx performance by
180 Mbps with omap2plus_defconfig. Also rename the rx napi to
napi_rx to make the code easier to follow.
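The budget mechanics of the new tx poll function can be sketched in userspace. This is a hypothetical simulation, not the driver: `chan_process` stands in for `cpdma_chan_process`, the `irq_enabled` flag stands in for the `napi_complete()` + tx_en write pair, and the budget value is illustrative.

```c
/* Hypothetical sketch of patch 2/2: tx completion work moves out of the
 * ISR into its own NAPI poll with a budget, mirroring the rx path. */
#include <assert.h>

/* Simulated DMA channel: 'pending' descriptors awaiting completion. */
struct chan { int pending; };

/* Process up to 'budget' completions; return how many were handled. */
static int chan_process(struct chan *ch, int budget)
{
    int done = ch->pending < budget ? ch->pending : budget;
    ch->pending -= done;
    return done;
}

/* tx poll: only when fewer than 'budget' packets were processed is the
 * channel drained, so only then complete napi and re-enable the irq. */
static int tx_poll(struct chan *ch, int budget, int *irq_enabled)
{
    int num_tx = chan_process(ch, budget);
    if (num_tx < budget)
        *irq_enabled = 1;   /* napi_complete() + writel(0xff, tx_en) */
    return num_tx;
}
```

With 100 pending descriptors and a budget of 64, the first poll consumes the full budget and stays in polling mode; the second drains the remaining 36 and re-enables interrupts.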

Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
---
 drivers/net/ethernet/ti/cpsw.c | 61 ++++++++++++++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index d68d759..4f98537 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -365,7 +365,8 @@ struct cpsw_priv {
 	spinlock_t			lock;
 	struct platform_device		*pdev;
 	struct net_device		*ndev;
-	struct napi_struct		napi;
+	struct napi_struct		napi_rx;
+	struct napi_struct		napi_tx;
 	struct device			*dev;
 	struct cpsw_platform_data	data;
 	struct cpsw_ss_regs __iomem	*regs;
@@ -752,13 +753,22 @@ static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
 	struct cpsw_priv *priv = dev_id;
 
 	cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
-	cpdma_chan_process(priv->txch, 128);
+	writel(0, &priv->wr_regs->tx_en);
+
+	if (netif_running(priv->ndev)) {
+		napi_schedule(&priv->napi_tx);
+		return IRQ_HANDLED;
+	}
 
 	priv = cpsw_get_slave_priv(priv, 1);
-	if (priv)
-		cpdma_chan_process(priv->txch, 128);
+	if (!priv)
+		return IRQ_NONE;
 
-	return IRQ_HANDLED;
+	if (netif_running(priv->ndev)) {
+		napi_schedule(&priv->napi_tx);
+		return IRQ_HANDLED;
+	}
+	return IRQ_NONE;
 }
 
 static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
@@ -769,7 +779,7 @@ static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
 	writel(0, &priv->wr_regs->rx_en);
 
 	if (netif_running(priv->ndev)) {
-		napi_schedule(&priv->napi);
+		napi_schedule(&priv->napi_rx);
 		return IRQ_HANDLED;
 	}
 
@@ -778,20 +788,37 @@ static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
 		return IRQ_NONE;
 
 	if (netif_running(priv->ndev)) {
-		napi_schedule(&priv->napi);
+		napi_schedule(&priv->napi_rx);
 		return IRQ_HANDLED;
 	}
 	return IRQ_NONE;
 }
 
-static int cpsw_poll(struct napi_struct *napi, int budget)
+static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
+{
+	struct cpsw_priv	*priv = napi_to_priv(napi_tx);
+	int			num_tx;
+
+	num_tx = cpdma_chan_process(priv->txch, budget);
+	if (num_tx < budget) {
+		napi_complete(napi_tx);
+		writel(0xff, &priv->wr_regs->tx_en);
+	}
+
+	if (num_tx)
+		cpsw_dbg(priv, intr, "poll %d tx pkts\n", num_tx);
+
+	return num_tx;
+}
+
+static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
 {
-	struct cpsw_priv	*priv = napi_to_priv(napi);
+	struct cpsw_priv	*priv = napi_to_priv(napi_rx);
 	int			num_rx;
 
 	num_rx = cpdma_chan_process(priv->rxch, budget);
 	if (num_rx < budget) {
-		napi_complete(napi);
+		napi_complete(napi_rx);
 		writel(0xff, &priv->wr_regs->rx_en);
 	}
 
@@ -1297,7 +1324,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 		cpsw_set_coalesce(ndev, &coal);
 	}
 
-	napi_enable(&priv->napi);
+	napi_enable(&priv->napi_rx);
+	napi_enable(&priv->napi_tx);
 	cpdma_ctlr_start(priv->dma);
 	cpsw_intr_enable(priv);
 
@@ -1319,7 +1347,8 @@ static int cpsw_ndo_stop(struct net_device *ndev)
 
 	cpsw_info(priv, ifdown, "shutting down cpsw device\n");
 	netif_stop_queue(priv->ndev);
-	napi_disable(&priv->napi);
+	napi_disable(&priv->napi_rx);
+	napi_disable(&priv->napi_tx);
 	netif_carrier_off(priv->ndev);
 
 	if (cpsw_common_res_usage_state(priv) <= 1) {
@@ -2105,7 +2134,10 @@ static int cpsw_probe_dual_emac(struct platform_device *pdev,
 
 	ndev->netdev_ops = &cpsw_netdev_ops;
 	ndev->ethtool_ops = &cpsw_ethtool_ops;
-	netif_napi_add(ndev, &priv_sl2->napi, cpsw_poll, CPSW_POLL_WEIGHT);
+	netif_napi_add(ndev, &priv_sl2->napi_rx, cpsw_rx_poll,
+		       CPSW_POLL_WEIGHT);
+	netif_napi_add(ndev, &priv_sl2->napi_tx, cpsw_tx_poll,
+		       CPSW_POLL_WEIGHT);
 
 	/* register the network device */
 	SET_NETDEV_DEV(ndev, &pdev->dev);
@@ -2357,7 +2389,8 @@ static int cpsw_probe(struct platform_device *pdev)
 
 	ndev->netdev_ops = &cpsw_netdev_ops;
 	ndev->ethtool_ops = &cpsw_ethtool_ops;
-	netif_napi_add(ndev, &priv->napi, cpsw_poll, CPSW_POLL_WEIGHT);
+	netif_napi_add(ndev, &priv->napi_rx, cpsw_rx_poll, CPSW_POLL_WEIGHT);
+	netif_napi_add(ndev, &priv->napi_tx, cpsw_tx_poll, CPSW_POLL_WEIGHT);
 
 	/* register the network device */
 	SET_NETDEV_DEV(ndev, &pdev->dev);
-- 
2.5.0.rc3.2.g6f9504c


* Re: [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-27 11:19 ` [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment Mugunthan V N
@ 2015-07-27 21:22   ` Francois Romieu
  2015-07-28  6:02     ` Mugunthan V N
  2015-07-28  6:18     ` Mugunthan V N
  0 siblings, 2 replies; 10+ messages in thread
From: Francois Romieu @ 2015-07-27 21:22 UTC (permalink / raw)
  To: Mugunthan V N; +Cc: netdev, davem

Mugunthan V N <mugunthanvnm@ti.com> :
[...]
> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> index d68d759..4f98537 100644
> --- a/drivers/net/ethernet/ti/cpsw.c
> +++ b/drivers/net/ethernet/ti/cpsw.c
> @@ -752,13 +753,22 @@ static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
>  	struct cpsw_priv *priv = dev_id;
>  
>  	cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
> -	cpdma_chan_process(priv->txch, 128);
> +	writel(0, &priv->wr_regs->tx_en);
> +
> +	if (netif_running(priv->ndev)) {
> +		napi_schedule(&priv->napi_tx);
> +		return IRQ_HANDLED;
> +	}


cpsw_ndo_stop calls napi_disable: you can remove netif_running.

-- 
Ueimor


* Re: [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-27 21:22   ` Francois Romieu
@ 2015-07-28  6:02     ` Mugunthan V N
  2015-07-28  6:18     ` Mugunthan V N
  1 sibling, 0 replies; 10+ messages in thread
From: Mugunthan V N @ 2015-07-28  6:02 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, davem

On Tuesday 28 July 2015 02:52 AM, Francois Romieu wrote:
> Mugunthan V N <mugunthanvnm@ti.com> :
> [...]
>> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
>> index d68d759..4f98537 100644
>> --- a/drivers/net/ethernet/ti/cpsw.c
>> +++ b/drivers/net/ethernet/ti/cpsw.c
>> @@ -752,13 +753,22 @@ static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
>>  	struct cpsw_priv *priv = dev_id;
>>  
>>  	cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
>> -	cpdma_chan_process(priv->txch, 128);
>> +	writel(0, &priv->wr_regs->tx_en);
>> +
>> +	if (netif_running(priv->ndev)) {
>> +		napi_schedule(&priv->napi_tx);
>> +		return IRQ_HANDLED;
>> +	}
> 
> 
> cpsw_ndo_stop calls napi_disable: you can remove netif_running.
> 

Hmmmm, will fix in v2.

Regards
Mugunthan V N


* Re: [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-27 21:22   ` Francois Romieu
  2015-07-28  6:02     ` Mugunthan V N
@ 2015-07-28  6:18     ` Mugunthan V N
  2015-07-28 22:30       ` Francois Romieu
  1 sibling, 1 reply; 10+ messages in thread
From: Mugunthan V N @ 2015-07-28  6:18 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, davem

On Tuesday 28 July 2015 02:52 AM, Francois Romieu wrote:
> Mugunthan V N <mugunthanvnm@ti.com> :
> [...]
>> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
>> index d68d759..4f98537 100644
>> --- a/drivers/net/ethernet/ti/cpsw.c
>> +++ b/drivers/net/ethernet/ti/cpsw.c
>> @@ -752,13 +753,22 @@ static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
>>  	struct cpsw_priv *priv = dev_id;
>>  
>>  	cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
>> -	cpdma_chan_process(priv->txch, 128);
>> +	writel(0, &priv->wr_regs->tx_en);
>> +
>> +	if (netif_running(priv->ndev)) {
>> +		napi_schedule(&priv->napi_tx);
>> +		return IRQ_HANDLED;
>> +	}
> 
> 
> cpsw_ndo_stop calls napi_disable: you can remove netif_running.
> 

This netif_running check is there to find which interface is up, since
the interrupt is shared by both interfaces. When the first interface is
down and the second one is active, napi_schedule for the first
interface will fail and the second interface's napi needs to be
scheduled instead.

So I don't think netif_running can be removed.
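The demux being described can be sketched in a small userspace simulation. This is hypothetical illustration only: `tx_interrupt` stands in for the driver's shared-irq handler, the `running` flag stands in for `netif_running()`, and the fallback to the other slave stands in for `cpsw_get_slave_priv(priv, 1)`.

```c
/* Hypothetical sketch of the dual-EMAC shared-irq demux under
 * discussion: try the interface the handler was registered for, and
 * fall back to the other slave when the first is down. */
#include <assert.h>
#include <stdbool.h>

struct iface { bool running; bool napi_scheduled; };

/* Returns true (IRQ_HANDLED) if some interface's napi was scheduled,
 * false (IRQ_NONE) if both interfaces are down. */
static bool tx_interrupt(struct iface *primary, struct iface *other)
{
    if (primary->running) {
        primary->napi_scheduled = true;
        return true;
    }
    if (other && other->running) {
        other->napi_scheduled = true;
        return true;
    }
    return false;
}
```

Exactly one napi is scheduled per interrupt: the second interface's napi is only reached when the first is down.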

Regards
Mugunthan V N


* Re: [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-28  6:18     ` Mugunthan V N
@ 2015-07-28 22:30       ` Francois Romieu
  2015-07-29  5:18         ` Mugunthan V N
  0 siblings, 1 reply; 10+ messages in thread
From: Francois Romieu @ 2015-07-28 22:30 UTC (permalink / raw)
  To: Mugunthan V N; +Cc: netdev, davem

Mugunthan V N <mugunthanvnm@ti.com> :
> On Tuesday 28 July 2015 02:52 AM, Francois Romieu wrote:
> > Mugunthan V N <mugunthanvnm@ti.com> :
[...]
> >> @@ -752,13 +753,22 @@ static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
> >>  	struct cpsw_priv *priv = dev_id;
> >>  
> >>  	cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
> >> -	cpdma_chan_process(priv->txch, 128);
> >> +	writel(0, &priv->wr_regs->tx_en);
> >> +
> >> +	if (netif_running(priv->ndev)) {
> >> +		napi_schedule(&priv->napi_tx);
> >> +		return IRQ_HANDLED;
> >> +	}
> > 
> > 
> > cpsw_ndo_stop calls napi_disable: you can remove netif_running.
> > 
> 
> This netif_running check is to find which interface is up as the
> interrupt is shared by both the interfaces. When first interface is down
> and second interface is active then napi_schedule for first interface
> will fail and second interface napi needs to be scheduled.
> 
> So I don't think netif_running needs to be removed.

Each interface has its own napi tx (resp. rx) context: I would have
expected two unconditional napi_schedule per tx (resp. rx) shared irq,
not one.

I'll read it again after some sleep.

-- 
Ueimor


* Re: [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-28 22:30       ` Francois Romieu
@ 2015-07-29  5:18         ` Mugunthan V N
  2015-07-29 22:57           ` Francois Romieu
  0 siblings, 1 reply; 10+ messages in thread
From: Mugunthan V N @ 2015-07-29  5:18 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, davem

On Wednesday 29 July 2015 04:00 AM, Francois Romieu wrote:
> Mugunthan V N <mugunthanvnm@ti.com> :
>> On Tuesday 28 July 2015 02:52 AM, Francois Romieu wrote:
>>> Mugunthan V N <mugunthanvnm@ti.com> :
> [...]
>>>> @@ -752,13 +753,22 @@ static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
>>>>  	struct cpsw_priv *priv = dev_id;
>>>>  
>>>>  	cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
>>>> -	cpdma_chan_process(priv->txch, 128);
>>>> +	writel(0, &priv->wr_regs->tx_en);
>>>> +
>>>> +	if (netif_running(priv->ndev)) {
>>>> +		napi_schedule(&priv->napi_tx);
>>>> +		return IRQ_HANDLED;
>>>> +	}
>>>
>>>
>>> cpsw_ndo_stop calls napi_disable: you can remove netif_running.
>>>
>>
>> This netif_running check is to find which interface is up as the
>> interrupt is shared by both the interfaces. When first interface is down
>> and second interface is active then napi_schedule for first interface
>> will fail and second interface napi needs to be scheduled.
>>
>> So I don't think netif_running needs to be removed.
> 
> Each interface has its own napi tx (resp. rx) context: I would had expected
> two unconditional napi_schedule per tx (resp. rx) shared irq, not one.
> 
> I'll read it again after some sleep.
> 

For each interrupt only one napi will be scheduled; when the first
interface is down, only the second interface's napi is scheduled, in
both the tx and rx irqs.

Regards
Mugunthan V N


* Re: [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-29  5:18         ` Mugunthan V N
@ 2015-07-29 22:57           ` Francois Romieu
  2015-07-30  6:12             ` Mugunthan V N
  0 siblings, 1 reply; 10+ messages in thread
From: Francois Romieu @ 2015-07-29 22:57 UTC (permalink / raw)
  To: Mugunthan V N; +Cc: netdev, davem

Mugunthan V N <mugunthanvnm@ti.com> :
> On Wednesday 29 July 2015 04:00 AM, Francois Romieu wrote:
> > Mugunthan V N <mugunthanvnm@ti.com> :
> >> On Tuesday 28 July 2015 02:52 AM, Francois Romieu wrote:
[...]
> >>> cpsw_ndo_stop calls napi_disable: you can remove netif_running.
> >>>
> >>
> >> This netif_running check is to find which interface is up as the
> >> interrupt is shared by both the interfaces. When first interface is down
> >> and second interface is active then napi_schedule for first interface
> >> will fail and second interface napi needs to be scheduled.
> >>
> >> So I don't think netif_running needs to be removed.
> > 
> > Each interface has its own napi tx (resp. rx) context: I would had expected
> > two unconditional napi_schedule per tx (resp. rx) shared irq, not one.
> > 
> > I'll read it again after some sleep.
> > 
> 
> For each interrupt only one napi will be scheduled, when the first
> interface is down then only second interface napi is scheduled in both
> tx and rx irqs.

Ok, I've had some hints from the "Assumptions" section at
http://processors.wiki.ti.com/index.php/AM335x_CPSW_%28Ethernet%29_Driver%27s_Guide#Dual_Standalone_EMAC_mode

Why does the driver create 2 rx napi contexts? They don't run at the
same time, and the port demux is done in cpsw_dual_emac_src_port_detect.
The driver would work the same with a single rx (resp. tx) napi context
for both interfaces.

-- 
Ueimor


* Re: [net-next PATCH 2/2] drivers: net: cpsw: add separate napi for tx packet handling for performance improvment
  2015-07-29 22:57           ` Francois Romieu
@ 2015-07-30  6:12             ` Mugunthan V N
  0 siblings, 0 replies; 10+ messages in thread
From: Mugunthan V N @ 2015-07-30  6:12 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, davem

On Thursday 30 July 2015 04:27 AM, Francois Romieu wrote:
> Mugunthan V N <mugunthanvnm@ti.com> :
>> On Wednesday 29 July 2015 04:00 AM, Francois Romieu wrote:
>>> Mugunthan V N <mugunthanvnm@ti.com> :
>>>> On Tuesday 28 July 2015 02:52 AM, Francois Romieu wrote:
> [...]
>>>>> cpsw_ndo_stop calls napi_disable: you can remove netif_running.
>>>>>
>>>>
>>>> This netif_running check is to find which interface is up as the
>>>> interrupt is shared by both the interfaces. When first interface is down
>>>> and second interface is active then napi_schedule for first interface
>>>> will fail and second interface napi needs to be scheduled.
>>>>
>>>> So I don't think netif_running needs to be removed.
>>>
>>> Each interface has its own napi tx (resp. rx) context: I would had expected
>>> two unconditional napi_schedule per tx (resp. rx) shared irq, not one.
>>>
>>> I'll read it again after some sleep.
>>>
>>
>> For each interrupt only one napi will be scheduled, when the first
>> interface is down then only second interface napi is scheduled in both
>> tx and rx irqs.
> 
> Ok, I've had some hints from the "Assumptions" section at
> http://processors.wiki.ti.com/index.php/AM335x_CPSW_%28Ethernet%29_Driver%27s_Guide#Dual_Standalone_EMAC_mode
> 
> Why does the driver create 2 rx napi contexts ? They don't run at the
> same time and the port demux is done in cpsw_dual_emac_src_port_detect.
> The driver would work the same with a single rx (resp. tx) napi context
> for both interfaces.
> 

The wiki you pointed out describes the old design, done on v3.2, which
doesn't have device tree support either. The mainline dual EMAC
implementation has changed a lot since then.

I can think of a way to use a single napi each for rx and tx; I will
submit a separate patch for it.

Regards
Mugunthan V N


