* [PATCH] net: mvneta: fix changing MTU when using per-cpu processing
@ 2016-04-01 13:21 ` Marcin Wojtas
  0 siblings, 0 replies; 6+ messages in thread
From: Marcin Wojtas @ 2016-04-01 13:21 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, netdev
  Cc: davem, linux, sebastian.hesselbarth, andrew, jason,
	thomas.petazzoni, gregory.clement, nadavh, alior, nitroshift, mw,
	jaz

After per-cpu processing was enabled, it turned out that, under heavy
load, changing the MTU could leave all of the port's interrupts blocked,
making it impossible to transmit data after the change.

This commit fixes the above issue by disabling the per-cpu interrupts
while the TXQs and RXQs are being reconfigured.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index fee6a91..a433de9 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -3083,6 +3083,20 @@ static int mvneta_check_mtu_valid(struct net_device *dev, int mtu)
 	return mtu;
 }
 
+static void mvneta_percpu_enable(void *arg)
+{
+	struct mvneta_port *pp = arg;
+
+	enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
+}
+
+static void mvneta_percpu_disable(void *arg)
+{
+	struct mvneta_port *pp = arg;
+
+	disable_percpu_irq(pp->dev->irq);
+}
+
 /* Change the device mtu */
 static int mvneta_change_mtu(struct net_device *dev, int mtu)
 {
@@ -3107,6 +3121,7 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 	 * reallocation of the queues
 	 */
 	mvneta_stop_dev(pp);
+	on_each_cpu(mvneta_percpu_disable, pp, true);
 
 	mvneta_cleanup_txqs(pp);
 	mvneta_cleanup_rxqs(pp);
@@ -3130,6 +3145,7 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		return ret;
 	}
 
+	on_each_cpu(mvneta_percpu_enable, pp, true);
 	mvneta_start_dev(pp);
 	mvneta_port_up(pp);
 
@@ -3283,20 +3299,6 @@ static void mvneta_mdio_remove(struct mvneta_port *pp)
 	pp->phy_dev = NULL;
 }
 
-static void mvneta_percpu_enable(void *arg)
-{
-	struct mvneta_port *pp = arg;
-
-	enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
-}
-
-static void mvneta_percpu_disable(void *arg)
-{
-	struct mvneta_port *pp = arg;
-
-	disable_percpu_irq(pp->dev->irq);
-}
-
 /* Electing a CPU must be done in an atomic way: it should be done
  * after or before the removal/insertion of a CPU and this function is
  * not reentrant.
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] net: mvneta: fix changing MTU when using per-cpu processing
  2016-04-01 13:21 ` Marcin Wojtas
@ 2016-04-01 13:22   ` Marcin Wojtas
  -1 siblings, 0 replies; 6+ messages in thread
From: Marcin Wojtas @ 2016-04-01 13:22 UTC (permalink / raw)
  To: David S. Miller
  Cc: linux-kernel, netdev, linux-arm-kernel, Russell King - ARM Linux,
	Sebastian Hesselbarth, Andrew Lunn, Jason Cooper,
	Thomas Petazzoni, Gregory Clément, nadavh, Lior Amsalem,
	Sebastian Careba, Marcin Wojtas, Grzegorz Jaszczyk

Hi David,

I've just realized I forgot to mention that this patch is intended
for the 'net' tree.

Best regards,
Marcin

2016-04-01 15:21 GMT+02:00 Marcin Wojtas <mw@semihalf.com>:
> After per-cpu processing was enabled, it turned out that, under heavy
> load, changing the MTU could leave all of the port's interrupts blocked,
> making it impossible to transmit data after the change.
>
> This commit fixes the above issue by disabling the per-cpu interrupts
> while the TXQs and RXQs are being reconfigured.
>
> Signed-off-by: Marcin Wojtas <mw@semihalf.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 30 ++++++++++++++++--------------
>  1 file changed, 16 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index fee6a91..a433de9 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -3083,6 +3083,20 @@ static int mvneta_check_mtu_valid(struct net_device *dev, int mtu)
>         return mtu;
>  }
>
> +static void mvneta_percpu_enable(void *arg)
> +{
> +       struct mvneta_port *pp = arg;
> +
> +       enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
> +}
> +
> +static void mvneta_percpu_disable(void *arg)
> +{
> +       struct mvneta_port *pp = arg;
> +
> +       disable_percpu_irq(pp->dev->irq);
> +}
> +
>  /* Change the device mtu */
>  static int mvneta_change_mtu(struct net_device *dev, int mtu)
>  {
> @@ -3107,6 +3121,7 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
>          * reallocation of the queues
>          */
>         mvneta_stop_dev(pp);
> +       on_each_cpu(mvneta_percpu_disable, pp, true);
>
>         mvneta_cleanup_txqs(pp);
>         mvneta_cleanup_rxqs(pp);
> @@ -3130,6 +3145,7 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
>                 return ret;
>         }
>
> +       on_each_cpu(mvneta_percpu_enable, pp, true);
>         mvneta_start_dev(pp);
>         mvneta_port_up(pp);
>
> @@ -3283,20 +3299,6 @@ static void mvneta_mdio_remove(struct mvneta_port *pp)
>         pp->phy_dev = NULL;
>  }
>
> -static void mvneta_percpu_enable(void *arg)
> -{
> -       struct mvneta_port *pp = arg;
> -
> -       enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
> -}
> -
> -static void mvneta_percpu_disable(void *arg)
> -{
> -       struct mvneta_port *pp = arg;
> -
> -       disable_percpu_irq(pp->dev->irq);
> -}
> -
>  /* Electing a CPU must be done in an atomic way: it should be done
>   * after or before the removal/insertion of a CPU and this function is
>   * not reentrant.
> --
> 1.8.3.1
>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] net: mvneta: fix changing MTU when using per-cpu processing
  2016-04-01 13:21 ` Marcin Wojtas
@ 2016-04-01 19:18   ` David Miller
  -1 siblings, 0 replies; 6+ messages in thread
From: David Miller @ 2016-04-01 19:18 UTC (permalink / raw)
  To: mw
  Cc: linux-kernel, linux-arm-kernel, netdev, linux,
	sebastian.hesselbarth, andrew, jason, thomas.petazzoni,
	gregory.clement, nadavh, alior, nitroshift, jaz

From: Marcin Wojtas <mw@semihalf.com>
Date: Fri,  1 Apr 2016 15:21:18 +0200

> After per-cpu processing was enabled, it turned out that, under heavy
> load, changing the MTU could leave all of the port's interrupts blocked,
> making it impossible to transmit data after the change.
> 
> This commit fixes the above issue by disabling the per-cpu interrupts
> while the TXQs and RXQs are being reconfigured.
> 
> Signed-off-by: Marcin Wojtas <mw@semihalf.com>

Applied, thanks.

When I reviewed this I was worried that this was yet another case where
the ndo op could be invoked in an atomic or similar context, in which
case on_each_cpu() would be illegal to use.

But that appears not to be the case, so this change is fine.

Thanks.

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2016-04-01 19:18 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-01 13:21 [PATCH] net: mvneta: fix changing MTU when using per-cpu processing Marcin Wojtas
2016-04-01 13:22 ` Marcin Wojtas
2016-04-01 19:18 ` David Miller
