netdev.vger.kernel.org archive mirror
* [RFC net-next 0/4] net: Improving network scheduling latencies
@ 2021-12-10 19:35 Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze

I am working on an application to showcase TSN use cases. That
application wakes up periodically, reads packet(s) from the network,
sends packet(s), then goes back to sleep. Endpoints are synchronized
through gPTP, and a 802.1Qbv schedule is in place to ensure packets are
sent at a fixed time. Right now, we achieve an overall period of 2ms,
which leaves 500µs between the time the application is supposed to
wake up and the time the last packet is sent. We use an NXP 5.10.x
kernel with PREEMPT_RT patches.
I've lately been focusing on reducing the period, to see how close a
Linux-based system can get to a micro-controller running a "real-time"
OS. I've been able to achieve 500µs overall (125µs for the app itself)
by using AF_XDP sockets, but this also led to identifying several
sources of "scheduling" latencies, which I've tried to resolve with the
attached patches. The main culprit so far has been
local_bh_disable()/local_bh_enable() sections running in lower-priority
tasks, which require costly context switches along with priority
inheritance. I've removed the offending sections without significant
problems so far, but I'm not entirely clear on why
local_bh_disable()/local_bh_enable() was used in those places: is it a
simple oversight, an excess of caution, or am I missing something more
fundamental about the way those locks are used?
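
To make the contention concrete, here is a simplified, kernel-style
pseudocode sketch of the pattern these patches target. The helper names
are illustrative, not actual functions; the premise is that on
PREEMPT_RT, local_bh_disable() takes a per-CPU sleeping lock, so the two
sections below serialize against each other:

```c
/* Low-priority, best-effort task on CPU n: */
local_bh_disable();        /* acquires the per-CPU softirq lock on RT  */
process_best_effort_rx();  /* illustrative: may run for a long time    */
local_bh_enable();

/* High-priority TSN NAPI thread, same CPU: */
local_bh_disable();        /* blocks until the section above completes,
                            * costing two context switches plus a
                            * priority-inheritance boost of the
                            * low-priority task                         */
poll_tsn_queue();          /* illustrative */
local_bh_enable();
```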

Thanks,
Yannick

Yannick Vignon (4):
  net: stmmac: remove unnecessary locking around PTP clock reads
  net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode
  net: stmmac: move to threaded IRQ
  net: napi threaded: remove unnecessary locking

 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 44 +++++++++++++++++++++++++-------------------
 drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c  |  2 --
 net/core/dev.c                                    |  2 --
 3 files changed, 25 insertions(+), 23 deletions(-)



^ permalink raw reply	[flat|nested] 7+ messages in thread

* [RFC net-next 1/4] net: napi threaded: remove unnecessary locking
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  2021-12-11  3:10   ` Jakub Kicinski
  2021-12-10 19:35 ` [RFC net-next 2/4] net: stmmac: move to threaded IRQ Yannick Vignon
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

NAPI polling is normally protected by local_bh_disable()/local_bh_enable()
calls, to prevent that code from being executed concurrently due to the
softirq design. When NAPI instances are assigned their own dedicated kernel
threads, however, that concurrent execution can no longer happen.

Removing the lock helps lower latencies when handling real-time traffic
(which could otherwise be delayed by on-going processing of best-effort
traffic), and should also have a positive effect on overall performance.

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 net/core/dev.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 15ac064b5562..e35d90e70c75 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7131,13 +7131,11 @@ static int napi_threaded_poll(void *data)
 		for (;;) {
 			bool repoll = false;
 
-			local_bh_disable();
 
 			have = netpoll_poll_lock(napi);
 			__napi_poll(napi, &repoll);
 			netpoll_poll_unlock(have);
 
-			local_bh_enable();
 
 			if (!repoll)
 				break;
-- 
2.25.1



* [RFC net-next 2/4] net: stmmac: move to threaded IRQ
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  2021-12-13  7:09   ` Joakim Zhang
  2021-12-10 19:35 ` [RFC net-next 3/4] net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 4/4] net: stmmac: remove unnecessary locking around PTP clock reads Yannick Vignon
  3 siblings, 1 reply; 7+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

WIP (seems to generate warnings/errors on startup)

When an IRQ is forced threaded, execution of the handler remains protected
by local_bh_disable()/local_bh_enable() calls to keep the semantics of the
IRQ context and avoid deadlocks. However, this also creates a contention
point where a higher-priority interrupt handler gets blocked by a
lower-priority task already holding the lock. Even though priority
inheritance then kicks in, the lower-priority task can still execute for an
indefinite time.

Move the stmmac interrupts to explicitly threaded handlers, so that
high-priority traffic can be processed without delay even if another piece
of code is already running with BH disabled.

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 28 +++++++++----------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 748195697e5a..8bf24902be3c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3460,8 +3460,8 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
 	/* For common interrupt */
 	int_name = priv->int_name_mac;
 	sprintf(int_name, "%s:%s", dev->name, "mac");
-	ret = request_irq(dev->irq, stmmac_mac_interrupt,
-			  0, int_name, dev);
+	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
+			  IRQF_ONESHOT, int_name, dev);
 	if (unlikely(ret < 0)) {
 		netdev_err(priv->dev,
 			   "%s: alloc mac MSI %d (error: %d)\n",
@@ -3476,9 +3476,9 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
 	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
 		int_name = priv->int_name_wol;
 		sprintf(int_name, "%s:%s", dev->name, "wol");
-		ret = request_irq(priv->wol_irq,
-				  stmmac_mac_interrupt,
-				  0, int_name, dev);
+		ret = request_threaded_irq(priv->wol_irq,
+				  NULL, stmmac_mac_interrupt,
+				  IRQF_ONESHOT, int_name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: alloc wol MSI %d (error: %d)\n",
@@ -3494,9 +3494,9 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
 	if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
 		int_name = priv->int_name_lpi;
 		sprintf(int_name, "%s:%s", dev->name, "lpi");
-		ret = request_irq(priv->lpi_irq,
-				  stmmac_mac_interrupt,
-				  0, int_name, dev);
+		ret = request_threaded_irq(priv->lpi_irq,
+				  NULL, stmmac_mac_interrupt,
+				  IRQF_ONESHOT, int_name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: alloc lpi MSI %d (error: %d)\n",
@@ -3605,8 +3605,8 @@ static int stmmac_request_irq_single(struct net_device *dev)
 	enum request_irq_err irq_err;
 	int ret;
 
-	ret = request_irq(dev->irq, stmmac_interrupt,
-			  IRQF_SHARED, dev->name, dev);
+	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
+			  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
 	if (unlikely(ret < 0)) {
 		netdev_err(priv->dev,
 			   "%s: ERROR: allocating the IRQ %d (error: %d)\n",
@@ -3619,8 +3619,8 @@ static int stmmac_request_irq_single(struct net_device *dev)
 	 * is used for WoL
 	 */
 	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
-		ret = request_irq(priv->wol_irq, stmmac_interrupt,
-				  IRQF_SHARED, dev->name, dev);
+		ret = request_threaded_irq(priv->wol_irq, NULL, stmmac_interrupt,
+				  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: ERROR: allocating the WoL IRQ %d (%d)\n",
@@ -3632,8 +3632,8 @@ static int stmmac_request_irq_single(struct net_device *dev)
 
 	/* Request the IRQ lines */
 	if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
-		ret = request_irq(priv->lpi_irq, stmmac_interrupt,
-				  IRQF_SHARED, dev->name, dev);
+		ret = request_threaded_irq(priv->lpi_irq, NULL, stmmac_interrupt,
+				  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: ERROR: allocating the LPI IRQ %d (%d)\n",
-- 
2.25.1



* [RFC net-next 3/4] net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 2/4] net: stmmac: move to threaded IRQ Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 4/4] net: stmmac: remove unnecessary locking around PTP clock reads Yannick Vignon
  3 siblings, 0 replies; 7+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

In threaded mode, a NAPI instance cannot execute concurrently in a
separate context, but only in its assigned kernel thread.

Replace the calls to __netif_tx_lock_bh/__netif_tx_unlock_bh with their
non-bh versions to avoid disabling BH in that case. This prevents high
priority traffic from being blocked by another piece of code already
running with BH disabled.

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c    | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 8bf24902be3c..2190b40fa92e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2481,13 +2481,16 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
  * @queue: TX queue index
  * Description: it reclaims the transmit resources after transmission completes.
  */
-static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue, bool is_threaded)
 {
 	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
 	unsigned int bytes_compl = 0, pkts_compl = 0;
 	unsigned int entry, xmits = 0, count = 0;
 
-	__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
+	if (is_threaded)
+		__netif_tx_lock(netdev_get_tx_queue(priv->dev, queue), smp_processor_id());
+	else
+		__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
 
 	priv->xstats.tx_clean++;
 
@@ -2646,7 +2649,10 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
 			      STMMAC_COAL_TIMER(priv->tx_coal_timer[queue]),
 			      HRTIMER_MODE_REL);
 
-	__netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue));
+	if (is_threaded)
+		__netif_tx_unlock(netdev_get_tx_queue(priv->dev, queue));
+	else
+		__netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue));
 
 	/* Combine decisions from TX clean and XSK TX */
 	return max(count, xmits);
@@ -5377,7 +5383,7 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
 
 	priv->xstats.napi_poll++;
 
-	work_done = stmmac_tx_clean(priv, budget, chan);
+	work_done = stmmac_tx_clean(priv, budget, chan, !!napi->thread);
 	work_done = min(work_done, budget);
 
 	if (work_done < budget && napi_complete_done(napi, work_done)) {
@@ -5401,7 +5407,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
 
 	priv->xstats.napi_poll++;
 
-	tx_done = stmmac_tx_clean(priv, budget, chan);
+	tx_done = stmmac_tx_clean(priv, budget, chan, !!napi->thread);
 	tx_done = min(tx_done, budget);
 
 	rx_done = stmmac_rx_zc(priv, budget, chan);
-- 
2.25.1



* [RFC net-next 4/4] net: stmmac: remove unnecessary locking around PTP clock reads
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
                   ` (2 preceding siblings ...)
  2021-12-10 19:35 ` [RFC net-next 3/4] net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  3 siblings, 0 replies; 7+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

Reading the PTP clock is a simple operation requiring only 2 register
reads, while the time can be set atomically through another set of
registers. Under a PREEMPT_RT kernel, protecting the reads with a
spin_lock is counter-productive:
 * if the task is preempted between the 2 reads, the returned time value
could become inconsistent,
 * if a 2nd, higher-priority task preempts the 1st but needs to read the
time as well, it will require 2 costly context switches, which will
almost always cost more than disabling preemption with a real
spin_lock.

Remove the unneeded locking around the gettime call.
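
For reference, the read in question boils down to two successive MMIO
loads, sketched here as simplified pseudocode (register names as in
stmmac_ptp.h; the exact read order and registers depend on the MAC
variant's get_systime implementation):

```c
/* Pseudocode sketch of the 2-register PTP clock read. The hazard is a
 * sleep/preemption between the two reads, which an RT spin_lock does
 * not prevent anyway, since it does not disable preemption. */
sec  = readl(ptpaddr + PTP_STSR);    /* system time: seconds           */
nsec = readl(ptpaddr + PTP_STNSR);   /* system time: nanoseconds; if
                                      * the task sleeps between the two
                                      * reads, sec may be stale by the
                                      * time nsec is sampled           */
ns   = sec * 1000000000ULL + nsec;
```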

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
index 580cc035536b..8f0dcac23b1e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
@@ -137,9 +137,7 @@ static int stmmac_get_time(struct ptp_clock_info *ptp, struct timespec64 *ts)
 	unsigned long flags;
 	u64 ns = 0;
 
-	spin_lock_irqsave(&priv->ptp_lock, flags);
 	stmmac_get_systime(priv, priv->ptpaddr, &ns);
-	spin_unlock_irqrestore(&priv->ptp_lock, flags);
 
 	*ts = ns_to_timespec64(ns);
 
-- 
2.25.1



* Re: [RFC net-next 1/4] net: napi threaded: remove unnecessary locking
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
@ 2021-12-11  3:10   ` Jakub Kicinski
  0 siblings, 0 replies; 7+ messages in thread
From: Jakub Kicinski @ 2021-12-11  3:10 UTC (permalink / raw)
  To: Yannick Vignon
  Cc: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jose Abreu, Eric Dumazet, Wei Wang,
	Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang, mingkai.hu,
	Joakim Zhang, sebastien.laveze, Yannick Vignon

On Fri, 10 Dec 2021 20:35:53 +0100 Yannick Vignon wrote:
> From: Yannick Vignon <yannick.vignon@nxp.com>
> 
> NAPI polling is normally protected by local_bh_disable()/local_bh_enable()
> calls, to avoid that code from being executed concurrently due to the
> softirq design. When NAPI instances are assigned their own dedicated kernel
> thread however, that concurrent code execution can no longer happen.

Meaning NAPI will now run in process context? That's not possible:
lots of things depend on being in BH context. There are per-CPU
cross-NAPI resources, and I think we have some expectations that we
don't need to take RCU locks here and there...

> Removing the lock helps lower latencies when handling real-time traffic
> (whose processing could still be delayed because of on-going processing of
> best-effort traffic), and should also have a positive effect on overall
> performance.
> 
> Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
> ---
>  net/core/dev.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 15ac064b5562..e35d90e70c75 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -7131,13 +7131,11 @@ static int napi_threaded_poll(void *data)
>  		for (;;) {
>  			bool repoll = false;
>  
> -			local_bh_disable();
>  
>  			have = netpoll_poll_lock(napi);
>  			__napi_poll(napi, &repoll);
>  			netpoll_poll_unlock(have);
>  
> -			local_bh_enable();
>  
>  			if (!repoll)
>  				break;



* RE: [RFC net-next 2/4] net: stmmac: move to threaded IRQ
  2021-12-10 19:35 ` [RFC net-next 2/4] net: stmmac: move to threaded IRQ Yannick Vignon
@ 2021-12-13  7:09   ` Joakim Zhang
  0 siblings, 0 replies; 7+ messages in thread
From: Joakim Zhang @ 2021-12-13  7:09 UTC (permalink / raw)
  To: Yannick Vignon (OSS),
	Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	Mingkai Hu, Sebastien Laveze
  Cc: Yannick Vignon


Hi Yannick,

> -----Original Message-----
> From: Yannick Vignon (OSS) <yannick.vignon@oss.nxp.com>
> Sent: December 11, 2021 3:36
> To: Giuseppe Cavallaro <peppe.cavallaro@st.com>; Alexandre Torgue
> <alexandre.torgue@st.com>; netdev@vger.kernel.org; Ong Boon Leong
> <boon.leong.ong@intel.com>; David S. Miller <davem@davemloft.net>;
> Jakub Kicinski <kuba@kernel.org>; Jose Abreu <joabreu@synopsys.com>;
> Eric Dumazet <edumazet@google.com>; Wei Wang <weiwan@google.com>;
> Alexander Lobakin <alexandr.lobakin@intel.com>; Vladimir Oltean
> <olteanv@gmail.com>; Xiaoliang Yang <xiaoliang.yang_1@nxp.com>;
> Mingkai Hu <mingkai.hu@nxp.com>; Joakim Zhang
> <qiangqing.zhang@nxp.com>; Sebastien Laveze
> <sebastien.laveze@nxp.com>
> Cc: Yannick Vignon <yannick.vignon@nxp.com>
> Subject: [RFC net-next 2/4] net: stmmac: move to threaded IRQ
> 
> From: Yannick Vignon <yannick.vignon@nxp.com>
> 
> WIP (seems to generate warnings/error on startup)
> 
> When an IRQ is forced threaded, execution of the handler remains protected
> by local_bh_disable()/local_bh_enable() calls to keep the semantics of the
> IRQ context and avoid deadlocks. However, this also creates a contention
> point where a higher prio interrupt handler gets blocked by a lower prio task
> already holding the lock. Even though priority inheritance kicks in in such a
> case, the lower prio task can still execute for an indefinite time.
> 
> Move the stmmac interrupts to be explicitely threaded, so that high priority
> traffic can be processed without delay even if another piece of code was
> already running with BH disabled.
> 
> Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
> ---
>  .../net/ethernet/stmicro/stmmac/stmmac_main.c | 28 +++++++++----------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index 748195697e5a..8bf24902be3c 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -3460,8 +3460,8 @@ static int stmmac_request_irq_multi_msi(struct
> net_device *dev)
>  	/* For common interrupt */
>  	int_name = priv->int_name_mac;
>  	sprintf(int_name, "%s:%s", dev->name, "mac");
> -	ret = request_irq(dev->irq, stmmac_mac_interrupt,
> -			  0, int_name, dev);
> +	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
> +			  IRQF_ONESHOT, int_name, dev);

Why change from stmmac_mac_interrupt() to stmmac_interrupt()? A copy-paste issue?

Best Regards,
Joakim Zhang



end of thread, other threads:[~2021-12-13  7:09 UTC | newest]

