* [RFC net-next 0/4] net: Improving network scheduling latencies
@ 2021-12-10 19:35 Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze

I am working on an application to showcase TSN use cases. That
application wakes up periodically, reads packet(s) from the network,
sends packet(s), then goes back to sleep. Endpoints are synchronized
through gPTP, and an 802.1Qbv schedule is in place to ensure packets are
sent at a fixed time. Right now, we achieve an overall period of 2ms,
which leaves 500µs between the time the application is supposed to
wake up and the time the last packet is sent. We use an NXP 5.10.x
kernel with the PREEMPT_RT patches.

I've been focusing lately on reducing the period, to see how close a
Linux-based system could get to a micro-controller running a "real-time"
OS. I've been able to achieve 500µs overall (125µs for the app itself)
by using AF_XDP sockets, but this also led to identifying several
sources of "scheduling" latencies, which I've tried to resolve with the
attached patches. The main culprit so far has been
local_bh_disable()/local_bh_enable() sections running in lower-priority
tasks, requiring costly context switches along with priority
inheritance. I've removed the offending sections without significant
problems so far, but I'm not entirely clear on why
local_bh_disable()/local_bh_enable() were used in those places: is it a
simple oversight, an excess of caution, or am I missing something more
fundamental in the way those locks are used?
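
For reference, the application's periodic cycle looks roughly like the
sketch below. It is only a timing skeleton: the AF_XDP socket setup and
descriptor handling are reduced to comments, and the scheduling priority
and period values are purely illustrative.

/* Hypothetical outline of the periodic loop described above. Only the
 * timing skeleton is shown; AF_XDP setup and descriptor handling are
 * reduced to comments, and the priority/period values are examples. */
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 500000L /* 500µs cycle, illustrative */

int main(void)
{
	struct sched_param sp = { .sched_priority = 80 }; /* example prio */
	struct timespec next;

	/* Run as an RT task so wake-up latency depends on the kernel's
	 * scheduling latencies rather than on best-effort load. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp))
		perror("sched_setscheduler");

	clock_gettime(CLOCK_MONOTONIC, &next);
	for (;;) {
		/* Sleep until the absolute start of the next cycle. */
		next.tv_nsec += PERIOD_NS;
		if (next.tv_nsec >= 1000000000L) {
			next.tv_nsec -= 1000000000L;
			next.tv_sec++;
		}
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

		/* 1. Drain the AF_XDP RX ring for this cycle's packet(s). */
		/* 2. Queue the TX descriptor(s); the 802.1Qbv gates on the
		 *    NIC enforce the exact transmit time. */
	}
	return 0;
}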

Thanks,
Yannick

Yannick Vignon (4):
  net: stmmac: remove unnecessary locking around PTP clock reads
  net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode
  net: stmmac: move to threaded IRQ
  net: napi threaded: remove unnecessary locking

 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 44 +++++++++++++++++++++++++-------------------
 drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c  |  2 --
 net/core/dev.c                                    |  2 --
 3 files changed, 25 insertions(+), 23 deletions(-)




* [RFC net-next 1/4] net: napi threaded: remove unnecessary locking
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  2021-12-11  3:10   ` Jakub Kicinski
  2021-12-10 19:35 ` [RFC net-next 2/4] net: stmmac: move to threaded IRQ Yannick Vignon
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

NAPI polling is normally protected by local_bh_disable()/local_bh_enable()
calls, to prevent that code from being executed concurrently due to the
softirq design. When a NAPI instance is assigned its own dedicated kernel
thread, however, that concurrent execution can no longer happen.

Removing the lock helps lower latencies when handling real-time traffic
(whose processing could otherwise be delayed by ongoing processing of
best-effort traffic), and should also have a positive effect on overall
performance.

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 net/core/dev.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 15ac064b5562..e35d90e70c75 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7131,13 +7131,11 @@ static int napi_threaded_poll(void *data)
 		for (;;) {
 			bool repoll = false;
 
-			local_bh_disable();
 
 			have = netpoll_poll_lock(napi);
 			__napi_poll(napi, &repoll);
 			netpoll_poll_unlock(have);
 
-			local_bh_enable();
 
 			if (!repoll)
 				break;
-- 
2.25.1
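
For completeness: the code path touched here only runs once threaded
NAPI has been enabled for the device, typically through the per-netdev
"threaded" sysfs attribute (or dev_set_threaded() from a driver). A
minimal userspace sketch, with "eth0" as a placeholder interface name:

/* Enable threaded NAPI on an interface via its sysfs attribute.
 * "eth0" is a placeholder; substitute the real interface name. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/class/net/eth0/threaded", "w");

	if (!f) {
		perror("open threaded attribute");
		return 1;
	}
	/* 0 = classic softirq polling (default), 1 = per-NAPI kthreads */
	fputs("1\n", f);
	return fclose(f) ? 1 : 0;
}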



* [RFC net-next 2/4] net: stmmac: move to threaded IRQ
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  2021-12-13  7:09   ` Joakim Zhang
  2021-12-10 19:35 ` [RFC net-next 3/4] net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 4/4] net: stmmac: remove unnecessary locking around PTP clock reads Yannick Vignon
  3 siblings, 1 reply; 8+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

WIP (seems to generate warnings/errors on startup)

When an IRQ is forced threaded, execution of the handler remains protected
by local_bh_disable()/local_bh_enable() calls to keep the semantics of the
IRQ context and avoid deadlocks. However, this also creates a contention
point where a higher-priority interrupt handler gets blocked by a
lower-priority task already holding the lock. Even though priority
inheritance kicks in in such a case, the lower-priority task can still
execute for an indefinite time.

Move the stmmac interrupts to be explicitly threaded, so that high-priority
traffic can be processed without delay even if another piece of code is
already running with BH disabled.

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 28 +++++++++----------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 748195697e5a..8bf24902be3c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3460,8 +3460,8 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
 	/* For common interrupt */
 	int_name = priv->int_name_mac;
 	sprintf(int_name, "%s:%s", dev->name, "mac");
-	ret = request_irq(dev->irq, stmmac_mac_interrupt,
-			  0, int_name, dev);
+	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
+			  IRQF_ONESHOT, int_name, dev);
 	if (unlikely(ret < 0)) {
 		netdev_err(priv->dev,
 			   "%s: alloc mac MSI %d (error: %d)\n",
@@ -3476,9 +3476,9 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
 	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
 		int_name = priv->int_name_wol;
 		sprintf(int_name, "%s:%s", dev->name, "wol");
-		ret = request_irq(priv->wol_irq,
-				  stmmac_mac_interrupt,
-				  0, int_name, dev);
+		ret = request_threaded_irq(priv->wol_irq,
+				  NULL, stmmac_mac_interrupt,
+				  IRQF_ONESHOT, int_name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: alloc wol MSI %d (error: %d)\n",
@@ -3494,9 +3494,9 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
 	if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
 		int_name = priv->int_name_lpi;
 		sprintf(int_name, "%s:%s", dev->name, "lpi");
-		ret = request_irq(priv->lpi_irq,
-				  stmmac_mac_interrupt,
-				  0, int_name, dev);
+		ret = request_threaded_irq(priv->lpi_irq,
+				  NULL, stmmac_mac_interrupt,
+				  IRQF_ONESHOT, int_name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: alloc lpi MSI %d (error: %d)\n",
@@ -3605,8 +3605,8 @@ static int stmmac_request_irq_single(struct net_device *dev)
 	enum request_irq_err irq_err;
 	int ret;
 
-	ret = request_irq(dev->irq, stmmac_interrupt,
-			  IRQF_SHARED, dev->name, dev);
+	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
+			  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
 	if (unlikely(ret < 0)) {
 		netdev_err(priv->dev,
 			   "%s: ERROR: allocating the IRQ %d (error: %d)\n",
@@ -3619,8 +3619,8 @@ static int stmmac_request_irq_single(struct net_device *dev)
 	 * is used for WoL
 	 */
 	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
-		ret = request_irq(priv->wol_irq, stmmac_interrupt,
-				  IRQF_SHARED, dev->name, dev);
+		ret = request_threaded_irq(priv->wol_irq, NULL, stmmac_interrupt,
+				  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: ERROR: allocating the WoL IRQ %d (%d)\n",
@@ -3632,8 +3632,8 @@ static int stmmac_request_irq_single(struct net_device *dev)
 
 	/* Request the IRQ lines */
 	if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
-		ret = request_irq(priv->lpi_irq, stmmac_interrupt,
-				  IRQF_SHARED, dev->name, dev);
+		ret = request_threaded_irq(priv->lpi_irq, NULL, stmmac_interrupt,
+				  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
 		if (unlikely(ret < 0)) {
 			netdev_err(priv->dev,
 				   "%s: ERROR: allocating the LPI IRQ %d (%d)\n",
-- 
2.25.1
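
As a side note on the API used above: request_threaded_irq() also
accepts a quick hard-IRQ handler returning IRQ_WAKE_THREAD, in which
case IRQF_ONESHOT is not required. A generic, hedged sketch for
comparison (all names below are hypothetical, not stmmac code):

/* Hedged sketch of the two-stage threaded-IRQ pattern. */
#include <linux/interrupt.h>

static irqreturn_t example_quick_check(int irq, void *data)
{
	/* Hard-IRQ context: only acknowledge/mask the interrupt source,
	 * then wake the handler thread. */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t example_thread_fn(int irq, void *data)
{
	/* Runs in a dedicated kernel thread whose priority can be tuned,
	 * and which is not wrapped in local_bh_disable()/local_bh_enable()
	 * the way a forced-threaded handler is. */
	return IRQ_HANDLED;
}

static int example_setup_irq(int irq, void *dev)
{
	/* IRQF_ONESHOT keeps the line masked until the thread returns;
	 * it is mandatory when the hard handler is NULL (the form used
	 * in the patch above), optional with a quick-check handler. */
	return request_threaded_irq(irq, example_quick_check,
				    example_thread_fn, 0, "example", dev);
}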



* [RFC net-next 3/4] net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 2/4] net: stmmac: move to threaded IRQ Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  2021-12-10 19:35 ` [RFC net-next 4/4] net: stmmac: remove unnecessary locking around PTP clock reads Yannick Vignon
  3 siblings, 0 replies; 8+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

In threaded mode, a NAPI instance cannot execute concurrently in a
separate context; it only runs in its assigned kernel thread.

Replace the calls to __netif_tx_lock_bh/__netif_tx_unlock_bh with their
non-BH versions to avoid disabling BH in that case. This prevents
high-priority traffic from being blocked by another piece of code already
running with BH disabled.

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c    | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 8bf24902be3c..2190b40fa92e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2481,13 +2481,16 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
  * @queue: TX queue index
  * Description: it reclaims the transmit resources after transmission completes.
  */
-static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue, bool is_threaded)
 {
 	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
 	unsigned int bytes_compl = 0, pkts_compl = 0;
 	unsigned int entry, xmits = 0, count = 0;
 
-	__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
+	if (is_threaded)
+		__netif_tx_lock(netdev_get_tx_queue(priv->dev, queue), smp_processor_id());
+	else
+		__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
 
 	priv->xstats.tx_clean++;
 
@@ -2646,7 +2649,10 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
 			      STMMAC_COAL_TIMER(priv->tx_coal_timer[queue]),
 			      HRTIMER_MODE_REL);
 
-	__netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue));
+	if (is_threaded)
+		__netif_tx_unlock(netdev_get_tx_queue(priv->dev, queue));
+	else
+		__netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue));
 
 	/* Combine decisions from TX clean and XSK TX */
 	return max(count, xmits);
@@ -5377,7 +5383,7 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
 
 	priv->xstats.napi_poll++;
 
-	work_done = stmmac_tx_clean(priv, budget, chan);
+	work_done = stmmac_tx_clean(priv, budget, chan, !!napi->thread);
 	work_done = min(work_done, budget);
 
 	if (work_done < budget && napi_complete_done(napi, work_done)) {
@@ -5401,7 +5407,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
 
 	priv->xstats.napi_poll++;
 
-	tx_done = stmmac_tx_clean(priv, budget, chan);
+	tx_done = stmmac_tx_clean(priv, budget, chan, !!napi->thread);
 	tx_done = min(tx_done, budget);
 
 	rx_done = stmmac_rx_zc(priv, budget, chan);
-- 
2.25.1
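
One possible follow-up, not part of this patch: the conditional could be
factored into small helpers instead of being open-coded around both the
lock and the unlock. A hedged sketch (helper names invented here):

#include <linux/netdevice.h>

static void example_tx_queue_lock(struct netdev_queue *txq, bool threaded)
{
	if (threaded)
		/* Threaded NAPI: take the queue lock without touching BH,
		 * so this section does not need local_bh_disable() and the
		 * contention it brings on PREEMPT_RT. The cpu id is only
		 * used for lock-owner tracking. */
		__netif_tx_lock(txq, smp_processor_id());
	else
		/* Softirq NAPI: keep the _bh variant, as before. */
		__netif_tx_lock_bh(txq);
}

static void example_tx_queue_unlock(struct netdev_queue *txq, bool threaded)
{
	if (threaded)
		__netif_tx_unlock(txq);
	else
		__netif_tx_unlock_bh(txq);
}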



* [RFC net-next 4/4] net: stmmac: remove unnecessary locking around PTP clock reads
  2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
                   ` (2 preceding siblings ...)
  2021-12-10 19:35 ` [RFC net-next 3/4] net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode Yannick Vignon
@ 2021-12-10 19:35 ` Yannick Vignon
  3 siblings, 0 replies; 8+ messages in thread
From: Yannick Vignon @ 2021-12-10 19:35 UTC (permalink / raw)
  To: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	mingkai.hu, Joakim Zhang, sebastien.laveze
  Cc: Yannick Vignon

From: Yannick Vignon <yannick.vignon@nxp.com>

Reading the PTP clock is a simple operation requiring only 2 register reads,
while the time can be set atomically through another set of registers. Under
a PREEMPT_RT kernel, protecting the reads with a spin_lock is
counter-productive:
 * if the task is preempted between the 2 reads, the returned time value
could become inconsistent,
 * if the second task preempting the first has a higher priority but needs
to read the time as well, it will incur 2 costly context switches, which
will pretty much always be more expensive than disabling preemption with a
real spin_lock.

Remove the unneeded locking around the gettime call.

Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
index 580cc035536b..8f0dcac23b1e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
@@ -137,9 +137,7 @@ static int stmmac_get_time(struct ptp_clock_info *ptp, struct timespec64 *ts)
 	unsigned long flags;
 	u64 ns = 0;
 
-	spin_lock_irqsave(&priv->ptp_lock, flags);
 	stmmac_get_systime(priv, priv->ptpaddr, &ns);
-	spin_unlock_irqrestore(&priv->ptp_lock, flags);
 
 	*ts = ns_to_timespec64(ns);
 
-- 
2.25.1
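
For readers wondering how a seconds/nanoseconds register pair can be
read consistently without a lock, the usual pattern is a retry loop
keyed on the seconds register. A generic, hedged sketch follows; the
register offsets and names are placeholders, not the actual stmmac ones:

#include <linux/io.h>
#include <linux/time64.h>

#define EXAMPLE_PTP_SEC_REG	0x08	/* placeholder offset */
#define EXAMPLE_PTP_NSEC_REG	0x0c	/* placeholder offset */

static u64 example_get_systime(void __iomem *ioaddr)
{
	u32 sec0, sec1, nsec;

	sec1 = readl(ioaddr + EXAMPLE_PTP_SEC_REG);
	do {
		sec0 = sec1;
		nsec = readl(ioaddr + EXAMPLE_PTP_NSEC_REG);
		sec1 = readl(ioaddr + EXAMPLE_PTP_SEC_REG);
		/* A seconds rollover (or a long preemption) between the
		 * two reads shows up as a change here; just retry. */
	} while (sec0 != sec1);

	return (u64)sec0 * NSEC_PER_SEC + nsec;
}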



* Re: [RFC net-next 1/4] net: napi threaded: remove unnecessary locking
  2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
@ 2021-12-11  3:10   ` Jakub Kicinski
  0 siblings, 0 replies; 8+ messages in thread
From: Jakub Kicinski @ 2021-12-11  3:10 UTC (permalink / raw)
  To: Yannick Vignon
  Cc: Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jose Abreu, Eric Dumazet, Wei Wang,
	Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang, mingkai.hu,
	Joakim Zhang, sebastien.laveze, Yannick Vignon

On Fri, 10 Dec 2021 20:35:53 +0100 Yannick Vignon wrote:
> From: Yannick Vignon <yannick.vignon@nxp.com>
> 
> NAPI polling is normally protected by local_bh_disable()/local_bh_enable()
> calls, to prevent that code from being executed concurrently due to the
> softirq design. When a NAPI instance is assigned its own dedicated kernel
> thread, however, that concurrent execution can no longer happen.

Meaning NAPI will now run in process context? That's not possible:
lots of things depend on being in BH context. There are per-CPU
cross-NAPI resources, and I think we have some expectations that we
don't need to take RCU locks here and there...

> Removing the lock helps lower latencies when handling real-time traffic
> (whose processing could otherwise be delayed by ongoing processing of
> best-effort traffic), and should also have a positive effect on overall
> performance.
> 
> Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
> ---
>  net/core/dev.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 15ac064b5562..e35d90e70c75 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -7131,13 +7131,11 @@ static int napi_threaded_poll(void *data)
>  		for (;;) {
>  			bool repoll = false;
>  
> -			local_bh_disable();
>  
>  			have = netpoll_poll_lock(napi);
>  			__napi_poll(napi, &repoll);
>  			netpoll_poll_unlock(have);
>  
> -			local_bh_enable();
>  
>  			if (!repoll)
>  				break;
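
To make the concern above concrete: code reached from NAPI polling may
rely on guarantees that only hold while BH is disabled. A hypothetical
illustration (the per-CPU variable and helper are invented for the
example, not actual networking code):

#include <linux/percpu.h>
#include <linux/types.h>

static DEFINE_PER_CPU(u64, example_rx_packets);

static void example_rx_accounting(void)
{
	/* __this_cpu_inc() is a non-atomic read-modify-write: it is only
	 * safe if this code cannot be preempted, migrated, or re-entered
	 * from softirq on the same CPU, which is exactly what
	 * local_bh_disable() guarantees and a plain kernel thread does not. */
	__this_cpu_inc(example_rx_packets);

	/* Similarly, parts of the stack treat a BH-disabled section as an
	 * implicit RCU read-side critical section instead of taking
	 * rcu_read_lock() explicitly. */
}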



* RE: [RFC net-next 2/4] net: stmmac: move to threaded IRQ
  2021-12-10 19:35 ` [RFC net-next 2/4] net: stmmac: move to threaded IRQ Yannick Vignon
@ 2021-12-13  7:09   ` Joakim Zhang
  0 siblings, 0 replies; 8+ messages in thread
From: Joakim Zhang @ 2021-12-13  7:09 UTC (permalink / raw)
  To: Yannick Vignon (OSS),
	Giuseppe Cavallaro, Alexandre Torgue, netdev, Ong Boon Leong,
	David S. Miller, Jakub Kicinski, Jose Abreu, Eric Dumazet,
	Wei Wang, Alexander Lobakin, Vladimir Oltean, Xiaoliang Yang,
	Mingkai Hu, Sebastien Laveze
  Cc: Yannick Vignon


Hi Yannick,

> -----Original Message-----
> From: Yannick Vignon (OSS) <yannick.vignon@oss.nxp.com>
> Sent: December 11, 2021 3:36 AM
> To: Giuseppe Cavallaro <peppe.cavallaro@st.com>; Alexandre Torgue
> <alexandre.torgue@st.com>; netdev@vger.kernel.org; Ong Boon Leong
> <boon.leong.ong@intel.com>; David S. Miller <davem@davemloft.net>;
> Jakub Kicinski <kuba@kernel.org>; Jose Abreu <joabreu@synopsys.com>;
> Eric Dumazet <edumazet@google.com>; Wei Wang <weiwan@google.com>;
> Alexander Lobakin <alexandr.lobakin@intel.com>; Vladimir Oltean
> <olteanv@gmail.com>; Xiaoliang Yang <xiaoliang.yang_1@nxp.com>;
> Mingkai Hu <mingkai.hu@nxp.com>; Joakim Zhang
> <qiangqing.zhang@nxp.com>; Sebastien Laveze
> <sebastien.laveze@nxp.com>
> Cc: Yannick Vignon <yannick.vignon@nxp.com>
> Subject: [RFC net-next 2/4] net: stmmac: move to threaded IRQ
> 
> From: Yannick Vignon <yannick.vignon@nxp.com>
> 
> WIP (seems to generate warnings/errors on startup)
> 
> When an IRQ is forced threaded, execution of the handler remains protected
> by local_bh_disable()/local_bh_enable() calls to keep the semantics of the
> IRQ context and avoid deadlocks. However, this also creates a contention
> point where a higher prio interrupt handler gets blocked by a lower prio task
> already holding the lock. Even though priority inheritance kicks in in such a
> case, the lower prio task can still execute for an indefinite time.
> 
> Move the stmmac interrupts to be explicitly threaded, so that high priority
> traffic can be processed without delay even if another piece of code was
> already running with BH disabled.
> 
> Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com>
> ---
>  .../net/ethernet/stmicro/stmmac/stmmac_main.c | 28 +++++++++----------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index 748195697e5a..8bf24902be3c 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -3460,8 +3460,8 @@ static int stmmac_request_irq_multi_msi(struct
> net_device *dev)
>  	/* For common interrupt */
>  	int_name = priv->int_name_mac;
>  	sprintf(int_name, "%s:%s", dev->name, "mac");
> -	ret = request_irq(dev->irq, stmmac_mac_interrupt,
> -			  0, int_name, dev);
> +	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
> +			  IRQF_ONESHOT, int_name, dev);

Why change from stmmac_mac_interrupt() to stmmac_interrupt()? A copy-paste issue?

Best Regards,
Joakim Zhang
>  	if (unlikely(ret < 0)) {
>  		netdev_err(priv->dev,
>  			   "%s: alloc mac MSI %d (error: %d)\n", @@ -3476,9 +3476,9
> @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
>  	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
>  		int_name = priv->int_name_wol;
>  		sprintf(int_name, "%s:%s", dev->name, "wol");
> -		ret = request_irq(priv->wol_irq,
> -				  stmmac_mac_interrupt,
> -				  0, int_name, dev);
> +		ret = request_threaded_irq(priv->wol_irq,
> +				  NULL, stmmac_mac_interrupt,
> +				  IRQF_ONESHOT, int_name, dev);
>  		if (unlikely(ret < 0)) {
>  			netdev_err(priv->dev,
>  				   "%s: alloc wol MSI %d (error: %d)\n", @@ -3494,9
> +3494,9 @@ static int stmmac_request_irq_multi_msi(struct net_device
> *dev)
>  	if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
>  		int_name = priv->int_name_lpi;
>  		sprintf(int_name, "%s:%s", dev->name, "lpi");
> -		ret = request_irq(priv->lpi_irq,
> -				  stmmac_mac_interrupt,
> -				  0, int_name, dev);
> +		ret = request_threaded_irq(priv->lpi_irq,
> +				  NULL, stmmac_mac_interrupt,
> +				  IRQF_ONESHOT, int_name, dev);
>  		if (unlikely(ret < 0)) {
>  			netdev_err(priv->dev,
>  				   "%s: alloc lpi MSI %d (error: %d)\n", @@ -3605,8
> +3605,8 @@ static int stmmac_request_irq_single(struct net_device *dev)
>  	enum request_irq_err irq_err;
>  	int ret;
> 
> -	ret = request_irq(dev->irq, stmmac_interrupt,
> -			  IRQF_SHARED, dev->name, dev);
> +	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
> +			  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
>  	if (unlikely(ret < 0)) {
>  		netdev_err(priv->dev,
>  			   "%s: ERROR: allocating the IRQ %d (error: %d)\n", @@
> -3619,8 +3619,8 @@ static int stmmac_request_irq_single(struct net_device
> *dev)
>  	 * is used for WoL
>  	 */
>  	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
> -		ret = request_irq(priv->wol_irq, stmmac_interrupt,
> -				  IRQF_SHARED, dev->name, dev);
> +		ret = request_threaded_irq(priv->wol_irq, NULL,
> stmmac_interrupt,
> +				  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
>  		if (unlikely(ret < 0)) {
>  			netdev_err(priv->dev,
>  				   "%s: ERROR: allocating the WoL IRQ %d (%d)\n", @@
> -3632,8 +3632,8 @@ static int stmmac_request_irq_single(struct net_device
> *dev)
> 
>  	/* Request the IRQ lines */
>  	if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
> -		ret = request_irq(priv->lpi_irq, stmmac_interrupt,
> -				  IRQF_SHARED, dev->name, dev);
> +		ret = request_threaded_irq(priv->lpi_irq, NULL, stmmac_interrupt,
> +				  IRQF_SHARED | IRQF_ONESHOT, dev->name, dev);
>  		if (unlikely(ret < 0)) {
>  			netdev_err(priv->dev,
>  				   "%s: ERROR: allocating the LPI IRQ %d (%d)\n",
> --
> 2.25.1



* Re: [RFC net-next 2/4] net: stmmac: move to threaded IRQ
@ 2021-12-11  9:49 kernel test robot
  0 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2021-12-11  9:49 UTC (permalink / raw)
  To: kbuild

[-- Attachment #1: Type: text/plain, Size: 13929 bytes --]

CC: kbuild-all@lists.01.org
In-Reply-To: <20211210193556.1349090-3-yannick.vignon@oss.nxp.com>
References: <20211210193556.1349090-3-yannick.vignon@oss.nxp.com>
TO: Yannick Vignon <yannick.vignon@oss.nxp.com>

Hi Yannick,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Yannick-Vignon/net-Improving-network-scheduling-latencies/20211211-035854
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git e1b539bd73a76dc8a7bf82befe6eac4ae79c76b3
:::::: branch date: 14 hours ago
:::::: commit date: 14 hours ago
config: x86_64-randconfig-m001-20211210 (https://download.01.org/0day-ci/archive/20211211/202112111750.g2bjweCW-lkp(a)intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

New smatch warnings:
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:3591 stmmac_request_irq_multi_msi() warn: 'dev->irq' not released on lines: 3591.

Old smatch warnings:
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:1736 init_dma_rx_desc_rings() warn: always true condition '(queue >= 0) => (0-u32max >= 0)'
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:1736 init_dma_rx_desc_rings() warn: always true condition '(queue >= 0) => (0-u32max >= 0)'
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:4744 stmmac_xdp_run_prog() error: (-2147483647) too low for ERR_PTR
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:4810 stmmac_dispatch_skb_zc() error: uninitialized symbol 'hash'.
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:4810 stmmac_dispatch_skb_zc() error: uninitialized symbol 'hash_type'.
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:5312 stmmac_rx() error: uninitialized symbol 'hash'.
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:5312 stmmac_rx() error: uninitialized symbol 'hash_type'.

vim +3591 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c

8532f613bc78b6 Ong Boon Leong 2021-03-26  3441  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3442  static int stmmac_request_irq_multi_msi(struct net_device *dev)
8532f613bc78b6 Ong Boon Leong 2021-03-26  3443  {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3444  	struct stmmac_priv *priv = netdev_priv(dev);
3e6dc7b650250f Wong Vee Khee  2021-06-11  3445  	enum request_irq_err irq_err;
8deec94c6040bb Ong Boon Leong 2021-04-01  3446  	cpumask_t cpu_mask;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3447  	int irq_idx = 0;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3448  	char *int_name;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3449  	int ret;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3450  	int i;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3451  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3452  	/* For common interrupt */
8532f613bc78b6 Ong Boon Leong 2021-03-26  3453  	int_name = priv->int_name_mac;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3454  	sprintf(int_name, "%s:%s", dev->name, "mac");
b5d79af3830aed Yannick Vignon 2021-12-10  3455  	ret = request_threaded_irq(dev->irq, NULL, stmmac_interrupt,
b5d79af3830aed Yannick Vignon 2021-12-10  3456  			  IRQF_ONESHOT, int_name, dev);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3457  	if (unlikely(ret < 0)) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3458  		netdev_err(priv->dev,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3459  			   "%s: alloc mac MSI %d (error: %d)\n",
8532f613bc78b6 Ong Boon Leong 2021-03-26  3460  			   __func__, dev->irq, ret);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3461  		irq_err = REQ_IRQ_ERR_MAC;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3462  		goto irq_error;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3463  	}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3464  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3465  	/* Request the Wake IRQ in case of another line
8532f613bc78b6 Ong Boon Leong 2021-03-26  3466  	 * is used for WoL
8532f613bc78b6 Ong Boon Leong 2021-03-26  3467  	 */
8532f613bc78b6 Ong Boon Leong 2021-03-26  3468  	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3469  		int_name = priv->int_name_wol;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3470  		sprintf(int_name, "%s:%s", dev->name, "wol");
b5d79af3830aed Yannick Vignon 2021-12-10  3471  		ret = request_threaded_irq(priv->wol_irq,
b5d79af3830aed Yannick Vignon 2021-12-10  3472  				  NULL, stmmac_mac_interrupt,
b5d79af3830aed Yannick Vignon 2021-12-10  3473  				  IRQF_ONESHOT, int_name, dev);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3474  		if (unlikely(ret < 0)) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3475  			netdev_err(priv->dev,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3476  				   "%s: alloc wol MSI %d (error: %d)\n",
8532f613bc78b6 Ong Boon Leong 2021-03-26  3477  				   __func__, priv->wol_irq, ret);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3478  			irq_err = REQ_IRQ_ERR_WOL;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3479  			goto irq_error;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3480  		}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3481  	}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3482  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3483  	/* Request the LPI IRQ in case of another line
8532f613bc78b6 Ong Boon Leong 2021-03-26  3484  	 * is used for LPI
8532f613bc78b6 Ong Boon Leong 2021-03-26  3485  	 */
8532f613bc78b6 Ong Boon Leong 2021-03-26  3486  	if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3487  		int_name = priv->int_name_lpi;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3488  		sprintf(int_name, "%s:%s", dev->name, "lpi");
b5d79af3830aed Yannick Vignon 2021-12-10  3489  		ret = request_threaded_irq(priv->lpi_irq,
b5d79af3830aed Yannick Vignon 2021-12-10  3490  				  NULL, stmmac_mac_interrupt,
b5d79af3830aed Yannick Vignon 2021-12-10  3491  				  IRQF_ONESHOT, int_name, dev);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3492  		if (unlikely(ret < 0)) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3493  			netdev_err(priv->dev,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3494  				   "%s: alloc lpi MSI %d (error: %d)\n",
8532f613bc78b6 Ong Boon Leong 2021-03-26  3495  				   __func__, priv->lpi_irq, ret);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3496  			irq_err = REQ_IRQ_ERR_LPI;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3497  			goto irq_error;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3498  		}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3499  	}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3500  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3501  	/* Request the Safety Feature Correctible Error line in
8532f613bc78b6 Ong Boon Leong 2021-03-26  3502  	 * case of another line is used
8532f613bc78b6 Ong Boon Leong 2021-03-26  3503  	 */
8532f613bc78b6 Ong Boon Leong 2021-03-26  3504  	if (priv->sfty_ce_irq > 0 && priv->sfty_ce_irq != dev->irq) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3505  		int_name = priv->int_name_sfty_ce;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3506  		sprintf(int_name, "%s:%s", dev->name, "safety-ce");
8532f613bc78b6 Ong Boon Leong 2021-03-26  3507  		ret = request_irq(priv->sfty_ce_irq,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3508  				  stmmac_safety_interrupt,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3509  				  0, int_name, dev);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3510  		if (unlikely(ret < 0)) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3511  			netdev_err(priv->dev,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3512  				   "%s: alloc sfty ce MSI %d (error: %d)\n",
8532f613bc78b6 Ong Boon Leong 2021-03-26  3513  				   __func__, priv->sfty_ce_irq, ret);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3514  			irq_err = REQ_IRQ_ERR_SFTY_CE;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3515  			goto irq_error;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3516  		}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3517  	}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3518  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3519  	/* Request the Safety Feature Uncorrectible Error line in
8532f613bc78b6 Ong Boon Leong 2021-03-26  3520  	 * case of another line is used
8532f613bc78b6 Ong Boon Leong 2021-03-26  3521  	 */
8532f613bc78b6 Ong Boon Leong 2021-03-26  3522  	if (priv->sfty_ue_irq > 0 && priv->sfty_ue_irq != dev->irq) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3523  		int_name = priv->int_name_sfty_ue;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3524  		sprintf(int_name, "%s:%s", dev->name, "safety-ue");
8532f613bc78b6 Ong Boon Leong 2021-03-26  3525  		ret = request_irq(priv->sfty_ue_irq,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3526  				  stmmac_safety_interrupt,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3527  				  0, int_name, dev);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3528  		if (unlikely(ret < 0)) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3529  			netdev_err(priv->dev,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3530  				   "%s: alloc sfty ue MSI %d (error: %d)\n",
8532f613bc78b6 Ong Boon Leong 2021-03-26  3531  				   __func__, priv->sfty_ue_irq, ret);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3532  			irq_err = REQ_IRQ_ERR_SFTY_UE;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3533  			goto irq_error;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3534  		}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3535  	}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3536  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3537  	/* Request Rx MSI irq */
8532f613bc78b6 Ong Boon Leong 2021-03-26  3538  	for (i = 0; i < priv->plat->rx_queues_to_use; i++) {
d68c2e1d19c540 Arnd Bergmann  2021-09-27  3539  		if (i >= MTL_MAX_RX_QUEUES)
3e0d5699a97571 Arnd Bergmann  2021-09-27  3540  			break;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3541  		if (priv->rx_irq[i] == 0)
8532f613bc78b6 Ong Boon Leong 2021-03-26  3542  			continue;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3543  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3544  		int_name = priv->int_name_rx_irq[i];
8532f613bc78b6 Ong Boon Leong 2021-03-26  3545  		sprintf(int_name, "%s:%s-%d", dev->name, "rx", i);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3546  		ret = request_irq(priv->rx_irq[i],
8532f613bc78b6 Ong Boon Leong 2021-03-26  3547  				  stmmac_msi_intr_rx,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3548  				  0, int_name, &priv->rx_queue[i]);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3549  		if (unlikely(ret < 0)) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3550  			netdev_err(priv->dev,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3551  				   "%s: alloc rx-%d  MSI %d (error: %d)\n",
8532f613bc78b6 Ong Boon Leong 2021-03-26  3552  				   __func__, i, priv->rx_irq[i], ret);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3553  			irq_err = REQ_IRQ_ERR_RX;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3554  			irq_idx = i;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3555  			goto irq_error;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3556  		}
8deec94c6040bb Ong Boon Leong 2021-04-01  3557  		cpumask_clear(&cpu_mask);
8deec94c6040bb Ong Boon Leong 2021-04-01  3558  		cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
8deec94c6040bb Ong Boon Leong 2021-04-01  3559  		irq_set_affinity_hint(priv->rx_irq[i], &cpu_mask);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3560  	}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3561  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3562  	/* Request Tx MSI irq */
8532f613bc78b6 Ong Boon Leong 2021-03-26  3563  	for (i = 0; i < priv->plat->tx_queues_to_use; i++) {
d68c2e1d19c540 Arnd Bergmann  2021-09-27  3564  		if (i >= MTL_MAX_TX_QUEUES)
3e0d5699a97571 Arnd Bergmann  2021-09-27  3565  			break;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3566  		if (priv->tx_irq[i] == 0)
8532f613bc78b6 Ong Boon Leong 2021-03-26  3567  			continue;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3568  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3569  		int_name = priv->int_name_tx_irq[i];
8532f613bc78b6 Ong Boon Leong 2021-03-26  3570  		sprintf(int_name, "%s:%s-%d", dev->name, "tx", i);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3571  		ret = request_irq(priv->tx_irq[i],
8532f613bc78b6 Ong Boon Leong 2021-03-26  3572  				  stmmac_msi_intr_tx,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3573  				  0, int_name, &priv->tx_queue[i]);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3574  		if (unlikely(ret < 0)) {
8532f613bc78b6 Ong Boon Leong 2021-03-26  3575  			netdev_err(priv->dev,
8532f613bc78b6 Ong Boon Leong 2021-03-26  3576  				   "%s: alloc tx-%d  MSI %d (error: %d)\n",
8532f613bc78b6 Ong Boon Leong 2021-03-26  3577  				   __func__, i, priv->tx_irq[i], ret);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3578  			irq_err = REQ_IRQ_ERR_TX;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3579  			irq_idx = i;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3580  			goto irq_error;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3581  		}
8deec94c6040bb Ong Boon Leong 2021-04-01  3582  		cpumask_clear(&cpu_mask);
8deec94c6040bb Ong Boon Leong 2021-04-01  3583  		cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
8deec94c6040bb Ong Boon Leong 2021-04-01  3584  		irq_set_affinity_hint(priv->tx_irq[i], &cpu_mask);
8532f613bc78b6 Ong Boon Leong 2021-03-26  3585  	}
8532f613bc78b6 Ong Boon Leong 2021-03-26  3586  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3587  	return 0;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3588  
8532f613bc78b6 Ong Boon Leong 2021-03-26  3589  irq_error:
8532f613bc78b6 Ong Boon Leong 2021-03-26  3590  	stmmac_free_irq(dev, irq_err, irq_idx);
8532f613bc78b6 Ong Boon Leong 2021-03-26 @3591  	return ret;
8532f613bc78b6 Ong Boon Leong 2021-03-26  3592  }
8532f613bc78b6 Ong Boon Leong 2021-03-26  3593  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org


Thread overview: 8 messages
2021-12-10 19:35 [RFC net-next 0/4] net: Improving network scheduling latencies Yannick Vignon
2021-12-10 19:35 ` [RFC net-next 1/4] net: napi threaded: remove unnecessary locking Yannick Vignon
2021-12-11  3:10   ` Jakub Kicinski
2021-12-10 19:35 ` [RFC net-next 2/4] net: stmmac: move to threaded IRQ Yannick Vignon
2021-12-13  7:09   ` Joakim Zhang
2021-12-10 19:35 ` [RFC net-next 3/4] net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode Yannick Vignon
2021-12-10 19:35 ` [RFC net-next 4/4] net: stmmac: remove unnecessary locking around PTP clock reads Yannick Vignon
2021-12-11  9:49 [RFC net-next 2/4] net: stmmac: move to threaded IRQ kernel test robot
