* [PATCH net-next 0/5] net: give napi_threaded_poll() some love
@ 2023-04-21  9:43 Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 1/5] net: add debugging checks in skb_attempt_defer_free() Eric Dumazet
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Eric Dumazet @ 2023-04-21  9:43 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, eric.dumazet, Eric Dumazet

There is interest in reverting commit 4cd13c21b207
("softirq: Let ksoftirqd do its job") and using
napi_threaded_poll() mode instead.

https://lore.kernel.org/netdev/140f61e2e1fcb8cf53619709046e312e343b53ca.camel@redhat.com/T/#m8a8f5b09844adba157ad0d22fc1233d97013de50

Before doing so, make sure napi_threaded_poll() benefits
from recent core stack improvements, to further reduce
softirq triggers.
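
For context, threaded NAPI polling can be toggled per device through the
"threaded" attribute in sysfs, on kernels that expose it. A minimal
user-space sketch, with the interface name purely illustrative and not
part of this series:

    #include <stdio.h>

    int main(void)
    {
            /* "1" selects threaded NAPI, "0" the softirq-based mode */
            FILE *f = fopen("/sys/class/net/eth0/threaded", "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("1", f);
            return fclose(f) ? 1 : 0;
    }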

Eric Dumazet (5):
  net: add debugging checks in skb_attempt_defer_free()
  net: do not provide hard irq safety for sd->defer_lock
  net: move skb_defer_free_flush() up
  net: make napi_threaded_poll() aware of sd->defer_list
  net: optimize napi_threaded_poll() vs RPS/RFS

 include/linux/netdevice.h |  3 +++
 net/core/dev.c            | 57 +++++++++++++++++++++++----------------
 net/core/skbuff.c         |  8 +++---
 3 files changed, 42 insertions(+), 26 deletions(-)

-- 
2.40.0.634.g4ca3ef3211-goog



* [PATCH net-next 1/5] net: add debugging checks in skb_attempt_defer_free()
  2023-04-21  9:43 [PATCH net-next 0/5] net: give napi_threaded_poll() some love Eric Dumazet
@ 2023-04-21  9:43 ` Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 2/5] net: do not provide hard irq safety for sd->defer_lock Eric Dumazet
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Eric Dumazet @ 2023-04-21  9:43 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, eric.dumazet, Eric Dumazet

Make sure skbs that are stored in softnet_data.defer_list
do not have a dst attached.

Also make sure the skb was orphaned.

Link: https://lore.kernel.org/netdev/CANn89iJuEVe72bPmEftyEJHLzzN=QNR2yueFjTxYXCEpS5S8HQ@mail.gmail.com/T/
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/core/skbuff.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 0d998806b3773577f6fc7ca2bfffd5304ea8b20f..bd815a00d2affae9be4ea6cdba188423e1122164 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6881,6 +6881,9 @@ nodefer:	__kfree_skb(skb);
 		return;
 	}
 
+	DEBUG_NET_WARN_ON_ONCE(skb_dst(skb));
+	DEBUG_NET_WARN_ON_ONCE(skb->destructor);
+
 	sd = &per_cpu(softnet_data, cpu);
 	defer_max = READ_ONCE(sysctl_skb_defer_max);
 	if (READ_ONCE(sd->defer_count) >= defer_max)
-- 
2.40.0.634.g4ca3ef3211-goog



* [PATCH net-next 2/5] net: do not provide hard irq safety for sd->defer_lock
  2023-04-21  9:43 [PATCH net-next 0/5] net: give napi_threaded_poll() some love Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 1/5] net: add debugging checks in skb_attempt_defer_free() Eric Dumazet
@ 2023-04-21  9:43 ` Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 3/5] net: move skb_defer_free_flush() up Eric Dumazet
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Eric Dumazet @ 2023-04-21  9:43 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, eric.dumazet, Eric Dumazet

kfree_skb() can be called from hard irq handlers,
but skb_attempt_defer_free() is meant to be used
from process or BH contexts, and skb_defer_free_flush()
is meant to be called from BH contexts.

Not having to mask hard irqs can save some cycles.
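
For illustration only (not part of this patch), the three locking strengths
involved here, using a generic lock name:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    /* callable from any context: masks local hard irqs, the costliest form */
    static void from_any_context(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&example_lock, flags);
            spin_unlock_irqrestore(&example_lock, flags);
    }

    /* process or BH context: disabling softirqs is enough, and cheaper;
     * this is what skb_attempt_defer_free() switches to
     */
    static void from_process_or_bh_context(void)
    {
            spin_lock_bh(&example_lock);
            spin_unlock_bh(&example_lock);
    }

    /* already running in BH context: a plain spin_lock() suffices,
     * as in skb_defer_free_flush()
     */
    static void from_bh_context(void)
    {
            spin_lock(&example_lock);
            spin_unlock(&example_lock);
    }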

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/core/dev.c    | 4 ++--
 net/core/skbuff.c | 5 ++---
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 1551aabac3437938566813363d748ac639fb0075..d15568f5a44f1a397941bd5fca3873ee4d7d0e48 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6632,11 +6632,11 @@ static void skb_defer_free_flush(struct softnet_data *sd)
 	if (!READ_ONCE(sd->defer_list))
 		return;
 
-	spin_lock_irq(&sd->defer_lock);
+	spin_lock(&sd->defer_lock);
 	skb = sd->defer_list;
 	sd->defer_list = NULL;
 	sd->defer_count = 0;
-	spin_unlock_irq(&sd->defer_lock);
+	spin_unlock(&sd->defer_lock);
 
 	while (skb != NULL) {
 		next = skb->next;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index bd815a00d2affae9be4ea6cdba188423e1122164..304a966164d82600cf196e512f24e3deee0c9bc5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6870,7 +6870,6 @@ void skb_attempt_defer_free(struct sk_buff *skb)
 {
 	int cpu = skb->alloc_cpu;
 	struct softnet_data *sd;
-	unsigned long flags;
 	unsigned int defer_max;
 	bool kick;
 
@@ -6889,7 +6888,7 @@ nodefer:	__kfree_skb(skb);
 	if (READ_ONCE(sd->defer_count) >= defer_max)
 		goto nodefer;
 
-	spin_lock_irqsave(&sd->defer_lock, flags);
+	spin_lock_bh(&sd->defer_lock);
 	/* Send an IPI every time queue reaches half capacity. */
 	kick = sd->defer_count == (defer_max >> 1);
 	/* Paired with the READ_ONCE() few lines above */
@@ -6898,7 +6897,7 @@ nodefer:	__kfree_skb(skb);
 	skb->next = sd->defer_list;
 	/* Paired with READ_ONCE() in skb_defer_free_flush() */
 	WRITE_ONCE(sd->defer_list, skb);
-	spin_unlock_irqrestore(&sd->defer_lock, flags);
+	spin_unlock_bh(&sd->defer_lock);
 
 	/* Make sure to trigger NET_RX_SOFTIRQ on the remote CPU
 	 * if we are unlucky enough (this seems very unlikely).
-- 
2.40.0.634.g4ca3ef3211-goog



* [PATCH net-next 3/5] net: move skb_defer_free_flush() up
  2023-04-21  9:43 [PATCH net-next 0/5] net: give napi_threaded_poll() some love Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 1/5] net: add debugging checks in skb_attempt_defer_free() Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 2/5] net: do not provide hard irq safety for sd->defer_lock Eric Dumazet
@ 2023-04-21  9:43 ` Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 4/5] net: make napi_threaded_poll() aware of sd->defer_list Eric Dumazet
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Eric Dumazet @ 2023-04-21  9:43 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, eric.dumazet, Eric Dumazet

We plan to use skb_defer_free_flush() from napi_threaded_poll()
in the following patch.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/core/dev.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index d15568f5a44f1a397941bd5fca3873ee4d7d0e48..81ded215731bdbb3a025fe40ed9638123d97a347 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6598,6 +6598,27 @@ static int napi_thread_wait(struct napi_struct *napi)
 	return -1;
 }
 
+static void skb_defer_free_flush(struct softnet_data *sd)
+{
+	struct sk_buff *skb, *next;
+
+	/* Paired with WRITE_ONCE() in skb_attempt_defer_free() */
+	if (!READ_ONCE(sd->defer_list))
+		return;
+
+	spin_lock(&sd->defer_lock);
+	skb = sd->defer_list;
+	sd->defer_list = NULL;
+	sd->defer_count = 0;
+	spin_unlock(&sd->defer_lock);
+
+	while (skb != NULL) {
+		next = skb->next;
+		napi_consume_skb(skb, 1);
+		skb = next;
+	}
+}
+
 static int napi_threaded_poll(void *data)
 {
 	struct napi_struct *napi = data;
@@ -6624,27 +6645,6 @@ static int napi_threaded_poll(void *data)
 	return 0;
 }
 
-static void skb_defer_free_flush(struct softnet_data *sd)
-{
-	struct sk_buff *skb, *next;
-
-	/* Paired with WRITE_ONCE() in skb_attempt_defer_free() */
-	if (!READ_ONCE(sd->defer_list))
-		return;
-
-	spin_lock(&sd->defer_lock);
-	skb = sd->defer_list;
-	sd->defer_list = NULL;
-	sd->defer_count = 0;
-	spin_unlock(&sd->defer_lock);
-
-	while (skb != NULL) {
-		next = skb->next;
-		napi_consume_skb(skb, 1);
-		skb = next;
-	}
-}
-
 static __latent_entropy void net_rx_action(struct softirq_action *h)
 {
 	struct softnet_data *sd = this_cpu_ptr(&softnet_data);
-- 
2.40.0.634.g4ca3ef3211-goog



* [PATCH net-next 4/5] net: make napi_threaded_poll() aware of sd->defer_list
  2023-04-21  9:43 [PATCH net-next 0/5] net: give napi_threaded_poll() some love Eric Dumazet
                   ` (2 preceding siblings ...)
  2023-04-21  9:43 ` [PATCH net-next 3/5] net: move skb_defer_free_flush() up Eric Dumazet
@ 2023-04-21  9:43 ` Eric Dumazet
  2023-04-21  9:43 ` [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS Eric Dumazet
  2023-04-23 12:50 ` [PATCH net-next 0/5] net: give napi_threaded_poll() some love patchwork-bot+netdevbpf
  5 siblings, 0 replies; 11+ messages in thread
From: Eric Dumazet @ 2023-04-21  9:43 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, eric.dumazet, Eric Dumazet

If we call skb_defer_free_flush() from napi_threaded_poll(),
we can avoid raising an IPI from skb_attempt_defer_free()
when the list becomes too big.

This allows napi_threaded_poll() to rely less on softirqs,
and lowers the latency caused by an overly long list.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/core/dev.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/core/dev.c b/net/core/dev.c
index 81ded215731bdbb3a025fe40ed9638123d97a347..7d9ec23f97c6ec80ec9f971f740a9da747f78ceb 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6622,6 +6622,7 @@ static void skb_defer_free_flush(struct softnet_data *sd)
 static int napi_threaded_poll(void *data)
 {
 	struct napi_struct *napi = data;
+	struct softnet_data *sd;
 	void *have;
 
 	while (!napi_thread_wait(napi)) {
@@ -6629,11 +6630,13 @@ static int napi_threaded_poll(void *data)
 			bool repoll = false;
 
 			local_bh_disable();
+			sd = this_cpu_ptr(&softnet_data);
 
 			have = netpoll_poll_lock(napi);
 			__napi_poll(napi, &repoll);
 			netpoll_poll_unlock(have);
 
+			skb_defer_free_flush(sd);
 			local_bh_enable();
 
 			if (!repoll)
-- 
2.40.0.634.g4ca3ef3211-goog



* [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS
  2023-04-21  9:43 [PATCH net-next 0/5] net: give napi_threaded_poll() some love Eric Dumazet
                   ` (3 preceding siblings ...)
  2023-04-21  9:43 ` [PATCH net-next 4/5] net: make napi_threaded_poll() aware of sd->defer_list Eric Dumazet
@ 2023-04-21  9:43 ` Eric Dumazet
  2023-04-21 13:09   ` Paolo Abeni
  2023-04-23 12:50 ` [PATCH net-next 0/5] net: give napi_threaded_poll() some love patchwork-bot+netdevbpf
  5 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2023-04-21  9:43 UTC (permalink / raw)
  To: David S . Miller, Jakub Kicinski, Paolo Abeni
  Cc: netdev, eric.dumazet, Eric Dumazet

We use napi_threaded_poll() in order to reduce our softirq dependency.

We can add a follow-up to commit 821eba962d95 ("net: optimize napi_schedule_rps()")
to further remove the need to fire NET_RX_SOFTIRQ whenever
RPS/RFS are used.
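
As background (not part of this patch), RPS is typically enabled per rx
queue through the rps_cpus bitmap in sysfs. A minimal user-space sketch,
with the device name, queue number and cpumask purely illustrative:

    #include <stdio.h>

    int main(void)
    {
            /* steer eth0 rx queue 0 onto CPUs 0-3 (hex cpumask "f") */
            FILE *f = fopen("/sys/class/net/eth0/queues/rx-0/rps_cpus", "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("f", f);
            return fclose(f) ? 1 : 0;
    }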

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/linux/netdevice.h |  3 +++
 net/core/dev.c            | 12 ++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index a6a3e9457d6cbc9fcbbde96b43b4b21878495403..08fbd4622ccf731daaee34ad99773d6dc2e82fa6 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3194,7 +3194,10 @@ struct softnet_data {
 #ifdef CONFIG_RPS
 	struct softnet_data	*rps_ipi_list;
 #endif
+
 	bool			in_net_rx_action;
+	bool			in_napi_threaded_poll;
+
 #ifdef CONFIG_NET_FLOW_LIMIT
 	struct sd_flow_limit __rcu *flow_limit;
 #endif
diff --git a/net/core/dev.c b/net/core/dev.c
index 7d9ec23f97c6ec80ec9f971f740a9da747f78ceb..735096d42c1d13999597a882370ca439e9389e24 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4603,10 +4603,10 @@ static void napi_schedule_rps(struct softnet_data *sd)
 		sd->rps_ipi_next = mysd->rps_ipi_list;
 		mysd->rps_ipi_list = sd;
 
-		/* If not called from net_rx_action()
+		/* If not called from net_rx_action() or napi_threaded_poll()
 		 * we have to raise NET_RX_SOFTIRQ.
 		 */
-		if (!mysd->in_net_rx_action)
+		if (!mysd->in_net_rx_action && !mysd->in_napi_threaded_poll)
 			__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 		return;
 	}
@@ -6631,11 +6631,19 @@ static int napi_threaded_poll(void *data)
 
 			local_bh_disable();
 			sd = this_cpu_ptr(&softnet_data);
+			sd->in_napi_threaded_poll = true;
 
 			have = netpoll_poll_lock(napi);
 			__napi_poll(napi, &repoll);
 			netpoll_poll_unlock(have);
 
+			sd->in_napi_threaded_poll = false;
+			barrier();
+
+			if (sd_has_rps_ipi_waiting(sd)) {
+				local_irq_disable();
+				net_rps_action_and_irq_enable(sd);
+			}
 			skb_defer_free_flush(sd);
 			local_bh_enable();
 
-- 
2.40.0.634.g4ca3ef3211-goog



* Re: [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS
  2023-04-21  9:43 ` [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS Eric Dumazet
@ 2023-04-21 13:09   ` Paolo Abeni
  2023-04-21 14:05     ` Jakub Kicinski
  2023-04-21 14:06     ` Eric Dumazet
  0 siblings, 2 replies; 11+ messages in thread
From: Paolo Abeni @ 2023-04-21 13:09 UTC (permalink / raw)
  To: Eric Dumazet, David S . Miller, Jakub Kicinski; +Cc: netdev, eric.dumazet

Hello,

thank you for the extremely fast turnaround!

On Fri, 2023-04-21 at 09:43 +0000, Eric Dumazet wrote:
> We use napi_threaded_poll() in order to reduce our softirq dependency.
> 
> We can add a follow-up to commit 821eba962d95 ("net: optimize napi_schedule_rps()")
> to further remove the need to fire NET_RX_SOFTIRQ whenever
> RPS/RFS are used.
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> ---
>  include/linux/netdevice.h |  3 +++
>  net/core/dev.c            | 12 ++++++++++--
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index a6a3e9457d6cbc9fcbbde96b43b4b21878495403..08fbd4622ccf731daaee34ad99773d6dc2e82fa6 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -3194,7 +3194,10 @@ struct softnet_data {
>  #ifdef CONFIG_RPS
>  	struct softnet_data	*rps_ipi_list;
>  #endif
> +
>  	bool			in_net_rx_action;
> +	bool			in_napi_threaded_poll;

If I'm correct, only one of the above 2 flags can be set to true at any
given time. I'm wondering if we could use a single flag (possibly with a
rename - say 'in_napi_polling')?

Otherwise LGTM, many thanks!

Paolo



* Re: [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS
  2023-04-21 13:09   ` Paolo Abeni
@ 2023-04-21 14:05     ` Jakub Kicinski
  2023-04-21 14:06     ` Eric Dumazet
  1 sibling, 0 replies; 11+ messages in thread
From: Jakub Kicinski @ 2023-04-21 14:05 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: Eric Dumazet, David S . Miller, netdev, eric.dumazet

On Fri, 21 Apr 2023 15:09:58 +0200 Paolo Abeni wrote:
> >  #endif
> > +
> >  	bool			in_net_rx_action;
> > +	bool			in_napi_threaded_poll;  
> 
> If I'm correct, only one of the above 2 flags can be set to true at any
> given time. I'm wondering if we could use a single flag (possibly with a
> rename - say 'in_napi_polling')?  
> 
> Otherwise LGTM, many thanks!

No strong feelings either way but FWIW my intuition would be that it's
less confusing to keep the two separate (there seems to be no cost to
it at this point) 🤷️


* Re: [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS
  2023-04-21 13:09   ` Paolo Abeni
  2023-04-21 14:05     ` Jakub Kicinski
@ 2023-04-21 14:06     ` Eric Dumazet
  2023-04-21 15:21       ` Paolo Abeni
  1 sibling, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2023-04-21 14:06 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: David S . Miller, Jakub Kicinski, netdev, eric.dumazet

On Fri, Apr 21, 2023 at 3:10 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> Hello,
>
> thank you for the extremely fast turnaround!
>
> On Fri, 2023-04-21 at 09:43 +0000, Eric Dumazet wrote:
> > We use napi_threaded_poll() in order to reduce our softirq dependency.
> >
> > We can add a follow-up to commit 821eba962d95 ("net: optimize napi_schedule_rps()")
> > to further remove the need to fire NET_RX_SOFTIRQ whenever
> > RPS/RFS are used.
> >
> > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > ---
> >  include/linux/netdevice.h |  3 +++
> >  net/core/dev.c            | 12 ++++++++++--
> >  2 files changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > index a6a3e9457d6cbc9fcbbde96b43b4b21878495403..08fbd4622ccf731daaee34ad99773d6dc2e82fa6 100644
> > --- a/include/linux/netdevice.h
> > +++ b/include/linux/netdevice.h
> > @@ -3194,7 +3194,10 @@ struct softnet_data {
> >  #ifdef CONFIG_RPS
> >       struct softnet_data     *rps_ipi_list;
> >  #endif
> > +
> >       bool                    in_net_rx_action;
> > +     bool                    in_napi_threaded_poll;
>
> If I'm correct, only one of the above 2 flags can be set to true at any
> given time. I'm wondering if we could use a single flag (possibly with a
> rename - say 'in_napi_polling')?

Well, we can _not_ use the same flag, because we do not want to
accidentally enable the part in ____napi_schedule().

We could use a bit mask with 2 bits, but I am not sure it will help readability.


* Re: [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS
  2023-04-21 14:06     ` Eric Dumazet
@ 2023-04-21 15:21       ` Paolo Abeni
  0 siblings, 0 replies; 11+ messages in thread
From: Paolo Abeni @ 2023-04-21 15:21 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David S . Miller, Jakub Kicinski, netdev, eric.dumazet

On Fri, 2023-04-21 at 16:06 +0200, Eric Dumazet wrote:
> On Fri, Apr 21, 2023 at 3:10 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > 
> > Hello,
> > 
> > thank you for the extremely fast turnaround!
> > 
> > On Fri, 2023-04-21 at 09:43 +0000, Eric Dumazet wrote:
> > > We use napi_threaded_poll() in order to reduce our softirq dependency.
> > > 
> > > We can add a follow-up to commit 821eba962d95 ("net: optimize napi_schedule_rps()")
> > > to further remove the need to fire NET_RX_SOFTIRQ whenever
> > > RPS/RFS are used.
> > > 
> > > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > > ---
> > >  include/linux/netdevice.h |  3 +++
> > >  net/core/dev.c            | 12 ++++++++++--
> > >  2 files changed, 13 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > > index a6a3e9457d6cbc9fcbbde96b43b4b21878495403..08fbd4622ccf731daaee34ad99773d6dc2e82fa6 100644
> > > --- a/include/linux/netdevice.h
> > > +++ b/include/linux/netdevice.h
> > > @@ -3194,7 +3194,10 @@ struct softnet_data {
> > >  #ifdef CONFIG_RPS
> > >       struct softnet_data     *rps_ipi_list;
> > >  #endif
> > > +
> > >       bool                    in_net_rx_action;
> > > +     bool                    in_napi_threaded_poll;
> > 
> > If I'm correct, only one of the above 2 flags can be set to true at any
> > given time. I'm wondering if we could use a single flag (possibly with a
> > rename - say 'in_napi_polling')?
> 
> Well, we can _not_ use the same flag, because we do not want to
> accidentally enable
> the part in ____napi_schedule()

I see, thanks for the pointer.
> 
> We could use a bit mask with 2 bits, but I am not sure it will help readability.

Agreed, it's better to use 2 separate bools.

LGTM,

Acked-by: Paolo Abeni <pabeni@redhat.com>

(for pw's sake ;)




* Re: [PATCH net-next 0/5] net: give napi_threaded_poll() some love
  2023-04-21  9:43 [PATCH net-next 0/5] net: give napi_threaded_poll() some love Eric Dumazet
                   ` (4 preceding siblings ...)
  2023-04-21  9:43 ` [PATCH net-next 5/5] net: optimize napi_threaded_poll() vs RPS/RFS Eric Dumazet
@ 2023-04-23 12:50 ` patchwork-bot+netdevbpf
  5 siblings, 0 replies; 11+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-04-23 12:50 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: davem, kuba, pabeni, netdev, eric.dumazet

Hello:

This series was applied to netdev/net-next.git (main)
by David S. Miller <davem@davemloft.net>:

On Fri, 21 Apr 2023 09:43:52 +0000 you wrote:
> There is interest in reverting commit 4cd13c21b207
> ("softirq: Let ksoftirqd do its job") and using
> napi_threaded_poll() mode instead.
> 
> https://lore.kernel.org/netdev/140f61e2e1fcb8cf53619709046e312e343b53ca.camel@redhat.com/T/#m8a8f5b09844adba157ad0d22fc1233d97013de50
> 
> Before doing so, make sure napi_threaded_poll() benefits
> from recent core stack improvements, to further reduce
> softirq triggers.
> 
> [...]

Here is the summary with links:
  - [net-next,1/5] net: add debugging checks in skb_attempt_defer_free()
    https://git.kernel.org/netdev/net-next/c/e8e1ce8454c9
  - [net-next,2/5] net: do not provide hard irq safety for sd->defer_lock
    https://git.kernel.org/netdev/net-next/c/931e93bdf8ca
  - [net-next,3/5] net: move skb_defer_free_flush() up
    https://git.kernel.org/netdev/net-next/c/e6f50edfef04
  - [net-next,4/5] net: make napi_threaded_poll() aware of sd->defer_list
    https://git.kernel.org/netdev/net-next/c/a1aaee7f8f79
  - [net-next,5/5] net: optimize napi_threaded_poll() vs RPS/RFS
    https://git.kernel.org/netdev/net-next/c/87eff2ec57b6

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




