* [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
@ 2023-07-20 16:13 Jakub Kicinski
  2023-07-21  2:35 ` Wei Fang
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Jakub Kicinski @ 2023-07-20 16:13 UTC (permalink / raw)
  To: davem; +Cc: netdev, edumazet, pabeni, Jakub Kicinski, corbet, linux-doc

page pool and XDP should not be accessed from IRQ context
which may happen if drivers try to clean up XDP TX with
NAPI budget of 0.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: corbet@lwn.net
CC: linux-doc@vger.kernel.org
---
 Documentation/networking/napi.rst | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
index a7a047742e93..7bf7b95c4f7a 100644
--- a/Documentation/networking/napi.rst
+++ b/Documentation/networking/napi.rst
@@ -65,15 +65,16 @@ argument - drivers can process completions for any number of Tx
 packets but should only process up to ``budget`` number of
 Rx packets. Rx processing is usually much more expensive.
 
-In other words, it is recommended to ignore the budget argument when
-performing TX buffer reclamation to ensure that the reclamation is not
-arbitrarily bounded; however, it is required to honor the budget argument
-for RX processing.
+In other words for Rx processing the ``budget`` argument limits how many
+packets driver can process in a single poll. Rx specific APIs like page
+pool or XDP cannot be used at all when ``budget`` is 0.
+skb Tx processing should happen regardless of the ``budget``, but if
+the argument is 0 driver cannot call any XDP (or page pool) APIs.
 
 .. warning::
 
-   The ``budget`` argument may be 0 if core tries to only process Tx completions
-   and no Rx packets.
+   The ``budget`` argument may be 0 if core tries to only process
+   skb Tx completions and no Rx or XDP packets.
 
 The poll method returns the amount of work done. If the driver still
 has outstanding work to do (e.g. ``budget`` was exhausted)
-- 
2.41.0



* RE: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-20 16:13 [PATCH net] docs: net: clarify the NAPI rules around XDP Tx Jakub Kicinski
@ 2023-07-21  2:35 ` Wei Fang
  2023-07-21  3:07   ` Jakub Kicinski
  2023-07-22  2:10 ` patchwork-bot+netdevbpf
  2023-07-25 17:30 ` Alexander H Duyck
  2 siblings, 1 reply; 11+ messages in thread
From: Wei Fang @ 2023-07-21  2:35 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, edumazet, pabeni, corbet, linux-doc, davem

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: July 21, 2023 0:13
> To: davem@davemloft.net
> Cc: netdev@vger.kernel.org; edumazet@google.com; pabeni@redhat.com;
> Jakub Kicinski <kuba@kernel.org>; corbet@lwn.net; linux-doc@vger.kernel.org
> Subject: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
> 
> page pool and XDP should not be accessed from IRQ context which may
> happen if drivers try to clean up XDP TX with NAPI budget of 0.
> 
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> CC: corbet@lwn.net
> CC: linux-doc@vger.kernel.org
> ---
>  Documentation/networking/napi.rst | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/networking/napi.rst
> b/Documentation/networking/napi.rst
> index a7a047742e93..7bf7b95c4f7a 100644
> --- a/Documentation/networking/napi.rst
> +++ b/Documentation/networking/napi.rst
> @@ -65,15 +65,16 @@ argument - drivers can process completions for any
> number of Tx  packets but should only process up to ``budget`` number of
> Rx packets. Rx processing is usually much more expensive.
> 
> -In other words, it is recommended to ignore the budget argument when
> -performing TX buffer reclamation to ensure that the reclamation is not
> -arbitrarily bounded; however, it is required to honor the budget argument -for
> RX processing.
> +In other words for Rx processing the ``budget`` argument limits how
> +many packets driver can process in a single poll. Rx specific APIs like
> +page pool or XDP cannot be used at all when ``budget`` is 0.
> +skb Tx processing should happen regardless of the ``budget``, but if
> +the argument is 0 driver cannot call any XDP (or page pool) APIs.
> 
Can I ask a stupid question: why can't Tx processing call any XDP (or page
pool) APIs if the "budget" is 0?



* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-21  2:35 ` Wei Fang
@ 2023-07-21  3:07   ` Jakub Kicinski
  2023-07-21  4:31     ` Wei Fang
  0 siblings, 1 reply; 11+ messages in thread
From: Jakub Kicinski @ 2023-07-21  3:07 UTC (permalink / raw)
  To: Wei Fang; +Cc: netdev, edumazet, pabeni, corbet, linux-doc, davem

On Fri, 21 Jul 2023 02:35:41 +0000 Wei Fang wrote:
> > -In other words, it is recommended to ignore the budget argument when
> > -performing TX buffer reclamation to ensure that the reclamation is not
> > -arbitrarily bounded; however, it is required to honor the budget argument -for
> > RX processing.
> > +In other words for Rx processing the ``budget`` argument limits how
> > +many packets driver can process in a single poll. Rx specific APIs like
> > +page pool or XDP cannot be used at all when ``budget`` is 0.
> > +skb Tx processing should happen regardless of the ``budget``, but if
> > +the argument is 0 driver cannot call any XDP (or page pool) APIs.
> >   
> Can I ask a stupid question why tx processing cannot call any XDP (or page pool)
> APIs if the "budget" is 0?

Because in that case we may be in an interrupt context, and page pool
assumes it's either in process or softirq context. See commit
afbed3f74830 ("net/mlx5e: do as little as possible in napi poll when
budget is 0") for an example stack trace.



* RE: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-21  3:07   ` Jakub Kicinski
@ 2023-07-21  4:31     ` Wei Fang
  0 siblings, 0 replies; 11+ messages in thread
From: Wei Fang @ 2023-07-21  4:31 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netdev, edumazet, pabeni, corbet, linux-doc, davem

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: July 21, 2023 11:07
> To: Wei Fang <wei.fang@nxp.com>
> Cc: netdev@vger.kernel.org; edumazet@google.com; pabeni@redhat.com;
> corbet@lwn.net; linux-doc@vger.kernel.org; davem@davemloft.net
> Subject: Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
> 
> On Fri, 21 Jul 2023 02:35:41 +0000 Wei Fang wrote:
> > > -In other words, it is recommended to ignore the budget argument
> > > when -performing TX buffer reclamation to ensure that the
> > > reclamation is not -arbitrarily bounded; however, it is required to
> > > honor the budget argument -for RX processing.
> > > +In other words for Rx processing the ``budget`` argument limits how
> > > +many packets driver can process in a single poll. Rx specific APIs
> > > +like page pool or XDP cannot be used at all when ``budget`` is 0.
> > > +skb Tx processing should happen regardless of the ``budget``, but
> > > +if the argument is 0 driver cannot call any XDP (or page pool) APIs.
> > >
> > Can I ask a stupid question why tx processing cannot call any XDP (or
> > page pool) APIs if the "budget" is 0?
> 
> Because in that case we may be in an interrupt context, and page pool
> assumes it's either in process or softirq context. See commit
> afbed3f74830 ("net/mlx5e: do as little as possible in napi poll when budget is
> 0") for an example stack trace.
I got it, thank you!


* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-20 16:13 [PATCH net] docs: net: clarify the NAPI rules around XDP Tx Jakub Kicinski
  2023-07-21  2:35 ` Wei Fang
@ 2023-07-22  2:10 ` patchwork-bot+netdevbpf
  2023-07-25 17:30 ` Alexander H Duyck
  2 siblings, 0 replies; 11+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-07-22  2:10 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: davem, netdev, edumazet, pabeni, corbet, linux-doc

Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Thu, 20 Jul 2023 09:13:23 -0700 you wrote:
> page pool and XDP should not be accessed from IRQ context
> which may happen if drivers try to clean up XDP TX with
> NAPI budget of 0.
> 
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> CC: corbet@lwn.net
> CC: linux-doc@vger.kernel.org
> 
> [...]

Here is the summary with links:
  - [net] docs: net: clarify the NAPI rules around XDP Tx
    https://git.kernel.org/netdev/net/c/32ad45b76990

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-20 16:13 [PATCH net] docs: net: clarify the NAPI rules around XDP Tx Jakub Kicinski
  2023-07-21  2:35 ` Wei Fang
  2023-07-22  2:10 ` patchwork-bot+netdevbpf
@ 2023-07-25 17:30 ` Alexander H Duyck
  2023-07-25 18:55   ` Jakub Kicinski
  2 siblings, 1 reply; 11+ messages in thread
From: Alexander H Duyck @ 2023-07-25 17:30 UTC (permalink / raw)
  To: Jakub Kicinski, davem; +Cc: netdev, edumazet, pabeni, corbet, linux-doc

On Thu, 2023-07-20 at 09:13 -0700, Jakub Kicinski wrote:
> page pool and XDP should not be accessed from IRQ context
> which may happen if drivers try to clean up XDP TX with
> NAPI budget of 0.
> 
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> CC: corbet@lwn.net
> CC: linux-doc@vger.kernel.org
> ---
>  Documentation/networking/napi.rst | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
> index a7a047742e93..7bf7b95c4f7a 100644
> --- a/Documentation/networking/napi.rst
> +++ b/Documentation/networking/napi.rst
> @@ -65,15 +65,16 @@ argument - drivers can process completions for any number of Tx
>  packets but should only process up to ``budget`` number of
>  Rx packets. Rx processing is usually much more expensive.
>  
> -In other words, it is recommended to ignore the budget argument when
> -performing TX buffer reclamation to ensure that the reclamation is not
> -arbitrarily bounded; however, it is required to honor the budget argument
> -for RX processing.
> +In other words for Rx processing the ``budget`` argument limits how many
> +packets driver can process in a single poll. Rx specific APIs like page
> +pool or XDP cannot be used at all when ``budget`` is 0.
> +skb Tx processing should happen regardless of the ``budget``, but if
> +the argument is 0 driver cannot call any XDP (or page pool) APIs.
> 

This isn't accurate, and I would say it is somewhat dangerous advice.
The Tx still needs to be processed regardless of if it is processing
page_pool pages or XDP pages. I agree the Rx should not be processed,
but the Tx must be processed using mechanisms that do NOT make use of
NAPI optimizations when budget is 0.

So specifically, xdp_return_frame is safe in non-NAPI Tx cleanup. The
xdp_return_frame_rx_napi is not.

Likewise there is napi_consume_skb which will use either a NAPI or non-
NAPI version of things depending on if budget is 0 or not.

For the page_pool calls there is the "allow_direct" argument that is
meant to decide between recycling directly into the page_pool cache
or not. It should only be used in the Rx handler itself when budget is
non-zero.
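
For illustration, a mixed skb+XDP Tx completion loop along those lines might
look roughly like this (sketch only -- the foo_* names and the tx_buf layout
are made up; napi_consume_skb(), xdp_return_frame() and
xdp_return_frame_rx_napi() are the real APIs):

struct foo_tx_buf {
        struct sk_buff *skb;       /* set for skb transmissions */
        struct xdp_frame *xdpf;    /* set for XDP transmissions */
};

static void foo_clean_tx(struct foo_ring *ring, int budget)
{
        struct foo_tx_buf *buf;

        while ((buf = foo_next_completed(ring))) {
                if (buf->skb) {
                        /* NAPI bulk-free fast path when budget != 0,
                         * IRQ-safe dev_consume_skb_any() when budget == 0.
                         */
                        napi_consume_skb(buf->skb, budget);
                        buf->skb = NULL;
                } else if (buf->xdpf) {
                        /* The "safe" return; xdp_return_frame_rx_napi()
                         * is only an option when budget != 0 and we are
                         * in the owning page pool's NAPI context.
                         */
                        xdp_return_frame(buf->xdpf);
                        buf->xdpf = NULL;
                }
        }
}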

I realise this was written up in response to a patch on the Mellanox
driver. Based on the patch in question it looks like they were calling
page_pool_recycle_direct outside of NAPI context. There is an explicit
warning above that function about NOT calling it outside of NAPI
context.

>  .. warning::
>  
> -   The ``budget`` argument may be 0 if core tries to only process Tx completions
> -   and no Rx packets.
> +   The ``budget`` argument may be 0 if core tries to only process
> +   skb Tx completions and no Rx or XDP packets.
>  
>  The poll method returns the amount of work done. If the driver still
>  has outstanding work to do (e.g. ``budget`` was exhausted)

We cannot make this distinction if both XDP and skb are processed in
the same Tx queue. Otherwise you will cause the Tx to stall and break
netpoll. If the ring is XDP only then yes, it can be skipped like what
they did in the Mellanox driver, but if it is mixed then the XDP side
of things needs to use the "safe" versions of the calls.


* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-25 17:30 ` Alexander H Duyck
@ 2023-07-25 18:55   ` Jakub Kicinski
  2023-07-25 20:10     ` Alexander Duyck
  0 siblings, 1 reply; 11+ messages in thread
From: Jakub Kicinski @ 2023-07-25 18:55 UTC (permalink / raw)
  To: Alexander H Duyck; +Cc: davem, netdev, edumazet, pabeni, corbet, linux-doc

On Tue, 25 Jul 2023 10:30:24 -0700 Alexander H Duyck wrote:
> > -In other words, it is recommended to ignore the budget argument when
> > -performing TX buffer reclamation to ensure that the reclamation is not
> > -arbitrarily bounded; however, it is required to honor the budget argument
> > -for RX processing.
> > +In other words for Rx processing the ``budget`` argument limits how many
> > +packets driver can process in a single poll. Rx specific APIs like page
> > +pool or XDP cannot be used at all when ``budget`` is 0.
> > +skb Tx processing should happen regardless of the ``budget``, but if
> > +the argument is 0 driver cannot call any XDP (or page pool) APIs.
> 
> This isn't accurate, and I would say it is somewhat dangerous advice.
> The Tx still needs to be processed regardless of if it is processing
> page_pool pages or XDP pages. I agree the Rx should not be processed,
> but the Tx must be processed using mechanisms that do NOT make use of
> NAPI optimizations when budget is 0.
> 
> So specifically, xdp_return_frame is safe in non-NAPI Tx cleanup. The
> xdp_return_frame_rx_napi is not.
> 
> Likewise there is napi_consume_skb which will use either a NAPI or non-
> NAPI version of things depending on if budget is 0 or not.
> 
> For the page_pool calls there is the "allow_direct" argument that is
> meant to decide between recycling in directly into the page_pool cache
> or not. It should only be used in the Rx handler itself when budget is
> non-zero.
> 
> I realise this was written up in response to a patch on the Mellanox
> driver. Based on the patch in question it looks like they were calling
> page_pool_recycle_direct outside of NAPI context. There is an explicit
> warning above that function about NOT calling it outside of NAPI
> context.

Unless I'm missing something budget=0 can be called from hard IRQ
context. And page pool takes _bh() locks. So unless we "teach it"
not to recycle _anything_ in hard IRQ context, it is not safe to call.

> >  .. warning::
> >  
> > -   The ``budget`` argument may be 0 if core tries to only process Tx completions
> > -   and no Rx packets.
> > +   The ``budget`` argument may be 0 if core tries to only process
> > +   skb Tx completions and no Rx or XDP packets.
> >  
> >  The poll method returns the amount of work done. If the driver still
> >  has outstanding work to do (e.g. ``budget`` was exhausted)  
> 
> We cannot make this distinction if both XDP and skb are processed in
> the same Tx queue. Otherwise you will cause the Tx to stall and break
> netpoll. If the ring is XDP only then yes, it can be skipped like what
> they did in the Mellanox driver, but if it is mixed then the XDP side
> of things needs to use the "safe" versions of the calls.

IDK, a rare delay in sending of a netpoll message is not a major
concern.


* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-25 18:55   ` Jakub Kicinski
@ 2023-07-25 20:10     ` Alexander Duyck
  2023-07-25 20:41       ` Jakub Kicinski
  0 siblings, 1 reply; 11+ messages in thread
From: Alexander Duyck @ 2023-07-25 20:10 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: davem, netdev, edumazet, pabeni, corbet, linux-doc

On Tue, Jul 25, 2023 at 11:55 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 25 Jul 2023 10:30:24 -0700 Alexander H Duyck wrote:
> > > -In other words, it is recommended to ignore the budget argument when
> > > -performing TX buffer reclamation to ensure that the reclamation is not
> > > -arbitrarily bounded; however, it is required to honor the budget argument
> > > -for RX processing.
> > > +In other words for Rx processing the ``budget`` argument limits how many
> > > +packets driver can process in a single poll. Rx specific APIs like page
> > > +pool or XDP cannot be used at all when ``budget`` is 0.
> > > +skb Tx processing should happen regardless of the ``budget``, but if
> > > +the argument is 0 driver cannot call any XDP (or page pool) APIs.
> >
> > This isn't accurate, and I would say it is somewhat dangerous advice.
> > The Tx still needs to be processed regardless of if it is processing
> > page_pool pages or XDP pages. I agree the Rx should not be processed,
> > but the Tx must be processed using mechanisms that do NOT make use of
> > NAPI optimizations when budget is 0.
> >
> > So specifically, xdp_return_frame is safe in non-NAPI Tx cleanup. The
> > xdp_return_frame_rx_napi is not.
> >
> > Likewise there is napi_consume_skb which will use either a NAPI or non-
> > NAPI version of things depending on if budget is 0 or not.
> >
> > For the page_pool calls there is the "allow_direct" argument that is
> > meant to decide between recycling in directly into the page_pool cache
> > or not. It should only be used in the Rx handler itself when budget is
> > non-zero.
> >
> > I realise this was written up in response to a patch on the Mellanox
> > driver. Based on the patch in question it looks like they were calling
> > page_pool_recycle_direct outside of NAPI context. There is an explicit
> > warning above that function about NOT calling it outside of NAPI
> > context.
>
> Unless I'm missing something budget=0 can be called from hard IRQ
> context. And page pool takes _bh() locks. So unless we "teach it"
> not to recycle _anything_ in hard IRQ context, it is not safe to call.

That is the thing. We have to be able to free the pages regardless of
context. Otherwise we make a huge mess of things. Also there isn't
much way to differentiate between page_pool and non-page_pool pages
because an skb can be composed of page pool pages just as easily as an
XDP frame can be. You would just have to enable routing or
bridging for Rx frames to end up with page pool pages in the Tx path.

As far as netpoll itself we are safe because it has BH disabled and so
as a result page_pool doesn't use the _bh locks. There is code in
place to account for that in the producer locking code, and if it were
an issue we would have likely blown up long before now. The fact is
that page_pool has proliferated into skbs, so you are still freeing
page_pool pages indirectly anyway.

That said, there are calls that are not supposed to be used outside of
NAPI context, such as page_pool_recycle_direct(). Those have mostly
been called out in the page_pool.h header itself, so if someone
decides to shoot themselves in the foot with one of those, that is on
them. What we need to watch out for are people abusing the "direct"
calls and such or just passing "true" for allow_direct in the
page_pool calls without taking proper steps to guarantee the context.
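
A small sketch of that split, with placeholder names (foo_recycle_page and
the in_own_napi flag are made up; the two page pool calls are real):

static void foo_recycle_page(struct page_pool *pool, struct page *page,
                             bool in_own_napi)
{
        if (in_own_napi)
                /* Only valid from the pool's own NAPI/softirq context. */
                page_pool_recycle_direct(pool, page);
        else
                /* allow_direct == false: no lockless-cache access.
                 * Whether even this path is safe from hard IRQ context
                 * is what is being debated above.
                 */
                page_pool_put_full_page(pool, page, false);
}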

> > >  .. warning::
> > >
> > > -   The ``budget`` argument may be 0 if core tries to only process Tx completions
> > > -   and no Rx packets.
> > > +   The ``budget`` argument may be 0 if core tries to only process
> > > +   skb Tx completions and no Rx or XDP packets.
> > >
> > >  The poll method returns the amount of work done. If the driver still
> > >  has outstanding work to do (e.g. ``budget`` was exhausted)
> >
> > We cannot make this distinction if both XDP and skb are processed in
> > the same Tx queue. Otherwise you will cause the Tx to stall and break
> > netpoll. If the ring is XDP only then yes, it can be skipped like what
> > they did in the Mellanox driver, but if it is mixed then the XDP side
> > of things needs to use the "safe" versions of the calls.
>
> IDK, a rare delay in sending of a netpoll message is not a major
> concern.

The whole point of netpoll is to get data out after something like a
crash. Otherwise we could have just been using regular NAPI. If the Tx
ring is hung it might not be a delay but rather a complete stall that
prevents data on the Tx queue from being transmitted, since the
system will likely not be recovering. Worse yet, if it is a scenario
where the Tx queue can recover, it might trigger the Tx watchdog, since
I could see scenarios where the ring fills but interrupts were
dropped because of the netpoll.


* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-25 20:10     ` Alexander Duyck
@ 2023-07-25 20:41       ` Jakub Kicinski
  2023-07-26  0:02         ` Alexander Duyck
  0 siblings, 1 reply; 11+ messages in thread
From: Jakub Kicinski @ 2023-07-25 20:41 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: davem, netdev, edumazet, pabeni, corbet, linux-doc

On Tue, 25 Jul 2023 13:10:18 -0700 Alexander Duyck wrote:
> On Tue, Jul 25, 2023 at 11:55 AM Jakub Kicinski <kuba@kernel.org> wrote:
> > > This isn't accurate, and I would say it is somewhat dangerous advice.
> > > The Tx still needs to be processed regardless of if it is processing
> > > page_pool pages or XDP pages. I agree the Rx should not be processed,
> > > but the Tx must be processed using mechanisms that do NOT make use of
> > > NAPI optimizations when budget is 0.
> > >
> > > So specifically, xdp_return_frame is safe in non-NAPI Tx cleanup. The
> > > xdp_return_frame_rx_napi is not.
> > >
> > > Likewise there is napi_consume_skb which will use either a NAPI or non-
> > > NAPI version of things depending on if budget is 0 or not.
> > >
> > > For the page_pool calls there is the "allow_direct" argument that is
> > > meant to decide between recycling in directly into the page_pool cache
> > > or not. It should only be used in the Rx handler itself when budget is
> > > non-zero.
> > >
> > > I realise this was written up in response to a patch on the Mellanox
> > > driver. Based on the patch in question it looks like they were calling
> > > page_pool_recycle_direct outside of NAPI context. There is an explicit
> > > warning above that function about NOT calling it outside of NAPI
> > > context.  
> >
> > Unless I'm missing something budget=0 can be called from hard IRQ
> > context. And page pool takes _bh() locks. So unless we "teach it"
> > not to recycle _anything_ in hard IRQ context, it is not safe to call.  
> 
> That is the thing. We have to be able to free the pages regardless of
> context. Otherwise we make a huge mess of things. Also there isn't
> much way to differentiate between page_pool and non-page_pool pages
> because an skb can be composed of page pool pages just as easy as an
> XDP frame can be. All you would just have to enable routing or
> bridging for Rx frames to end up with page pool pages in the Tx path.
> 
> As far as netpoll itself we are safe because it has BH disabled and so

We do? Can you point me to where netpoll disables BH?

> as a result page_pool doesn't use the _bh locks. There is code in
> place to account for that in the producer locking code, and if it were
> an issue we would have likely blown up long before now. The fact is
> that page_pool has proliferated into skbs, so you are still freeing
> page_pool pages indirectly anyway.
> 
> That said, there are calls that are not supposed to be used outside of
> NAPI context, such as page_pool_recycle_direct(). Those have mostly
> been called out in the page_pool.h header itself, so if someone
> decides to shoot themselves in the foot with one of those, that is on
> them. What we need to watch out for are people abusing the "direct"
> calls and such or just passing "true" for allow_direct in the
> page_pool calls without taking proper steps to guarantee the context.
>
> > > We cannot make this distinction if both XDP and skb are processed in
> > > the same Tx queue. Otherwise you will cause the Tx to stall and break
> > > netpoll. If the ring is XDP only then yes, it can be skipped like what
> > > they did in the Mellanox driver, but if it is mixed then the XDP side
> > > of things needs to use the "safe" versions of the calls.  
> >
> > IDK, a rare delay in sending of a netpoll message is not a major
> > concern.  
> 
> The whole point of netpoll is to get data out after something like a
> crash. Otherwise we could have just been using regular NAPI. If the Tx
> ring is hung it might not be a delay but rather a complete stall that
> prevents data on the Tx queue from being transmitted on since the
> system will likely not be recovering. Worse yet is if it is a scenario
> where the Tx queue can recover it might trigger the Tx watchdog since
> I could see scenarios where the ring fills, but interrupts were
> dropped because of the netpoll.

I'm not disagreeing with you. I just don't have time to take a deeper
look and add the IRQ checks myself and I'm 90% sure the current code
can't work with netpoll. So I thought I'd at least document that :(


* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-25 20:41       ` Jakub Kicinski
@ 2023-07-26  0:02         ` Alexander Duyck
  2023-07-26  0:56           ` Jakub Kicinski
  0 siblings, 1 reply; 11+ messages in thread
From: Alexander Duyck @ 2023-07-26  0:02 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: davem, netdev, edumazet, pabeni, corbet, linux-doc

On Tue, Jul 25, 2023 at 1:41 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 25 Jul 2023 13:10:18 -0700 Alexander Duyck wrote:
> > On Tue, Jul 25, 2023 at 11:55 AM Jakub Kicinski <kuba@kernel.org> wrote:
> > > > This isn't accurate, and I would say it is somewhat dangerous advice.
> > > > The Tx still needs to be processed regardless of if it is processing
> > > > page_pool pages or XDP pages. I agree the Rx should not be processed,
> > > > but the Tx must be processed using mechanisms that do NOT make use of
> > > > NAPI optimizations when budget is 0.
> > > >
> > > > So specifically, xdp_return_frame is safe in non-NAPI Tx cleanup. The
> > > > xdp_return_frame_rx_napi is not.
> > > >
> > > > Likewise there is napi_consume_skb which will use either a NAPI or non-
> > > > NAPI version of things depending on if budget is 0 or not.
> > > >
> > > > For the page_pool calls there is the "allow_direct" argument that is
> > > > meant to decide between recycling in directly into the page_pool cache
> > > > or not. It should only be used in the Rx handler itself when budget is
> > > > non-zero.
> > > >
> > > > I realise this was written up in response to a patch on the Mellanox
> > > > driver. Based on the patch in question it looks like they were calling
> > > > page_pool_recycle_direct outside of NAPI context. There is an explicit
> > > > warning above that function about NOT calling it outside of NAPI
> > > > context.
> > >
> > > Unless I'm missing something budget=0 can be called from hard IRQ
> > > context. And page pool takes _bh() locks. So unless we "teach it"
> > > not to recycle _anything_ in hard IRQ context, it is not safe to call.
> >
> > That is the thing. We have to be able to free the pages regardless of
> > context. Otherwise we make a huge mess of things. Also there isn't
> > much way to differentiate between page_pool and non-page_pool pages
> > because an skb can be composed of page pool pages just as easy as an
> > XDP frame can be. All you would just have to enable routing or
> > bridging for Rx frames to end up with page pool pages in the Tx path.
> >
> > As far as netpoll itself we are safe because it has BH disabled and so
>
> We do? Can you point me to where netpoll disables BH?

I misread the code. Basically what is going on is that netconsole is
explicitly disabling interrupts via spin_lock_irqsave in write_msg.

> > as a result page_pool doesn't use the _bh locks. There is code in
> > place to account for that in the producer locking code, and if it were
> > an issue we would have likely blown up long before now. The fact is
> > that page_pool has proliferated into skbs, so you are still freeing
> > page_pool pages indirectly anyway.
> >
> > That said, there are calls that are not supposed to be used outside of
> > NAPI context, such as page_pool_recycle_direct(). Those have mostly
> > been called out in the page_pool.h header itself, so if someone
> > decides to shoot themselves in the foot with one of those, that is on
> > them. What we need to watch out for are people abusing the "direct"
> > calls and such or just passing "true" for allow_direct in the
> > page_pool calls without taking proper steps to guarantee the context.
> >
> > > > We cannot make this distinction if both XDP and skb are processed in
> > > > the same Tx queue. Otherwise you will cause the Tx to stall and break
> > > > netpoll. If the ring is XDP only then yes, it can be skipped like what
> > > > they did in the Mellanox driver, but if it is mixed then the XDP side
> > > > of things needs to use the "safe" versions of the calls.
> > >
> > > IDK, a rare delay in sending of a netpoll message is not a major
> > > concern.
> >
> > The whole point of netpoll is to get data out after something like a
> > crash. Otherwise we could have just been using regular NAPI. If the Tx
> > ring is hung it might not be a delay but rather a complete stall that
> > prevents data on the Tx queue from being transmitted on since the
> > system will likely not be recovering. Worse yet is if it is a scenario
> > where the Tx queue can recover it might trigger the Tx watchdog since
> > I could see scenarios where the ring fills, but interrupts were
> > dropped because of the netpoll.
>
> I'm not disagreeing with you. I just don't have time to take a deeper
> look and add the IRQ checks myself and I'm 90% sure the current code
> can't work with netpoll. So I thought I'd at least document that :(

So looking at it more I realized the way we are getting around the
issue is that the skbuffs are ALWAYS freed in softirq context.
Basically we hand them off to dev_consume_skb_any, which will hand
them off to dev_kfree_skb_irq_reason, and it is queueing them up to be
processed in the net_tx_action handler.

As far as the page pool pages themselves I wonder if we couldn't just
look at modifying __page_pool_put_page() so that it had something
similar to dev_consume_skb_any_reason() so if we are in a hardirq or
IRQs are disabled we just force the page to be freed.
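
Something like the following, perhaps -- not current kernel code, just a
sketch of the proposed check at the top of __page_pool_put_page()
(signature copied roughly from that era's net/core/page_pool.c, body
abbreviated):

static __always_inline struct page *
__page_pool_put_page(struct page_pool *pool, struct page *page,
                     unsigned int dma_sync_size, bool allow_direct)
{
        if (in_hardirq() || irqs_disabled()) {
                /* Never touch the recycling caches/rings from here;
                 * unmap and free the page outright, mirroring what
                 * dev_consume_skb_any() does for skbs.
                 */
                page_pool_release_page(pool, page);
                put_page(page);
                return NULL;
        }

        /* ... existing recycling logic would remain unchanged ... */
        return page;    /* placeholder so the sketch is complete */
}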


* Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
  2023-07-26  0:02         ` Alexander Duyck
@ 2023-07-26  0:56           ` Jakub Kicinski
  0 siblings, 0 replies; 11+ messages in thread
From: Jakub Kicinski @ 2023-07-26  0:56 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: davem, netdev, edumazet, pabeni, corbet, linux-doc

On Tue, 25 Jul 2023 17:02:42 -0700 Alexander Duyck wrote:
> So looking at it more I realized the way we are getting around the
> issue is that the skbuffs are ALWAYS freed in softirq context.
> Basically we hand them off to dev_consume_skb_any, which will hand
> them off to dev_kfree_skb_irq_reason, and it is queueing them up to be
> processed in the net_tx_action handler.

SG.

> As far as the page pool pages themselves I wonder if we couldn't just
> look at modifying __page_pool_put_page() so that it had something
> similar to dev_consume_skb_any_reason() so if we are in a hardirq or
> IRQs are disabled we just force the page to be freed.

Yup (same for the bulk API). I think that Olek was trying to implement
this somehow nicely, not sure how far he got.

