* Locking in network code
@ 2018-05-06 13:43 Jacob S. Moroni
  2018-05-06 16:16 ` Alexander Duyck
  0 siblings, 1 reply; 3+ messages in thread
From: Jacob S. Moroni @ 2018-05-06 13:43 UTC (permalink / raw)
  To: netdev

Hello,

I have a stupid question regarding which variant of spin_lock to use
throughout the network stack, and inside RX handlers specifically.
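
To make the question concrete, these are the variants I'm trying to
choose between (just a sketch; example_lock and example_user are
made-up names, the calls are the standard spinlock API):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* made-up lock, for illustration */

static void example_user(void)
{
	unsigned long flags;

	/* Plain lock: fine as long as no softirq or hard-IRQ path ever
	 * takes the same lock on this CPU. */
	spin_lock(&example_lock);
	spin_unlock(&example_lock);

	/* _bh variant: also disables softirqs locally, so a softirq
	 * user of the same lock can't deadlock against us. */
	spin_lock_bh(&example_lock);
	spin_unlock_bh(&example_lock);

	/* _irqsave variant: also disables local interrupts, so even a
	 * hard-IRQ user of the same lock is safe. */
	spin_lock_irqsave(&example_lock, flags);
	spin_unlock_irqrestore(&example_lock, flags);
}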

It's my understanding that skbuffs are normally passed into the stack
from soft IRQ context if the device is using NAPI, and hard IRQ
context if it's not using NAPI (and I guess process context too if the
driver does its own workqueue thing).

So, that means that handlers registered with netdev_rx_handler_register
may end up being called from any context.
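
(For reference, the kind of handler I mean is registered roughly like
this. It's only a sketch: my_rx_handler, attach_handler and priv are
made-up names, while the core calls are the real API.)

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/skbuff.h>

/* Made-up handler, called for every skb received on the attached device. */
static rx_handler_result_t my_rx_handler(struct sk_buff **pskb)
{
	/* *pskb is the received skb; inspect or steal it here,
	 * possibly taking locks. */
	return RX_HANDLER_PASS;
}

/* Made-up helper; netdev_rx_handler_register() must be called with
 * the RTNL lock held. */
static int attach_handler(struct net_device *dev, void *priv)
{
	int err;

	rtnl_lock();
	err = netdev_rx_handler_register(dev, my_rx_handler, priv);
	rtnl_unlock();
	return err;
}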

However, the RX handler in the macvlan code calls ip_check_defrag,
which could eventually lead to a call to ip_defrag, which ends
up taking a regular spin_lock around the call to ip_frag_queue.

Is this a risk of deadlock, and if not, why?

What if you're running a system with one CPU and a packet fragment
arrives on a NAPI interface, then, while the spin_lock is held,
another fragment somehow arrives on another interface which does
its processing in hard IRQ context?
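
In other words, I'm worried about something like this (a purely
hypothetical sketch, not actual kernel code; frag_lock and the two
functions are made up):

#include <linux/interrupt.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(frag_lock);	/* stand-in for the frag queue lock */

/* NAPI/softirq path: plain spin_lock, local interrupts stay enabled. */
static void softirq_rx_path(struct sk_buff *skb)
{
	spin_lock(&frag_lock);
	/* ... a hard IRQ can fire right here, on this same CPU ... */
	spin_unlock(&frag_lock);
}

/* Non-NAPI hard-IRQ path: if this interrupts softirq_rx_path() while
 * frag_lock is held, the spin_lock() below spins forever, and on a
 * single CPU the preempted softirq can never run again to release it. */
static irqreturn_t hardirq_rx_path(int irq, void *dev_id)
{
	spin_lock(&frag_lock);
	/* ... queue the fragment ... */
	spin_unlock(&frag_lock);
	return IRQ_HANDLED;
}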

-- 
  Jacob S. Moroni
  mail@jakemoroni.com


* Re: Locking in network code
  2018-05-06 13:43 Locking in network code Jacob S. Moroni
@ 2018-05-06 16:16 ` Alexander Duyck
  2018-05-07 14:48   ` Stephen Hemminger
  0 siblings, 1 reply; 3+ messages in thread
From: Alexander Duyck @ 2018-05-06 16:16 UTC (permalink / raw)
  To: Jacob S. Moroni; +Cc: Netdev

On Sun, May 6, 2018 at 6:43 AM, Jacob S. Moroni <mail@jakemoroni.com> wrote:
> Hello,
>
> I have a stupid question regarding which variant of spin_lock to use
> throughout the network stack, and inside RX handlers specifically.
>
> It's my understanding that skbuffs are normally passed into the stack
> from soft IRQ context if the device is using NAPI, and hard IRQ
> context if it's not using NAPI (and I guess process context too if the
> driver does its own workqueue thing).
>
> So, that means that handlers registered with netdev_rx_handler_register
> may end up being called from any context.

I am pretty sure the Rx handlers are all called from softirq context.
The hard IRQ will just call netif_rx which will queue the packet up to
be handled in the soft IRQ later.

> However, the RX handler in the macvlan code calls ip_check_defrag,
> which could eventually lead to a call to ip_defrag, which ends
> up taking a regular spin_lock around the call to ip_frag_queue.
>
> Is this a risk of deadlock, and if not, why?
>
> What if you're running a system with one CPU and a packet fragment
> arrives on a NAPI interface, then, while the spin_lock is held,
> another fragment somehow arrives on another interface which does
> its processing in hard IRQ context?
>
> --
>   Jacob S. Moroni
>   mail@jakemoroni.com

Take a look at the netif_rx code and it should answer most of your
questions. Basically everything is handed off from the hard IRQ to the
soft IRQ via a backlog queue and then handled in net_rx_action.
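
Roughly, the two driver-side entry points look like this (just a
sketch; my_priv, legacy_isr, napi_isr, my_poll and hw_build_skb are
made-up names, the net core calls are real):

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical driver pieces, for illustration only. */
struct my_priv {
	struct napi_struct napi;
};
static struct sk_buff *hw_build_skb(void *dev_id);

/* Non-NAPI driver: the hard IRQ just enqueues the skb on the per-CPU
 * backlog via netif_rx(); the stack (including any rx_handler) runs
 * later from net_rx_action() in softirq context. */
static irqreturn_t legacy_isr(int irq, void *dev_id)
{
	netif_rx(hw_build_skb(dev_id));
	return IRQ_HANDLED;
}

/* NAPI driver: the hard IRQ only schedules the poll routine, and the
 * poll routine feeds skbs into the stack, also in softirq context. */
static irqreturn_t napi_isr(int irq, void *dev_id)
{
	struct my_priv *priv = dev_id;

	napi_schedule(&priv->napi);
	return IRQ_HANDLED;
}

static int my_poll(struct napi_struct *napi, int budget)
{
	int done = 0;

	/* ... for each completed descriptor:
	 *	netif_receive_skb(skb); done++;
	 * ... */
	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}

Either way, by the time an rx_handler sees the skb it is running in
softirq context.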

- Alex


* Re: Locking in network code
  2018-05-06 16:16 ` Alexander Duyck
@ 2018-05-07 14:48   ` Stephen Hemminger
  0 siblings, 0 replies; 3+ messages in thread
From: Stephen Hemminger @ 2018-05-07 14:48 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Jacob S. Moroni, Netdev

On Sun, 6 May 2018 09:16:26 -0700
Alexander Duyck <alexander.duyck@gmail.com> wrote:

> On Sun, May 6, 2018 at 6:43 AM, Jacob S. Moroni <mail@jakemoroni.com> wrote:
> > Hello,
> >
> > I have a stupid question regarding which variant of spin_lock to use
> > throughout the network stack, and inside RX handlers specifically.
> >
> > It's my understanding that skbuffs are normally passed into the stack
> > from soft IRQ context if the device is using NAPI, and hard IRQ
> > context if it's not using NAPI (and I guess process context too if the
> > driver does its own workqueue thing).
> >
> > So, that means that handlers registered with netdev_rx_handler_register
> > may end up being called from any context.  
> 
> I am pretty sure the Rx handlers are all called from softirq context.
> The hard IRQ will just call netif_rx which will queue the packet up to
> be handled in the soft IRQ later.

The only exception is the netpoll code, which runs the stack in hardirq context.

> > However, the RX handler in the macvlan code calls ip_check_defrag,
> > which could eventually lead to a call to ip_defrag, which ends
> > up taking a regular spin_lock around the call to ip_frag_queue.
> >
> > Is this a risk of deadlock, and if not, why?
> >
> > What if you're running a system with one CPU and a packet fragment
> > arrives on a NAPI interface, then, while the spin_lock is held,
> > another fragment somehow arrives on another interface which does
> > its processing in hard IRQ context?
> >
> > --
> >   Jacob S. Moroni
> >   mail@jakemoroni.com  
> 
> Take a look at the netif_rx code and it should answer most of your
> questions. Basically everything is handed off from the hard IRQ to the
> soft IRQ via a backlog queue and then handled in net_rx_action.
> 
> - Alex

