* error loading xdp program on virtio nic
@ 2019-11-20 17:52 David Ahern
  2019-11-21  3:26 ` Jason Wang
  0 siblings, 1 reply; 17+ messages in thread
From: David Ahern @ 2019-11-20 17:52 UTC (permalink / raw)
  To: xdp-newbies, Jason Wang

Hi:

Trying to load an XDP program on a virtio based nic is failing with:

virtio_net: XDP expects header/data in single page, any_header_sg required

I have not encountered this error before and not able to find what is
missing. Any tips?

Thanks,
David

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: error loading xdp program on virtio nic
  2019-11-20 17:52 error loading xdp program on virtio nic David Ahern
@ 2019-11-21  3:26 ` Jason Wang
  2019-11-21  3:35   ` David Ahern
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2019-11-21  3:26 UTC (permalink / raw)
  To: David Ahern, xdp-newbies


On 2019/11/21 1:52 AM, David Ahern wrote:
> Hi:
>
> Trying to load an XDP program on a virtio based nic is failing with:
>
> virtio_net: XDP expects header/data in single page, any_header_sg required
>
> I have not encountered this error before and not able to find what is
> missing. Any tips?


Hi David:

What qemu + guest kernel version did you use? And could you share your
qemu cli?

Old qemu requires the vnet header to be placed into a separate sg, which
breaks the assumption of XDP. Recent qemu doesn't have such a limitation
(any_header_sg feature).

Thanks


>
> Thanks,
> David
>


* Re: error loading xdp program on virtio nic
  2019-11-21  3:26 ` Jason Wang
@ 2019-11-21  3:35   ` David Ahern
  2019-11-21  3:54     ` Jason Wang
  0 siblings, 1 reply; 17+ messages in thread
From: David Ahern @ 2019-11-21  3:35 UTC (permalink / raw)
  To: Jason Wang, xdp-newbies

On 11/20/19 8:26 PM, Jason Wang wrote:
> 
> On 2019/11/21 1:52 AM, David Ahern wrote:
>> Hi:
>>
>> Trying to load an XDP program on a virtio based nic is failing with:
>>
>> virtio_net: XDP expects header/data in single page, any_header_sg
>> required
>>
>> I have not encountered this error before and not able to find what is
>> missing. Any tips?
> 
> 
> Hi David:
> 
> What qemu + guest kernel version did you use? And could you share your
> qemu cli?
> 
> Old qemu requires the vnet header to be placed into a separate sg, which
> breaks the assumption of XDP. Recent qemu doesn't have such a limitation
> (any_header_sg feature).
> 
>

Hi Jason,

When I run qemu via my older vm-tools scripts, XDP works fine. This is
the first time I am trying to use XDP with guests started by libvirt.

We isolated it to a libvirt xml file using an old machine type
(pc-i440fx-1.5) - basically any machine type with VIRTIO_F_VERSION_1 not
set. Using a newer one moved the problem forward.

The current error message is:
  virtio_net: Too few free TX rings available
again, looking for some libvirt setting for the vm create.


* Re: error loading xdp program on virtio nic
  2019-11-21  3:35   ` David Ahern
@ 2019-11-21  3:54     ` Jason Wang
  2019-11-21  4:05       ` David Ahern
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2019-11-21  3:54 UTC (permalink / raw)
  To: David Ahern, xdp-newbies


On 2019/11/21 11:35 AM, David Ahern wrote:
> On 11/20/19 8:26 PM, Jason Wang wrote:
> On 2019/11/21 1:52 AM, David Ahern wrote:
>>> Hi:
>>>
>>> Trying to load an XDP program on a virtio based nic is failing with:
>>>
>>> virtio_net: XDP expects header/data in single page, any_header_sg
>>> required
>>>
>>> I have not encountered this error before and not able to find what is
>>> missing. Any tips?
>>
>> Hi David:
>>
>> What qemu + guest kernel version did you use? And could you share your
>> qemu cli?
>>
>> Old qemu requires the vnet header to be placed into a separate sg, which
>> breaks the assumption of XDP. Recent qemu doesn't have such a limitation
>> (any_header_sg feature).
>>
>>
> Hi Jason,
>
> When I run qemu via my older vm-tools scripts XDP works fine. This is
> the first time I am trying to use XDP with guests started by libvirt.
>
> We isolated it to a libvirt xml file using an old machine type
> (pc-i440fx-1.5) - basically any machine with VIRTIO_F_VERSION_1 not set.
> Using a newer one moved the problem forward.


Yes, VERSION_1 implies ANY_HEADER_SG. If you're using old qemu, make
sure any_header_sg is enabled on the qemu cli.


>
> The current error message is:
>    virtio_net: Too few free TX rings available
> again, looking for some libvirt setting for the vm create.
>

Make sure you have sufficient queues, e.g. if you have N vcpus with
multiqueue enabled, you need 2*N queues for virtio-net.
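For reference, the arithmetic behind that rule can be sketched like this
(a Python model only, not the driver code; the curr_qp/xdp_qp names
mirror the check in drivers/net/virtio_net.c):

```python
# Toy model of virtio_net's XDP attach check: XDP_TX wants one extra,
# dedicated TX queue per vcpu, on top of the queue pairs already in use.
def xdp_queues_ok(num_vcpus, max_queue_pairs):
    curr_qp = num_vcpus   # one queue pair per vcpu currently in use
    xdp_qp = num_vcpus    # one extra TX queue per vcpu for XDP_TX
    return curr_qp + xdp_qp <= max_queue_pairs

# With 4 vcpus, configuring 8 queues (2*N) is enough:
assert xdp_queues_ok(4, 8)
# 4 queues is not, giving "Too few free TX rings available":
assert not xdp_queues_ok(4, 4)
```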

Thanks


* Re: error loading xdp program on virtio nic
  2019-11-21  3:54     ` Jason Wang
@ 2019-11-21  4:05       ` David Ahern
  2019-11-21  6:26         ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 17+ messages in thread
From: David Ahern @ 2019-11-21  4:05 UTC (permalink / raw)
  To: Jason Wang, xdp-newbies

On 11/20/19 8:54 PM, Jason Wang wrote:
>>
>> The current error message is:
>>    virtio_net: Too few free TX rings available
>> again, looking for some libvirt setting for the vm create.
>>
> 
> Make sure you have sufficient queues, e.g. if you have N vcpus with
> multiqueue enabled, you need 2*N queues for virtio-net.

yep, that did the trick and now I can attach xdp programs. Thanks for
the help.


* Re: error loading xdp program on virtio nic
  2019-11-21  4:05       ` David Ahern
@ 2019-11-21  6:26         ` Jesper Dangaard Brouer
  2019-11-21  7:02           ` Jason Wang
  0 siblings, 1 reply; 17+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-21  6:26 UTC (permalink / raw)
  To: David Ahern; +Cc: brouer, Jason Wang, xdp-newbies

On Wed, 20 Nov 2019 21:05:48 -0700
David Ahern <dsahern@gmail.com> wrote:

> On 11/20/19 8:54 PM, Jason Wang wrote:
> >>
> >> The current error message is:
> >>    virtio_net: Too few free TX rings available
> >> again, looking for some libvirt setting for the vm create.
> >>  
> > 
> > Make sure you have sufficient queues, e.g. if you have N vcpus with
> > multiqueue enabled, you need 2*N queues for virtio-net.
> 
> yep, that did the trick and now I can attach xdp programs. Thanks for
> the help.

How did you configure the number of queues in libvirt? 

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


* Re: error loading xdp program on virtio nic
  2019-11-21  6:26         ` Jesper Dangaard Brouer
@ 2019-11-21  7:02           ` Jason Wang
  2019-11-21 15:49             ` David Ahern
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2019-11-21  7:02 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, David Ahern; +Cc: xdp-newbies


On 2019/11/21 下午2:26, Jesper Dangaard Brouer wrote:
> On Wed, 20 Nov 2019 21:05:48 -0700
> David Ahern <dsahern@gmail.com> wrote:
>
>> On 11/20/19 8:54 PM, Jason Wang wrote:
>>>> The current error message is:
>>>>     virtio_net: Too few free TX rings available
>>>> again, looking for some libvirt setting for the vm create.
>>>>   
>>> Make sure you have sufficient queues, e.g. if you have N vcpus with
>>> multiqueue enabled, you need 2*N queues for virtio-net.
>> yep, that did the trick and now I can attach xdp programs. Thanks for
>> the help.
> How did you configure the number of queues in libvirt?
>

By specifying the queues property like:

<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on'
            event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
            ufo='off' mrg_rxbuf='off'/>
      <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
    </driver>
  </interface>
</devices>
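To confirm the setting actually reached the guest, one can count the
per-queue entries under sysfs (a sketch; `ethtool -l <dev>` should
report matching channel counts):

```python
import os

def count_queues(dev, sysfs="/sys/class/net"):
    """Count the rx-*/tx-* queue entries the guest sees for a device."""
    names = os.listdir(os.path.join(sysfs, dev, "queues"))
    rx = sum(1 for n in names if n.startswith("rx-"))
    tx = sum(1 for n in names if n.startswith("tx-"))
    return rx, tx

# With queues='8' in the XML above, count_queues("eth0") in the guest
# should report 8 RX and 8 TX queues.
```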


More information is here: https://libvirt.org/formatdomain.html

Thanks


* Re: error loading xdp program on virtio nic
  2019-11-21  7:02           ` Jason Wang
@ 2019-11-21 15:49             ` David Ahern
  2019-11-22  6:09               ` Jason Wang
  0 siblings, 1 reply; 17+ messages in thread
From: David Ahern @ 2019-11-21 15:49 UTC (permalink / raw)
  To: Jason Wang, Jesper Dangaard Brouer; +Cc: xdp-newbies

On 11/21/19 12:02 AM, Jason Wang wrote:
> By specifying queues property like:
> 
> <devices>
> 
>   <interface type='network'>
>     <source network='default'/>
>     <target dev='vnet1'/>
>     <model type='virtio'/>
>     <driver name='vhost' txmode='iothread' ioeventfd='on'
> event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256'>

I cannot check this because the 3.0 version of libvirt does not support
tx_queue_size. It is the multiqueue setting (queues='5' in the example)
that needs to be set to 2x the number of vcpus for XDP, correct?

>       <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
> ufo='off' mrg_rxbuf='off'/>
>       <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>     </driver>
>     </interface>
> </devices>
> 
> 

The virtio_net driver suggests the queues are needed for XDP_TX:

        /* XDP requires extra queues for XDP_TX */
        if (curr_qp + xdp_qp > vi->max_queue_pairs) {
                NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
                netdev_warn(dev, "request %i queues but max is %i\n",
                            curr_qp + xdp_qp, vi->max_queue_pairs);
                return -ENOMEM;
        }

Doubling the number of queues for each tap device adds overhead to the
hypervisor if you only want to allow XDP_DROP or XDP_REDIRECT. Am I
understanding that correctly?


* Re: error loading xdp program on virtio nic
  2019-11-21 15:49             ` David Ahern
@ 2019-11-22  6:09               ` Jason Wang
  2019-11-22 15:43                 ` David Ahern
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Wang @ 2019-11-22  6:09 UTC (permalink / raw)
  To: David Ahern, Jesper Dangaard Brouer; +Cc: xdp-newbies


On 2019/11/21 11:49 PM, David Ahern wrote:
> On 11/21/19 12:02 AM, Jason Wang wrote:
>> By specifying queues property like:
>>
>> <devices>
>>
>>    <interface type='network'>
>>      <source network='default'/>
>>      <target dev='vnet1'/>
>>      <model type='virtio'/>
>>      <driver name='vhost' txmode='iothread' ioeventfd='on'
>> event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256'>
> I can not check this because the 3.0 version of libvirt does not support
> tx_queue_size. It is multiqueue (queues=5 in the example) setting that
> needs to be set to 2*Nvcpus for XDP, correct?


Yes.


>
>>        <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
>> ufo='off' mrg_rxbuf='off'/>
>>        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>      </driver>
>>      </interface>
>> </devices>
>>
>>
> The virtio_net driver suggests the queues are needed for XDP_TX:
>
>         /* XDP requires extra queues for XDP_TX */
>          if (curr_qp + xdp_qp > vi->max_queue_pairs) {
>                  NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings
> available");
>                  netdev_warn(dev, "request %i queues but max is %i\n",
>                              curr_qp + xdp_qp, vi->max_queue_pairs);
>                  return -ENOMEM;
>          }
>
> Doubling the number of queues for each tap device adds overhead to the
> hypervisor if you only want to allow XDP_DROP or XDP_DIRECT. Am I
> understanding that correctly?


Yes, but it's almost impossible to know whether or not XDP_TX will be
used by the program. If we don't use a per-CPU TX queue, it must be
serialized through locks; not sure it's worth trying that (not by
default, of course).

Thanks

>


* Re: error loading xdp program on virtio nic
  2019-11-22  6:09               ` Jason Wang
@ 2019-11-22 15:43                 ` David Ahern
  2019-11-22 16:50                   ` Jakub Kicinski
                                     ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: David Ahern @ 2019-11-22 15:43 UTC (permalink / raw)
  To: Jason Wang, Jesper Dangaard Brouer; +Cc: xdp-newbies

On 11/21/19 11:09 PM, Jason Wang wrote:
>> Doubling the number of queues for each tap device adds overhead to the
>> hypervisor if you only want to allow XDP_DROP or XDP_REDIRECT. Am I
>> understanding that correctly?
> 
> 
> Yes, but it's almost impossible to know whether or not XDP_TX will be
> used by the program. If we don't use a per-CPU TX queue, it must be
> serialized through locks; not sure it's worth trying that (not by
> default, of course).
> 

This restriction is going to prevent use of XDP in VMs in general cloud
hosting environments. 2x vhost threads for vcpus is a non-starter.

If one XDP feature has high resource needs, then we need to subdivide
the capabilities to let some work and others fail. For example, a flag
can be added to xdp_buff / xdp_md that indicates supported XDP features.
If there are insufficient resources for XDP_TX, do not show support for
it. If a program returns XDP_TX anyways, packets will be dropped.


* Re: error loading xdp program on virtio nic
  2019-11-22 15:43                 ` David Ahern
@ 2019-11-22 16:50                   ` Jakub Kicinski
  2019-11-22 16:57                   ` Jesper Dangaard Brouer
  2019-11-25  2:48                   ` Jason Wang
  2 siblings, 0 replies; 17+ messages in thread
From: Jakub Kicinski @ 2019-11-22 16:50 UTC (permalink / raw)
  To: David Ahern; +Cc: Jason Wang, Jesper Dangaard Brouer, xdp-newbies

On Fri, 22 Nov 2019 08:43:50 -0700, David Ahern wrote:
> On 11/21/19 11:09 PM, Jason Wang wrote:
> >> Doubling the number of queues for each tap device adds overhead to the
> >> hypervisor if you only want to allow XDP_DROP or XDP_REDIRECT. Am I
> >> understanding that correctly?  
> > 
> > Yes, but it's almost impossible to know whether or not XDP_TX will be
> > used by the program. If we don't use a per-CPU TX queue, it must be
> > serialized through locks; not sure it's worth trying that (not by
> > default, of course). 
> 
> This restriction is going to prevent use of XDP in VMs in general cloud
> hosting environments. 2x vhost threads for vcpus is a non-starter.
> 
> If one XDP feature has high resource needs, then we need to subdivide
> the capabilities to let some work and others fail. For example, a flag
> can be added to xdp_buff / xdp_md that indicates supported XDP features.
> If there are insufficient resources for XDP_TX, do not show support for
> it. If a program returns XDP_TX anyways, packets will be dropped.

This is covered by the always planned but never completed work on
richer queue configuration ABI by yours truly, Magnus and others.

Once we have better control over queues we can do some "manual mode"
config where queues are not automatically provisioned, are provisioned
even when a program is not bound, or are provisioned on a subset of cores
(if all NICs' IRQs/NAPIs are pinned to those cores, all should be good).

🤷‍♂️


* Re: error loading xdp program on virtio nic
  2019-11-22 15:43                 ` David Ahern
  2019-11-22 16:50                   ` Jakub Kicinski
@ 2019-11-22 16:57                   ` Jesper Dangaard Brouer
  2019-11-22 17:42                     ` David Ahern
  2019-11-25  2:42                     ` Jason Wang
  2019-11-25  2:48                   ` Jason Wang
  2 siblings, 2 replies; 17+ messages in thread
From: Jesper Dangaard Brouer @ 2019-11-22 16:57 UTC (permalink / raw)
  To: David Ahern; +Cc: Jason Wang, xdp-newbies, brouer, netdev

On Fri, 22 Nov 2019 08:43:50 -0700
David Ahern <dsahern@gmail.com> wrote:

> On 11/21/19 11:09 PM, Jason Wang wrote:
> >> Doubling the number of queues for each tap device adds overhead to the
> >> hypervisor if you only want to allow XDP_DROP or XDP_REDIRECT. Am I
> >> understanding that correctly?  
> > 
> > 
> > Yes, but it's almost impossible to know whether or not XDP_TX will be
> > used by the program. If we don't use a per-CPU TX queue, it must be
> > serialized through locks; not sure it's worth trying that (not by
> > default, of course).
> >   
> 
> This restriction is going to prevent use of XDP in VMs in general cloud
> hosting environments. 2x vhost threads for vcpus is a non-starter.
> 
> If one XDP feature has high resource needs, then we need to subdivide
> the capabilities to let some work and others fail. For example, a flag
> can be added to xdp_buff / xdp_md that indicates supported XDP features.
> If there are insufficient resources for XDP_TX, do not show support for
> it. If a program returns XDP_TX anyways, packets will be dropped.
> 

This sounds like a concrete use-case and a solid argument for why we
need XDP feature detection and checks. (The last part of the LPC
talk[1] was about XDP features.)

An interesting perspective you bring up is that XDP features are not
static per device driver.  They actually need to be dynamic, as your
XDP_TX feature request depends on the queue resources available.

Implementation wise, I would not add flags to xdp_buff / xdp_md.
Instead I propose in [1] slide 46 that the verifier should detect the
XDP features used by a BPF-prog.  If your XDP prog doesn't use e.g.
XDP_TX, then you should be allowed to run it on a virtio_net device
with fewer queues configured, right?


[1] http://people.netfilter.org/hawk/presentations/LinuxPlumbers2019/xdp-distro-view.pdf
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



* Re: error loading xdp program on virtio nic
  2019-11-22 16:57                   ` Jesper Dangaard Brouer
@ 2019-11-22 17:42                     ` David Ahern
  2019-11-23 13:27                         ` Toke Høiland-Jørgensen
  2019-11-25  2:42                     ` Jason Wang
  1 sibling, 1 reply; 17+ messages in thread
From: David Ahern @ 2019-11-22 17:42 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: Jason Wang, xdp-newbies, netdev

On 11/22/19 9:57 AM, Jesper Dangaard Brouer wrote:
> Implementation wise, I would not add flags to xdp_buff / xdp_md.
> Instead I propose in[1] slide 46, that the verifier should detect the
> XDP features used by a BPF-prog.  If your XDP prog doesn't use e.g.
> XDP_TX, then you should be allowed to run it on a virtio_net device
> with fewer queues configured, right?

Thanks for the reference and yes, that is the goal: allow XDP in as
many use cases as possible. E.g., why limit XDP_DROP, which requires no
resources, because XDP_TX does not work?

I agree a flag in the API is an ugly way to allow it. For the verifier
approach, you mean add an internal flag (e.g., a bitmask of return
codes) that the program uses and that the NIC driver can check at
attach time?

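As a toy illustration of that bitmask idea (all names here are
hypothetical; no such kernel API exists yet):

```python
# Hypothetical XDP feature bits, one per return code the verifier
# detects the program can return.
XDP_DROP_BIT     = 1 << 0
XDP_PASS_BIT     = 1 << 1
XDP_TX_BIT       = 1 << 2
XDP_REDIRECT_BIT = 1 << 3

def attach_allowed(prog_uses, driver_supports):
    """Attach succeeds only if every action the program may return is
    supported by the driver in its current queue configuration."""
    return prog_uses & ~driver_supports == 0

# A drop-only program loads even when the driver cannot do XDP_TX:
assert attach_allowed(XDP_DROP_BIT | XDP_PASS_BIT,
                      XDP_DROP_BIT | XDP_PASS_BIT)
# A program that returns XDP_TX is rejected on the same device:
assert not attach_allowed(XDP_DROP_BIT | XDP_TX_BIT,
                          XDP_DROP_BIT | XDP_PASS_BIT)
```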

* Re: error loading xdp program on virtio nic
  2019-11-22 17:42                     ` David Ahern
@ 2019-11-23 13:27                         ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 17+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-11-23 13:27 UTC (permalink / raw)
  To: David Ahern, Jesper Dangaard Brouer; +Cc: Jason Wang, xdp-newbies, netdev

David Ahern <dsahern@gmail.com> writes:

> On 11/22/19 9:57 AM, Jesper Dangaard Brouer wrote:
>> Implementation wise, I would not add flags to xdp_buff / xdp_md.
>> Instead I propose in[1] slide 46, that the verifier should detect the
>> XDP features used by a BPF-prog.  If your XDP prog doesn't use e.g.
>> XDP_TX, then you should be allowed to run it on a virtio_net device
>> with fewer queues configured, right?
>
> Thanks for the reference and yes, that is the goal: allow XDP in as
> many use cases as possible. E.g., why limit XDP_DROP, which requires no
> resources, because XDP_TX does not work?
>
> I agree a flag in the api is an ugly way to allow it. For the verifier
> approach, you mean add an internal flag (e.g., bitmask of return codes)
> that the program uses and the NIC driver can check at attach time?

Yes, that's more or less what we've discussed. With the actual set of
flags, and the API for the driver (new ndo?) TBD. Suggestions welcome; I
anticipate this is something Jesper and I need to circle back to soonish
in any case (unless someone beats us to it!).

-Toke




* Re: error loading xdp program on virtio nic
  2019-11-22 16:57                   ` Jesper Dangaard Brouer
  2019-11-22 17:42                     ` David Ahern
@ 2019-11-25  2:42                     ` Jason Wang
  1 sibling, 0 replies; 17+ messages in thread
From: Jason Wang @ 2019-11-25  2:42 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, David Ahern; +Cc: xdp-newbies, netdev


On 2019/11/23 12:57 AM, Jesper Dangaard Brouer wrote:
> On Fri, 22 Nov 2019 08:43:50 -0700
> David Ahern <dsahern@gmail.com> wrote:
>
>> On 11/21/19 11:09 PM, Jason Wang wrote:
>>>> Doubling the number of queues for each tap device adds overhead to the
>>>> hypervisor if you only want to allow XDP_DROP or XDP_REDIRECT. Am I
>>>> understanding that correctly?
>>>
>>> Yes, but it's almost impossible to know whether or not XDP_TX will be
>>> used by the program. If we don't use a per-CPU TX queue, it must be
>>> serialized through locks; not sure it's worth trying that (not by
>>> default, of course).
>>>    
>> This restriction is going to prevent use of XDP in VMs in general cloud
>> hosting environments. 2x vhost threads for vcpus is a non-starter.
>>
>> If one XDP feature has high resource needs, then we need to subdivide
>> the capabilities to let some work and others fail. For example, a flag
>> can be added to xdp_buff / xdp_md that indicates supported XDP features.
>> If there are insufficient resources for XDP_TX, do not show support for
>> it. If a program returns XDP_TX anyways, packets will be dropped.
>>
> This sounds like concrete use-case and solid argument why we need XDP
> feature detection and checks. (Last part of LPC talk[1] were about
> XDP features).
>
> An interesting perspective you bring up, is that XDP features are not
> static per device driver.  It actually needs to be dynamic, as your
> XDP_TX feature request depend on the queue resources available.
>
> Implementation wise, I would not add flags to xdp_buff / xdp_md.
> Instead I propose in[1] slide 46, that the verifier should detect the
> XDP features used by a BPF-prog.  If your XDP prog doesn't use e.g.
> XDP_TX, then you should be allowed to run it on a virtio_net device
> with fewer queues configured, right?


Yes, I think so. But I remember we used to have something like
header_adjust in the past, and it was finally removed ...

Thanks


>
>
> [1] http://people.netfilter.org/hawk/presentations/LinuxPlumbers2019/xdp-distro-view.pdf



* Re: error loading xdp program on virtio nic
  2019-11-22 15:43                 ` David Ahern
  2019-11-22 16:50                   ` Jakub Kicinski
  2019-11-22 16:57                   ` Jesper Dangaard Brouer
@ 2019-11-25  2:48                   ` Jason Wang
  2 siblings, 0 replies; 17+ messages in thread
From: Jason Wang @ 2019-11-25  2:48 UTC (permalink / raw)
  To: David Ahern, Jesper Dangaard Brouer; +Cc: xdp-newbies


On 2019/11/22 11:43 PM, David Ahern wrote:
> On 11/21/19 11:09 PM, Jason Wang wrote:
>>> Doubling the number of queues for each tap device adds overhead to the
>>> hypervisor if you only want to allow XDP_DROP or XDP_REDIRECT. Am I
>>> understanding that correctly?
>>
>> Yes, but it's almost impossible to know whether or not XDP_TX will be
>> used by the program. If we don't use a per-CPU TX queue, it must be
>> serialized through locks; not sure it's worth trying that (not by
>> default, of course).
>>
> This restriction is going to prevent use of XDP in VMs in general cloud
> hosting environments. 2x vhost threads for vcpus is a non-starter.
>
> If one XDP feature has high resource needs, then we need to subdivide
> the capabilities to let some work and others fail. For example, a flag
> can be added to xdp_buff / xdp_md that indicates supported XDP features.
> If there are insufficient resources for XDP_TX, do not show support for
> it. If a program returns XDP_TX anyways, packets will be dropped.
>

Or we can just:
- If queues are sufficient, use a per-CPU TX queue
- If not, serialize through spinlocks (like what TAP does right now,
since there's no easy way to add more queues on the fly)
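That fallback can be sketched as follows (a toy Python model with
hypothetical names, not the tap/virtio-net code):

```python
import threading

class TxQueue:
    """Toy stand-in for a virtio-net TX queue."""
    def __init__(self, qid):
        self.qid = qid
        self.lock = threading.Lock()  # only contended in the shared case

def pick_xdp_txq(queues, cpu, num_cpus):
    """Return (queue, needs_lock): a dedicated per-CPU queue when there
    are enough queues, otherwise a shared queue that must be locked."""
    if len(queues) >= num_cpus:
        return queues[cpu], False          # exclusive owner, lock-free
    return queues[cpu % len(queues)], True  # shared, take queue.lock

# 4 queues for 4 CPUs: every CPU owns a queue, no locking needed.
qs = [TxQueue(i) for i in range(4)]
assert pick_xdp_txq(qs, cpu=2, num_cpus=4) == (qs[2], False)

# Only 2 queues for 4 CPUs: CPUs share, serialized via the queue lock.
qs = [TxQueue(i) for i in range(2)]
q, need_lock = pick_xdp_txq(qs, cpu=3, num_cpus=4)
assert need_lock and q is qs[1]
```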

Thanks


end of thread, other threads:[~2019-11-25  2:48 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-20 17:52 error loading xdp program on virtio nic David Ahern
2019-11-21  3:26 ` Jason Wang
2019-11-21  3:35   ` David Ahern
2019-11-21  3:54     ` Jason Wang
2019-11-21  4:05       ` David Ahern
2019-11-21  6:26         ` Jesper Dangaard Brouer
2019-11-21  7:02           ` Jason Wang
2019-11-21 15:49             ` David Ahern
2019-11-22  6:09               ` Jason Wang
2019-11-22 15:43                 ` David Ahern
2019-11-22 16:50                   ` Jakub Kicinski
2019-11-22 16:57                   ` Jesper Dangaard Brouer
2019-11-22 17:42                     ` David Ahern
2019-11-23 13:27                       ` Toke Høiland-Jørgensen
2019-11-23 13:27                         ` Toke Høiland-Jørgensen
2019-11-25  2:42                     ` Jason Wang
2019-11-25  2:48                   ` Jason Wang

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.