BPF Archive on lore.kernel.org
* [RFC PATCH bpf-next 0/2] xdp: add dev map multicast support
@ 2020-04-15  8:54 Hangbin Liu
From: Hangbin Liu @ 2020-04-15  8:54 UTC (permalink / raw)
  To: bpf
  Cc: netdev, Toke Høiland-Jørgensen, Jiri Benc,
	Jesper Dangaard Brouer, Eelco Chaudron, Alexei Starovoitov,
	Daniel Borkmann, Hangbin Liu

Hi all,

This is a prototype for XDP multicast support, which has been discussed
before[0]. The goal is to be able to implement an OVS-like data plane in
XDP, i.e., a software switch that can forward XDP frames to multiple
outputs.

To achieve this, an application needs to specify a group of interfaces
to forward a packet to. It is also common to want to exclude one or more
physical interfaces from the forwarding operation - e.g., to forward a
packet to all interfaces in the multicast group except the interface it
arrived on. While this could be done simply by adding more groups, this
quickly leads to a combinatorial explosion in the number of groups an
application has to maintain.

To avoid the combinatorial explosion, we propose to include the ability
to specify an "exclude group" as part of the forwarding operation. This
needs to be a group (instead of just a single port index), because a
physical interface can be part of a logical grouping, such as a bond
device.

Thus, the logical forwarding operation becomes a "set difference"
operation, i.e. "forward to all ports in group A that are not also in
group B". This series implements such an operation using device maps to
represent the groups. This means that the XDP program specifies two
device maps, one containing the list of netdevs to redirect to, and the
other containing the exclude list.
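
To make the "set difference" semantics concrete, here is a minimal
userspace model of the forwarding decision (an illustration only, not
the kernel enqueue code from this series; the function names are made
up for the sketch):

```c
#include <stddef.h>

/* Return 1 if ifindex is present in the group, mimicking a devmap
 * lookup that finds a netdev at some slot. */
static int in_group(const int *grp, size_t n, int ifindex)
{
	for (size_t i = 0; i < n; i++)
		if (grp[i] == ifindex)
			return 1;
	return 0;
}

/* "Forward to all ports in fwd[] that are not also in excl[]":
 * write the surviving ifindexes to out[] and return how many. */
static size_t set_diff_forward(const int *fwd, size_t nfwd,
			       const int *excl, size_t nexcl,
			       int *out)
{
	size_t sent = 0;

	for (size_t i = 0; i < nfwd; i++)
		if (!in_group(excl, nexcl, fwd[i]))
			out[sent++] = fwd[i];
	return sent;
}
```

For example, with a forward group of {2, 3, 4} and an exclude group of
{3} (say, the ingress interface), the frame is cloned to ifindexes 2
and 4 only.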

To be able to reuse the existing bpf_redirect_map() helper, we use a
containing map-in-map type to store the forwarding and exclude groups.
When a map-in-map type is passed to the redirect helper, it will
interpret the index as encoding the forwarding group in the upper 16
bits and the exclude group in the lower 16 bits. The enqueue logic will
unpack the two halves of the index and perform separate lookups in the
containing map. E.g., an index of 0x00010001 will look for the
forwarding group at map index 0x1 (the upper 16 bits) and the exclude
group at map index 0x1 (the lower 16 bits); the application is expected
to populate the containing map accordingly.
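
The packing scheme can be sketched as follows (illustrative helper
names, not identifiers from the patch set; the packed value is what the
XDP program would pass as the index argument to bpf_redirect_map()):

```c
#include <stdint.h>

/* Pack a forwarding-group index (upper 16 bits) and an exclude-group
 * index (lower 16 bits) into the single 32-bit index argument. */
static inline uint32_t pack_groups(uint16_t fwd_grp, uint16_t excl_grp)
{
	return ((uint32_t)fwd_grp << 16) | excl_grp;
}

/* The enqueue logic unpacks the two halves again to do separate
 * lookups in the containing map. */
static inline uint16_t unpack_fwd(uint32_t idx)
{
	return idx >> 16;
}

static inline uint16_t unpack_excl(uint32_t idx)
{
	return idx & 0xffff;
}
```

So pack_groups(1, 1) yields 0x00010001, which the enqueue path splits
back into forwarding group 1 and exclude group 1.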

For this RFC series we are primarily looking for feedback on the concept
and API: the example in patch 2 is functional, but not a lot of effort
has been made on performance optimisation.

Last but not least, thanks a lot to Jiri, Eelco, Toke and Jesper for
suggestions and help on implementation.

[0] https://xdp-project.net/#Handling-multicast

Hangbin Liu (2):
  xdp: add dev map multicast support
  sample/bpf: add xdp_redirect_map_multicast test

 include/linux/bpf.h                           |  29 ++
 include/net/xdp.h                             |   1 +
 kernel/bpf/arraymap.c                         |   2 +-
 kernel/bpf/devmap.c                           | 118 +++++++
 kernel/bpf/hashtab.c                          |   2 +-
 kernel/bpf/verifier.c                         |  15 +-
 net/core/filter.c                             |  69 +++-
 net/core/xdp.c                                |  26 ++
 samples/bpf/Makefile                          |   3 +
 samples/bpf/xdp_redirect_map_multicast.sh     | 142 ++++++++
 samples/bpf/xdp_redirect_map_multicast_kern.c | 147 +++++++++
 samples/bpf/xdp_redirect_map_multicast_user.c | 306 ++++++++++++++++++
 12 files changed, 854 insertions(+), 6 deletions(-)
 create mode 100755 samples/bpf/xdp_redirect_map_multicast.sh
 create mode 100644 samples/bpf/xdp_redirect_map_multicast_kern.c
 create mode 100644 samples/bpf/xdp_redirect_map_multicast_user.c

