* [PATCH net-next 00/12] vxlan metadata device vnifiltering support
@ 2022-02-20 14:03 Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 01/12] vxlan: move to its own directory Roopa Prabhu
` (11 more replies)
0 siblings, 12 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:03 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
This series adds vni filtering support to the vxlan collect metadata device.
Motivation:
Today you can only use a single vxlan collect metadata device for a
given vxlan udp port in the system. The vxlan collect metadata device
terminates all received vxlan packets. As shown in the diagram below,
there are use cases where you need multiple such vxlan devices in
independent bridge domains. Each vxlan device must terminate only the
vnis it is configured for.
Example use case: a service provider network typically supports
multiple bridge domains with overlapping vlans, one bridge domain per
customer. Vlans in each bridge domain are mapped to globally unique
vxlan vni ranges assigned to each customer.
This series adds vni filtering support to collect metadata devices so
that each device terminates only its configured vnis. This is similar
to vlan filtering in the bridge driver. The vni filtering capability is
enabled by a new flag on the collect metadata device.
In the diagram below:
- customer1 is mapped to br1 bridge domain
- customer2 is mapped to br2 bridge domain
- customer1 vlan 10-11 is mapped to vni 1001-1002
- customer2 vlan 10-11 is mapped to vni 2001-2002
- br1 and br2 are vlan filtering bridges
- vxlan1 and vxlan2 are collect metadata devices with
vnifiltering enabled
┌──────────────────────────────────────────────────────────────────┐
│ switch │
│ │
│ ┌───────────┐ ┌───────────┐ │
│ │ │ │ │ │
│ │ br1 │ │ br2 │ │
│ └┬─────────┬┘ └──┬───────┬┘ │
│ vlans│ │ vlans │ │ │
│ 10,11│ │ 10,11│ │ │
│ │ vlanvnimap: │ vlanvnimap: │
│ │ 10-1001,11-1002 │ 10-2001,11-2002 │
│ │ │ │ │ │
│ ┌──────┴┐ ┌──┴─────────┐ ┌───┴────┐ │ │
│ │ swp1 │ │vxlan1 │ │ swp2 │ ┌┴─────────────┐ │
│ │ │ │ vnifilter:│ │ │ │vxlan2 │ │
│ └───┬───┘ │ 1001,1002│ └───┬────┘ │ vnifilter: │ │
│ │ └────────────┘ │ │ 2001,2002 │ │
│ │ │ └──────────────┘ │
│ │ │ │
└───────┼──────────────────────────────────┼───────────────────────┘
│ │
│ │
┌─────┴───────┐ │
│ customer1 │ ┌─────┴──────┐
│ host/VM │ │customer2 │
└─────────────┘ │ host/VM │
└────────────┘
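With the matching iproute2 changes, the customer1 side of the diagram could be configured roughly as follows (a sketch: the "vnifilter" keyword and the "bridge vni" command assume the iproute2 counterpart of this series):

```shell
# customer1: vlan filtering bridge + vni-filtering collect metadata vxlan
ip link add br1 type bridge vlan_filtering 1
ip link add vxlan1 type vxlan dstport 4789 external vnifilter
ip link set vxlan1 master br1
ip link set br1 up
ip link set vxlan1 up

# terminate only customer1's vnis on vxlan1
bridge vni add dev vxlan1 vni 1001-1002

# vlanvnimap in the diagram: map br1 vlans to vnis on the bridge port
bridge vlan add dev vxlan1 vid 10
bridge vlan add dev vxlan1 vid 10 tunnel_info id 1001
bridge vlan add dev vxlan1 vid 11
bridge vlan add dev vxlan1 vid 11 tunnel_info id 1002
```

The customer2 setup (br2/vxlan2, vnis 2001-2002) is symmetric.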
Benjamin Poirier (1):
selinux: add support for RTM_NEWTUNNEL, RTM_DELTUNNEL, and
RTM_GETTUNNEL
Nikolay Aleksandrov (2):
drivers: vxlan: vnifilter: per vni stats
drivers: vxlan: vnifilter: add support for stats dumping
Roopa Prabhu (9):
vxlan: move to its own directory
vxlan_core: move common declarations to private header file
vxlan_core: move some fdb helpers to non-static
vxlan_core: make multicast helper take rip and ifindex explicitly
vxlan_core: add helper vxlan_vni_in_use
rtnetlink: add new rtm tunnel api for tunnel id filtering
vxlan_multicast: Move multicast helpers to a separate file
vxlan: vni filtering support on collect metadata device
selftests: add new tests for vxlan vnifiltering
drivers/net/Makefile | 2 +-
drivers/net/vxlan/Makefile | 7 +
drivers/net/{vxlan.c => vxlan/vxlan_core.c} | 420 +++-----
drivers/net/vxlan/vxlan_multicast.c | 274 +++++
drivers/net/vxlan/vxlan_private.h | 178 ++++
drivers/net/vxlan/vxlan_vnifilter.c | 958 ++++++++++++++++++
include/net/vxlan.h | 54 +-
include/uapi/linux/if_link.h | 54 +
include/uapi/linux/rtnetlink.h | 9 +
security/selinux/nlmsgtab.c | 5 +-
.../selftests/net/test_vxlan_vnifiltering.sh | 581 +++++++++++
11 files changed, 2275 insertions(+), 267 deletions(-)
create mode 100644 drivers/net/vxlan/Makefile
rename drivers/net/{vxlan.c => vxlan/vxlan_core.c} (94%)
create mode 100644 drivers/net/vxlan/vxlan_multicast.c
create mode 100644 drivers/net/vxlan/vxlan_private.h
create mode 100644 drivers/net/vxlan/vxlan_vnifilter.c
create mode 100755 tools/testing/selftests/net/test_vxlan_vnifiltering.sh
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH net-next 01/12] vxlan: move to its own directory
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
@ 2022-02-20 14:03 ` Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 02/12] vxlan_core: move common declarations to private header file Roopa Prabhu
` (10 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:03 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
vxlan.c has grown too long. This patch moves
it into its own directory. Subsequent patches add new
functionality in new files.
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
drivers/net/Makefile | 2 +-
drivers/net/vxlan/Makefile | 7 +++++++
drivers/net/{vxlan.c => vxlan/vxlan_core.c} | 0
3 files changed, 8 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/vxlan/Makefile
rename drivers/net/{vxlan.c => vxlan/vxlan_core.c} (100%)
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 50b23e71065f..3f1192d3c52d 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -31,7 +31,7 @@ obj-$(CONFIG_TUN) += tun.o
obj-$(CONFIG_TAP) += tap.o
obj-$(CONFIG_VETH) += veth.o
obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
-obj-$(CONFIG_VXLAN) += vxlan.o
+obj-$(CONFIG_VXLAN) += vxlan/
obj-$(CONFIG_GENEVE) += geneve.o
obj-$(CONFIG_BAREUDP) += bareudp.o
obj-$(CONFIG_GTP) += gtp.o
diff --git a/drivers/net/vxlan/Makefile b/drivers/net/vxlan/Makefile
new file mode 100644
index 000000000000..567266133593
--- /dev/null
+++ b/drivers/net/vxlan/Makefile
@@ -0,0 +1,7 @@
+#
+# Makefile for the vxlan driver
+#
+
+obj-$(CONFIG_VXLAN) += vxlan.o
+
+vxlan-objs := vxlan_core.o
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan/vxlan_core.c
similarity index 100%
rename from drivers/net/vxlan.c
rename to drivers/net/vxlan/vxlan_core.c
--
2.25.1
* [PATCH net-next 02/12] vxlan_core: move common declarations to private header file
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 01/12] vxlan: move to its own directory Roopa Prabhu
@ 2022-02-20 14:03 ` Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 03/12] vxlan_core: move some fdb helpers to non-static Roopa Prabhu
` (9 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:03 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
This patch moves common structures and global declarations
to a shared private header file, vxlan_private.h. Subsequent
patches use this header file as a common header for
additional shared declarations.
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
drivers/net/vxlan/vxlan_core.c | 83 ++-------------------------
drivers/net/vxlan/vxlan_private.h | 95 +++++++++++++++++++++++++++++++
2 files changed, 99 insertions(+), 79 deletions(-)
create mode 100644 drivers/net/vxlan/vxlan_private.h
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index d0dc90d3dac2..5856ef92b9c9 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -34,10 +34,10 @@
#include <net/ip6_checksum.h>
#endif
+#include "vxlan_private.h"
+
#define VXLAN_VERSION "0.1"
-#define PORT_HASH_BITS 8
-#define PORT_HASH_SIZE (1<<PORT_HASH_BITS)
#define FDB_AGE_DEFAULT 300 /* 5 min */
#define FDB_AGE_INTERVAL (10 * HZ) /* rescan interval */
@@ -53,41 +53,14 @@ static bool log_ecn_error = true;
module_param(log_ecn_error, bool, 0644);
MODULE_PARM_DESC(log_ecn_error, "Log packets received with corrupted ECN");
-static unsigned int vxlan_net_id;
-static struct rtnl_link_ops vxlan_link_ops;
+unsigned int vxlan_net_id;
-static const u8 all_zeros_mac[ETH_ALEN + 2];
+static struct rtnl_link_ops vxlan_link_ops;
static int vxlan_sock_add(struct vxlan_dev *vxlan);
static void vxlan_vs_del_dev(struct vxlan_dev *vxlan);
-/* per-network namespace private data for this module */
-struct vxlan_net {
- struct list_head vxlan_list;
- struct hlist_head sock_list[PORT_HASH_SIZE];
- spinlock_t sock_lock;
- struct notifier_block nexthop_notifier_block;
-};
-
-/* Forwarding table entry */
-struct vxlan_fdb {
- struct hlist_node hlist; /* linked list of entries */
- struct rcu_head rcu;
- unsigned long updated; /* jiffies */
- unsigned long used;
- struct list_head remotes;
- u8 eth_addr[ETH_ALEN];
- u16 state; /* see ndm_state */
- __be32 vni;
- u16 flags; /* see ndm_flags and below */
- struct list_head nh_list;
- struct nexthop __rcu *nh;
- struct vxlan_dev __rcu *vdev;
-};
-
-#define NTF_VXLAN_ADDED_BY_USER 0x100
-
/* salt for hash table */
static u32 vxlan_salt __read_mostly;
@@ -98,17 +71,6 @@ static inline bool vxlan_collect_metadata(struct vxlan_sock *vs)
}
#if IS_ENABLED(CONFIG_IPV6)
-static inline
-bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
-{
- if (a->sa.sa_family != b->sa.sa_family)
- return false;
- if (a->sa.sa_family == AF_INET6)
- return ipv6_addr_equal(&a->sin6.sin6_addr, &b->sin6.sin6_addr);
- else
- return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
-}
-
static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
{
if (nla_len(nla) >= sizeof(struct in6_addr)) {
@@ -135,12 +97,6 @@ static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
#else /* !CONFIG_IPV6 */
-static inline
-bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
-{
- return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
-}
-
static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
{
if (nla_len(nla) >= sizeof(struct in6_addr)) {
@@ -161,37 +117,6 @@ static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
}
#endif
-/* Virtual Network hash table head */
-static inline struct hlist_head *vni_head(struct vxlan_sock *vs, __be32 vni)
-{
- return &vs->vni_list[hash_32((__force u32)vni, VNI_HASH_BITS)];
-}
-
-/* Socket hash table head */
-static inline struct hlist_head *vs_head(struct net *net, __be16 port)
-{
- struct vxlan_net *vn = net_generic(net, vxlan_net_id);
-
- return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
-}
-
-/* First remote destination for a forwarding entry.
- * Guaranteed to be non-NULL because remotes are never deleted.
- */
-static inline struct vxlan_rdst *first_remote_rcu(struct vxlan_fdb *fdb)
-{
- if (rcu_access_pointer(fdb->nh))
- return NULL;
- return list_entry_rcu(fdb->remotes.next, struct vxlan_rdst, list);
-}
-
-static inline struct vxlan_rdst *first_remote_rtnl(struct vxlan_fdb *fdb)
-{
- if (rcu_access_pointer(fdb->nh))
- return NULL;
- return list_first_entry(&fdb->remotes, struct vxlan_rdst, list);
-}
-
/* Find VXLAN socket based on network namespace, address family, UDP port,
* enabled unshareable flags and socket device binding (see l3mdev with
* non-default VRF).
diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
new file mode 100644
index 000000000000..6940d570354d
--- /dev/null
+++ b/drivers/net/vxlan/vxlan_private.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Vxlan private header file
+ *
+ */
+
+#ifndef _VXLAN_PRIVATE_H
+#define _VXLAN_PRIVATE_H
+
+extern unsigned int vxlan_net_id;
+static const u8 all_zeros_mac[ETH_ALEN + 2];
+
+#define PORT_HASH_BITS 8
+#define PORT_HASH_SIZE (1 << PORT_HASH_BITS)
+
+/* per-network namespace private data for this module */
+struct vxlan_net {
+ struct list_head vxlan_list;
+ struct hlist_head sock_list[PORT_HASH_SIZE];
+ spinlock_t sock_lock;
+ struct notifier_block nexthop_notifier_block;
+};
+
+/* Forwarding table entry */
+struct vxlan_fdb {
+ struct hlist_node hlist; /* linked list of entries */
+ struct rcu_head rcu;
+ unsigned long updated; /* jiffies */
+ unsigned long used;
+ struct list_head remotes;
+ u8 eth_addr[ETH_ALEN];
+ u16 state; /* see ndm_state */
+ __be32 vni;
+ u16 flags; /* see ndm_flags and below */
+ struct list_head nh_list;
+ struct nexthop __rcu *nh;
+ struct vxlan_dev __rcu *vdev;
+};
+
+#define NTF_VXLAN_ADDED_BY_USER 0x100
+
+/* Virtual Network hash table head */
+static inline struct hlist_head *vni_head(struct vxlan_sock *vs, __be32 vni)
+{
+ return &vs->vni_list[hash_32((__force u32)vni, VNI_HASH_BITS)];
+}
+
+/* Socket hash table head */
+static inline struct hlist_head *vs_head(struct net *net, __be16 port)
+{
+ struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+
+ return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
+}
+
+/* First remote destination for a forwarding entry.
+ * Guaranteed to be non-NULL because remotes are never deleted.
+ */
+static inline struct vxlan_rdst *first_remote_rcu(struct vxlan_fdb *fdb)
+{
+ if (rcu_access_pointer(fdb->nh))
+ return NULL;
+ return list_entry_rcu(fdb->remotes.next, struct vxlan_rdst, list);
+}
+
+static inline struct vxlan_rdst *first_remote_rtnl(struct vxlan_fdb *fdb)
+{
+ if (rcu_access_pointer(fdb->nh))
+ return NULL;
+ return list_first_entry(&fdb->remotes, struct vxlan_rdst, list);
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
+static inline
+bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
+{
+ if (a->sa.sa_family != b->sa.sa_family)
+ return false;
+ if (a->sa.sa_family == AF_INET6)
+ return ipv6_addr_equal(&a->sin6.sin6_addr, &b->sin6.sin6_addr);
+ else
+ return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
+}
+
+#else /* !CONFIG_IPV6 */
+
+static inline
+bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
+{
+ return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
+}
+
+#endif
+
+#endif
--
2.25.1
* [PATCH net-next 03/12] vxlan_core: move some fdb helpers to non-static
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 01/12] vxlan: move to its own directory Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 02/12] vxlan_core: move common declarations to private header file Roopa Prabhu
@ 2022-02-20 14:03 ` Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 04/12] vxlan_core: make multicast helper take rip and ifindex explicitly Roopa Prabhu
` (8 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:03 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
This patch makes some fdb helpers non-static
for use in later patches. Ideally, all fdb code
could move into its own file, vxlan_fdb.c.
That can be done in a subsequent patch and is out
of scope for this series.
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
drivers/net/vxlan/vxlan_core.c | 54 +++++++++++++++----------------
drivers/net/vxlan/vxlan_private.h | 20 ++++++++++++
2 files changed, 47 insertions(+), 27 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index 5856ef92b9c9..c4e76c5c3b9e 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -418,7 +418,7 @@ static u32 eth_hash(const unsigned char *addr)
return hash_64(value, FDB_HASH_BITS);
}
-static u32 eth_vni_hash(const unsigned char *addr, __be32 vni)
+u32 eth_vni_hash(const unsigned char *addr, __be32 vni)
{
/* use 1 byte of OUI and 3 bytes of NIC */
u32 key = get_unaligned((u32 *)(addr + 2));
@@ -426,7 +426,7 @@ static u32 eth_vni_hash(const unsigned char *addr, __be32 vni)
return jhash_2words(key, vni, vxlan_salt) & (FDB_HASH_SIZE - 1);
}
-static u32 fdb_head_index(struct vxlan_dev *vxlan, const u8 *mac, __be32 vni)
+u32 fdb_head_index(struct vxlan_dev *vxlan, const u8 *mac, __be32 vni)
{
if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA)
return eth_vni_hash(mac, vni);
@@ -845,12 +845,12 @@ static int vxlan_fdb_nh_update(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb,
return err;
}
-static int vxlan_fdb_create(struct vxlan_dev *vxlan,
- const u8 *mac, union vxlan_addr *ip,
- __u16 state, __be16 port, __be32 src_vni,
- __be32 vni, __u32 ifindex, __u16 ndm_flags,
- u32 nhid, struct vxlan_fdb **fdb,
- struct netlink_ext_ack *extack)
+int vxlan_fdb_create(struct vxlan_dev *vxlan,
+ const u8 *mac, union vxlan_addr *ip,
+ __u16 state, __be16 port, __be32 src_vni,
+ __be32 vni, __u32 ifindex, __u16 ndm_flags,
+ u32 nhid, struct vxlan_fdb **fdb,
+ struct netlink_ext_ack *extack)
{
struct vxlan_rdst *rd = NULL;
struct vxlan_fdb *f;
@@ -938,14 +938,14 @@ static void vxlan_dst_free(struct rcu_head *head)
kfree(rd);
}
-static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
- union vxlan_addr *ip,
- __u16 state, __u16 flags,
- __be16 port, __be32 vni,
- __u32 ifindex, __u16 ndm_flags,
- struct vxlan_fdb *f, u32 nhid,
- bool swdev_notify,
- struct netlink_ext_ack *extack)
+int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
+ union vxlan_addr *ip,
+ __u16 state, __u16 flags,
+ __be16 port, __be32 vni,
+ __u32 ifindex, __u16 ndm_flags,
+ struct vxlan_fdb *f, u32 nhid,
+ bool swdev_notify,
+ struct netlink_ext_ack *extack)
{
__u16 fdb_flags = (ndm_flags & ~NTF_USE);
struct vxlan_rdst *rd = NULL;
@@ -1075,13 +1075,13 @@ static int vxlan_fdb_update_create(struct vxlan_dev *vxlan,
}
/* Add new entry to forwarding table -- assumes lock held */
-static int vxlan_fdb_update(struct vxlan_dev *vxlan,
- const u8 *mac, union vxlan_addr *ip,
- __u16 state, __u16 flags,
- __be16 port, __be32 src_vni, __be32 vni,
- __u32 ifindex, __u16 ndm_flags, u32 nhid,
- bool swdev_notify,
- struct netlink_ext_ack *extack)
+int vxlan_fdb_update(struct vxlan_dev *vxlan,
+ const u8 *mac, union vxlan_addr *ip,
+ __u16 state, __u16 flags,
+ __be16 port, __be32 src_vni, __be32 vni,
+ __u32 ifindex, __u16 ndm_flags, u32 nhid,
+ bool swdev_notify,
+ struct netlink_ext_ack *extack)
{
struct vxlan_fdb *f;
@@ -1232,10 +1232,10 @@ static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
return err;
}
-static int __vxlan_fdb_delete(struct vxlan_dev *vxlan,
- const unsigned char *addr, union vxlan_addr ip,
- __be16 port, __be32 src_vni, __be32 vni,
- u32 ifindex, bool swdev_notify)
+int __vxlan_fdb_delete(struct vxlan_dev *vxlan,
+ const unsigned char *addr, union vxlan_addr ip,
+ __be16 port, __be32 src_vni, __be32 vni,
+ u32 ifindex, bool swdev_notify)
{
struct vxlan_rdst *rd = NULL;
struct vxlan_fdb *f;
diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
index 6940d570354d..6b29670254a2 100644
--- a/drivers/net/vxlan/vxlan_private.h
+++ b/drivers/net/vxlan/vxlan_private.h
@@ -92,4 +92,24 @@ bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
#endif
+/* vxlan_core.c */
+int vxlan_fdb_create(struct vxlan_dev *vxlan,
+ const u8 *mac, union vxlan_addr *ip,
+ __u16 state, __be16 port, __be32 src_vni,
+ __be32 vni, __u32 ifindex, __u16 ndm_flags,
+ u32 nhid, struct vxlan_fdb **fdb,
+ struct netlink_ext_ack *extack);
+int __vxlan_fdb_delete(struct vxlan_dev *vxlan,
+ const unsigned char *addr, union vxlan_addr ip,
+ __be16 port, __be32 src_vni, __be32 vni,
+ u32 ifindex, bool swdev_notify);
+u32 eth_vni_hash(const unsigned char *addr, __be32 vni);
+u32 fdb_head_index(struct vxlan_dev *vxlan, const u8 *mac, __be32 vni);
+int vxlan_fdb_update(struct vxlan_dev *vxlan,
+ const u8 *mac, union vxlan_addr *ip,
+ __u16 state, __u16 flags,
+ __be16 port, __be32 src_vni, __be32 vni,
+ __u32 ifindex, __u16 ndm_flags, u32 nhid,
+ bool swdev_notify, struct netlink_ext_ack *extack);
+
#endif
--
2.25.1
* [PATCH net-next 04/12] vxlan_core: make multicast helper take rip and ifindex explicitly
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (2 preceding siblings ...)
2022-02-20 14:03 ` [PATCH net-next 03/12] vxlan_core: move some fdb helpers to non-static Roopa Prabhu
@ 2022-02-20 14:03 ` Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 05/12] vxlan_core: add helper vxlan_vni_in_use Roopa Prabhu
` (7 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:03 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
This patch changes the multicast helpers to take the remote ip (rip)
and ifindex explicitly as input. This is needed in later patches where
the rip can come from a per-vni structure while the ifindex comes from
the vxlan device.
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
drivers/net/vxlan/vxlan_core.c | 37 +++++++++++++++++++---------------
1 file changed, 21 insertions(+), 16 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index c4e76c5c3b9e..3f3e606c3c7d 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -1445,8 +1445,11 @@ static bool vxlan_snoop(struct net_device *dev,
}
/* See if multicast group is already in use by other ID */
-static bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev)
+static bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
+ union vxlan_addr *rip, int rifindex)
{
+ union vxlan_addr *ip = (rip ? : &dev->default_dst.remote_ip);
+ int ifindex = (rifindex ? : dev->default_dst.remote_ifindex);
struct vxlan_dev *vxlan;
struct vxlan_sock *sock4;
#if IS_ENABLED(CONFIG_IPV6)
@@ -1481,11 +1484,10 @@ static bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev)
#endif
if (!vxlan_addr_equal(&vxlan->default_dst.remote_ip,
- &dev->default_dst.remote_ip))
+ ip))
continue;
- if (vxlan->default_dst.remote_ifindex !=
- dev->default_dst.remote_ifindex)
+ if (vxlan->default_dst.remote_ifindex != ifindex)
continue;
return true;
@@ -1545,12 +1547,13 @@ static void vxlan_sock_release(struct vxlan_dev *vxlan)
/* Update multicast group membership when first VNI on
* multicast address is brought up
*/
-static int vxlan_igmp_join(struct vxlan_dev *vxlan)
+static int vxlan_igmp_join(struct vxlan_dev *vxlan, union vxlan_addr *rip,
+ int rifindex)
{
- struct sock *sk;
- union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
- int ifindex = vxlan->default_dst.remote_ifindex;
+ union vxlan_addr *ip = (rip ? : &vxlan->default_dst.remote_ip);
+ int ifindex = (rifindex ? : vxlan->default_dst.remote_ifindex);
int ret = -EINVAL;
+ struct sock *sk;
if (ip->sa.sa_family == AF_INET) {
struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
@@ -1578,13 +1581,13 @@ static int vxlan_igmp_join(struct vxlan_dev *vxlan)
return ret;
}
-/* Inverse of vxlan_igmp_join when last VNI is brought down */
-static int vxlan_igmp_leave(struct vxlan_dev *vxlan)
+static int vxlan_igmp_leave(struct vxlan_dev *vxlan, union vxlan_addr *rip,
+ int rifindex)
{
- struct sock *sk;
- union vxlan_addr *ip = &vxlan->default_dst.remote_ip;
- int ifindex = vxlan->default_dst.remote_ifindex;
+ union vxlan_addr *ip = (rip ? : &vxlan->default_dst.remote_ip);
+ int ifindex = (rifindex ? : vxlan->default_dst.remote_ifindex);
int ret = -EINVAL;
+ struct sock *sk;
if (ip->sa.sa_family == AF_INET) {
struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
@@ -3016,7 +3019,8 @@ static int vxlan_open(struct net_device *dev)
return ret;
if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
- ret = vxlan_igmp_join(vxlan);
+ ret = vxlan_igmp_join(vxlan, &vxlan->default_dst.remote_ip,
+ vxlan->default_dst.remote_ifindex);
if (ret == -EADDRINUSE)
ret = 0;
if (ret) {
@@ -3063,8 +3067,9 @@ static int vxlan_stop(struct net_device *dev)
int ret = 0;
if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip) &&
- !vxlan_group_used(vn, vxlan))
- ret = vxlan_igmp_leave(vxlan);
+ !vxlan_group_used(vn, vxlan, NULL, 0))
+ ret = vxlan_igmp_leave(vxlan, &vxlan->default_dst.remote_ip,
+ vxlan->default_dst.remote_ifindex);
del_timer_sync(&vxlan->age_timer);
--
2.25.1
* [PATCH net-next 05/12] vxlan_core: add helper vxlan_vni_in_use
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (3 preceding siblings ...)
2022-02-20 14:03 ` [PATCH net-next 04/12] vxlan_core: make multicast helper take rip and ifindex explicitly Roopa Prabhu
@ 2022-02-20 14:03 ` Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 06/12] rtnetlink: add new rtm tunnel api for tunnel id filtering Roopa Prabhu
` (6 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:03 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
More users will be added in follow-up patches.
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
drivers/net/vxlan/vxlan_core.c | 46 +++++++++++++++++++++-------------
1 file changed, 28 insertions(+), 18 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index 3f3e606c3c7d..d17d450f2058 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -3546,13 +3546,38 @@ static int vxlan_sock_add(struct vxlan_dev *vxlan)
return ret;
}
+static int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
+ struct vxlan_config *conf, __be32 vni)
+{
+ struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
+ struct vxlan_dev *tmp;
+
+ list_for_each_entry(tmp, &vn->vxlan_list, next) {
+ if (tmp == vxlan)
+ continue;
+ if (tmp->cfg.vni != vni)
+ continue;
+ if (tmp->cfg.dst_port != conf->dst_port)
+ continue;
+ if ((tmp->cfg.flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)) !=
+ (conf->flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)))
+ continue;
+
+ if ((conf->flags & VXLAN_F_IPV6_LINKLOCAL) &&
+ tmp->cfg.remote_ifindex != conf->remote_ifindex)
+ continue;
+
+ return -EEXIST;
+ }
+
+ return 0;
+}
+
static int vxlan_config_validate(struct net *src_net, struct vxlan_config *conf,
struct net_device **lower,
struct vxlan_dev *old,
struct netlink_ext_ack *extack)
{
- struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
- struct vxlan_dev *tmp;
bool use_ipv6 = false;
if (conf->flags & VXLAN_F_GPE) {
@@ -3685,22 +3710,7 @@ static int vxlan_config_validate(struct net *src_net, struct vxlan_config *conf,
if (!conf->age_interval)
conf->age_interval = FDB_AGE_DEFAULT;
- list_for_each_entry(tmp, &vn->vxlan_list, next) {
- if (tmp == old)
- continue;
-
- if (tmp->cfg.vni != conf->vni)
- continue;
- if (tmp->cfg.dst_port != conf->dst_port)
- continue;
- if ((tmp->cfg.flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)) !=
- (conf->flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)))
- continue;
-
- if ((conf->flags & VXLAN_F_IPV6_LINKLOCAL) &&
- tmp->cfg.remote_ifindex != conf->remote_ifindex)
- continue;
-
+ if (vxlan_vni_in_use(src_net, old, conf, conf->vni)) {
NL_SET_ERR_MSG(extack,
"A VXLAN device with the specified VNI already exists");
return -EEXIST;
--
2.25.1
* [PATCH net-next 06/12] rtnetlink: add new rtm tunnel api for tunnel id filtering
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (4 preceding siblings ...)
2022-02-20 14:03 ` [PATCH net-next 05/12] vxlan_core: add helper vxlan_vni_in_use Roopa Prabhu
@ 2022-02-20 14:03 ` Roopa Prabhu
2022-02-20 14:29 ` Roopa Prabhu
2022-02-20 14:04 ` [PATCH net-next 07/12] vxlan_multicast: Move multicast helpers to a separate file Roopa Prabhu
` (5 subsequent siblings)
11 siblings, 1 reply; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:03 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
This patch adds a new rtm tunnel message and api for tunnel id
filtering on dst_metadata devices. The first dst_metadata
device to use the api is the vxlan driver, with the AF_BRIDGE
family.
This and later changes add the ability in the vxlan driver to do
tunnel id filtering (or vni filtering) on dst_metadata
devices. This is similar to the vlan api in the vlan filtering bridge.
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
include/uapi/linux/if_link.h | 26 ++++++++++++++++++++++++++
include/uapi/linux/rtnetlink.h | 9 +++++++++
2 files changed, 35 insertions(+)
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index 6218f93f5c1a..eb046a82188d 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -712,6 +712,31 @@ enum ipvlan_mode {
#define IPVLAN_F_PRIVATE 0x01
#define IPVLAN_F_VEPA 0x02
+/* Tunnel RTM header */
+struct tunnel_msg {
+ __u8 family;
+ __u8 reserved1;
+ __u16 reserved2;
+ __u32 ifindex;
+};
+
+enum {
+ VXLAN_VNIFILTER_ENTRY_UNSPEC,
+ VXLAN_VNIFILTER_ENTRY_START,
+ VXLAN_VNIFILTER_ENTRY_END,
+ VXLAN_VNIFILTER_ENTRY_GROUP,
+ VXLAN_VNIFILTER_ENTRY_GROUP6,
+ __VXLAN_VNIFILTER_ENTRY_MAX
+};
+#define VXLAN_VNIFILTER_ENTRY_MAX (__VXLAN_VNIFILTER_ENTRY_MAX - 1)
+
+enum {
+ VXLAN_VNIFILTER_UNSPEC,
+ VXLAN_VNIFILTER_ENTRY,
+ __VXLAN_VNIFILTER_MAX
+};
+#define VXLAN_VNIFILTER_MAX (__VXLAN_VNIFILTER_MAX - 1)
+
/* VXLAN section */
enum {
IFLA_VXLAN_UNSPEC,
@@ -744,6 +769,7 @@ enum {
IFLA_VXLAN_GPE,
IFLA_VXLAN_TTL_INHERIT,
IFLA_VXLAN_DF,
+ IFLA_VXLAN_VNIFILTER, /* only applicable with COLLECT_METADATA mode */
__IFLA_VXLAN_MAX
};
#define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1)
diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
index 93d934cc4613..0970cb4b1b88 100644
--- a/include/uapi/linux/rtnetlink.h
+++ b/include/uapi/linux/rtnetlink.h
@@ -185,6 +185,13 @@ enum {
RTM_GETNEXTHOPBUCKET,
#define RTM_GETNEXTHOPBUCKET RTM_GETNEXTHOPBUCKET
+ RTM_NEWTUNNEL = 120,
+#define RTM_NEWTUNNEL RTM_NEWTUNNEL
+ RTM_DELTUNNEL,
+#define RTM_DELTUNNEL RTM_DELTUNNEL
+ RTM_GETTUNNEL,
+#define RTM_GETTUNNEL RTM_GETTUNNEL
+
__RTM_MAX,
#define RTM_MAX (((__RTM_MAX + 3) & ~3) - 1)
};
@@ -756,6 +763,8 @@ enum rtnetlink_groups {
#define RTNLGRP_BRVLAN RTNLGRP_BRVLAN
RTNLGRP_MCTP_IFADDR,
#define RTNLGRP_MCTP_IFADDR RTNLGRP_MCTP_IFADDR
+ RTNLGRP_TUNNEL,
+#define RTNLGRP_TUNNEL RTNLGRP_TUNNEL
__RTNLGRP_MAX
};
#define RTNLGRP_MAX (__RTNLGRP_MAX - 1)
--
2.25.1
* [PATCH net-next 07/12] vxlan_multicast: Move multicast helpers to a separate file
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (5 preceding siblings ...)
2022-02-20 14:03 ` [PATCH net-next 06/12] rtnetlink: add new rtm tunnel api for tunnel id filtering Roopa Prabhu
@ 2022-02-20 14:04 ` Roopa Prabhu
2022-02-20 14:04 ` [PATCH net-next 08/12] vxlan: vni filtering support on collect metadata device Roopa Prabhu
` (4 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:04 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
Subsequent patches will add more helpers.
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
drivers/net/vxlan/Makefile | 2 +-
drivers/net/vxlan/vxlan_core.c | 123 -------------------------
drivers/net/vxlan/vxlan_multicast.c | 136 ++++++++++++++++++++++++++++
drivers/net/vxlan/vxlan_private.h | 7 ++
4 files changed, 144 insertions(+), 124 deletions(-)
create mode 100644 drivers/net/vxlan/vxlan_multicast.c
diff --git a/drivers/net/vxlan/Makefile b/drivers/net/vxlan/Makefile
index 567266133593..61c80e9c6c24 100644
--- a/drivers/net/vxlan/Makefile
+++ b/drivers/net/vxlan/Makefile
@@ -4,4 +4,4 @@
obj-$(CONFIG_VXLAN) += vxlan.o
-vxlan-objs := vxlan_core.o
+vxlan-objs := vxlan_core.o vxlan_multicast.o
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index d17d450f2058..1bbfca495b12 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -1444,58 +1444,6 @@ static bool vxlan_snoop(struct net_device *dev,
return false;
}
-/* See if multicast group is already in use by other ID */
-static bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
- union vxlan_addr *rip, int rifindex)
-{
- union vxlan_addr *ip = (rip ? : &dev->default_dst.remote_ip);
- int ifindex = (rifindex ? : dev->default_dst.remote_ifindex);
- struct vxlan_dev *vxlan;
- struct vxlan_sock *sock4;
-#if IS_ENABLED(CONFIG_IPV6)
- struct vxlan_sock *sock6;
-#endif
- unsigned short family = dev->default_dst.remote_ip.sa.sa_family;
-
- sock4 = rtnl_dereference(dev->vn4_sock);
-
- /* The vxlan_sock is only used by dev, leaving group has
- * no effect on other vxlan devices.
- */
- if (family == AF_INET && sock4 && refcount_read(&sock4->refcnt) == 1)
- return false;
-#if IS_ENABLED(CONFIG_IPV6)
- sock6 = rtnl_dereference(dev->vn6_sock);
- if (family == AF_INET6 && sock6 && refcount_read(&sock6->refcnt) == 1)
- return false;
-#endif
-
- list_for_each_entry(vxlan, &vn->vxlan_list, next) {
- if (!netif_running(vxlan->dev) || vxlan == dev)
- continue;
-
- if (family == AF_INET &&
- rtnl_dereference(vxlan->vn4_sock) != sock4)
- continue;
-#if IS_ENABLED(CONFIG_IPV6)
- if (family == AF_INET6 &&
- rtnl_dereference(vxlan->vn6_sock) != sock6)
- continue;
-#endif
-
- if (!vxlan_addr_equal(&vxlan->default_dst.remote_ip,
- ip))
- continue;
-
- if (vxlan->default_dst.remote_ifindex != ifindex)
- continue;
-
- return true;
- }
-
- return false;
-}
-
static bool __vxlan_sock_release_prep(struct vxlan_sock *vs)
{
struct vxlan_net *vn;
@@ -1544,77 +1492,6 @@ static void vxlan_sock_release(struct vxlan_dev *vxlan)
#endif
}
-/* Update multicast group membership when first VNI on
- * multicast address is brought up
- */
-static int vxlan_igmp_join(struct vxlan_dev *vxlan, union vxlan_addr *rip,
- int rifindex)
-{
- union vxlan_addr *ip = (rip ? : &vxlan->default_dst.remote_ip);
- int ifindex = (rifindex ? : vxlan->default_dst.remote_ifindex);
- int ret = -EINVAL;
- struct sock *sk;
-
- if (ip->sa.sa_family == AF_INET) {
- struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
- struct ip_mreqn mreq = {
- .imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
- .imr_ifindex = ifindex,
- };
-
- sk = sock4->sock->sk;
- lock_sock(sk);
- ret = ip_mc_join_group(sk, &mreq);
- release_sock(sk);
-#if IS_ENABLED(CONFIG_IPV6)
- } else {
- struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
-
- sk = sock6->sock->sk;
- lock_sock(sk);
- ret = ipv6_stub->ipv6_sock_mc_join(sk, ifindex,
- &ip->sin6.sin6_addr);
- release_sock(sk);
-#endif
- }
-
- return ret;
-}
-
-static int vxlan_igmp_leave(struct vxlan_dev *vxlan, union vxlan_addr *rip,
- int rifindex)
-{
- union vxlan_addr *ip = (rip ? : &vxlan->default_dst.remote_ip);
- int ifindex = (rifindex ? : vxlan->default_dst.remote_ifindex);
- int ret = -EINVAL;
- struct sock *sk;
-
- if (ip->sa.sa_family == AF_INET) {
- struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
- struct ip_mreqn mreq = {
- .imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
- .imr_ifindex = ifindex,
- };
-
- sk = sock4->sock->sk;
- lock_sock(sk);
- ret = ip_mc_leave_group(sk, &mreq);
- release_sock(sk);
-#if IS_ENABLED(CONFIG_IPV6)
- } else {
- struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
-
- sk = sock6->sock->sk;
- lock_sock(sk);
- ret = ipv6_stub->ipv6_sock_mc_drop(sk, ifindex,
- &ip->sin6.sin6_addr);
- release_sock(sk);
-#endif
- }
-
- return ret;
-}
-
static bool vxlan_remcsum(struct vxlanhdr *unparsed,
struct sk_buff *skb, u32 vxflags)
{
diff --git a/drivers/net/vxlan/vxlan_multicast.c b/drivers/net/vxlan/vxlan_multicast.c
new file mode 100644
index 000000000000..ddb241876567
--- /dev/null
+++ b/drivers/net/vxlan/vxlan_multicast.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Vxlan multicast group handling
+ *
+ */
+#include <linux/kernel.h>
+#include <net/net_namespace.h>
+#include <net/sock.h>
+#include <linux/igmp.h>
+#include <net/vxlan.h>
+
+#include "vxlan_private.h"
+
+/* Update multicast group membership when first VNI on
+ * multicast address is brought up
+ */
+int vxlan_igmp_join(struct vxlan_dev *vxlan, union vxlan_addr *rip,
+ int rifindex)
+{
+ union vxlan_addr *ip = (rip ? : &vxlan->default_dst.remote_ip);
+ int ifindex = (rifindex ? : vxlan->default_dst.remote_ifindex);
+ int ret = -EINVAL;
+ struct sock *sk;
+
+ if (ip->sa.sa_family == AF_INET) {
+ struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
+ struct ip_mreqn mreq = {
+ .imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
+ .imr_ifindex = ifindex,
+ };
+
+ sk = sock4->sock->sk;
+ lock_sock(sk);
+ ret = ip_mc_join_group(sk, &mreq);
+ release_sock(sk);
+#if IS_ENABLED(CONFIG_IPV6)
+ } else {
+ struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
+
+ sk = sock6->sock->sk;
+ lock_sock(sk);
+ ret = ipv6_stub->ipv6_sock_mc_join(sk, ifindex,
+ &ip->sin6.sin6_addr);
+ release_sock(sk);
+#endif
+ }
+
+ return ret;
+}
+
+int vxlan_igmp_leave(struct vxlan_dev *vxlan, union vxlan_addr *rip,
+ int rifindex)
+{
+ union vxlan_addr *ip = (rip ? : &vxlan->default_dst.remote_ip);
+ int ifindex = (rifindex ? : vxlan->default_dst.remote_ifindex);
+ int ret = -EINVAL;
+ struct sock *sk;
+
+ pr_debug("%s -> %pIS, %d\n", __func__, ip, ifindex);
+
+ if (ip->sa.sa_family == AF_INET) {
+ struct vxlan_sock *sock4 = rtnl_dereference(vxlan->vn4_sock);
+ struct ip_mreqn mreq = {
+ .imr_multiaddr.s_addr = ip->sin.sin_addr.s_addr,
+ .imr_ifindex = ifindex,
+ };
+
+ sk = sock4->sock->sk;
+ lock_sock(sk);
+ ret = ip_mc_leave_group(sk, &mreq);
+ release_sock(sk);
+#if IS_ENABLED(CONFIG_IPV6)
+ } else {
+ struct vxlan_sock *sock6 = rtnl_dereference(vxlan->vn6_sock);
+
+ sk = sock6->sock->sk;
+ lock_sock(sk);
+ ret = ipv6_stub->ipv6_sock_mc_drop(sk, ifindex,
+ &ip->sin6.sin6_addr);
+ release_sock(sk);
+#endif
+ }
+
+ return ret;
+}
+
+/* See if multicast group is already in use by other ID */
+bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
+ union vxlan_addr *rip, int rifindex)
+{
+ union vxlan_addr *ip = (rip ? : &dev->default_dst.remote_ip);
+ int ifindex = (rifindex ? : dev->default_dst.remote_ifindex);
+ struct vxlan_dev *vxlan;
+ struct vxlan_sock *sock4;
+#if IS_ENABLED(CONFIG_IPV6)
+ struct vxlan_sock *sock6;
+#endif
+ unsigned short family = dev->default_dst.remote_ip.sa.sa_family;
+
+ sock4 = rtnl_dereference(dev->vn4_sock);
+
+ /* The vxlan_sock is only used by dev, leaving group has
+ * no effect on other vxlan devices.
+ */
+ if (family == AF_INET && sock4 && refcount_read(&sock4->refcnt) == 1)
+ return false;
+
+#if IS_ENABLED(CONFIG_IPV6)
+ sock6 = rtnl_dereference(dev->vn6_sock);
+ if (family == AF_INET6 && sock6 && refcount_read(&sock6->refcnt) == 1)
+ return false;
+#endif
+
+ list_for_each_entry(vxlan, &vn->vxlan_list, next) {
+ if (!netif_running(vxlan->dev) || vxlan == dev)
+ continue;
+
+ if (family == AF_INET &&
+ rtnl_dereference(vxlan->vn4_sock) != sock4)
+ continue;
+#if IS_ENABLED(CONFIG_IPV6)
+ if (family == AF_INET6 &&
+ rtnl_dereference(vxlan->vn6_sock) != sock6)
+ continue;
+#endif
+ if (!vxlan_addr_equal(&vxlan->default_dst.remote_ip, ip))
+ continue;
+
+ if (vxlan->default_dst.remote_ifindex != ifindex)
+ continue;
+
+ return true;
+ }
+
+ return false;
+}
diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
index 6b29670254a2..ad2f561c6e94 100644
--- a/drivers/net/vxlan/vxlan_private.h
+++ b/drivers/net/vxlan/vxlan_private.h
@@ -112,4 +112,11 @@ int vxlan_fdb_update(struct vxlan_dev *vxlan,
__u32 ifindex, __u16 ndm_flags, u32 nhid,
bool swdev_notify, struct netlink_ext_ack *extack);
+/* vxlan_multicast.c */
+int vxlan_igmp_join(struct vxlan_dev *vxlan, union vxlan_addr *rip,
+ int rifindex);
+int vxlan_igmp_leave(struct vxlan_dev *vxlan, union vxlan_addr *rip,
+ int rifindex);
+bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
+ union vxlan_addr *rip, int rifindex);
#endif
--
2.25.1
* [PATCH net-next 08/12] vxlan: vni filtering support on collect metadata device
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (6 preceding siblings ...)
2022-02-20 14:04 ` [PATCH net-next 07/12] vxlan_multicast: Move multicast helpers to a separate file Roopa Prabhu
@ 2022-02-20 14:04 ` Roopa Prabhu
2022-02-20 22:24 ` kernel test robot
2022-02-20 14:04 ` [PATCH net-next 09/12] selftests: add new tests for vxlan vnifiltering Roopa Prabhu
` (3 subsequent siblings)
11 siblings, 1 reply; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:04 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
This patch adds vni filtering support to the collect metadata device.
Motivation:
Today you can only use a single vxlan collect metadata device for a
given vxlan udp port in the system. The vxlan collect metadata device
terminates all received vxlan packets. As shown in the diagram below,
there are use cases where you need to support multiple such vxlan
devices in independent bridge domains, with each vxlan device
terminating only the vnis it is configured for.
Example use case: a service provider network typically supports
multiple bridge domains with overlapping vlans, one bridge domain per
customer. Vlans in each bridge domain are mapped to globally unique
vxlan ranges assigned to each customer.
vni filtering support in collect metadata devices terminates only
configured vnis. This is similar to vlan filtering in the bridge
driver. The vni filtering capability is enabled by a new flag on the
collect metadata device.
In the below pic:
- customer1 is mapped to br1 bridge domain
- customer2 is mapped to br2 bridge domain
- customer1 vlan 10-11 is mapped to vni 1001-1002
- customer2 vlan 10-11 is mapped to vni 2001-2002
- br1 and br2 are vlan filtering bridges
- vxlan1 and vxlan2 are collect metadata devices with
vnifiltering enabled
┌──────────────────────────────────────────────────────────────────┐
│ switch │
│ │
│ ┌───────────┐ ┌───────────┐ │
│ │ │ │ │ │
│ │ br1 │ │ br2 │ │
│ └┬─────────┬┘ └──┬───────┬┘ │
│ vlans│ │ vlans │ │ │
│ 10,11│ │ 10,11│ │ │
│ │ vlanvnimap: │ vlanvnimap: │
│ │ 10-1001,11-1002 │ 10-2001,11-2002 │
│ │ │ │ │ │
│ ┌──────┴┐ ┌──┴─────────┐ ┌───┴────┐ │ │
│ │ swp1 │ │vxlan1 │ │ swp2 │ ┌┴─────────────┐ │
│ │ │ │ vnifilter:│ │ │ │vxlan2 │ │
│ └───┬───┘ │ 1001,1002│ └───┬────┘ │ vnifilter: │ │
│ │ └────────────┘ │ │ 2001,2002 │ │
│ │ │ └──────────────┘ │
│ │ │ │
└───────┼──────────────────────────────────┼───────────────────────┘
│ │
│ │
┌─────┴───────┐ │
│ customer1 │ ┌─────┴──────┐
│ host/VM │ │customer2 │
└─────────────┘ │ host/VM │
└────────────┘
With this implementation, a vxlan dst metadata device can be
associated with a range of vnis.
struct vxlan_vni_node is introduced to represent a configured vni.
We start with the vni and its associated remote_ip in this structure.
The structure can be extended to carry other per-vni attributes if
use cases arise. A vni inherits an attribute from the base vxlan
device if no per-vni attribute is defined.
struct vxlan_dev gets a new rhashtable for vnis called
vxlan_vni_group. vxlan_vnifilter.c implements the necessary netlink
api, notifications and helper functions to process and manage the
lifecycle of vxlan_vni_node.
This patch also adds new helper functions in vxlan_multicast.c to
handle the per-vni remote_ip multicast groups that are part of
vxlan_vni_group.
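The receive-side lookup decision described above can be sketched in plain C. This is an illustrative userspace model, not the kernel code: the array-backed set stands in for the vxlan_vni_group rhashtable, and all names other than the VXLAN flag semantics they mimic are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for VXLAN_F_COLLECT_METADATA / VXLAN_F_VNIFILTER */
#define F_COLLECT_METADATA (1u << 0)
#define F_VNIFILTER        (1u << 1)

/* Illustrative stand-in for the per-device vni set (kernel: rhashtable). */
struct vni_set {
	const uint32_t *vnis;
	size_t n;
};

static bool vni_set_contains(const struct vni_set *s, uint32_t vni)
{
	for (size_t i = 0; i < s->n; i++)
		if (s->vnis[i] == vni)
			return true;
	return false;
}

/* Does this device terminate a packet carrying @vni?
 * (In the kernel, F_VNIFILTER is only valid together with
 * F_COLLECT_METADATA; this toy model does not enforce that.)
 */
static bool dev_terminates_vni(uint32_t flags, const struct vni_set *set,
			       uint32_t default_vni, uint32_t vni)
{
	if (flags & F_VNIFILTER)        /* filter on the configured vni set */
		return vni_set_contains(set, vni);
	if (flags & F_COLLECT_METADATA) /* plain metadata dev: all vnis */
		return true;
	return vni == default_vni;      /* traditional per-vni device */
}
```

A device created with both flags terminates only the vnis present in its set, while a plain collect metadata device keeps terminating everything on that udp port.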
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
drivers/net/vxlan/Makefile | 2 +-
drivers/net/vxlan/vxlan_core.c | 96 +++-
drivers/net/vxlan/vxlan_multicast.c | 150 ++++-
drivers/net/vxlan/vxlan_private.h | 59 +-
drivers/net/vxlan/vxlan_vnifilter.c | 833 ++++++++++++++++++++++++++++
include/net/vxlan.h | 28 +-
6 files changed, 1136 insertions(+), 32 deletions(-)
create mode 100644 drivers/net/vxlan/vxlan_vnifilter.c
diff --git a/drivers/net/vxlan/Makefile b/drivers/net/vxlan/Makefile
index 61c80e9c6c24..d4c255499b72 100644
--- a/drivers/net/vxlan/Makefile
+++ b/drivers/net/vxlan/Makefile
@@ -4,4 +4,4 @@
obj-$(CONFIG_VXLAN) += vxlan.o
-vxlan-objs := vxlan_core.o vxlan_multicast.o
+vxlan-objs := vxlan_core.o vxlan_multicast.o vxlan_vnifilter.o
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index 1bbfca495b12..e88217b52bb9 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -144,12 +144,19 @@ static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, int ifindex,
struct vxlan_dev_node *node;
/* For flow based devices, map all packets to VNI 0 */
- if (vs->flags & VXLAN_F_COLLECT_METADATA)
+ if (vs->flags & VXLAN_F_COLLECT_METADATA &&
+ !(vs->flags & VXLAN_F_VNIFILTER))
vni = 0;
hlist_for_each_entry_rcu(node, vni_head(vs, vni), hlist) {
- if (node->vxlan->default_dst.remote_vni != vni)
+ if (!node->vxlan)
continue;
+ if (node->vxlan->cfg.flags & VXLAN_F_VNIFILTER) {
+ if (!vxlan_vnifilter_lookup(node->vxlan, vni))
+ continue;
+ } else if (node->vxlan->default_dst.remote_vni != vni) {
+ continue;
+ }
if (IS_ENABLED(CONFIG_IPV6)) {
const struct vxlan_config *cfg = &node->vxlan->cfg;
@@ -1477,7 +1484,10 @@ static void vxlan_sock_release(struct vxlan_dev *vxlan)
RCU_INIT_POINTER(vxlan->vn4_sock, NULL);
synchronize_net();
- vxlan_vs_del_dev(vxlan);
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+ vxlan_vs_del_vnigrp(vxlan);
+ else
+ vxlan_vs_del_dev(vxlan);
if (__vxlan_sock_release_prep(sock4)) {
udp_tunnel_sock_release(sock4->sock);
@@ -2849,6 +2859,9 @@ static int vxlan_init(struct net_device *dev)
struct vxlan_dev *vxlan = netdev_priv(dev);
int err;
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+ vxlan_vnigroup_init(vxlan);
+
dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
if (!dev->tstats)
return -ENOMEM;
@@ -2878,6 +2891,9 @@ static void vxlan_uninit(struct net_device *dev)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+ vxlan_vnigroup_uninit(vxlan);
+
gro_cells_destroy(&vxlan->gro_cells);
vxlan_fdb_delete_default(vxlan, vxlan->cfg.vni);
@@ -2895,15 +2911,10 @@ static int vxlan_open(struct net_device *dev)
if (ret < 0)
return ret;
- if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
- ret = vxlan_igmp_join(vxlan, &vxlan->default_dst.remote_ip,
- vxlan->default_dst.remote_ifindex);
- if (ret == -EADDRINUSE)
- ret = 0;
- if (ret) {
- vxlan_sock_release(vxlan);
- return ret;
- }
+ ret = vxlan_multicast_join(vxlan);
+ if (ret) {
+ vxlan_sock_release(vxlan);
+ return ret;
}
if (vxlan->cfg.age_interval)
@@ -2940,13 +2951,9 @@ static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
static int vxlan_stop(struct net_device *dev)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
- struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
int ret = 0;
- if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip) &&
- !vxlan_group_used(vn, vxlan, NULL, 0))
- ret = vxlan_igmp_leave(vxlan, &vxlan->default_dst.remote_ip,
- vxlan->default_dst.remote_ifindex);
+ vxlan_multicast_leave(vxlan);
del_timer_sync(&vxlan->age_timer);
@@ -3176,6 +3183,7 @@ static const struct nla_policy vxlan_policy[IFLA_VXLAN_MAX + 1] = {
[IFLA_VXLAN_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG },
[IFLA_VXLAN_TTL_INHERIT] = { .type = NLA_FLAG },
[IFLA_VXLAN_DF] = { .type = NLA_U8 },
+ [IFLA_VXLAN_VNIFILTER] = { .type = NLA_U8 },
};
static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[],
@@ -3361,6 +3369,7 @@ static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6,
static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
{
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ bool metadata = vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA;
struct vxlan_sock *vs = NULL;
struct vxlan_dev_node *node;
int l3mdev_index = 0;
@@ -3396,7 +3405,12 @@ static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
rcu_assign_pointer(vxlan->vn4_sock, vs);
node = &vxlan->hlist4;
}
- vxlan_vs_add_dev(vs, vxlan, node);
+
+ if (metadata && (vxlan->cfg.flags & VXLAN_F_VNIFILTER))
+ vxlan_vs_add_vnigrp(vxlan, vs, ipv6);
+ else
+ vxlan_vs_add_dev(vs, vxlan, node);
+
return 0;
}
@@ -3423,8 +3437,8 @@ static int vxlan_sock_add(struct vxlan_dev *vxlan)
return ret;
}
-static int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
- struct vxlan_config *conf, __be32 vni)
+int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
+ struct vxlan_config *conf, __be32 vni)
{
struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
struct vxlan_dev *tmp;
@@ -3432,8 +3446,12 @@ static int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
list_for_each_entry(tmp, &vn->vxlan_list, next) {
if (tmp == vxlan)
continue;
- if (tmp->cfg.vni != vni)
+ if (tmp->cfg.flags & VXLAN_F_VNIFILTER) {
+ if (!vxlan_vnifilter_lookup(tmp, vni))
+ continue;
+ } else if (tmp->cfg.vni != vni) {
continue;
+ }
if (tmp->cfg.dst_port != conf->dst_port)
continue;
if ((tmp->cfg.flags & (VXLAN_F_RCV_FLAGS | VXLAN_F_IPV6)) !=
@@ -4043,6 +4061,21 @@ static int vxlan_nl2conf(struct nlattr *tb[], struct nlattr *data[],
if (data[IFLA_VXLAN_DF])
conf->df = nla_get_u8(data[IFLA_VXLAN_DF]);
+ if (data[IFLA_VXLAN_VNIFILTER]) {
+ err = vxlan_nl2flag(conf, data, IFLA_VXLAN_VNIFILTER,
+ VXLAN_F_VNIFILTER, changelink, false,
+ extack);
+ if (err)
+ return err;
+
+ if ((conf->flags & VXLAN_F_VNIFILTER) &&
+ !(conf->flags & VXLAN_F_COLLECT_METADATA)) {
+ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_VNIFILTER],
+ "vxlan vnifilter only valid in collect metadata mode");
+ return -EINVAL;
+ }
+ }
+
return 0;
}
@@ -4118,6 +4151,19 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
dst->remote_ifindex,
true);
spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+
+ /* If vni filtering device, also update fdb entries of
+ * all vnis that were using default remote ip
+ */
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) {
+ err = vxlan_vnilist_update_group(vxlan, &dst->remote_ip,
+ &conf.remote_ip, extack);
+ if (err) {
+ netdev_adjacent_change_abort(dst->remote_dev,
+ lowerdev, dev);
+ return err;
+ }
+ }
}
if (conf.age_interval != vxlan->cfg.age_interval)
@@ -4263,6 +4309,11 @@ static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
nla_put_flag(skb, IFLA_VXLAN_REMCSUM_NOPARTIAL))
goto nla_put_failure;
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER &&
+ nla_put_u8(skb, IFLA_VXLAN_VNIFILTER,
+ !!(vxlan->cfg.flags & VXLAN_F_VNIFILTER)))
+ goto nla_put_failure;
+
return 0;
nla_put_failure:
@@ -4622,6 +4673,8 @@ static int __init vxlan_init_module(void)
if (rc)
goto out4;
+ vxlan_vnifilter_init();
+
return 0;
out4:
unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
@@ -4636,6 +4689,7 @@ late_initcall(vxlan_init_module);
static void __exit vxlan_cleanup_module(void)
{
+ vxlan_vnifilter_uninit();
rtnl_link_unregister(&vxlan_link_ops);
unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
unregister_netdevice_notifier(&vxlan_notifier_block);
diff --git a/drivers/net/vxlan/vxlan_multicast.c b/drivers/net/vxlan/vxlan_multicast.c
index ddb241876567..7675c1df8169 100644
--- a/drivers/net/vxlan/vxlan_multicast.c
+++ b/drivers/net/vxlan/vxlan_multicast.c
@@ -84,9 +84,48 @@ int vxlan_igmp_leave(struct vxlan_dev *vxlan, union vxlan_addr *rip,
return ret;
}
+static bool vxlan_group_used_match(union vxlan_addr *ip, int ifindex,
+ union vxlan_addr *rip, int rifindex)
+{
+ if (!vxlan_addr_multicast(rip))
+ return false;
+
+ if (!vxlan_addr_equal(rip, ip))
+ return false;
+
+ if (rifindex != ifindex)
+ return false;
+
+ return true;
+}
+
+static bool vxlan_group_used_by_vnifilter(struct vxlan_dev *vxlan,
+ union vxlan_addr *ip, int ifindex)
+{
+ struct vxlan_vni_group *vg = rtnl_dereference(vxlan->vnigrp);
+ struct vxlan_vni_node *v, *tmp;
+
+ if (vxlan_group_used_match(ip, ifindex,
+ &vxlan->default_dst.remote_ip,
+ vxlan->default_dst.remote_ifindex))
+ return true;
+
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+ if (!vxlan_addr_multicast(&v->remote_ip))
+ continue;
+
+ if (vxlan_group_used_match(ip, ifindex,
+ &v->remote_ip,
+ vxlan->default_dst.remote_ifindex))
+ return true;
+ }
+
+ return false;
+}
+
/* See if multicast group is already in use by other ID */
bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
- union vxlan_addr *rip, int rifindex)
+ __be32 vni, union vxlan_addr *rip, int rifindex)
{
union vxlan_addr *ip = (rip ? : &dev->default_dst.remote_ip);
int ifindex = (rifindex ? : dev->default_dst.remote_ifindex);
@@ -123,14 +162,113 @@ bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
rtnl_dereference(vxlan->vn6_sock) != sock6)
continue;
#endif
- if (!vxlan_addr_equal(&vxlan->default_dst.remote_ip, ip))
- continue;
-
- if (vxlan->default_dst.remote_ifindex != ifindex)
- continue;
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) {
+ if (!vxlan_group_used_by_vnifilter(vxlan, ip, ifindex))
+ continue;
+ } else {
+ if (!vxlan_group_used_match(ip, ifindex,
+ &vxlan->default_dst.remote_ip,
+ vxlan->default_dst.remote_ifindex))
+ continue;
+ }
return true;
}
return false;
}
+
+int vxlan_multicast_join_vnigrp(struct vxlan_dev *vxlan)
+{
+ struct vxlan_vni_group *vg = rtnl_dereference(vxlan->vnigrp);
+ struct vxlan_vni_node *v, *tmp, *vgood = NULL;
+ int ret = 0;
+
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+ if (!vxlan_addr_multicast(&v->remote_ip))
+ continue;
+ /* skip if address is same as default address */
+ if (vxlan_addr_equal(&v->remote_ip,
+ &vxlan->default_dst.remote_ip))
+ continue;
+ ret = vxlan_igmp_join(vxlan, &v->remote_ip, 0);
+ if (ret == -EADDRINUSE)
+ ret = 0;
+ if (ret)
+ goto out;
+ vgood = v;
+ }
+out:
+ if (ret) {
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+ if (!vxlan_addr_multicast(&v->remote_ip))
+ continue;
+ if (vxlan_addr_equal(&v->remote_ip,
+ &vxlan->default_dst.remote_ip))
+ continue;
+ vxlan_igmp_leave(vxlan, &v->remote_ip, 0);
+ if (v == vgood)
+ break;
+ }
+ }
+
+ return ret;
+}
+
+int vxlan_multicast_leave_vnigrp(struct vxlan_dev *vxlan)
+{
+ struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ struct vxlan_vni_group *vg = rtnl_dereference(vxlan->vnigrp);
+ struct vxlan_vni_node *v, *tmp;
+ int last_err = 0, ret;
+
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+ if (vxlan_addr_multicast(&v->remote_ip) &&
+ !vxlan_group_used(vn, vxlan, v->vni, &v->remote_ip,
+ 0)) {
+ ret = vxlan_igmp_leave(vxlan, &v->remote_ip, 0);
+ if (ret)
+ last_err = ret;
+ }
+ }
+
+ return last_err;
+}
+
+int vxlan_multicast_join(struct vxlan_dev *vxlan)
+{
+ int ret = 0;
+
+ if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) {
+ ret = vxlan_igmp_join(vxlan, &vxlan->default_dst.remote_ip,
+ vxlan->default_dst.remote_ifindex);
+ if (ret == -EADDRINUSE)
+ ret = 0;
+ if (ret)
+ return ret;
+ }
+
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+ return vxlan_multicast_join_vnigrp(vxlan);
+
+ return 0;
+}
+
+int vxlan_multicast_leave(struct vxlan_dev *vxlan)
+{
+ struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ int ret = 0;
+
+ if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip) &&
+ !vxlan_group_used(vn, vxlan, 0, NULL, 0)) {
+ ret = vxlan_igmp_leave(vxlan, &vxlan->default_dst.remote_ip,
+ vxlan->default_dst.remote_ifindex);
+ if (ret)
+ return ret;
+ }
+
+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+ return vxlan_multicast_leave_vnigrp(vxlan);
+
+ return 0;
+}
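vxlan_multicast_join_vnigrp() above joins each per-vni multicast group and, on failure, walks the list a second time to leave every group it had already joined, using vgood to remember the last successful entry. That join-with-rollback pattern can be modeled in a few lines of userspace C; the names here are hypothetical and the "join" is simulated:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: "joining" a group just marks it; index fail_at refuses. */
struct mc_group {
	bool joined;
};

static int join_group(struct mc_group *g, size_t i, size_t fail_at)
{
	if (i == fail_at)
		return -1;              /* simulated join failure */
	g->joined = true;
	return 0;
}

/* Join every group in the list; on error, leave the groups joined so
 * far, stopping at the last successful one (the 'vgood' analogue).
 */
static int join_all(struct mc_group *grps, size_t n, size_t fail_at)
{
	long last_ok = -1;              /* analogue of 'vgood' */
	int ret = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		ret = join_group(&grps[i], i, fail_at);
		if (ret)
			break;
		last_ok = (long)i;
	}
	if (ret && last_ok >= 0) {
		for (i = 0; i <= (size_t)last_ok; i++)
			grps[i].joined = false; /* rollback: leave group */
	}
	return ret;
}
```

After a mid-list failure the caller is left with no stray memberships, which matches what the patch does before propagating the error up from vxlan_multicast_join().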
diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
index ad2f561c6e94..73fe1c16060e 100644
--- a/drivers/net/vxlan/vxlan_private.h
+++ b/drivers/net/vxlan/vxlan_private.h
@@ -7,6 +7,8 @@
#ifndef _VXLAN_PRIVATE_H
#define _VXLAN_PRIVATE_H
+#include <linux/rhashtable.h>
+
extern unsigned int vxlan_net_id;
static const u8 all_zeros_mac[ETH_ALEN + 2];
@@ -92,6 +94,38 @@ bool vxlan_addr_equal(const union vxlan_addr *a, const union vxlan_addr *b)
#endif
+static inline int vxlan_vni_cmp(struct rhashtable_compare_arg *arg,
+ const void *ptr)
+{
+ const struct vxlan_vni_node *vnode = ptr;
+ __be32 vni = *(__be32 *)arg->key;
+
+ return vnode->vni != vni;
+}
+
+static const struct rhashtable_params vxlan_vni_rht_params = {
+ .head_offset = offsetof(struct vxlan_vni_node, vnode),
+ .key_offset = offsetof(struct vxlan_vni_node, vni),
+ .key_len = sizeof(__be32),
+ .nelem_hint = 3,
+ .max_size = VXLAN_N_VID,
+ .obj_cmpfn = vxlan_vni_cmp,
+ .automatic_shrinking = true,
+};
+
+static inline struct vxlan_vni_node *
+vxlan_vnifilter_lookup(struct vxlan_dev *vxlan, __be32 vni)
+{
+ struct vxlan_vni_group *vg;
+
+ vg = rcu_dereference_rtnl(vxlan->vnigrp);
+ if (!vg)
+ return NULL;
+
+ return rhashtable_lookup_fast(&vg->vni_hash, &vni,
+ vxlan_vni_rht_params);
+}
+
/* vxlan_core.c */
int vxlan_fdb_create(struct vxlan_dev *vxlan,
const u8 *mac, union vxlan_addr *ip,
@@ -111,12 +145,33 @@ int vxlan_fdb_update(struct vxlan_dev *vxlan,
__be16 port, __be32 src_vni, __be32 vni,
__u32 ifindex, __u16 ndm_flags, u32 nhid,
bool swdev_notify, struct netlink_ext_ack *extack);
+int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
+ struct vxlan_config *conf, __be32 vni);
+
+/* vxlan_vnifilter.c */
+int vxlan_vnigroup_init(struct vxlan_dev *vxlan);
+void vxlan_vnigroup_uninit(struct vxlan_dev *vxlan);
+
+void vxlan_vnifilter_init(void);
+void vxlan_vnifilter_uninit(void);
+
+void vxlan_vs_add_vnigrp(struct vxlan_dev *vxlan,
+ struct vxlan_sock *vs,
+ bool ipv6);
+void vxlan_vs_del_vnigrp(struct vxlan_dev *vxlan);
+int vxlan_vnilist_update_group(struct vxlan_dev *vxlan,
+ union vxlan_addr *old_remote_ip,
+ union vxlan_addr *new_remote_ip,
+ struct netlink_ext_ack *extack);
+
/* vxlan_multicast.c */
+int vxlan_multicast_join(struct vxlan_dev *vxlan);
+int vxlan_multicast_leave(struct vxlan_dev *vxlan);
+bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
+ __be32 vni, union vxlan_addr *rip, int rifindex);
int vxlan_igmp_join(struct vxlan_dev *vxlan, union vxlan_addr *rip,
int rifindex);
int vxlan_igmp_leave(struct vxlan_dev *vxlan, union vxlan_addr *rip,
int rifindex);
-bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev,
- union vxlan_addr *rip, int rifindex);
#endif
diff --git a/drivers/net/vxlan/vxlan_vnifilter.c b/drivers/net/vxlan/vxlan_vnifilter.c
new file mode 100644
index 000000000000..95a76ddfca75
--- /dev/null
+++ b/drivers/net/vxlan/vxlan_vnifilter.c
@@ -0,0 +1,833 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Vxlan vni filter for collect metadata mode
+ *
+ * Authors: Roopa Prabhu <roopa@nvidia.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/etherdevice.h>
+#include <linux/rhashtable.h>
+#include <net/rtnetlink.h>
+#include <net/net_namespace.h>
+#include <net/sock.h>
+#include <net/vxlan.h>
+
+#include "vxlan_private.h"
+
+void vxlan_vs_add_del_vninode(struct vxlan_dev *vxlan,
+ struct vxlan_vni_node *v,
+ bool del)
+{
+ struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ struct vxlan_dev_node *node;
+ struct vxlan_sock *vs;
+
+ spin_lock(&vn->sock_lock);
+ if (del) {
+ if (!hlist_unhashed(&v->hlist4.hlist))
+ hlist_del_init_rcu(&v->hlist4.hlist);
+#if IS_ENABLED(CONFIG_IPV6)
+ if (!hlist_unhashed(&v->hlist6.hlist))
+ hlist_del_init_rcu(&v->hlist6.hlist);
+#endif
+ goto out;
+ }
+
+#if IS_ENABLED(CONFIG_IPV6)
+ vs = rtnl_dereference(vxlan->vn6_sock);
+ if (vs && v) {
+ node = &v->hlist6;
+ hlist_add_head_rcu(&node->hlist, vni_head(vs, v->vni));
+ }
+#endif
+ vs = rtnl_dereference(vxlan->vn4_sock);
+ if (vs && v) {
+ node = &v->hlist4;
+ hlist_add_head_rcu(&node->hlist, vni_head(vs, v->vni));
+ }
+out:
+ spin_unlock(&vn->sock_lock);
+}
+
+void vxlan_vs_add_vnigrp(struct vxlan_dev *vxlan,
+ struct vxlan_sock *vs,
+ bool ipv6)
+{
+ struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ struct vxlan_vni_group *vg = rtnl_dereference(vxlan->vnigrp);
+ struct vxlan_vni_node *v, *tmp;
+ struct vxlan_dev_node *node;
+
+ if (!vg)
+ return;
+
+ spin_lock(&vn->sock_lock);
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+#if IS_ENABLED(CONFIG_IPV6)
+ if (ipv6)
+ node = &v->hlist6;
+ else
+#endif
+ node = &v->hlist4;
+ node->vxlan = vxlan;
+ hlist_add_head_rcu(&node->hlist, vni_head(vs, v->vni));
+ }
+ spin_unlock(&vn->sock_lock);
+}
+
+void vxlan_vs_del_vnigrp(struct vxlan_dev *vxlan)
+{
+ struct vxlan_vni_group *vg = rtnl_dereference(vxlan->vnigrp);
+ struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ struct vxlan_vni_node *v, *tmp;
+
+ if (!vg)
+ return;
+
+ spin_lock(&vn->sock_lock);
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+ hlist_del_init_rcu(&v->hlist4.hlist);
+#if IS_ENABLED(CONFIG_IPV6)
+ hlist_del_init_rcu(&v->hlist6.hlist);
+#endif
+ }
+ spin_unlock(&vn->sock_lock);
+}
+
+static u32 vnirange(struct vxlan_vni_node *vbegin,
+ struct vxlan_vni_node *vend)
+{
+ return (be32_to_cpu(vend->vni) - be32_to_cpu(vbegin->vni));
+}
+
+static size_t vxlan_vnifilter_entry_nlmsg_size(void)
+{
+ return NLMSG_ALIGN(sizeof(struct tunnel_msg))
+ + nla_total_size(0) /* VXLAN_VNIFILTER_ENTRY */
+ + nla_total_size(sizeof(u32)) /* VXLAN_VNIFILTER_ENTRY_START */
+ + nla_total_size(sizeof(u32)) /* VXLAN_VNIFILTER_ENTRY_END */
+ + nla_total_size(sizeof(struct in6_addr));/* VXLAN_VNIFILTER_ENTRY_GROUP{6} */
+}
+
+static bool vxlan_fill_vni_filter_entry(struct sk_buff *skb,
+ struct vxlan_vni_node *vbegin,
+ struct vxlan_vni_node *vend)
+{
+ struct nlattr *ventry;
+ u32 vs = be32_to_cpu(vbegin->vni);
+ u32 ve = 0;
+
+ if (vbegin != vend)
+ ve = be32_to_cpu(vend->vni);
+
+ ventry = nla_nest_start(skb, VXLAN_VNIFILTER_ENTRY);
+ if (!ventry)
+ return false;
+
+ if (nla_put_u32(skb, VXLAN_VNIFILTER_ENTRY_START, vs))
+ goto out_err;
+
+ if (ve && nla_put_u32(skb, VXLAN_VNIFILTER_ENTRY_END, ve))
+ goto out_err;
+
+ if (!vxlan_addr_any(&vbegin->remote_ip)) {
+ if (vbegin->remote_ip.sa.sa_family == AF_INET) {
+ if (nla_put_in_addr(skb, VXLAN_VNIFILTER_ENTRY_GROUP,
+ vbegin->remote_ip.sin.sin_addr.s_addr))
+ goto out_err;
+#if IS_ENABLED(CONFIG_IPV6)
+ } else {
+ if (nla_put_in6_addr(skb, VXLAN_VNIFILTER_ENTRY_GROUP6,
+ &vbegin->remote_ip.sin6.sin6_addr))
+ goto out_err;
+#endif
+ }
+ }
+
+ nla_nest_end(skb, ventry);
+
+ return true;
+
+out_err:
+ nla_nest_cancel(skb, ventry);
+
+ return false;
+}
+
+static void vxlan_vnifilter_notify(const struct vxlan_dev *vxlan,
+ struct vxlan_vni_node *vninode, int cmd)
+{
+ struct tunnel_msg *tmsg;
+ struct sk_buff *skb;
+ struct nlmsghdr *nlh;
+ struct net *net = dev_net(vxlan->dev);
+ int err = -ENOBUFS;
+
+ skb = nlmsg_new(vxlan_vnifilter_entry_nlmsg_size(), GFP_KERNEL);
+ if (!skb)
+ goto out_err;
+
+ err = -EMSGSIZE;
+ nlh = nlmsg_put(skb, 0, 0, cmd, sizeof(*tmsg), 0);
+ if (!nlh)
+ goto out_err;
+ tmsg = nlmsg_data(nlh);
+ memset(tmsg, 0, sizeof(*tmsg));
+ tmsg->family = AF_BRIDGE;
+ tmsg->ifindex = vxlan->dev->ifindex;
+
+ if (!vxlan_fill_vni_filter_entry(skb, vninode, vninode))
+ goto out_err;
+
+ nlmsg_end(skb, nlh);
+ rtnl_notify(skb, net, 0, RTNLGRP_TUNNEL, NULL, GFP_KERNEL);
+
+ return;
+
+out_err:
+ rtnl_set_sk_err(net, RTNLGRP_TUNNEL, err);
+
+ kfree_skb(skb);
+}
+
+static int vxlan_vnifilter_dump_dev(const struct net_device *dev,
+ struct sk_buff *skb,
+ struct netlink_callback *cb)
+{
+ struct vxlan_vni_node *tmp, *v, *vbegin = NULL, *vend = NULL;
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ struct tunnel_msg *new_tmsg, *tmsg;
+ int idx = 0, s_idx = cb->args[1];
+ struct vxlan_vni_group *vg;
+ struct nlmsghdr *nlh;
+ int err = 0;
+
+ if (!(vxlan->cfg.flags & VXLAN_F_VNIFILTER))
+ return -EINVAL;
+
+ /* RCU needed because of the vni locking rules (rcu || rtnl) */
+ vg = rcu_dereference(vxlan->vnigrp);
+ if (!vg || !vg->num_vnis)
+ return 0;
+
+ tmsg = nlmsg_data(cb->nlh);
+
+ nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+ RTM_NEWTUNNEL, sizeof(*new_tmsg), NLM_F_MULTI);
+ if (!nlh)
+ return -EMSGSIZE;
+ new_tmsg = nlmsg_data(nlh);
+ memset(new_tmsg, 0, sizeof(*new_tmsg));
+ new_tmsg->family = PF_BRIDGE;
+ new_tmsg->ifindex = dev->ifindex;
+
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+ if (idx < s_idx) {
+ idx++;
+ continue;
+ }
+ if (!vbegin) {
+ vbegin = v;
+ vend = v;
+ continue;
+ }
+ if (vnirange(vend, v) == 1 &&
+ vxlan_addr_equal(&v->remote_ip, &vend->remote_ip)) {
+ goto update_end;
+ } else {
+ if (!vxlan_fill_vni_filter_entry(skb, vbegin, vend)) {
+ err = -EMSGSIZE;
+ break;
+ }
+ idx += vnirange(vbegin, vend) + 1;
+ vbegin = v;
+ }
+update_end:
+ vend = v;
+ }
+
+ if (!err && vbegin) {
+ if (!vxlan_fill_vni_filter_entry(skb, vbegin, vend))
+ err = -EMSGSIZE;
+ }
+
+ cb->args[1] = err ? idx : 0;
+
+ nlmsg_end(skb, nlh);
+
+ return err;
+}
+
+static int vxlan_vnifilter_dump(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ int idx = 0, err = 0, s_idx = cb->args[0];
+ struct net *net = sock_net(skb->sk);
+ struct tunnel_msg *tmsg;
+ struct net_device *dev;
+
+ tmsg = nlmsg_data(cb->nlh);
+
+ rcu_read_lock();
+ if (tmsg->ifindex) {
+ dev = dev_get_by_index_rcu(net, tmsg->ifindex);
+ if (!dev) {
+ err = -ENODEV;
+ goto out_err;
+ }
+ err = vxlan_vnifilter_dump_dev(dev, skb, cb);
+ /* if the dump completed without an error we return 0 here */
+ if (err != -EMSGSIZE)
+ goto out_err;
+ } else {
+ for_each_netdev_rcu(net, dev) {
+ if (!netif_is_vxlan(dev))
+ continue;
+ if (idx < s_idx)
+ goto skip;
+ err = vxlan_vnifilter_dump_dev(dev, skb, cb);
+ if (err == -EMSGSIZE)
+ break;
+skip:
+ idx++;
+ }
+ }
+ cb->args[0] = idx;
+ rcu_read_unlock();
+
+ return skb->len;
+
+out_err:
+ rcu_read_unlock();
+
+ return err;
+}
+
+static const struct nla_policy vni_filter_entry_policy[VXLAN_VNIFILTER_ENTRY_MAX + 1] = {
+ [VXLAN_VNIFILTER_ENTRY_START] = { .type = NLA_U32 },
+ [VXLAN_VNIFILTER_ENTRY_END] = { .type = NLA_U32 },
+ [VXLAN_VNIFILTER_ENTRY_GROUP] = { .type = NLA_BINARY,
+ .len = sizeof_field(struct iphdr, daddr) },
+ [VXLAN_VNIFILTER_ENTRY_GROUP6] = { .type = NLA_BINARY,
+ .len = sizeof(struct in6_addr) },
+};
+
+static int vxlan_update_default_fdb_entry(struct vxlan_dev *vxlan, __be32 vni,
+ union vxlan_addr *old_remote_ip,
+ union vxlan_addr *remote_ip,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_rdst *dst = &vxlan->default_dst;
+ u32 hash_index;
+ int err = 0;
+
+ hash_index = fdb_head_index(vxlan, all_zeros_mac, vni);
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
+ if (remote_ip && !vxlan_addr_any(remote_ip)) {
+ err = vxlan_fdb_update(vxlan, all_zeros_mac,
+ remote_ip,
+ NUD_REACHABLE | NUD_PERMANENT,
+ NLM_F_APPEND | NLM_F_CREATE,
+ vxlan->cfg.dst_port,
+ vni,
+ vni,
+ dst->remote_ifindex,
+ NTF_SELF, 0, true, extack);
+ if (err) {
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+ return err;
+ }
+ }
+
+ if (old_remote_ip && !vxlan_addr_any(old_remote_ip)) {
+ __vxlan_fdb_delete(vxlan, all_zeros_mac,
+ *old_remote_ip,
+ vxlan->cfg.dst_port,
+ vni, vni,
+ dst->remote_ifindex,
+ true);
+ }
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+
+ return err;
+}
+
+static int vxlan_vni_update_group(struct vxlan_dev *vxlan,
+ struct vxlan_vni_node *vninode,
+ union vxlan_addr *group,
+ bool create, bool *changed,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ struct vxlan_rdst *dst = &vxlan->default_dst;
+ union vxlan_addr *newrip = NULL, *oldrip = NULL;
+ union vxlan_addr old_remote_ip;
+ int ret = 0;
+
+ memcpy(&old_remote_ip, &vninode->remote_ip, sizeof(old_remote_ip));
+
+ /* if per vni remote ip is not present use vxlan dev
+ * default dst remote ip for fdb entry
+ */
+ if (group && !vxlan_addr_any(group)) {
+ newrip = group;
+ } else {
+ if (!vxlan_addr_any(&dst->remote_ip))
+ newrip = &dst->remote_ip;
+ }
+
+ /* if old rip exists, and no newrip,
+ * explicitly delete old rip
+ */
+ if (!newrip && !vxlan_addr_any(&old_remote_ip))
+ oldrip = &old_remote_ip;
+
+ if (!newrip && !oldrip)
+ return 0;
+
+ if (!create && oldrip && newrip && vxlan_addr_equal(oldrip, newrip))
+ return 0;
+
+ ret = vxlan_update_default_fdb_entry(vxlan, vninode->vni,
+ oldrip, newrip,
+ extack);
+ if (ret)
+ goto out;
+
+ if (group)
+ memcpy(&vninode->remote_ip, group, sizeof(vninode->remote_ip));
+
+ if (vxlan->dev->flags & IFF_UP) {
+ if (vxlan_addr_multicast(&old_remote_ip) &&
+ !vxlan_group_used(vn, vxlan, vninode->vni,
+ &old_remote_ip,
+ vxlan->default_dst.remote_ifindex)) {
+ ret = vxlan_igmp_leave(vxlan, &old_remote_ip,
+ 0);
+ if (ret)
+ goto out;
+ }
+
+ if (vxlan_addr_multicast(&vninode->remote_ip)) {
+ ret = vxlan_igmp_join(vxlan, &vninode->remote_ip, 0);
+ if (ret == -EADDRINUSE)
+ ret = 0;
+ if (ret)
+ goto out;
+ }
+ }
+
+ *changed = true;
+
+ return 0;
+out:
+ return ret;
+}
+
+int vxlan_vnilist_update_group(struct vxlan_dev *vxlan,
+ union vxlan_addr *old_remote_ip,
+ union vxlan_addr *new_remote_ip,
+ struct netlink_ext_ack *extack)
+{
+ struct list_head *headp, *hpos;
+ struct vxlan_vni_group *vg;
+ struct vxlan_vni_node *vent;
+ int ret;
+
+ vg = rtnl_dereference(vxlan->vnigrp);
+
+ headp = &vg->vni_list;
+ list_for_each_prev(hpos, headp) {
+ vent = list_entry(hpos, struct vxlan_vni_node, vlist);
+ if (vxlan_addr_any(&vent->remote_ip)) {
+ ret = vxlan_update_default_fdb_entry(vxlan, vent->vni,
+ old_remote_ip,
+ new_remote_ip,
+ extack);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static void vxlan_vni_delete_group(struct vxlan_dev *vxlan,
+ struct vxlan_vni_node *vninode)
+{
+ struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
+ struct vxlan_rdst *dst = &vxlan->default_dst;
+
+ /* if per vni remote_ip not present, delete the
+ * default dst remote_ip previously added for this vni
+ */
+ if (!vxlan_addr_any(&vninode->remote_ip) ||
+ !vxlan_addr_any(&dst->remote_ip))
+ __vxlan_fdb_delete(vxlan, all_zeros_mac,
+ (vxlan_addr_any(&vninode->remote_ip) ?
+ dst->remote_ip : vninode->remote_ip),
+ vxlan->cfg.dst_port,
+ vninode->vni, vninode->vni,
+ dst->remote_ifindex,
+ true);
+
+ if (vxlan->dev->flags & IFF_UP) {
+ if (vxlan_addr_multicast(&vninode->remote_ip) &&
+ !vxlan_group_used(vn, vxlan, vninode->vni,
+ &vninode->remote_ip,
+ dst->remote_ifindex)) {
+ vxlan_igmp_leave(vxlan, &vninode->remote_ip, 0);
+ }
+ }
+}
+
+static int vxlan_vni_update(struct vxlan_dev *vxlan,
+ struct vxlan_vni_group *vg,
+ __be32 vni, union vxlan_addr *group,
+ bool *changed,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_vni_node *vninode;
+ int ret;
+
+ vninode = rhashtable_lookup_fast(&vg->vni_hash, &vni,
+ vxlan_vni_rht_params);
+ if (!vninode)
+ return 0;
+
+ ret = vxlan_vni_update_group(vxlan, vninode, group, false, changed,
+ extack);
+ if (ret)
+ return ret;
+
+	if (*changed)
+		vxlan_vnifilter_notify(vxlan, vninode, RTM_NEWTUNNEL);
+
+ return 0;
+}
+
+static void __vxlan_vni_add_list(struct vxlan_vni_group *vg,
+ struct vxlan_vni_node *v)
+{
+ struct list_head *headp, *hpos;
+ struct vxlan_vni_node *vent;
+
+ headp = &vg->vni_list;
+ list_for_each_prev(hpos, headp) {
+ vent = list_entry(hpos, struct vxlan_vni_node, vlist);
+ if (be32_to_cpu(v->vni) < be32_to_cpu(vent->vni))
+ continue;
+ else
+ break;
+ }
+ list_add_rcu(&v->vlist, hpos);
+ vg->num_vnis++;
+}
+
+static void __vxlan_vni_del_list(struct vxlan_vni_group *vg,
+ struct vxlan_vni_node *v)
+{
+ list_del_rcu(&v->vlist);
+ vg->num_vnis--;
+}
+
+static struct vxlan_vni_node *vxlan_vni_alloc(struct vxlan_dev *vxlan,
+ __be32 vni)
+{
+ struct vxlan_vni_node *vninode;
+
+ vninode = kzalloc(sizeof(*vninode), GFP_ATOMIC);
+ if (!vninode)
+ return NULL;
+ vninode->vni = vni;
+	vninode->hlist4.vxlan = vxlan;
+#if IS_ENABLED(CONFIG_IPV6)
+	vninode->hlist6.vxlan = vxlan;
+#endif
+
+ return vninode;
+}
+
+static int vxlan_vni_add(struct vxlan_dev *vxlan,
+ struct vxlan_vni_group *vg,
+ u32 vni, union vxlan_addr *group,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_vni_node *vninode;
+ __be32 v = cpu_to_be32(vni);
+ bool changed = false;
+ int err = 0;
+
+ if (vxlan_vnifilter_lookup(vxlan, v))
+ return vxlan_vni_update(vxlan, vg, v, group, &changed, extack);
+
+ err = vxlan_vni_in_use(vxlan->net, vxlan, &vxlan->cfg, v);
+ if (err) {
+ NL_SET_ERR_MSG(extack, "VNI in use");
+ return err;
+ }
+
+ vninode = vxlan_vni_alloc(vxlan, v);
+ if (!vninode)
+ return -ENOMEM;
+
+ err = rhashtable_lookup_insert_fast(&vg->vni_hash,
+ &vninode->vnode,
+ vxlan_vni_rht_params);
+	if (err) {
+		kfree(vninode);
+		return err;
+	}
+
+ __vxlan_vni_add_list(vg, vninode);
+
+ if (vxlan->dev->flags & IFF_UP)
+ vxlan_vs_add_del_vninode(vxlan, vninode, false);
+
+ err = vxlan_vni_update_group(vxlan, vninode, group, true, &changed,
+ extack);
+
+ if (changed)
+ vxlan_vnifilter_notify(vxlan, vninode, RTM_NEWTUNNEL);
+
+ return err;
+}
+
+static void vxlan_vni_node_rcu_free(struct rcu_head *rcu)
+{
+ struct vxlan_vni_node *v;
+
+ v = container_of(rcu, struct vxlan_vni_node, rcu);
+ kfree(v);
+}
+
+static int vxlan_vni_del(struct vxlan_dev *vxlan,
+ struct vxlan_vni_group *vg,
+ u32 vni, struct netlink_ext_ack *extack)
+{
+ struct vxlan_vni_node *vninode;
+ __be32 v = cpu_to_be32(vni);
+ int err = 0;
+
+ vg = rtnl_dereference(vxlan->vnigrp);
+
+ vninode = rhashtable_lookup_fast(&vg->vni_hash, &v,
+ vxlan_vni_rht_params);
+ if (!vninode) {
+ err = -ENOENT;
+ goto out;
+ }
+
+ vxlan_vni_delete_group(vxlan, vninode);
+
+ err = rhashtable_remove_fast(&vg->vni_hash,
+ &vninode->vnode,
+ vxlan_vni_rht_params);
+ if (err)
+ goto out;
+
+ __vxlan_vni_del_list(vg, vninode);
+
+ vxlan_vnifilter_notify(vxlan, vninode, RTM_DELTUNNEL);
+
+ if (vxlan->dev->flags & IFF_UP)
+ vxlan_vs_add_del_vninode(vxlan, vninode, true);
+
+ call_rcu(&vninode->rcu, vxlan_vni_node_rcu_free);
+
+ return 0;
+out:
+ return err;
+}
+
+static int vxlan_vni_add_del(struct vxlan_dev *vxlan, __u32 start_vni,
+ __u32 end_vni, union vxlan_addr *group,
+ int cmd, struct netlink_ext_ack *extack)
+{
+ struct vxlan_vni_group *vg;
+ int v, err = 0;
+
+ vg = rtnl_dereference(vxlan->vnigrp);
+
+ for (v = start_vni; v <= end_vni; v++) {
+ switch (cmd) {
+ case RTM_NEWTUNNEL:
+ err = vxlan_vni_add(vxlan, vg, v, group, extack);
+ break;
+ case RTM_DELTUNNEL:
+ err = vxlan_vni_del(vxlan, vg, v, extack);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+ if (err)
+ goto out;
+ }
+
+ return 0;
+out:
+ return err;
+}
+
+static int vxlan_process_vni_filter(struct vxlan_dev *vxlan,
+ struct nlattr *nlvnifilter,
+ int cmd, struct netlink_ext_ack *extack)
+{
+ struct nlattr *vattrs[VXLAN_VNIFILTER_ENTRY_MAX + 1];
+ u32 vni_start = 0, vni_end = 0;
+ union vxlan_addr group;
+ int err = 0;
+
+ err = nla_parse_nested(vattrs,
+ VXLAN_VNIFILTER_ENTRY_MAX,
+ nlvnifilter, vni_filter_entry_policy,
+ extack);
+ if (err)
+ return err;
+
+ if (vattrs[VXLAN_VNIFILTER_ENTRY_START]) {
+ vni_start = nla_get_u32(vattrs[VXLAN_VNIFILTER_ENTRY_START]);
+ vni_end = vni_start;
+ }
+
+ if (vattrs[VXLAN_VNIFILTER_ENTRY_END])
+ vni_end = nla_get_u32(vattrs[VXLAN_VNIFILTER_ENTRY_END]);
+
+	if (!vni_start && !vni_end) {
+		NL_SET_ERR_MSG_ATTR(extack, nlvnifilter,
+				    "vni start or end not found in vni entry");
+		return -EINVAL;
+	}
+
+ if (vattrs[VXLAN_VNIFILTER_ENTRY_GROUP]) {
+ group.sin.sin_addr.s_addr =
+ nla_get_in_addr(vattrs[VXLAN_VNIFILTER_ENTRY_GROUP]);
+ group.sa.sa_family = AF_INET;
+ } else if (vattrs[VXLAN_VNIFILTER_ENTRY_GROUP6]) {
+ group.sin6.sin6_addr =
+ nla_get_in6_addr(vattrs[VXLAN_VNIFILTER_ENTRY_GROUP6]);
+ group.sa.sa_family = AF_INET6;
+ } else {
+ memset(&group, 0, sizeof(group));
+ }
+
+ err = vxlan_vni_add_del(vxlan, vni_start, vni_end, &group, cmd,
+ extack);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+void vxlan_vnigroup_uninit(struct vxlan_dev *vxlan)
+{
+ struct vxlan_vni_node *v, *tmp;
+ struct vxlan_vni_group *vg;
+
+ vg = rtnl_dereference(vxlan->vnigrp);
+ list_for_each_entry_safe(v, tmp, &vg->vni_list, vlist) {
+ rhashtable_remove_fast(&vg->vni_hash, &v->vnode,
+ vxlan_vni_rht_params);
+		hlist_del_init_rcu(&v->hlist4.hlist);
+#if IS_ENABLED(CONFIG_IPV6)
+		hlist_del_init_rcu(&v->hlist6.hlist);
+#endif
+ __vxlan_vni_del_list(vg, v);
+ call_rcu(&v->rcu, vxlan_vni_node_rcu_free);
+ }
+ rhashtable_destroy(&vg->vni_hash);
+ kfree(vg);
+}
+
+int vxlan_vnigroup_init(struct vxlan_dev *vxlan)
+{
+ struct vxlan_vni_group *vg;
+ int ret = -ENOMEM;
+
+ vg = kzalloc(sizeof(*vg), GFP_KERNEL);
+ if (!vg)
+ goto out;
+ ret = rhashtable_init(&vg->vni_hash, &vxlan_vni_rht_params);
+ if (ret)
+ goto err_rhtbl;
+ INIT_LIST_HEAD(&vg->vni_list);
+ rcu_assign_pointer(vxlan->vnigrp, vg);
+
+ return 0;
+
+out:
+ return ret;
+
+err_rhtbl:
+ kfree(vg);
+
+ goto out;
+}
+
+static int vxlan_vnifilter_process(struct sk_buff *skb, struct nlmsghdr *nlh,
+ struct netlink_ext_ack *extack)
+{
+ struct net *net = sock_net(skb->sk);
+ struct tunnel_msg *tmsg;
+ struct vxlan_dev *vxlan;
+ struct net_device *dev;
+ struct nlattr *attr;
+ int err, vnis = 0;
+ int rem;
+
+ /* this should validate the header and check for remaining bytes */
+ err = nlmsg_parse(nlh, sizeof(*tmsg), NULL, VXLAN_VNIFILTER_MAX, NULL,
+ extack);
+ if (err < 0)
+ return err;
+
+ tmsg = nlmsg_data(nlh);
+ dev = __dev_get_by_index(net, tmsg->ifindex);
+ if (!dev)
+ return -ENODEV;
+
+ if (!netif_is_vxlan(dev)) {
+ NL_SET_ERR_MSG_MOD(extack, "The device is not a vxlan device");
+ return -EINVAL;
+ }
+
+ vxlan = netdev_priv(dev);
+
+ if (!(vxlan->cfg.flags & VXLAN_F_VNIFILTER))
+ return -EOPNOTSUPP;
+
+ nlmsg_for_each_attr(attr, nlh, sizeof(*tmsg), rem) {
+ switch (nla_type(attr)) {
+ case VXLAN_VNIFILTER_ENTRY:
+ err = vxlan_process_vni_filter(vxlan, attr,
+ nlh->nlmsg_type, extack);
+ break;
+ default:
+ continue;
+ }
+ vnis++;
+ if (err)
+ break;
+ }
+
+ if (!vnis) {
+ NL_SET_ERR_MSG_MOD(extack, "No vnis found to process");
+ err = -EINVAL;
+ }
+
+ return err;
+}
+
+void vxlan_vnifilter_init(void)
+{
+ rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_GETTUNNEL, NULL,
+ vxlan_vnifilter_dump, 0);
+ rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_NEWTUNNEL,
+ vxlan_vnifilter_process, NULL, 0);
+ rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_DELTUNNEL,
+ vxlan_vnifilter_process, NULL, 0);
+}
+
+void vxlan_vnifilter_uninit(void)
+{
+ rtnl_unregister(PF_BRIDGE, RTM_GETTUNNEL);
+ rtnl_unregister(PF_BRIDGE, RTM_NEWTUNNEL);
+ rtnl_unregister(PF_BRIDGE, RTM_DELTUNNEL);
+}
diff --git a/include/net/vxlan.h b/include/net/vxlan.h
index 5a934bebe630..8eb961bb9589 100644
--- a/include/net/vxlan.h
+++ b/include/net/vxlan.h
@@ -232,6 +232,25 @@ struct vxlan_dev_node {
struct vxlan_dev *vxlan;
};
+struct vxlan_vni_node {
+ struct rhash_head vnode;
+ struct vxlan_dev_node hlist4; /* vni hash table for IPv4 socket */
+#if IS_ENABLED(CONFIG_IPV6)
+ struct vxlan_dev_node hlist6; /* vni hash table for IPv6 socket */
+#endif
+ struct list_head vlist;
+ __be32 vni;
+ union vxlan_addr remote_ip; /* default remote ip for this vni */
+
+ struct rcu_head rcu;
+};
+
+struct vxlan_vni_group {
+ struct rhashtable vni_hash;
+ struct list_head vni_list;
+ u32 num_vnis;
+};
+
/* Pseudo network device */
struct vxlan_dev {
struct vxlan_dev_node hlist4; /* vni hash table for IPv4 socket */
@@ -254,6 +273,8 @@ struct vxlan_dev {
struct vxlan_config cfg;
+ struct vxlan_vni_group __rcu *vnigrp;
+
struct hlist_head fdb_head[FDB_HASH_SIZE];
};
@@ -274,6 +295,7 @@ struct vxlan_dev {
#define VXLAN_F_GPE 0x4000
#define VXLAN_F_IPV6_LINKLOCAL 0x8000
#define VXLAN_F_TTL_INHERIT 0x10000
+#define VXLAN_F_VNIFILTER 0x20000
/* Flags that are used in the receive path. These flags must match in
* order for a socket to be shareable
@@ -283,7 +305,8 @@ struct vxlan_dev {
VXLAN_F_UDP_ZERO_CSUM6_RX | \
VXLAN_F_REMCSUM_RX | \
VXLAN_F_REMCSUM_NOPARTIAL | \
- VXLAN_F_COLLECT_METADATA)
+ VXLAN_F_COLLECT_METADATA | \
+ VXLAN_F_VNIFILTER)
/* Flags that can be set together with VXLAN_F_GPE. */
#define VXLAN_F_ALLOWED_GPE (VXLAN_F_GPE | \
@@ -292,7 +315,8 @@ struct vxlan_dev {
VXLAN_F_UDP_ZERO_CSUM_TX | \
VXLAN_F_UDP_ZERO_CSUM6_TX | \
VXLAN_F_UDP_ZERO_CSUM6_RX | \
- VXLAN_F_COLLECT_METADATA)
+ VXLAN_F_COLLECT_METADATA | \
+ VXLAN_F_VNIFILTER)
struct net_device *vxlan_dev_create(struct net *net, const char *name,
u8 name_assign_type, struct vxlan_config *conf);
--
2.25.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH net-next 09/12] selftests: add new tests for vxlan vnifiltering
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (7 preceding siblings ...)
2022-02-20 14:04 ` [PATCH net-next 08/12] vxlan: vni filtering support on collect metadata device Roopa Prabhu
@ 2022-02-20 14:04 ` Roopa Prabhu
2022-02-20 14:04 ` [PATCH net-next 10/12] selinux: add support for RTM_NEWTUNNEL, RTM_DELTUNNEL, and RTM_GETTUNNEL Roopa Prabhu
` (2 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:04 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
This patch adds a new test script test_vxlan_vnifiltering.sh
with tests for the vni filtering api and various datapath tests.
It also includes a test with a mix of traditional, metadata and vni
filtering devices in use at the same time.
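For reference, a minimal sketch of the vnifiltering commands the tests
exercise (device names and addresses here are illustrative only):

```shell
# create a collect metadata vxlan device with vni filtering enabled
ip link add vxlan0 type vxlan external vnifilter local 172.16.0.1 dstport 4789

# add vnis terminated by the device with the new 'bridge vni' api;
# a vni can optionally carry its own BUM group/remote
bridge vni add dev vxlan0 vni 200
bridge vni add dev vxlan0 vni 300 group 239.1.1.101

# delete a vni and list the configured vni filter
bridge vni del dev vxlan0 vni 200
bridge vni show
```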
Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
---
.../selftests/net/test_vxlan_vnifiltering.sh | 581 ++++++++++++++++++
1 file changed, 581 insertions(+)
create mode 100755 tools/testing/selftests/net/test_vxlan_vnifiltering.sh
diff --git a/tools/testing/selftests/net/test_vxlan_vnifiltering.sh b/tools/testing/selftests/net/test_vxlan_vnifiltering.sh
new file mode 100755
index 000000000000..98abcb55b7c2
--- /dev/null
+++ b/tools/testing/selftests/net/test_vxlan_vnifiltering.sh
@@ -0,0 +1,581 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# This test is for checking the VXLAN vni filtering api and
+# datapath.
+# It simulates two hypervisors running two VMs each, using six
+# network namespaces: two for the HVs, four for the VMs. Each VM is
+# connected to a separate bridge. The VMs use overlapping vlans and
+# hence need separate bridge domains. Each vxlan device is a collect
+# metadata device with vni filtering, and hence terminates only its
+# configured vni's.
+
+# +--------------------------------+ +------------------------------------+
+# | vm-11 netns | | vm-21 netns |
+# | | | |
+# |+------------+ +-------------+ | |+-------------+ +----------------+ |
+# ||veth-11.10 | |veth-11.20 | | ||veth-21.10 | | veth-21.20 | |
+# ||10.0.10.11/24 |10.0.20.11/24| | ||10.0.10.21/24| | 10.0.20.21/24 | |
+# |+------|-----+ +|------------+ | |+-----------|-+ +---|------------+ |
+# | | | | | | | |
+# | | | | | +------------+ |
+# | +------------+ | | | veth-21 | |
+# | | veth-11 | | | | | |
+# | | | | | +-----|------+ |
+# | +-----|------+ | | | |
+# | | | | | |
+# +------------|-------------------+ +---------------|--------------------+
+# +------------|-----------------------------------------|-------------------+
+# | +-----|------+ +-----|------+ |
+# | |vethhv-11 | |vethhv-21 | |
+# | +----|-------+ +-----|------+ |
+# | +---|---+ +---|--+ |
+# | | br1 | | br2 | |
+# | +---|---+ +---|--+ |
+# | +---|----+ +---|--+ |
+# | | vxlan1| |vxlan2| |
+# | +--|-----+ +--|---+ |
+# | | | |
+# | | +---------------------+ | |
+# | | |veth0 | | |
+# | +---------|172.16.0.1/24 -----------+ |
+# | |2002:fee1::1/64 | |
+# | hv-1 netns +--------|------------+ |
+# +-----------------------------|--------------------------------------------+
+# |
+# +-----------------------------|--------------------------------------------+
+# | hv-2 netns +--------|-------------+ |
+# | | veth0 | |
+# | +------| 172.16.0.2/24 |---+ |
+# | | | 2002:fee1::2/64 | | |
+# | | | | | |
+# | | +----------------------+ | - |
+# | | | |
+# | +-|-------+ +--------|-+ |
+# | | vxlan1 | | vxlan2 | |
+# | +----|----+ +---|------+ |
+# | +--|--+ +-|---+ |
+# | | br1 | | br2 | |
+# | +--|--+ +--|--+ |
+# | +-----|-------+ +----|-------+ |
+# | | vethhv-12 | |vethhv-22 | |
+# | +------|------+ +-------|----+ |
+# +-----------------|----------------------------|---------------------------+
+# | |
+# +-----------------|-----------------+ +--------|---------------------------+
+# | +-------|---+ | | +--|---------+ |
+# | | veth-12 | | | |veth-22 | |
+# | +-|--------|+ | | +--|--------|+ |
+# | | | | | | | |
+# |+----------|--+ +---|-----------+ | |+-------|-----+ +|---------------+ |
+# ||veth-12.10 | |veth-12.20 | | ||veth-22.10 | |veth-22.20 | |
+# ||10.0.10.12/24| |10.0.20.12/24 | | ||10.0.10.22/24| |10.0.20.22/24 | |
+# |+-------------+ +---------------+ | |+-------------+ +----------------+ |
+# | | | |
+# | | | |
+# | vm-12 netns | |vm-22 netns |
+# +-----------------------------------+ +------------------------------------+
+#
+#
+# This test tests the new vxlan vnifiltering api
+
+ret=0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+# all tests in this script. Can be overridden with -t option
+TESTS="
+ vxlan_vnifilter_api
+ vxlan_vnifilter_datapath
+ vxlan_vnifilter_datapath_pervni
+ vxlan_vnifilter_datapath_mgroup
+ vxlan_vnifilter_datapath_mgroup_pervni
+ vxlan_vnifilter_metadata_and_traditional_mix
+"
+VERBOSE=0
+PAUSE_ON_FAIL=no
+PAUSE=no
+IP="ip -netns ns1"
+NS_EXEC="ip netns exec ns1"
+
+which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
+
+log_test()
+{
+ local rc=$1
+ local expected=$2
+ local msg="$3"
+
+ if [ ${rc} -eq ${expected} ]; then
+ printf " TEST: %-60s [ OK ]\n" "${msg}"
+ nsuccess=$((nsuccess+1))
+ else
+ ret=1
+ nfail=$((nfail+1))
+ printf " TEST: %-60s [FAIL]\n" "${msg}"
+ if [ "${PAUSE_ON_FAIL}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+ fi
+
+ if [ "${PAUSE}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+}
+
+run_cmd()
+{
+ local cmd="$1"
+ local out
+ local stderr="2>/dev/null"
+
+ if [ "$VERBOSE" = "1" ]; then
+ printf "COMMAND: $cmd\n"
+ stderr=
+ fi
+
+ out=$(eval $cmd $stderr)
+ rc=$?
+ if [ "$VERBOSE" = "1" -a -n "$out" ]; then
+ echo " $out"
+ fi
+
+ return $rc
+}
+
+check_hv_connectivity() {
+ ip netns exec hv-1 ping -c 1 -W 1 $1 &>/dev/null
+ sleep 1
+ ip netns exec hv-1 ping -c 1 -W 1 $2 &>/dev/null
+
+ return $?
+}
+
+check_vm_connectivity() {
+ run_cmd "ip netns exec vm-11 ping -c 1 -W 1 10.0.10.12"
+ log_test $? 0 "VM connectivity over $1 (ipv4 default rdst)"
+
+ run_cmd "ip netns exec vm-21 ping -c 1 -W 1 10.0.10.22"
+ log_test $? 0 "VM connectivity over $1 (ipv6 default rdst)"
+}
+
+cleanup() {
+ ip link del veth-hv-1 2>/dev/null || true
+ ip link del vethhv-11 vethhv-12 vethhv-21 vethhv-22 2>/dev/null || true
+
+ for ns in hv-1 hv-2 vm-11 vm-21 vm-12 vm-22 vm-31 vm-32; do
+ ip netns del $ns 2>/dev/null || true
+ done
+}
+
+trap cleanup EXIT
+
+setup-hv-networking() {
+ hv=$1
+ local1=$2
+ mask1=$3
+ local2=$4
+ mask2=$5
+
+ ip netns add hv-$hv
+ ip link set veth-hv-$hv netns hv-$hv
+ ip -netns hv-$hv link set veth-hv-$hv name veth0
+ ip -netns hv-$hv addr add $local1/$mask1 dev veth0
+ ip -netns hv-$hv addr add $local2/$mask2 dev veth0
+ ip -netns hv-$hv link set veth0 up
+}
+
+# Sets up a "VM" simulated by a netns and a veth pair
+# example: setup-vm <hvid> <vmid> <brid> <VATTRS> <mcast_for_bum>
+# VATTRS = comma separated "<vlan>-<v[46]>-<localip>-<remoteip>-<VTYPE>-<vxlandstport>"
+# VTYPE = vxlan device type. "default = traditional device, metadata = metadata device
+# vnifilter = vnifiltering device,
+# vnifilterg = vnifiltering device with per vni group/remote"
+# example:
+# setup-vm 1 11 1 \
+# 10-v4-172.16.0.1-239.1.1.100-vnifilterg,20-v4-172.16.0.1-239.1.1.100-vnifilterg 1
+#
+setup-vm() {
+ hvid=$1
+ vmid=$2
+ brid=$3
+ vattrs=$4
+ mcast=$5
+ lastvxlandev=""
+
+ # create bridge
+ ip -netns hv-$hvid link add br$brid type bridge vlan_filtering 1 vlan_default_pvid 0 \
+ mcast_snooping 0
+ ip -netns hv-$hvid link set br$brid up
+
+ # create vm namespace and interfaces and connect to hypervisor
+ # namespace
+ ip netns add vm-$vmid
+ hvvethif="vethhv-$vmid"
+ vmvethif="veth-$vmid"
+ ip link add $hvvethif type veth peer name $vmvethif
+ ip link set $hvvethif netns hv-$hvid
+ ip link set $vmvethif netns vm-$vmid
+ ip -netns hv-$hvid link set $hvvethif up
+ ip -netns vm-$vmid link set $vmvethif up
+ ip -netns hv-$hvid link set $hvvethif master br$brid
+
+ # configure VM vlan/vni filtering on hypervisor
+ for vmap in $(echo $vattrs | cut -d "," -f1- --output-delimiter=' ')
+ do
+ local vid=$(echo $vmap | awk -F'-' '{print ($1)}')
+ local family=$(echo $vmap | awk -F'-' '{print ($2)}')
+ local localip=$(echo $vmap | awk -F'-' '{print ($3)}')
+ local group=$(echo $vmap | awk -F'-' '{print ($4)}')
+ local vtype=$(echo $vmap | awk -F'-' '{print ($5)}')
+ local port=$(echo $vmap | awk -F'-' '{print ($6)}')
+
+ ip -netns vm-$vmid link add name $vmvethif.$vid link $vmvethif type vlan id $vid
+ ip -netns vm-$vmid addr add 10.0.$vid.$vmid/24 dev $vmvethif.$vid
+ ip -netns vm-$vmid link set $vmvethif.$vid up
+
+ tid=$vid
+ vxlandev="vxlan$brid"
+ vxlandevflags=""
+
+ if [[ -n $vtype && $vtype == "metadata" ]]; then
+ vxlandevflags="$vxlandevflags external"
+ elif [[ -n $vtype && $vtype == "vnifilter" || $vtype == "vnifilterg" ]]; then
+ vxlandevflags="$vxlandevflags external vnifilter"
+ tid=$((vid+brid))
+ else
+ vxlandevflags="$vxlandevflags id $tid"
+ vxlandev="vxlan$tid"
+ fi
+
+ if [[ -n $vtype && $vtype != "vnifilterg" ]]; then
+ if [[ -n "$group" && "$group" != "null" ]]; then
+ if [ $mcast -eq 1 ]; then
+ vxlandevflags="$vxlandevflags group $group"
+ else
+ vxlandevflags="$vxlandevflags remote $group"
+ fi
+ fi
+ fi
+
+ if [[ -n "$port" && "$port" != "default" ]]; then
+ vxlandevflags="$vxlandevflags dstport $port"
+ fi
+
+ # create vxlan device
+ if [ "$vxlandev" != "$lastvxlandev" ]; then
+ ip -netns hv-$hvid link add $vxlandev type vxlan local $localip $vxlandevflags dev veth0 2>/dev/null
+ ip -netns hv-$hvid link set $vxlandev master br$brid
+ ip -netns hv-$hvid link set $vxlandev up
+ lastvxlandev=$vxlandev
+ fi
+
+ # add vlan
+ bridge -netns hv-$hvid vlan add vid $vid dev $hvvethif
+ bridge -netns hv-$hvid vlan add vid $vid pvid dev $vxlandev
+
+ # Add bridge vni filter for tx
+ if [[ -n $vtype && $vtype == "metadata" || $vtype == "vnifilter" || $vtype == "vnifilterg" ]]; then
+ bridge -netns hv-$hvid link set dev $vxlandev vlan_tunnel on
+ bridge -netns hv-$hvid vlan add dev $vxlandev vid $vid tunnel_info id $tid
+ fi
+
+ if [[ -n $vtype && $vtype == "metadata" ]]; then
+ bridge -netns hv-$hvid fdb add 00:00:00:00:00:00 dev $vxlandev \
+ src_vni $tid vni $tid dst $group self
+ elif [[ -n $vtype && $vtype == "vnifilter" ]]; then
+ # Add per vni rx filter with 'bridge vni' api
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid
+ elif [[ -n $vtype && $vtype == "vnifilterg" ]]; then
+ # Add per vni group config with 'bridge vni' api
+ if [ -n "$group" ]; then
+ if [ "$family" == "v4" ]; then
+ if [ $mcast -eq 1 ]; then
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group $group
+ else
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote $group
+ fi
+ else
+ if [ $mcast -eq 1 ]; then
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group6 $group
+ else
+ bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote6 $group
+ fi
+ fi
+ fi
+ fi
+ done
+}
+
+setup_vnifilter_api()
+{
+ ip link add veth-host type veth peer name veth-testns
+ ip netns add testns
+ ip link set veth-testns netns testns
+}
+
+cleanup_vnifilter_api()
+{
+ ip link del veth-host 2>/dev/null || true
+ ip netns del testns 2>/dev/null || true
+}
+
+# tests vxlan filtering api
+vxlan_vnifilter_api()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+ localip="172.16.0.1"
+ group="239.1.1.101"
+
+ cleanup_vnifilter_api &>/dev/null
+ setup_vnifilter_api
+
+ # Duplicate vni test
+ # create non-vnifiltering traditional vni device
+ run_cmd "ip -netns testns link add vxlan100 type vxlan id 100 local $localip dev veth-testns dstport 4789"
+ log_test $? 0 "Create traditional vxlan device"
+
+ # create vni filtering device
+ run_cmd "ip -netns testns link add vxlan-ext1 type vxlan vnifilter local $localip dev veth-testns dstport 4789"
+ log_test $? 1 "Cannot create vnifilter device without external flag"
+
+ run_cmd "ip -netns testns link add vxlan-ext1 type vxlan external vnifilter local $localip dev veth-testns dstport 4789"
+ log_test $? 0 "Creating external vxlan device with vnifilter flag"
+
+	run_cmd "bridge -netns testns vni add dev vxlan-ext1 vni 100"
+	log_test $? 255 "Cannot set in-use vni id on vnifiltering device"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext1 vni 200"
+ log_test $? 0 "Set new vni id on vnifiltering device"
+
+ run_cmd "ip -netns testns link add vxlan-ext2 type vxlan external vnifilter local $localip dev veth-testns dstport 4789"
+ log_test $? 0 "Create second external vxlan device with vnifilter flag"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 200"
+ log_test $? 255 "Cannot set in-use vni id on vnifiltering device"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 300"
+ log_test $? 0 "Set new vni id on vnifiltering device"
+
+ # check in bridge vni show
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 300"
+ log_test $? 0 "Update vni id on vnifiltering device"
+
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 400"
+ log_test $? 0 "Add new vni id on vnifiltering device"
+
+ # add multicast group per vni
+ run_cmd "bridge -netns testns vni add dev vxlan-ext1 vni 200 group $group"
+ log_test $? 0 "Set multicast group on existing vni"
+
+ # add multicast group per vni
+ run_cmd "bridge -netns testns vni add dev vxlan-ext2 vni 300 group $group"
+ log_test $? 0 "Set multicast group on existing vni"
+
+ # set vnifilter on an existing external vxlan device
+ run_cmd "ip -netns testns link set dev vxlan-ext1 type vxlan external vnifilter"
+ log_test $? 2 "Cannot set vnifilter flag on a device"
+
+ # change vxlan vnifilter flag
+ run_cmd "ip -netns testns link set dev vxlan-ext1 type vxlan external novnifilter"
+ log_test $? 2 "Cannot unset vnifilter flag on a device"
+}
+
+# Sanity test vnifilter datapath
+# vnifilter vnis inherit BUM group from
+# vxlan device
+vxlan_vnifilter_datapath()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64 $hv2addr1 $hv2addr2
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64 $hv1addr1 $hv1addr2
+
+ check_hv_connectivity hv2addr1 hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$hv2addr1-vnifilter,20-v4-$hv1addr1-$hv2addr1-vnifilter 0
+ setup-vm 1 21 2 10-v6-$hv1addr2-$hv2addr2-vnifilter,20-v6-$hv1addr2-$hv2addr2-vnifilter 0
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$hv1addr1-vnifilter,20-v4-$hv2addr1-$hv1addr1-vnifilter 0
+ setup-vm 2 22 2 10-v6-$hv2addr2-$hv1addr2-vnifilter,20-v6-$hv2addr2-$hv1addr2-vnifilter 0
+
+ check_vm_connectivity "vnifiltering vxlan"
+}
+
+# Sanity test vnifilter datapath
+# with vnifilter per vni configured BUM
+# group/remote
+vxlan_vnifilter_datapath_pervni()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+ check_hv_connectivity hv2addr1 hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$hv2addr1-vnifilterg,20-v4-$hv1addr1-$hv2addr1-vnifilterg 0
+ setup-vm 1 21 2 10-v6-$hv1addr2-$hv2addr2-vnifilterg,20-v6-$hv1addr2-$hv2addr2-vnifilterg 0
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$hv1addr1-vnifilterg,20-v4-$hv2addr1-$hv1addr1-vnifilterg 0
+ setup-vm 2 22 2 10-v6-$hv2addr2-$hv1addr2-vnifilterg,20-v6-$hv2addr2-$hv1addr2-vnifilterg 0
+
+ check_vm_connectivity "vnifiltering vxlan pervni remote"
+}
+
+
+vxlan_vnifilter_datapath_mgroup()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+ group="239.1.1.100"
+ group6="ff07::1"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+ check_hv_connectivity hv2addr1 hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$group-vnifilter,20-v4-$hv1addr1-$group-vnifilter 1
+ setup-vm 1 21 2 "10-v6-$hv1addr2-$group6-vnifilter,20-v6-$hv1addr2-$group6-vnifilter" 1
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$group-vnifilter,20-v4-$hv2addr1-$group-vnifilter 1
+ setup-vm 2 22 2 10-v6-$hv2addr2-$group6-vnifilter,20-v6-$hv2addr2-$group6-vnifilter 1
+
+ check_vm_connectivity "vnifiltering vxlan mgroup"
+}
+
+vxlan_vnifilter_datapath_mgroup_pervni()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+ group="239.1.1.100"
+ group6="ff07::1"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+ check_hv_connectivity hv2addr1 hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$group-vnifilterg,20-v4-$hv1addr1-$group-vnifilterg 1
+ setup-vm 1 21 2 10-v6-$hv1addr2-$group6-vnifilterg,20-v6-$hv1addr2-$group6-vnifilterg 1
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$group-vnifilterg,20-v4-$hv2addr1-$group-vnifilterg 1
+ setup-vm 2 22 2 10-v6-$hv2addr2-$group6-vnifilterg,20-v6-$hv2addr2-$group6-vnifilterg 1
+
+ check_vm_connectivity "vnifiltering vxlan pervni mgroup"
+}
+
+vxlan_vnifilter_metadata_and_traditional_mix()
+{
+ hv1addr1="172.16.0.1"
+ hv2addr1="172.16.0.2"
+ hv1addr2="2002:fee1::1"
+ hv2addr2="2002:fee1::2"
+
+ ip link add veth-hv-1 type veth peer name veth-hv-2
+ setup-hv-networking 1 $hv1addr1 24 $hv1addr2 64
+ setup-hv-networking 2 $hv2addr1 24 $hv2addr2 64
+
+ check_hv_connectivity hv2addr1 hv2addr2
+
+ setup-vm 1 11 1 10-v4-$hv1addr1-$hv2addr1-vnifilter,20-v4-$hv1addr1-$hv2addr1-vnifilter 0
+ setup-vm 1 21 2 10-v6-$hv1addr2-$hv2addr2-vnifilter,20-v6-$hv1addr2-$hv2addr2-vnifilter 0
+ setup-vm 1 31 3 30-v4-$hv1addr1-$hv2addr1-default-4790,40-v6-$hv1addr2-$hv2addr2-default-4790,50-v4-$hv1addr1-$hv2addr1-metadata-4791 0
+
+
+ setup-vm 2 12 1 10-v4-$hv2addr1-$hv1addr1-vnifilter,20-v4-$hv2addr1-$hv1addr1-vnifilter 0
+ setup-vm 2 22 2 10-v6-$hv2addr2-$hv1addr2-vnifilter,20-v6-$hv2addr2-$hv1addr2-vnifilter 0
+ setup-vm 2 32 3 30-v4-$hv2addr1-$hv1addr1-default-4790,40-v6-$hv2addr2-$hv1addr2-default-4790,50-v4-$hv2addr1-$hv1addr1-metadata-4791 0
+
+ check_vm_connectivity "vnifiltering vxlan pervni remote mix"
+
+ # check VM connectivity over traditional/non-vxlan filtering vxlan devices
+ run_cmd "ip netns exec vm-31 ping -c 1 -W 1 10.0.30.32"
+ log_test $? 0 "VM connectivity over traditional vxlan (ipv4 default rdst)"
+
+ run_cmd "ip netns exec vm-31 ping -c 1 -W 1 10.0.40.32"
+ log_test $? 0 "VM connectivity over traditional vxlan (ipv6 default rdst)"
+
+ run_cmd "ip netns exec vm-31 ping -c 1 -W 1 10.0.50.32"
+ log_test $? 0 "VM connectivity over metadata nonfiltering vxlan (ipv4 default rdst)"
+}
+
+while getopts :t:pP46hv o
+do
+ case $o in
+ t) TESTS=$OPTARG;;
+ p) PAUSE_ON_FAIL=yes;;
+ P) PAUSE=yes;;
+ v) VERBOSE=$(($VERBOSE + 1));;
+ h) usage; exit 0;;
+ *) usage; exit 1;;
+ esac
+done
+
+# make sure we don't pause twice
+[ "${PAUSE}" = "yes" ] && PAUSE_ON_FAIL=no
+
+if [ "$(id -u)" -ne 0 ];then
+ echo "SKIP: Need root privileges"
+ exit $ksft_skip;
+fi
+
+if [ ! -x "$(command -v ip)" ]; then
+ echo "SKIP: Could not run test without ip tool"
+ exit $ksft_skip
+fi
+
+ip link help vxlan 2>&1 | grep -q "vnifilter"
+if [ $? -ne 0 ]; then
+ echo "SKIP: iproute2 too old, missing vxlan dev vnifilter setting"
+ sync
+ exit $ksft_skip
+fi
+
+bridge vni help 2>&1 | grep -q "Usage: bridge vni"
+if [ $? -ne 0 ]; then
+ echo "SKIP: iproute2 bridge lacks vxlan vnifiltering support"
+ exit $ksft_skip
+fi
+
+# start clean
+cleanup &> /dev/null
+
+for t in $TESTS
+do
+ case $t in
+ none) setup; exit 0;;
+ *) $t; cleanup;;
+ esac
+done
+
+if [ "$TESTS" != "none" ]; then
+ printf "\nTests passed: %3d\n" ${nsuccess}
+ printf "Tests failed: %3d\n" ${nfail}
+fi
+
+exit $ret
--
2.25.1
* [PATCH net-next 10/12] selinux: add support for RTM_NEWTUNNEL, RTM_DELTUNNEL, and RTM_GETTUNNEL
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (8 preceding siblings ...)
2022-02-20 14:04 ` [PATCH net-next 09/12] selftests: add new tests for vxlan vnifiltering Roopa Prabhu
@ 2022-02-20 14:04 ` Roopa Prabhu
2022-02-21 1:47 ` Benjamin Poirier
2022-02-20 14:04 ` [PATCH net-next 11/12] drivers: vxlan: vnifilter: per vni stats Roopa Prabhu
2022-02-20 14:04 ` [PATCH net-next 12/12] drivers: vxlan: vnifilter: add support for stats dumping Roopa Prabhu
11 siblings, 1 reply; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:04 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
From: Benjamin Poirier <bpoirier@nvidia.com>
This patch adds the newly introduced RTM_*TUNNEL messages to nlmsg_route_perms.
Signed-off-by: Benjamin Poirier <bpoirier@nvidia.com>
---
security/selinux/nlmsgtab.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c
index 94ea2a8b2bb7..6ad3ee02e023 100644
--- a/security/selinux/nlmsgtab.c
+++ b/security/selinux/nlmsgtab.c
@@ -91,6 +91,9 @@ static const struct nlmsg_perm nlmsg_route_perms[] =
{ RTM_NEWNEXTHOPBUCKET, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_DELNEXTHOPBUCKET, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_GETNEXTHOPBUCKET, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWTUNNEL, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELTUNNEL, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETTUNNEL, NETLINK_ROUTE_SOCKET__NLMSG_READ },
};
static const struct nlmsg_perm nlmsg_tcpdiag_perms[] =
@@ -176,7 +179,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
* structures at the top of this file with the new mappings
* before updating the BUILD_BUG_ON() macro!
*/
- BUILD_BUG_ON(RTM_MAX != (RTM_NEWNEXTHOPBUCKET + 3));
+ BUILD_BUG_ON(RTM_MAX != (RTM_NEWTUNNEL + 3));
err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms,
sizeof(nlmsg_route_perms));
break;
--
2.25.1
* [PATCH net-next 11/12] drivers: vxlan: vnifilter: per vni stats
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (9 preceding siblings ...)
2022-02-20 14:04 ` [PATCH net-next 10/12] selinux: add support for RTM_NEWTUNNEL, RTM_DELTUNNEL, and RTM_GETTUNNEL Roopa Prabhu
@ 2022-02-20 14:04 ` Roopa Prabhu
2022-02-21 2:11 ` kernel test robot
2022-02-20 14:04 ` [PATCH net-next 12/12] drivers: vxlan: vnifilter: add support for stats dumping Roopa Prabhu
11 siblings, 1 reply; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:04 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
From: Nikolay Aleksandrov <nikolay@nvidia.com>
Add per-vni statistics for vni filter mode, counting Rx/Tx
bytes/packets/drops/errors at the appropriate places.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
---
drivers/net/vxlan/vxlan_core.c | 29 +++++++++--
drivers/net/vxlan/vxlan_private.h | 3 +-
drivers/net/vxlan/vxlan_vnifilter.c | 80 +++++++++++++++++++++++++++++
include/net/vxlan.h | 26 ++++++++++
4 files changed, 134 insertions(+), 4 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index e88217b52bb9..ab2fb2789674 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -1745,6 +1745,7 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
if (!vxlan_ecn_decapsulate(vs, oiph, skb)) {
++vxlan->dev->stats.rx_frame_errors;
++vxlan->dev->stats.rx_errors;
+ vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_RX_ERRORS, 0);
goto drop;
}
@@ -1753,10 +1754,12 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
if (unlikely(!(vxlan->dev->flags & IFF_UP))) {
rcu_read_unlock();
atomic_long_inc(&vxlan->dev->rx_dropped);
+ vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_RX_DROPS, 0);
goto drop;
}
dev_sw_netstats_rx_add(vxlan->dev, skb->len);
+ vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_RX, skb->len);
gro_cells_receive(&vxlan->gro_cells, skb);
rcu_read_unlock();
@@ -1864,8 +1867,12 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
reply->ip_summed = CHECKSUM_UNNECESSARY;
reply->pkt_type = PACKET_HOST;
- if (netif_rx_ni(reply) == NET_RX_DROP)
+ if (netif_rx_ni(reply) == NET_RX_DROP) {
dev->stats.rx_dropped++;
+ vxlan_vnifilter_count(vxlan, vni,
+ VXLAN_VNI_STATS_RX_DROPS, 0);
+ }
+
} else if (vxlan->cfg.flags & VXLAN_F_L3MISS) {
union vxlan_addr ipa = {
.sin.sin_addr.s_addr = tip,
@@ -2019,9 +2026,11 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
if (reply == NULL)
goto out;
- if (netif_rx_ni(reply) == NET_RX_DROP)
+ if (netif_rx_ni(reply) == NET_RX_DROP) {
dev->stats.rx_dropped++;
-
+ vxlan_vnifilter_count(vxlan, vni,
+ VXLAN_VNI_STATS_RX_DROPS, 0);
+ }
} else if (vxlan->cfg.flags & VXLAN_F_L3MISS) {
union vxlan_addr ipa = {
.sin6.sin6_addr = msg->target,
@@ -2355,15 +2364,19 @@ static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
tx_stats->tx_packets++;
tx_stats->tx_bytes += len;
u64_stats_update_end(&tx_stats->syncp);
+ vxlan_vnifilter_count(src_vxlan, vni, VXLAN_VNI_STATS_TX, len);
if (__netif_rx(skb) == NET_RX_SUCCESS) {
u64_stats_update_begin(&rx_stats->syncp);
rx_stats->rx_packets++;
rx_stats->rx_bytes += len;
u64_stats_update_end(&rx_stats->syncp);
+ vxlan_vnifilter_count(dst_vxlan, vni, VXLAN_VNI_STATS_RX, len);
} else {
drop:
dev->stats.rx_dropped++;
+ vxlan_vnifilter_count(dst_vxlan, vni, VXLAN_VNI_STATS_RX_DROPS,
+ 0);
}
rcu_read_unlock();
}
@@ -2393,6 +2406,8 @@ static int encap_bypass_if_local(struct sk_buff *skb, struct net_device *dev,
vxlan->cfg.flags);
if (!dst_vxlan) {
dev->stats.tx_errors++;
+ vxlan_vnifilter_count(vxlan, vni,
+ VXLAN_VNI_STATS_TX_ERRORS, 0);
kfree_skb(skb);
return -ENOENT;
@@ -2416,6 +2431,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
union vxlan_addr remote_ip, local_ip;
struct vxlan_metadata _md;
struct vxlan_metadata *md = &_md;
+ unsigned int pkt_len = skb->len;
__be16 src_port = 0, dst_port;
struct dst_entry *ndst = NULL;
__be32 vni, label;
@@ -2636,12 +2652,14 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
label, src_port, dst_port, !udp_sum);
#endif
}
+ vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX, pkt_len);
out_unlock:
rcu_read_unlock();
return;
drop:
dev->stats.tx_dropped++;
+ vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX_DROPS, 0);
dev_kfree_skb(skb);
return;
@@ -2653,6 +2671,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
dev->stats.tx_carrier_errors++;
dst_release(ndst);
dev->stats.tx_errors++;
+ vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX_ERRORS, 0);
kfree_skb(skb);
}
@@ -2685,6 +2704,8 @@ static void vxlan_xmit_nh(struct sk_buff *skb, struct net_device *dev,
drop:
dev->stats.tx_dropped++;
+ vxlan_vnifilter_count(netdev_priv(dev), vni,
+ VXLAN_VNI_STATS_TX_DROPS, 0);
dev_kfree_skb(skb);
}
@@ -2759,6 +2780,8 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
vxlan_fdb_miss(vxlan, eth->h_dest);
dev->stats.tx_dropped++;
+ vxlan_vnifilter_count(vxlan, vni,
+ VXLAN_VNI_STATS_TX_DROPS, 0);
kfree_skb(skb);
return NETDEV_TX_OK;
}
diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
index 73fe1c16060e..08d64f7a0f15 100644
--- a/drivers/net/vxlan/vxlan_private.h
+++ b/drivers/net/vxlan/vxlan_private.h
@@ -154,6 +154,8 @@ void vxlan_vnigroup_uninit(struct vxlan_dev *vxlan);
void vxlan_vnifilter_init(void);
void vxlan_vnifilter_uninit(void);
+void vxlan_vnifilter_count(struct vxlan_dev *vxlan, __be32 vni,
+ int type, unsigned int len);
void vxlan_vs_add_vnigrp(struct vxlan_dev *vxlan,
struct vxlan_sock *vs,
@@ -164,7 +166,6 @@ int vxlan_vnilist_update_group(struct vxlan_dev *vxlan,
union vxlan_addr *new_remote_ip,
struct netlink_ext_ack *extack);
-
/* vxlan_multicast.c */
int vxlan_multicast_join(struct vxlan_dev *vxlan);
int vxlan_multicast_leave(struct vxlan_dev *vxlan);
diff --git a/drivers/net/vxlan/vxlan_vnifilter.c b/drivers/net/vxlan/vxlan_vnifilter.c
index 95a76ddfca75..935f3007f348 100644
--- a/drivers/net/vxlan/vxlan_vnifilter.c
+++ b/drivers/net/vxlan/vxlan_vnifilter.c
@@ -97,6 +97,80 @@ void vxlan_vs_del_vnigrp(struct vxlan_dev *vxlan)
spin_unlock(&vn->sock_lock);
}
+static void vxlan_vnifilter_stats_get(const struct vxlan_vni_node *vninode,
+ struct vxlan_vni_stats *dest)
+{
+ int i;
+
+ memset(dest, 0, sizeof(*dest));
+ for_each_possible_cpu(i) {
+ struct vxlan_vni_stats_pcpu *pstats;
+ struct vxlan_vni_stats temp;
+ unsigned int start;
+
+ pstats = per_cpu_ptr(vninode->stats, i);
+ do {
+ start = u64_stats_fetch_begin_irq(&pstats->syncp);
+ memcpy(&temp, &pstats->stats, sizeof(temp));
+ } while (u64_stats_fetch_retry_irq(&pstats->syncp, start));
+
+ dest->rx_packets += temp.rx_packets;
+ dest->rx_bytes += temp.rx_bytes;
+ dest->rx_drops += temp.rx_drops;
+ dest->rx_errors += temp.rx_errors;
+ dest->tx_packets += temp.tx_packets;
+ dest->tx_bytes += temp.tx_bytes;
+ dest->tx_drops += temp.tx_drops;
+ dest->tx_errors += temp.tx_errors;
+ }
+}
+
+static void vxlan_vnifilter_stats_add(struct vxlan_vni_node *vninode,
+ int type, unsigned int len)
+{
+ struct vxlan_vni_stats_pcpu *pstats = this_cpu_ptr(vninode->stats);
+
+ u64_stats_update_begin(&pstats->syncp);
+ switch (type) {
+ case VXLAN_VNI_STATS_RX:
+ pstats->stats.rx_bytes += len;
+ pstats->stats.rx_packets++;
+ break;
+ case VXLAN_VNI_STATS_RX_DROPS:
+ pstats->stats.rx_drops++;
+ break;
+ case VXLAN_VNI_STATS_RX_ERRORS:
+ pstats->stats.rx_errors++;
+ break;
+ case VXLAN_VNI_STATS_TX:
+ pstats->stats.tx_bytes += len;
+ pstats->stats.tx_packets++;
+ break;
+ case VXLAN_VNI_STATS_TX_DROPS:
+ pstats->stats.tx_drops++;
+ break;
+ case VXLAN_VNI_STATS_TX_ERRORS:
+ pstats->stats.tx_errors++;
+ break;
+ }
+ u64_stats_update_end(&pstats->syncp);
+}
+
+void vxlan_vnifilter_count(struct vxlan_dev *vxlan, __be32 vni,
+ int type, unsigned int len)
+{
+ struct vxlan_vni_node *vninode;
+
+ if (!(vxlan->cfg.flags & VXLAN_F_VNIFILTER))
+ return;
+
+ vninode = vxlan_vnifilter_lookup(vxlan, vni);
+ if (!vninode)
+ return;
+
+ vxlan_vnifilter_stats_add(vninode, type, len);
+}
+
static u32 vnirange(struct vxlan_vni_node *vbegin,
struct vxlan_vni_node *vend)
{
@@ -541,6 +615,11 @@ static struct vxlan_vni_node *vxlan_vni_alloc(struct vxlan_dev *vxlan,
vninode = kzalloc(sizeof(*vninode), GFP_ATOMIC);
if (!vninode)
return NULL;
+ vninode->stats = netdev_alloc_pcpu_stats(struct vxlan_vni_stats_pcpu);
+ if (!vninode->stats) {
+ kfree(vninode);
+ return NULL;
+ }
vninode->vni = vni;
vninode->hlist4.vxlan = vxlan;
vninode->hlist6.vxlan = vxlan;
@@ -596,6 +675,7 @@ static void vxlan_vni_node_rcu_free(struct rcu_head *rcu)
struct vxlan_vni_node *v;
v = container_of(rcu, struct vxlan_vni_node, rcu);
+ free_percpu(v->stats);
kfree(v);
}
diff --git a/include/net/vxlan.h b/include/net/vxlan.h
index 8eb961bb9589..bca5b01af247 100644
--- a/include/net/vxlan.h
+++ b/include/net/vxlan.h
@@ -227,6 +227,31 @@ struct vxlan_config {
enum ifla_vxlan_df df;
};
+enum {
+ VXLAN_VNI_STATS_RX,
+ VXLAN_VNI_STATS_RX_DROPS,
+ VXLAN_VNI_STATS_RX_ERRORS,
+ VXLAN_VNI_STATS_TX,
+ VXLAN_VNI_STATS_TX_DROPS,
+ VXLAN_VNI_STATS_TX_ERRORS,
+};
+
+struct vxlan_vni_stats {
+ u64 rx_packets;
+ u64 rx_bytes;
+ u64 rx_drops;
+ u64 rx_errors;
+ u64 tx_packets;
+ u64 tx_bytes;
+ u64 tx_drops;
+ u64 tx_errors;
+};
+
+struct vxlan_vni_stats_pcpu {
+ struct vxlan_vni_stats stats;
+ struct u64_stats_sync syncp;
+};
+
struct vxlan_dev_node {
struct hlist_node hlist;
struct vxlan_dev *vxlan;
@@ -241,6 +266,7 @@ struct vxlan_vni_node {
struct list_head vlist;
__be32 vni;
union vxlan_addr remote_ip; /* default remote ip for this vni */
+ struct vxlan_vni_stats_pcpu __percpu *stats;
struct rcu_head rcu;
};
--
2.25.1
* [PATCH net-next 12/12] drivers: vxlan: vnifilter: add support for stats dumping
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
` (10 preceding siblings ...)
2022-02-20 14:04 ` [PATCH net-next 11/12] drivers: vxlan: vnifilter: per vni stats Roopa Prabhu
@ 2022-02-20 14:04 ` Roopa Prabhu
2022-02-20 14:12 ` Nikolay Aleksandrov
11 siblings, 1 reply; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:04 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
From: Nikolay Aleksandrov <nikolay@nvidia.com>
Add support for VXLAN vni filter entries' stats dumping.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
---
drivers/net/vxlan/vxlan_vnifilter.c | 55 ++++++++++++++++++++++++++---
include/uapi/linux/if_link.h | 30 +++++++++++++++-
2 files changed, 79 insertions(+), 6 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_vnifilter.c b/drivers/net/vxlan/vxlan_vnifilter.c
index 935f3007f348..861f7195fe58 100644
--- a/drivers/net/vxlan/vxlan_vnifilter.c
+++ b/drivers/net/vxlan/vxlan_vnifilter.c
@@ -186,9 +186,48 @@ static size_t vxlan_vnifilter_entry_nlmsg_size(void)
+ nla_total_size(sizeof(struct in6_addr));/* VXLAN_VNIFILTER_ENTRY_GROUP{6} */
}
+static int __vnifilter_entry_fill_stats(struct sk_buff *skb,
+ const struct vxlan_vni_node *vbegin)
+{
+ struct vxlan_vni_stats vstats;
+ struct nlattr *vstats_attr;
+
+ vstats_attr = nla_nest_start(skb, VXLAN_VNIFILTER_ENTRY_STATS);
+ if (!vstats_attr)
+ goto out_stats_err;
+
+ vxlan_vnifilter_stats_get(vbegin, &vstats);
+ if (nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_RX_BYTES,
+ vstats.rx_bytes, VNIFILTER_ENTRY_STATS_PAD) ||
+ nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_RX_PKTS,
+ vstats.rx_packets, VNIFILTER_ENTRY_STATS_PAD) ||
+ nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_RX_DROPS,
+ vstats.rx_drops, VNIFILTER_ENTRY_STATS_PAD) ||
+ nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_RX_ERRORS,
+ vstats.rx_errors, VNIFILTER_ENTRY_STATS_PAD) ||
+ nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_TX_BYTES,
+ vstats.tx_bytes, VNIFILTER_ENTRY_STATS_PAD) ||
+ nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_TX_PKTS,
+ vstats.tx_packets, VNIFILTER_ENTRY_STATS_PAD) ||
+ nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_TX_DROPS,
+ vstats.tx_drops, VNIFILTER_ENTRY_STATS_PAD) ||
+ nla_put_u64_64bit(skb, VNIFILTER_ENTRY_STATS_TX_ERRORS,
+ vstats.tx_errors, VNIFILTER_ENTRY_STATS_PAD))
+ goto out_stats_err;
+
+ nla_nest_end(skb, vstats_attr);
+
+ return 0;
+
+out_stats_err:
+ nla_nest_cancel(skb, vstats_attr);
+ return -EMSGSIZE;
+}
+
static bool vxlan_fill_vni_filter_entry(struct sk_buff *skb,
struct vxlan_vni_node *vbegin,
- struct vxlan_vni_node *vend)
+ struct vxlan_vni_node *vend,
+ bool fill_stats)
{
struct nlattr *ventry;
u32 vs = be32_to_cpu(vbegin->vni);
@@ -221,6 +260,9 @@ static bool vxlan_fill_vni_filter_entry(struct sk_buff *skb,
}
}
+ if (fill_stats && __vnifilter_entry_fill_stats(skb, vbegin))
+ goto out_err;
+
nla_nest_end(skb, ventry);
return true;
@@ -253,7 +295,7 @@ static void vxlan_vnifilter_notify(const struct vxlan_dev *vxlan,
tmsg->family = AF_BRIDGE;
tmsg->ifindex = vxlan->dev->ifindex;
- if (!vxlan_fill_vni_filter_entry(skb, vninode, vninode))
+ if (!vxlan_fill_vni_filter_entry(skb, vninode, vninode, false))
goto out_err;
nlmsg_end(skb, nlh);
@@ -277,6 +319,7 @@ static int vxlan_vnifilter_dump_dev(const struct net_device *dev,
int idx = 0, s_idx = cb->args[1];
struct vxlan_vni_group *vg;
struct nlmsghdr *nlh;
+ bool dump_stats;
int err = 0;
if (!(vxlan->cfg.flags & VXLAN_F_VNIFILTER))
@@ -288,6 +331,7 @@ static int vxlan_vnifilter_dump_dev(const struct net_device *dev,
return 0;
tmsg = nlmsg_data(cb->nlh);
+ dump_stats = !!(tmsg->flags & TUNNEL_MSG_FLAG_STATS);
nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
RTM_NEWTUNNEL, sizeof(*new_tmsg), NLM_F_MULTI);
@@ -308,11 +352,12 @@ static int vxlan_vnifilter_dump_dev(const struct net_device *dev,
vend = v;
continue;
}
- if (vnirange(vend, v) == 1 &&
+ if (!dump_stats && vnirange(vend, v) == 1 &&
vxlan_addr_equal(&v->remote_ip, &vend->remote_ip)) {
goto update_end;
} else {
- if (!vxlan_fill_vni_filter_entry(skb, vbegin, vend)) {
+ if (!vxlan_fill_vni_filter_entry(skb, vbegin, vend,
+ dump_stats)) {
err = -EMSGSIZE;
break;
}
@@ -324,7 +369,7 @@ static int vxlan_vnifilter_dump_dev(const struct net_device *dev,
}
if (!err && vbegin) {
- if (!vxlan_fill_vni_filter_entry(skb, vbegin, vend))
+ if (!vxlan_fill_vni_filter_entry(skb, vbegin, vend, dump_stats))
err = -EMSGSIZE;
}
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index eb046a82188d..1a362c2a8e4b 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -715,17 +715,37 @@ enum ipvlan_mode {
/* Tunnel RTM header */
struct tunnel_msg {
__u8 family;
- __u8 reserved1;
+ __u8 flags;
__u16 reserved2;
__u32 ifindex;
};
+/* include statistics in the dump */
+#define TUNNEL_MSG_FLAG_STATS 0x01
+
+/* Embedded inside VXLAN_VNIFILTER_ENTRY_STATS */
+enum {
+ VNIFILTER_ENTRY_STATS_UNSPEC,
+ VNIFILTER_ENTRY_STATS_RX_BYTES,
+ VNIFILTER_ENTRY_STATS_RX_PKTS,
+ VNIFILTER_ENTRY_STATS_RX_DROPS,
+ VNIFILTER_ENTRY_STATS_RX_ERRORS,
+ VNIFILTER_ENTRY_STATS_TX_BYTES,
+ VNIFILTER_ENTRY_STATS_TX_PKTS,
+ VNIFILTER_ENTRY_STATS_TX_DROPS,
+ VNIFILTER_ENTRY_STATS_TX_ERRORS,
+ VNIFILTER_ENTRY_STATS_PAD,
+ __VNIFILTER_ENTRY_STATS_MAX
+};
+#define VNIFILTER_ENTRY_STATS_MAX (__VNIFILTER_ENTRY_STATS_MAX - 1)
+
enum {
VXLAN_VNIFILTER_ENTRY_UNSPEC,
VXLAN_VNIFILTER_ENTRY_START,
VXLAN_VNIFILTER_ENTRY_END,
VXLAN_VNIFILTER_ENTRY_GROUP,
VXLAN_VNIFILTER_ENTRY_GROUP6,
+ VXLAN_VNIFILTER_ENTRY_STATS,
__VXLAN_VNIFILTER_ENTRY_MAX
};
#define VXLAN_VNIFILTER_ENTRY_MAX (__VXLAN_VNIFILTER_ENTRY_MAX - 1)
@@ -737,6 +757,14 @@ enum {
};
#define VXLAN_VNIFILTER_MAX (__VXLAN_VNIFILTER_MAX - 1)
+/* Embedded inside LINK_XSTATS_TYPE_VXLAN */
+enum {
+ VXLAN_XSTATS_UNSPEC,
+ VXLAN_XSTATS_VNIFILTER,
+ __VXLAN_XSTATS_MAX
+};
+#define VXLAN_XSTATS_MAX (__VXLAN_XSTATS_MAX - 1)
+
/* VXLAN section */
enum {
IFLA_VXLAN_UNSPEC,
--
2.25.1
* Re: [PATCH net-next 12/12] drivers: vxlan: vnifilter: add support for stats dumping
2022-02-20 14:04 ` [PATCH net-next 12/12] drivers: vxlan: vnifilter: add support for stats dumping Roopa Prabhu
@ 2022-02-20 14:12 ` Nikolay Aleksandrov
2022-02-20 14:27 ` Roopa Prabhu
0 siblings, 1 reply; 19+ messages in thread
From: Nikolay Aleksandrov @ 2022-02-20 14:12 UTC (permalink / raw)
To: Roopa Prabhu, davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
On 20/02/2022 16:04, Roopa Prabhu wrote:
> From: Nikolay Aleksandrov <nikolay@nvidia.com>
>
> Add support for VXLAN vni filter entries' stats dumping.
>
> Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
> ---
> drivers/net/vxlan/vxlan_vnifilter.c | 55 ++++++++++++++++++++++++++---
> include/uapi/linux/if_link.h | 30 +++++++++++++++-
> 2 files changed, 79 insertions(+), 6 deletions(-)
>
[snip]
> +/* Embedded inside LINK_XSTATS_TYPE_VXLAN */
> +enum {
> + VXLAN_XSTATS_UNSPEC,
> + VXLAN_XSTATS_VNIFILTER,
> + __VXLAN_XSTATS_MAX
> +};
> +#define VXLAN_XSTATS_MAX (__VXLAN_XSTATS_MAX - 1)
> +
> /* VXLAN section */
> enum {
> IFLA_VXLAN_UNSPEC,
xstats leftover should be removed
* Re: [PATCH net-next 12/12] drivers: vxlan: vnifilter: add support for stats dumping
2022-02-20 14:12 ` Nikolay Aleksandrov
@ 2022-02-20 14:27 ` Roopa Prabhu
0 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:27 UTC (permalink / raw)
To: Nikolay Aleksandrov, davem, kuba
Cc: netdev, stephen, nikolay, idosch, dsahern
On 2/20/22 6:12 AM, Nikolay Aleksandrov wrote:
> On 20/02/2022 16:04, Roopa Prabhu wrote:
>> From: Nikolay Aleksandrov <nikolay@nvidia.com>
>>
>> Add support for VXLAN vni filter entries' stats dumping.
>>
>> Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
>> ---
>> drivers/net/vxlan/vxlan_vnifilter.c | 55 ++++++++++++++++++++++++++---
>> include/uapi/linux/if_link.h | 30 +++++++++++++++-
>> 2 files changed, 79 insertions(+), 6 deletions(-)
>>
> [snip]
>> +/* Embedded inside LINK_XSTATS_TYPE_VXLAN */
>> +enum {
>> + VXLAN_XSTATS_UNSPEC,
>> + VXLAN_XSTATS_VNIFILTER,
>> + __VXLAN_XSTATS_MAX
>> +};
>> +#define VXLAN_XSTATS_MAX (__VXLAN_XSTATS_MAX - 1)
>> +
>> /* VXLAN section */
>> enum {
>> IFLA_VXLAN_UNSPEC,
> xstats leftover should be removed
ah, ack, looks like i only removed the stale reference. will include
when i spin v2.
* Re: [PATCH net-next 06/12] rtnetlink: add new rtm tunnel api for tunnel id filtering
2022-02-20 14:03 ` [PATCH net-next 06/12] rtnetlink: add new rtm tunnel api for tunnel id filtering Roopa Prabhu
@ 2022-02-20 14:29 ` Roopa Prabhu
0 siblings, 0 replies; 19+ messages in thread
From: Roopa Prabhu @ 2022-02-20 14:29 UTC (permalink / raw)
To: davem, kuba; +Cc: netdev, stephen, nikolay, idosch, dsahern
On 2/20/22 6:03 AM, Roopa Prabhu wrote:
> This patch adds new rtm tunnel msg and api for tunnel id
> filtering in dst_metadata devices. First dst_metadata
> device to use the api is vxlan driver with AF_BRIDGE
> family.
>
> This and later changes add ability in vxlan driver to do
> tunnel id filtering (or vni filtering) on dst_metadata
> devices. This is similar to vlan api in the vlan filtering bridge.
>
> Signed-off-by: Roopa Prabhu <roopa@nvidia.com>
> ---
> include/uapi/linux/if_link.h | 26 ++++++++++++++++++++++++++
> include/uapi/linux/rtnetlink.h | 9 +++++++++
> 2 files changed, 35 insertions(+)
>
> diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
> index 6218f93f5c1a..eb046a82188d 100644
> --- a/include/uapi/linux/if_link.h
> +++ b/include/uapi/linux/if_link.h
> @@ -712,6 +712,31 @@ enum ipvlan_mode {
> #define IPVLAN_F_PRIVATE 0x01
> #define IPVLAN_F_VEPA 0x02
>
> +/* Tunnel RTM header */
> +struct tunnel_msg {
> + __u8 family;
> + __u8 reserved1;
> + __u16 reserved2;
> + __u32 ifindex;
> +};
> +
> +enum {
> + VXLAN_VNIFILTER_ENTRY_UNSPEC,
> + VXLAN_VNIFILTER_ENTRY_START,
> + VXLAN_VNIFILTER_ENTRY_END,
> + VXLAN_VNIFILTER_ENTRY_GROUP,
> + VXLAN_VNIFILTER_ENTRY_GROUP6,
> + __VXLAN_VNIFILTER_ENTRY_MAX
> +};
> +#define VXLAN_VNIFILTER_ENTRY_MAX (__VXLAN_VNIFILTER_ENTRY_MAX - 1)
> +
> +enum {
> + VXLAN_VNIFILTER_UNSPEC,
> + VXLAN_VNIFILTER_ENTRY,
> + __VXLAN_VNIFILTER_MAX
> +};
> +#define VXLAN_VNIFILTER_MAX (__VXLAN_VNIFILTER_MAX - 1)
> +
> /* VXLAN section */
just noticed, this comment should move up. will include in v2
* Re: [PATCH net-next 08/12] vxlan: vni filtering support on collect metadata device
2022-02-20 14:04 ` [PATCH net-next 08/12] vxlan: vni filtering support on collect metadata device Roopa Prabhu
@ 2022-02-20 22:24 ` kernel test robot
0 siblings, 0 replies; 19+ messages in thread
From: kernel test robot @ 2022-02-20 22:24 UTC (permalink / raw)
To: Roopa Prabhu, davem, kuba
Cc: kbuild-all, netdev, stephen, nikolay, idosch, dsahern
Hi Roopa,
I love your patch! Yet something to improve:
[auto build test ERROR on net-next/master]
url: https://github.com/0day-ci/linux/commits/Roopa-Prabhu/vxlan-metadata-device-vnifiltering-support/20220220-220748
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 129c77b5692d4a95a00aa7d58075afe77179623e
config: m68k-randconfig-s032-20220220 (https://download.01.org/0day-ci/archive/20220221/202202210620.Qp46jPJO-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 11.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.4-dirty
# https://github.com/0day-ci/linux/commit/5344344656a955610e1a596bf3de904d5c6647f4
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Roopa-Prabhu/vxlan-metadata-device-vnifiltering-support/20220220-220748
git checkout 5344344656a955610e1a596bf3de904d5c6647f4
# save the config file to linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=m68k SHELL=/bin/bash drivers/net/vxlan/ kernel/time/
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All error/warnings (new ones prefixed by >>):
>> drivers/net/vxlan/vxlan_multicast.c:181:5: warning: no previous prototype for 'vxlan_multicast_join_vnigrp' [-Wmissing-prototypes]
181 | int vxlan_multicast_join_vnigrp(struct vxlan_dev *vxlan)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/vxlan/vxlan_multicast.c:218:5: warning: no previous prototype for 'vxlan_multicast_leave_vnigrp' [-Wmissing-prototypes]
218 | int vxlan_multicast_leave_vnigrp(struct vxlan_dev *vxlan)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from drivers/net/vxlan/vxlan_multicast.c:12:
drivers/net/vxlan/vxlan_private.h:13:17: warning: 'all_zeros_mac' defined but not used [-Wunused-const-variable=]
13 | static const u8 all_zeros_mac[ETH_ALEN + 2];
| ^~~~~~~~~~~~~
--
>> drivers/net/vxlan/vxlan_vnifilter.c:20:6: warning: no previous prototype for 'vxlan_vs_add_del_vninode' [-Wmissing-prototypes]
20 | void vxlan_vs_add_del_vninode(struct vxlan_dev *vxlan,
| ^~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/vxlan/vxlan_vnifilter.c: In function 'vxlan_vnifilter_dump_dev':
drivers/net/vxlan/vxlan_vnifilter.c:202:39: warning: variable 'tmsg' set but not used [-Wunused-but-set-variable]
202 | struct tunnel_msg *new_tmsg, *tmsg;
| ^~~~
drivers/net/vxlan/vxlan_vnifilter.c: In function 'vxlan_vni_alloc':
>> drivers/net/vxlan/vxlan_vnifilter.c:546:18: error: 'struct vxlan_vni_node' has no member named 'hlist6'; did you mean 'hlist4'?
546 | vninode->hlist6.vxlan = vxlan;
| ^~~~~~
| hlist4
drivers/net/vxlan/vxlan_vnifilter.c: In function 'vxlan_vnigroup_uninit':
drivers/net/vxlan/vxlan_vnifilter.c:731:40: error: 'struct vxlan_vni_node' has no member named 'hlist6'; did you mean 'hlist4'?
731 | hlist_del_init_rcu(&v->hlist6.hlist);
| ^~~~~~
| hlist4
sparse warnings: (new ones prefixed by >>)
>> drivers/net/vxlan/vxlan_multicast.c:181:5: sparse: sparse: symbol 'vxlan_multicast_join_vnigrp' was not declared. Should it be static?
>> drivers/net/vxlan/vxlan_multicast.c:218:5: sparse: sparse: symbol 'vxlan_multicast_leave_vnigrp' was not declared. Should it be static?
vim +546 drivers/net/vxlan/vxlan_vnifilter.c
535
536 static struct vxlan_vni_node *vxlan_vni_alloc(struct vxlan_dev *vxlan,
537 __be32 vni)
538 {
539 struct vxlan_vni_node *vninode;
540
541 vninode = kzalloc(sizeof(*vninode), GFP_ATOMIC);
542 if (!vninode)
543 return NULL;
544 vninode->vni = vni;
545 vninode->hlist4.vxlan = vxlan;
> 546 vninode->hlist6.vxlan = vxlan;
547
548 return vninode;
549 }
550
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* Re: [PATCH net-next 10/12] selinux: add support for RTM_NEWTUNNEL, RTM_DELTUNNEL, and RTM_GETTUNNEL
2022-02-20 14:04 ` [PATCH net-next 10/12] selinux: add support for RTM_NEWTUNNEL, RTM_DELTUNNEL, and RTM_GETTUNNEL Roopa Prabhu
@ 2022-02-21 1:47 ` Benjamin Poirier
0 siblings, 0 replies; 19+ messages in thread
From: Benjamin Poirier @ 2022-02-21 1:47 UTC (permalink / raw)
To: Roopa Prabhu; +Cc: davem, kuba, netdev, stephen, nikolay, idosch, dsahern
On 2022-02-20 14:04 +0000, Roopa Prabhu wrote:
> From: Benjamin Poirier <bpoirier@nvidia.com>
>
> This patch adds the newly introduced RTM_*TUNNEL messages to nlmsg_route_perms.
>
> Signed-off-by: Benjamin Poirier <bpoirier@nvidia.com>
> ---
> security/selinux/nlmsgtab.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c
> index 94ea2a8b2bb7..6ad3ee02e023 100644
> --- a/security/selinux/nlmsgtab.c
> +++ b/security/selinux/nlmsgtab.c
> @@ -91,6 +91,9 @@ static const struct nlmsg_perm nlmsg_route_perms[] =
> { RTM_NEWNEXTHOPBUCKET, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
> { RTM_DELNEXTHOPBUCKET, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
> { RTM_GETNEXTHOPBUCKET, NETLINK_ROUTE_SOCKET__NLMSG_READ },
> + { RTM_NEWTUNNEL, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
> + { RTM_DELTUNNEL, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
> + { RTM_GETTUNNEL, NETLINK_ROUTE_SOCKET__NLMSG_READ },
> };
>
> static const struct nlmsg_perm nlmsg_tcpdiag_perms[] =
> @@ -176,7 +179,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
> * structures at the top of this file with the new mappings
> * before updating the BUILD_BUG_ON() macro!
> */
> - BUILD_BUG_ON(RTM_MAX != (RTM_NEWNEXTHOPBUCKET + 3));
> + BUILD_BUG_ON(RTM_MAX != (RTM_NEWTUNNEL + 3));
This patch should be folded into patch 06 ("rtnetlink: add new rtm
tunnel api for tunnel id filtering"); otherwise there is build breakage
partway through the series when compiling with
CONFIG_SECURITY_SELINUX=y:
CC security/selinux/nlmsgtab.o
In file included from <command-line>:
security/selinux/nlmsgtab.c: In function ‘selinux_nlmsg_lookup’:
././include/linux/compiler_types.h:349:45: error: call to ‘__compiletime_assert_516’ declared with attribute error: BUILD_BUG_ON failed: RTM_MAX != (RTM_NEWNEXTHOPBUCKET + 3)
349 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
| ^
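The failure mode is worth spelling out: BUILD_BUG_ON() turns a runtime-impossible condition into a compile error, so the permission table and the RTM_MAX bound must move in the same commit, or every build between the two commits fails during a bisection. A minimal user-space sketch of the kernel's negative-array-size trick, with illustrative RTM_* values (not the real rtnetlink numbers) and a hypothetical function name:

```c
/* Sketch of the kernel's BUILD_BUG_ON(): sizeof(char[1 - 2 * !!(cond)])
 * requests a negative-size array, and thus fails to compile, whenever
 * cond is true. The enum values and function name are illustrative. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

enum {
	RTM_NEWTUNNEL = 120,            /* illustrative stand-in */
	RTM_MAX       = RTM_NEWTUNNEL + 3,
};

static int selinux_nlmsg_lookup_check(void)
{
	/* Compiles only while the bound matches the newest message type: */
	BUILD_BUG_ON(RTM_MAX != (RTM_NEWTUNNEL + 3));
	return 0;
}
```

With the values above the condition is false and the check compiles away to nothing; had only one of the two commits been applied, this line would be the one the compiler rejects.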
* Re: [PATCH net-next 11/12] drivers: vxlan: vnifilter: per vni stats
2022-02-20 14:04 ` [PATCH net-next 11/12] drivers: vxlan: vnifilter: per vni stats Roopa Prabhu
@ 2022-02-21 2:11 ` kernel test robot
0 siblings, 0 replies; 19+ messages in thread
From: kernel test robot @ 2022-02-21 2:11 UTC (permalink / raw)
To: Roopa Prabhu, davem, kuba
Cc: llvm, kbuild-all, netdev, stephen, nikolay, idosch, dsahern
Hi Roopa,
I love your patch! Perhaps something to improve:
[auto build test WARNING on net-next/master]
url: https://github.com/0day-ci/linux/commits/Roopa-Prabhu/vxlan-metadata-device-vnifiltering-support/20220220-220748
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 129c77b5692d4a95a00aa7d58075afe77179623e
config: x86_64-randconfig-a005 (https://download.01.org/0day-ci/archive/20220221/202202211055.sxjukMsT-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project d271fc04d5b97b12e6b797c6067d3c96a8d7470e)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/ebc9d58021bf3de80a5f6b094758abc46d3cd4c4
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Roopa-Prabhu/vxlan-metadata-device-vnifiltering-support/20220220-220748
git checkout ebc9d58021bf3de80a5f6b094758abc46d3cd4c4
# save the config file to linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/net/vxlan/
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
drivers/net/vxlan/vxlan_core.c:948:5: warning: no previous prototype for function 'vxlan_fdb_update_existing' [-Wmissing-prototypes]
int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
^
drivers/net/vxlan/vxlan_core.c:948:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
^
static
drivers/net/vxlan/vxlan_core.c:2437:14: warning: variable 'label' set but not used [-Wunused-but-set-variable]
__be32 vni, label;
^
>> drivers/net/vxlan/vxlan_core.c:2483:7: warning: variable 'vni' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (!info) {
^~~~~
drivers/net/vxlan/vxlan_core.c:2662:31: note: uninitialized use occurs here
vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX_DROPS, 0);
^~~
drivers/net/vxlan/vxlan_core.c:2483:3: note: remove the 'if' if its condition is always false
if (!info) {
^~~~~~~~~~~~
>> drivers/net/vxlan/vxlan_core.c:2450:8: warning: variable 'vni' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
if (did_rsc) {
^~~~~~~
drivers/net/vxlan/vxlan_core.c:2662:31: note: uninitialized use occurs here
vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX_DROPS, 0);
^~~
drivers/net/vxlan/vxlan_core.c:2450:4: note: remove the 'if' if its condition is always true
if (did_rsc) {
^~~~~~~~~~~~~
drivers/net/vxlan/vxlan_core.c:2437:12: note: initialize the variable 'vni' to silence this warning
__be32 vni, label;
^
= 0
4 warnings generated.
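The two -Wsometimes-uninitialized warnings above share one cause: `vni` is assigned on the `rdst` and `info` paths, but several early `goto drop` edges reach the drop-accounting call before any assignment. A hedged, self-contained sketch of that control-flow shape and of clang's suggested fix (initialize at declaration); the function and parameter names are illustrative, not the driver's:

```c
#include <stdint.h>

/* Mimics the vxlan_xmit_one() flow the report flags: 'vni' is set only
 * on some branches, yet the 'drop:' label reads it on every route.
 * Initializing it at declaration makes all paths well-defined. */
static uint32_t xmit_one(int have_info, int did_rsc, uint32_t default_vni)
{
	uint32_t vni = 0;	/* the fix: defined before any 'goto drop' */

	if (!have_info)
		goto drop;	/* without the init, an uninitialized read */
	if (did_rsc)
		vni = default_vni;
	else
		goto drop;	/* partial-init path from the second warning */

	return vni;		/* normal tx path */
drop:
	return vni;		/* drop accounting reads 'vni' unconditionally */
}
```

Reading `vni` uninitialized would be undefined behavior in C, so the zero-init is a correctness fix, not just warning suppression; counting drop stats against VNI 0 is the benign fallback.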
vim +2483 drivers/net/vxlan/vxlan_core.c
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2421
4ad169300a7350 drivers/net/vxlan.c Stephen Hemminger 2013-06-17 2422 static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
3ad7a4b141ebd6 drivers/net/vxlan.c Roopa Prabhu 2017-01-31 2423 __be32 default_vni, struct vxlan_rdst *rdst,
3ad7a4b141ebd6 drivers/net/vxlan.c Roopa Prabhu 2017-01-31 2424 bool did_rsc)
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2425 {
d71785ffc7e7ca drivers/net/vxlan.c Paolo Abeni 2016-02-12 2426 struct dst_cache *dst_cache;
3093fbe7ff4bc7 drivers/net/vxlan.c Thomas Graf 2015-07-21 2427 struct ip_tunnel_info *info;
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2428 struct vxlan_dev *vxlan = netdev_priv(dev);
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2429 const struct iphdr *old_iph = ip_hdr(skb);
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2430 union vxlan_addr *dst;
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2431 union vxlan_addr remote_ip, local_ip;
ee122c79d4227f drivers/net/vxlan.c Thomas Graf 2015-07-21 2432 struct vxlan_metadata _md;
ee122c79d4227f drivers/net/vxlan.c Thomas Graf 2015-07-21 2433 struct vxlan_metadata *md = &_md;
ebc9d58021bf3d drivers/net/vxlan/vxlan_core.c Nikolay Aleksandrov 2022-02-20 2434 unsigned int pkt_len = skb->len;
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2435 __be16 src_port = 0, dst_port;
655c3de16540b8 drivers/net/vxlan.c pravin shelar 2016-11-13 2436 struct dst_entry *ndst = NULL;
e7f70af111f086 drivers/net/vxlan.c Daniel Borkmann 2016-03-09 2437 __be32 vni, label;
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2438 __u8 tos, ttl;
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2439 int ifindex;
0e6fbc5b6c6218 drivers/net/vxlan.c Pravin B Shelar 2013-06-17 2440 int err;
dc5321d79697db drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2441 u32 flags = vxlan->cfg.flags;
b4ed5cad24c107 drivers/net/vxlan.c Jiri Benc 2016-02-02 2442 bool udp_sum = false;
f491e56dba511d drivers/net/vxlan.c Jiri Benc 2016-02-02 2443 bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
e4f67addf158f9 drivers/net/vxlan.c David Stevens 2012-11-20 2444
61adedf3e3f1d3 drivers/net/vxlan.c Jiri Benc 2015-08-20 2445 info = skb_tunnel_info(skb);
3093fbe7ff4bc7 drivers/net/vxlan.c Thomas Graf 2015-07-21 2446
ee122c79d4227f drivers/net/vxlan.c Thomas Graf 2015-07-21 2447 if (rdst) {
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2448 dst = &rdst->remote_ip;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2449 if (vxlan_addr_any(dst)) {
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 @2450 if (did_rsc) {
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2451 /* short-circuited back to local bridge */
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2452 vxlan_encap_bypass(skb, vxlan, vxlan,
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2453 default_vni, true);
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2454 return;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2455 }
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2456 goto drop;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2457 }
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2458
0dfbdf4102b930 drivers/net/vxlan.c Thomas Graf 2015-07-21 2459 dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
3ad7a4b141ebd6 drivers/net/vxlan.c Roopa Prabhu 2017-01-31 2460 vni = (rdst->remote_vni) ? : default_vni;
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2461 ifindex = rdst->remote_ifindex;
1158632b5a2dcc drivers/net/vxlan.c Brian Russell 2017-02-24 2462 local_ip = vxlan->cfg.saddr;
d71785ffc7e7ca drivers/net/vxlan.c Paolo Abeni 2016-02-12 2463 dst_cache = &rdst->dst_cache;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2464 md->gbp = skb->mark;
72f6d71e491e6c drivers/net/vxlan.c Hangbin Liu 2018-04-17 2465 if (flags & VXLAN_F_TTL_INHERIT) {
72f6d71e491e6c drivers/net/vxlan.c Hangbin Liu 2018-04-17 2466 ttl = ip_tunnel_get_ttl(old_iph, skb);
72f6d71e491e6c drivers/net/vxlan.c Hangbin Liu 2018-04-17 2467 } else {
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2468 ttl = vxlan->cfg.ttl;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2469 if (!ttl && vxlan_addr_multicast(dst))
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2470 ttl = 1;
72f6d71e491e6c drivers/net/vxlan.c Hangbin Liu 2018-04-17 2471 }
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2472
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2473 tos = vxlan->cfg.tos;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2474 if (tos == 1)
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2475 tos = ip_tunnel_get_dsfield(old_iph, skb);
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2476
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2477 if (dst->sa.sa_family == AF_INET)
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2478 udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM_TX);
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2479 else
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2480 udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2481 label = vxlan->cfg.label;
ee122c79d4227f drivers/net/vxlan.c Thomas Graf 2015-07-21 2482 } else {
435be28b0789b3 drivers/net/vxlan.c Jakub Kicinski 2020-09-25 @2483 if (!info) {
435be28b0789b3 drivers/net/vxlan.c Jakub Kicinski 2020-09-25 2484 WARN_ONCE(1, "%s: Missing encapsulation instructions\n",
435be28b0789b3 drivers/net/vxlan.c Jakub Kicinski 2020-09-25 2485 dev->name);
435be28b0789b3 drivers/net/vxlan.c Jakub Kicinski 2020-09-25 2486 goto drop;
435be28b0789b3 drivers/net/vxlan.c Jakub Kicinski 2020-09-25 2487 }
b1be00a6c39fda drivers/net/vxlan.c Jiri Benc 2015-09-24 2488 remote_ip.sa.sa_family = ip_tunnel_info_af(info);
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2489 if (remote_ip.sa.sa_family == AF_INET) {
c1ea5d672aaff0 drivers/net/vxlan.c Jiri Benc 2015-08-20 2490 remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2491 local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2492 } else {
a725e514dbb444 drivers/net/vxlan.c Jiri Benc 2015-08-20 2493 remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2494 local_ip.sin6.sin6_addr = info->key.u.ipv6.src;
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2495 }
ee122c79d4227f drivers/net/vxlan.c Thomas Graf 2015-07-21 2496 dst = &remote_ip;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2497 dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2498 vni = tunnel_id_to_key32(info->key.tun_id);
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2499 ifindex = 0;
d71785ffc7e7ca drivers/net/vxlan.c Paolo Abeni 2016-02-12 2500 dst_cache = &info->dst_cache;
eadf52cf185219 drivers/net/vxlan.c Xin Long 2019-10-29 2501 if (info->key.tun_flags & TUNNEL_VXLAN_OPT) {
eadf52cf185219 drivers/net/vxlan.c Xin Long 2019-10-29 2502 if (info->options_len < sizeof(*md))
eadf52cf185219 drivers/net/vxlan.c Xin Long 2019-10-29 2503 goto drop;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2504 md = ip_tunnel_info_opts(info);
eadf52cf185219 drivers/net/vxlan.c Xin Long 2019-10-29 2505 }
7c383fb2254c44 drivers/net/vxlan.c Jiri Benc 2015-08-20 2506 ttl = info->key.ttl;
7c383fb2254c44 drivers/net/vxlan.c Jiri Benc 2015-08-20 2507 tos = info->key.tos;
e7f70af111f086 drivers/net/vxlan.c Daniel Borkmann 2016-03-09 2508 label = info->key.label;
b4ed5cad24c107 drivers/net/vxlan.c Jiri Benc 2016-02-02 2509 udp_sum = !!(info->key.tun_flags & TUNNEL_CSUM);
ee122c79d4227f drivers/net/vxlan.c Thomas Graf 2015-07-21 2510 }
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2511 src_port = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2512 vxlan->cfg.port_max, true);
ee122c79d4227f drivers/net/vxlan.c Thomas Graf 2015-07-21 2513
56de859e9967c0 drivers/net/vxlan.c Jakub Kicinski 2017-02-24 2514 rcu_read_lock();
a725e514dbb444 drivers/net/vxlan.c Jiri Benc 2015-08-20 2515 if (dst->sa.sa_family == AF_INET) {
c6fcc4fc5f8b59 drivers/net/vxlan.c pravin shelar 2016-10-28 2516 struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2517 struct rtable *rt;
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2518 __be16 df = 0;
c6fcc4fc5f8b59 drivers/net/vxlan.c pravin shelar 2016-10-28 2519
aab8cc3630e325 drivers/net/vxlan.c Alexis Bauvin 2018-12-03 2520 if (!ifindex)
aab8cc3630e325 drivers/net/vxlan.c Alexis Bauvin 2018-12-03 2521 ifindex = sock4->sock->sk->sk_bound_dev_if;
aab8cc3630e325 drivers/net/vxlan.c Alexis Bauvin 2018-12-03 2522
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2523 rt = vxlan_get_route(vxlan, dev, sock4, skb, ifindex, tos,
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2524 dst->sin.sin_addr.s_addr,
1158632b5a2dcc drivers/net/vxlan.c Brian Russell 2017-02-24 2525 &local_ip.sin.sin_addr.s_addr,
4ecb1d83f6abe8 drivers/net/vxlan.c Martynas Pumputis 2017-01-11 2526 dst_port, src_port,
d71785ffc7e7ca drivers/net/vxlan.c Paolo Abeni 2016-02-12 2527 dst_cache, info);
8ebd115bb23ac4 drivers/net/vxlan.c David S. Miller 2016-11-15 2528 if (IS_ERR(rt)) {
8ebd115bb23ac4 drivers/net/vxlan.c David S. Miller 2016-11-15 2529 err = PTR_ERR(rt);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2530 goto tx_error;
8ebd115bb23ac4 drivers/net/vxlan.c David S. Miller 2016-11-15 2531 }
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2532
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2533 if (!info) {
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2534 /* Bypass encapsulation if the destination is local */
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2535 err = encap_bypass_if_local(skb, dev, vxlan, dst,
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2536 dst_port, ifindex, vni,
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2537 &rt->dst, rt->rt_flags);
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2538 if (err)
56de859e9967c0 drivers/net/vxlan.c Jakub Kicinski 2017-02-24 2539 goto out_unlock;
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2540
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2541 if (vxlan->cfg.df == VXLAN_DF_SET) {
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2542 df = htons(IP_DF);
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2543 } else if (vxlan->cfg.df == VXLAN_DF_INHERIT) {
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2544 struct ethhdr *eth = eth_hdr(skb);
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2545
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2546 if (ntohs(eth->h_proto) == ETH_P_IPV6 ||
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2547 (ntohs(eth->h_proto) == ETH_P_IP &&
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2548 old_iph->frag_off & htons(IP_DF)))
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2549 df = htons(IP_DF);
b4d3069783bccf drivers/net/vxlan.c Stefano Brivio 2018-11-08 2550 }
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2551 } else if (info->key.tun_flags & TUNNEL_DONT_FRAGMENT) {
6ceb31ca5f65ac drivers/net/vxlan.c Alexander Duyck 2016-02-19 2552 df = htons(IP_DF);
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2553 }
6ceb31ca5f65ac drivers/net/vxlan.c Alexander Duyck 2016-02-19 2554
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2555 ndst = &rt->dst;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2556 err = skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM,
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2557 netif_is_any_bridge_port(dev));
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2558 if (err < 0) {
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2559 goto tx_error;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2560 } else if (err) {
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2561 if (info) {
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2562 struct ip_tunnel_info *unclone;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2563 struct in_addr src, dst;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2564
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2565 unclone = skb_tunnel_info_unclone(skb);
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2566 if (unlikely(!unclone))
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2567 goto tx_error;
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2568
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2569 src = remote_ip.sin.sin_addr;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2570 dst = local_ip.sin.sin_addr;
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2571 unclone->key.u.ipv4.src = src.s_addr;
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2572 unclone->key.u.ipv4.dst = dst.s_addr;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2573 }
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2574 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2575 dst_release(ndst);
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2576 goto out_unlock;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2577 }
a93bf0ff449064 drivers/net/vxlan.c Xin Long 2017-12-18 2578
a0dced17ad9dc0 drivers/net/vxlan.c Hangbin Liu 2020-08-05 2579 tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
0e6fbc5b6c6218 drivers/net/vxlan.c Pravin B Shelar 2013-06-17 2580 ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2581 err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr),
54bfd872bf16d4 drivers/net/vxlan.c Jiri Benc 2016-02-16 2582 vni, md, flags, udp_sum);
f491e56dba511d drivers/net/vxlan.c Jiri Benc 2016-02-02 2583 if (err < 0)
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2584 goto tx_error;
f491e56dba511d drivers/net/vxlan.c Jiri Benc 2016-02-02 2585
1158632b5a2dcc drivers/net/vxlan.c Brian Russell 2017-02-24 2586 udp_tunnel_xmit_skb(rt, sock4->sock->sk, skb, local_ip.sin.sin_addr.s_addr,
af33c1adae1e09 drivers/net/vxlan.c Tom Herbert 2015-01-20 2587 dst->sin.sin_addr.s_addr, tos, ttl, df,
f491e56dba511d drivers/net/vxlan.c Jiri Benc 2016-02-02 2588 src_port, dst_port, xnet, !udp_sum);
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2589 #if IS_ENABLED(CONFIG_IPV6)
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2590 } else {
c6fcc4fc5f8b59 drivers/net/vxlan.c pravin shelar 2016-10-28 2591 struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock);
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2592
aab8cc3630e325 drivers/net/vxlan.c Alexis Bauvin 2018-12-03 2593 if (!ifindex)
aab8cc3630e325 drivers/net/vxlan.c Alexis Bauvin 2018-12-03 2594 ifindex = sock6->sock->sk->sk_bound_dev_if;
aab8cc3630e325 drivers/net/vxlan.c Alexis Bauvin 2018-12-03 2595
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2596 ndst = vxlan6_get_route(vxlan, dev, sock6, skb, ifindex, tos,
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2597 label, &dst->sin6.sin6_addr,
1158632b5a2dcc drivers/net/vxlan.c Brian Russell 2017-02-24 2598 &local_ip.sin6.sin6_addr,
4ecb1d83f6abe8 drivers/net/vxlan.c Martynas Pumputis 2017-01-11 2599 dst_port, src_port,
db3c6139e6ead9 drivers/net/vxlan.c Daniel Borkmann 2016-03-04 2600 dst_cache, info);
e5d4b29fe86a91 drivers/net/vxlan.c Jiri Benc 2015-12-07 2601 if (IS_ERR(ndst)) {
8ebd115bb23ac4 drivers/net/vxlan.c David S. Miller 2016-11-15 2602 err = PTR_ERR(ndst);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2603 ndst = NULL;
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2604 goto tx_error;
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2605 }
655c3de16540b8 drivers/net/vxlan.c pravin shelar 2016-11-13 2606
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2607 if (!info) {
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2608 u32 rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags;
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2609
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2610 err = encap_bypass_if_local(skb, dev, vxlan, dst,
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2611 dst_port, ifindex, vni,
49f810f00fa347 drivers/net/vxlan.c Matthias Schiffer 2017-06-19 2612 ndst, rt6i_flags);
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2613 if (err)
56de859e9967c0 drivers/net/vxlan.c Jakub Kicinski 2017-02-24 2614 goto out_unlock;
fee1fad7c73dd0 drivers/net/vxlan.c pravin shelar 2016-11-13 2615 }
35e2d1152b22ea drivers/net/vxlan.c Jesse Gross 2016-01-20 2616
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2617 err = skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM,
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2618 netif_is_any_bridge_port(dev));
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2619 if (err < 0) {
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2620 goto tx_error;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2621 } else if (err) {
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2622 if (info) {
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2623 struct ip_tunnel_info *unclone;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2624 struct in6_addr src, dst;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2625
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2626 unclone = skb_tunnel_info_unclone(skb);
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2627 if (unlikely(!unclone))
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2628 goto tx_error;
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2629
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2630 src = remote_ip.sin6.sin6_addr;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2631 dst = local_ip.sin6.sin6_addr;
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2632 unclone->key.u.ipv6.src = src;
30a93d2b7d5a7c drivers/net/vxlan.c Antoine Tenart 2021-03-25 2633 unclone->key.u.ipv6.dst = dst;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2634 }
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2635
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2636 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2637 dst_release(ndst);
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2638 goto out_unlock;
fc68c99577cc66 drivers/net/vxlan.c Stefano Brivio 2020-08-04 2639 }
a93bf0ff449064 drivers/net/vxlan.c Xin Long 2017-12-18 2640
a0dced17ad9dc0 drivers/net/vxlan.c Hangbin Liu 2020-08-05 2641 tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2642 ttl = ttl ? : ip6_dst_hoplimit(ndst);
f491e56dba511d drivers/net/vxlan.c Jiri Benc 2016-02-02 2643 skb_scrub_packet(skb, xnet);
f491e56dba511d drivers/net/vxlan.c Jiri Benc 2016-02-02 2644 err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
54bfd872bf16d4 drivers/net/vxlan.c Jiri Benc 2016-02-16 2645 vni, md, flags, udp_sum);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2646 if (err < 0)
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2647 goto tx_error;
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2648
0770b53bd276a7 drivers/net/vxlan.c pravin shelar 2016-11-13 2649 udp_tunnel6_xmit_skb(ndst, sock6->sock->sk, skb, dev,
1158632b5a2dcc drivers/net/vxlan.c Brian Russell 2017-02-24 2650 &local_ip.sin6.sin6_addr,
272d96a5ab1066 drivers/net/vxlan.c pravin shelar 2016-08-05 2651 &dst->sin6.sin6_addr, tos, ttl,
e7f70af111f086 drivers/net/vxlan.c Daniel Borkmann 2016-03-09 2652 label, src_port, dst_port, !udp_sum);
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2653 #endif
e4c7ed415387cf drivers/net/vxlan.c Cong Wang 2013-08-31 2654 }
ebc9d58021bf3d drivers/net/vxlan/vxlan_core.c Nikolay Aleksandrov 2022-02-20 2655 vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX, pkt_len);
56de859e9967c0 drivers/net/vxlan.c Jakub Kicinski 2017-02-24 2656 out_unlock:
56de859e9967c0 drivers/net/vxlan.c Jakub Kicinski 2017-02-24 2657 rcu_read_unlock();
4ad169300a7350 drivers/net/vxlan.c Stephen Hemminger 2013-06-17 2658 return;
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2659
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2660 drop:
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2661 dev->stats.tx_dropped++;
ebc9d58021bf3d drivers/net/vxlan/vxlan_core.c Nikolay Aleksandrov 2022-02-20 2662 vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX_DROPS, 0);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2663 dev_kfree_skb(skb);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2664 return;
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2665
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2666 tx_error:
56de859e9967c0 drivers/net/vxlan.c Jakub Kicinski 2017-02-24 2667 rcu_read_unlock();
655c3de16540b8 drivers/net/vxlan.c pravin shelar 2016-11-13 2668 if (err == -ELOOP)
655c3de16540b8 drivers/net/vxlan.c pravin shelar 2016-11-13 2669 dev->stats.collisions++;
655c3de16540b8 drivers/net/vxlan.c pravin shelar 2016-11-13 2670 else if (err == -ENETUNREACH)
655c3de16540b8 drivers/net/vxlan.c pravin shelar 2016-11-13 2671 dev->stats.tx_carrier_errors++;
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2672 dst_release(ndst);
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2673 dev->stats.tx_errors++;
ebc9d58021bf3d drivers/net/vxlan/vxlan_core.c Nikolay Aleksandrov 2022-02-20 2674 vxlan_vnifilter_count(vxlan, vni, VXLAN_VNI_STATS_TX_ERRORS, 0);
c46b7897ad5ba4 drivers/net/vxlan.c pravin shelar 2016-11-13 2675 kfree_skb(skb);
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2676 }
d342894c5d2f8c drivers/net/vxlan.c stephen hemminger 2012-10-01 2677
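For context on the vxlan_vnifilter_count() calls the warning points at: patch 11 accounts TX bytes, drops, and errors against the specific VNI of each packet rather than only against the device. A hedged sketch of that per-VNI accounting idea, using a plain lookup table in place of the driver's real data structures (all names here are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative per-VNI counters, bumped from the transmit path the way
 * vxlan_vnifilter_count() is in the patch. The real driver keeps these
 * per-CPU and finds the VNI node via its vni hash; this table stands in
 * for that lookup. */
enum vni_stats_id { VNI_STATS_TX, VNI_STATS_TX_DROPS, VNI_STATS_TX_ERRORS };

struct vni_stats {
	uint32_t vni;
	uint64_t tx_bytes;
	uint64_t tx_drops;
	uint64_t tx_errors;
};

static void vni_count(struct vni_stats *tbl, size_t n, uint32_t vni,
		      enum vni_stats_id id, size_t len)
{
	for (size_t i = 0; i < n; i++) {
		if (tbl[i].vni != vni)
			continue;
		switch (id) {
		case VNI_STATS_TX:        tbl[i].tx_bytes += len; break;
		case VNI_STATS_TX_DROPS:  tbl[i].tx_drops++;      break;
		case VNI_STATS_TX_ERRORS: tbl[i].tx_errors++;     break;
		}
		return;
	}
	/* unknown VNI: a vni-filtering device would have dropped earlier */
}
```

This is why the uninitialized-`vni` warnings matter for this patch specifically: the drop path now needs a valid VNI to know which counter to charge.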
end of thread, other threads:[~2022-02-21 2:12 UTC | newest]
Thread overview: 19+ messages
2022-02-20 14:03 [PATCH net-next 00/12] vxlan metadata device vnifiltering support Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 01/12] vxlan: move to its own directory Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 02/12] vxlan_core: move common declarations to private header file Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 03/12] vxlan_core: move some fdb helpers to non-static Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 04/12] vxlan_core: make multicast helper take rip and ifindex explicitly Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 05/12] vxlan_core: add helper vxlan_vni_in_use Roopa Prabhu
2022-02-20 14:03 ` [PATCH net-next 06/12] rtnetlink: add new rtm tunnel api for tunnel id filtering Roopa Prabhu
2022-02-20 14:29 ` Roopa Prabhu
2022-02-20 14:04 ` [PATCH net-next 07/12] vxlan_multicast: Move multicast helpers to a separate file Roopa Prabhu
2022-02-20 14:04 ` [PATCH net-next 08/12] vxlan: vni filtering support on collect metadata device Roopa Prabhu
2022-02-20 22:24 ` kernel test robot
2022-02-20 14:04 ` [PATCH net-next 09/12] selftests: add new tests for vxlan vnifiltering Roopa Prabhu
2022-02-20 14:04 ` [PATCH net-next 10/12] selinux: add support for RTM_NEWTUNNEL, RTM_DELTUNNEL, and RTM_GETTUNNEL Roopa Prabhu
2022-02-21 1:47 ` Benjamin Poirier
2022-02-20 14:04 ` [PATCH net-next 11/12] drivers: vxlan: vnifilter: per vni stats Roopa Prabhu
2022-02-21 2:11 ` kernel test robot
2022-02-20 14:04 ` [PATCH net-next 12/12] drivers: vxlan: vnifilter: add support for stats dumping Roopa Prabhu
2022-02-20 14:12 ` Nikolay Aleksandrov
2022-02-20 14:27 ` Roopa Prabhu