* [PATCH net-next,v2 0/9] netfilter: flowtable bridge and vlan enhancements
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

Hi,

[ This is v2 fixing up the sparse warnings. ]

The following patchset adds infrastructure to augment the Netfilter
flowtable fastpath [1] with support for local network topologies that
combine IP forwarding, bridge and vlan devices.

A typical scenario that can benefit from this infrastructure is composed
of several VMs connected to bridge ports where the bridge master device
'br0' has an IP address. A DHCP server is also assumed to be running to
provide connectivity to the VMs. The VMs reach the Internet through
'br0' as their default gateway, which makes the packets enter the IP
forwarding path. Then, netfilter is used to NAT the packets before they
leave through the wan device.

Something like this:

                       fast path
                .------------------------.
               /                          \
               |           IP forwarding   |
               |          /             \  .
               |       br0               eth0
               .       / \
               -- veth1  veth2
                   .
                   .
                   .
                 ethX
           ab:cd:ef:ab:cd:ef
                  VM

The idea is to accelerate forwarding by building a fast path that takes
packets from the ingress path of the bridge port and places them in the
egress path of the wan device (and vice versa), hence skipping the
classic bridge and IP stack paths.

Patch #1 adds the transmit path type field to the flow tuple. Two transmit
         paths are supported so far: the neighbour and the xfrm transmit
         paths. This patch comes in preparation to add a new direct ethernet
         transmit path (see patch #7).

Patch #2 adds dev_fill_forward_path() and .ndo_fill_forward_path() to
         netdev_ops. dev_fill_forward_path() describes the list of netdevice
         hops to reach a given destination MAC address in the local network
         topology, e.g.
                           IP forwarding
                          /             \
                       br0              eth0
                       / \
                   veth1 veth2
                    .
                    .
                    .
                   ethX
             ab:cd:ef:ab:cd:ef

          where veth1 and veth2 are bridge ports and eth0 provides Internet
          connectivity. ethX is the interface in the VM which is connected to
          the veth1 bridge port. Then, for packets going to br0 whose
          destination MAC address is ab:cd:ef:ab:cd:ef, dev_fill_forward_path()
          provides the following path: br0 -> veth1.

Patch #3 adds .ndo_fill_forward_path for vlan devices, which provides the next
         device hop via vlan->real_dev and annotates the vlan id and protocol.
         This information tells what vlan headers are expected on the ingress
         device and what vlan headers need to be pushed before transmission
         via the egress device.

Patch #4 adds .ndo_fill_forward_path for bridge devices, which performs an
         FDB lookup to locate the next device hop (the bridge port) in the
         forwarding path.

Patch #5 updates the flowtable to use the dev_fill_forward_path()
         infrastructure to obtain the ingress device in the forwarding path.

Patch #6 updates the flowtable to use the dev_fill_forward_path()
         infrastructure to obtain the egress device in the forwarding path.

Patch #7 adds the direct ethernet transmit path, which pushes the
         ethernet header onto the packet and sends it through dev_queue_xmit().

Patch #8 uses the direct ethernet transmit path (added in the previous
         patch) to transmit packets to bridge ports - in case
         dev_fill_forward_path() describes a topology that includes a bridge.

Patch #9 updates the flowtable to include the vlan information in the flow tuple
         for lookups from the ingress path as well as the vlan headers to be
         pushed into the packet before transmission to the egress device.
         802.1q and 802.1ad (q-in-q) are supported. The vlan information is
         also described by dev_fill_forward_path().

Please, apply.

[1] https://www.kernel.org/doc/html/latest/networking/nf_flowtable.html

Pablo Neira Ayuso (9):
  netfilter: flowtable: add xmit path types
  net: resolve forwarding path from virtual netdevice and HW destination address
  net: 8021q: resolve forwarding path for vlan devices
  bridge: resolve forwarding path for bridge devices
  netfilter: flowtable: use dev_fill_forward_path() to obtain ingress device
  netfilter: flowtable: use dev_fill_forward_path() to obtain egress device
  netfilter: flowtable: add direct xmit path
  netfilter: flowtable: bridge port support
  netfilter: flowtable: add vlan support

 include/linux/netdevice.h             |  35 ++++
 include/net/netfilter/nf_flow_table.h |  41 ++++-
 net/8021q/vlan_dev.c                  |  15 ++
 net/bridge/br_device.c                |  22 +++
 net/core/dev.c                        |  31 ++++
 net/netfilter/nf_flow_table_core.c    |  27 ++-
 net/netfilter/nf_flow_table_ip.c      | 247 ++++++++++++++++++++++----
 net/netfilter/nft_flow_offload.c      | 107 ++++++++++-
 8 files changed, 484 insertions(+), 41 deletions(-)

--
2.20.1


* [PATCH net-next,v2 1/9] netfilter: flowtable: add xmit path types
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

Add the xmit_type field that defines the two supported xmit paths in the
flowtable data plane, which are the neighbour and the xfrm xmit paths.
This patch prepares for new flowtable xmit path types to come.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 include/net/netfilter/nf_flow_table.h |  6 ++++
 net/netfilter/nf_flow_table_core.c    |  4 +++
 net/netfilter/nf_flow_table_ip.c      | 50 ++++++++++++++++++---------
 3 files changed, 44 insertions(+), 16 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index 16e8b2f8d006..08e779f149ee 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -89,6 +89,11 @@ enum flow_offload_tuple_dir {
 	FLOW_OFFLOAD_DIR_MAX = IP_CT_DIR_MAX
 };
 
+enum flow_offload_xmit_type {
+	FLOW_OFFLOAD_XMIT_NEIGH		= 0,
+	FLOW_OFFLOAD_XMIT_XFRM,
+};
+
 struct flow_offload_tuple {
 	union {
 		struct in_addr		src_v4;
@@ -108,6 +113,7 @@ struct flow_offload_tuple {
 	u8				l3proto;
 	u8				l4proto;
 	u8				dir;
+	enum flow_offload_xmit_type	xmit_type:8;
 
 	u16				mtu;
 
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index 513f78db3cb2..97f04f244961 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -95,6 +95,10 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 	}
 
 	flow_tuple->iifidx = other_dst->dev->ifindex;
+
+	if (dst_xfrm(dst))
+		flow_tuple->xmit_type = FLOW_OFFLOAD_XMIT_XFRM;
+
 	flow_tuple->dst_cache = dst;
 
 	return 0;
diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index a698dbe28ef5..e215c79e6777 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -252,6 +252,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 	unsigned int thoff;
 	struct iphdr *iph;
 	__be32 nexthop;
+	int ret;
 
 	if (skb->protocol != htons(ETH_P_IP))
 		return NF_ACCEPT;
@@ -295,19 +296,27 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 	if (flow_table->flags & NF_FLOWTABLE_COUNTER)
 		nf_ct_acct_update(flow->ct, tuplehash->tuple.dir, skb->len);
 
-	if (unlikely(dst_xfrm(&rt->dst))) {
+	switch (tuplehash->tuple.xmit_type) {
+	case FLOW_OFFLOAD_XMIT_NEIGH:
+		skb->dev = outdev;
+		nexthop = rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr);
+		skb_dst_set_noref(skb, &rt->dst);
+		neigh_xmit(NEIGH_ARP_TABLE, outdev, &nexthop, skb);
+		ret = NF_STOLEN;
+		break;
+	case FLOW_OFFLOAD_XMIT_XFRM:
 		memset(skb->cb, 0, sizeof(struct inet_skb_parm));
 		IPCB(skb)->iif = skb->dev->ifindex;
 		IPCB(skb)->flags = IPSKB_FORWARDED;
-		return nf_flow_xmit_xfrm(skb, state, &rt->dst);
+		ret = nf_flow_xmit_xfrm(skb, state, &rt->dst);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		ret = NF_DROP;
+		break;
 	}
 
-	skb->dev = outdev;
-	nexthop = rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr);
-	skb_dst_set_noref(skb, &rt->dst);
-	neigh_xmit(NEIGH_ARP_TABLE, outdev, &nexthop, skb);
-
-	return NF_STOLEN;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_flow_offload_ip_hook);
 
@@ -493,6 +502,7 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 	struct net_device *outdev;
 	struct ipv6hdr *ip6h;
 	struct rt6_info *rt;
+	int ret;
 
 	if (skb->protocol != htons(ETH_P_IPV6))
 		return NF_ACCEPT;
@@ -536,18 +546,26 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 	if (flow_table->flags & NF_FLOWTABLE_COUNTER)
 		nf_ct_acct_update(flow->ct, tuplehash->tuple.dir, skb->len);
 
-	if (unlikely(dst_xfrm(&rt->dst))) {
+	switch (tuplehash->tuple.xmit_type) {
+	case FLOW_OFFLOAD_XMIT_NEIGH:
+		skb->dev = outdev;
+		nexthop = rt6_nexthop(rt, &flow->tuplehash[!dir].tuple.src_v6);
+		skb_dst_set_noref(skb, &rt->dst);
+		neigh_xmit(NEIGH_ND_TABLE, outdev, nexthop, skb);
+		ret = NF_STOLEN;
+		break;
+	case FLOW_OFFLOAD_XMIT_XFRM:
 		memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
 		IP6CB(skb)->iif = skb->dev->ifindex;
 		IP6CB(skb)->flags = IP6SKB_FORWARDED;
-		return nf_flow_xmit_xfrm(skb, state, &rt->dst);
+		ret = nf_flow_xmit_xfrm(skb, state, &rt->dst);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		ret = NF_DROP;
+		break;
 	}
 
-	skb->dev = outdev;
-	nexthop = rt6_nexthop(rt, &flow->tuplehash[!dir].tuple.src_v6);
-	skb_dst_set_noref(skb, &rt->dst);
-	neigh_xmit(NEIGH_ND_TABLE, outdev, nexthop, skb);
-
-	return NF_STOLEN;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_flow_offload_ipv6_hook);
-- 
2.20.1


* [PATCH net-next,v2 2/9] net: resolve forwarding path from virtual netdevice and HW destination address
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

This patch adds dev_fill_forward_path(), which resolves the path to reach
the real netdevice from the IP forwarding step. This function takes the
netdevice and the destination hardware address as input and walks down
the device stack, calling .ndo_fill_forward_path() for each device until
the real device is found.

For instance, assuming the following topology:

               IP forwarding
              /             \
           br0              eth0
           / \
       eth1  eth2
        .
        .
        .
       ethX
 ab:cd:ef:ab:cd:ef

where eth1 and eth2 are bridge ports and eth0 provides WAN connectivity.
ethX is the interface in another box which is connected to the eth1
bridge port.

For packets in the IP forwarding step going to br0 whose destination MAC
address is ab:cd:ef:ab:cd:ef, dev_fill_forward_path() provides the
following path:

	br0 -> eth1

.ndo_fill_forward_path for br0 looks up the destination MAC address in
the FDB to find the bridge port eth1.

This information makes it possible to create a fast path that bypasses
the classic bridge and IP forwarding paths, so packets go directly from
the bridge port eth1 to eth0 (wan interface) and vice versa.

             fast path
      .------------------------.
     /                          \
    |           IP forwarding   |
    |          /             \  .
    |       br0               eth0
    .       / \
     -> eth1  eth2
        .
        .
        .
       ethX
 ab:cd:ef:ab:cd:ef

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes
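
As an illustration (not part of this patch), a minimal sketch of a
hypothetical caller that resolves the forwarding path towards a
destination MAC address and prints each hop; the helper name is made up,
and the RCU note refers to the bridge resolver added later in this series:

/* The stack must start zeroed: dev_fill_forward_path() only appends
 * entries. A real caller would run under rcu_read_lock(), since the
 * bridge .ndo_fill_forward_path added later in this series performs an
 * RCU-protected FDB lookup.
 */
static void example_dump_forward_path(const struct net_device *dev,
				      const u8 *daddr)
{
	struct net_device_path_stack stack = {};
	int i;

	if (dev_fill_forward_path(dev, daddr, &stack) < 0)
		return;

	for (i = 0; i < stack.num_paths; i++)
		pr_info("hop %d: %s\n", i, stack.path[i].dev->name);
}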

 include/linux/netdevice.h | 27 +++++++++++++++++++++++++++
 net/core/dev.c            | 31 +++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 964b494b0e8d..b77960621c27 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -833,6 +833,27 @@ typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
 				       struct sk_buff *skb,
 				       struct net_device *sb_dev);
 
+enum net_device_path_type {
+	DEV_PATH_ETHERNET = 0,
+};
+
+struct net_device_path {
+	enum net_device_path_type	type;
+	const struct net_device		*dev;
+};
+
+#define NET_DEVICE_PATH_STACK_MAX	5
+
+struct net_device_path_stack {
+	int			num_paths;
+	struct net_device_path	path[NET_DEVICE_PATH_STACK_MAX];
+};
+
+struct net_device_path_ctx {
+	const struct net_device *dev;
+	const u8		*daddr;
+};
+
 enum tc_setup_type {
 	TC_SETUP_QDISC_MQPRIO,
 	TC_SETUP_CLSU32,
@@ -1279,6 +1300,8 @@ struct netdev_net_notifier {
  * struct net_device *(*ndo_get_peer_dev)(struct net_device *dev);
  *	If a device is paired with a peer device, return the peer instance.
  *	The caller must be under RCU read context.
+ * int (*ndo_fill_forward_path)(struct net_device_path_ctx *ctx, struct net_device_path *path);
+ *	Get the forwarding path to reach the real device from the HW destination address
  */
 struct net_device_ops {
 	int			(*ndo_init)(struct net_device *dev);
@@ -1487,6 +1510,8 @@ struct net_device_ops {
 	int			(*ndo_tunnel_ctl)(struct net_device *dev,
 						  struct ip_tunnel_parm *p, int cmd);
 	struct net_device *	(*ndo_get_peer_dev)(struct net_device *dev);
+	int			(*ndo_fill_forward_path)(struct net_device_path_ctx *ctx,
+							 struct net_device_path *path);
 };
 
 /**
@@ -2798,6 +2823,8 @@ void dev_remove_offload(struct packet_offload *po);
 
 int dev_get_iflink(const struct net_device *dev);
 int dev_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb);
+int dev_fill_forward_path(const struct net_device *dev, const u8 *daddr,
+			  struct net_device_path_stack *stack);
 struct net_device *__dev_get_by_flags(struct net *net, unsigned short flags,
 				      unsigned short mask);
 struct net_device *dev_get_by_name(struct net *net, const char *name);
diff --git a/net/core/dev.c b/net/core/dev.c
index 751e5264fd49..51d820bb89d4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -845,6 +845,37 @@ int dev_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 }
 EXPORT_SYMBOL_GPL(dev_fill_metadata_dst);
 
+int dev_fill_forward_path(const struct net_device *dev, const u8 *daddr,
+			  struct net_device_path_stack *stack)
+{
+	const struct net_device *last_dev;
+	struct net_device_path_ctx ctx;
+	struct net_device_path *path;
+	int ret = 0;
+
+	memset(&ctx, 0, sizeof(ctx));
+	ctx.dev = dev;
+	ctx.daddr = daddr;
+
+	while (ctx.dev && ctx.dev->netdev_ops->ndo_fill_forward_path) {
+		last_dev = ctx.dev;
+
+		path = &stack->path[stack->num_paths++];
+		ret = ctx.dev->netdev_ops->ndo_fill_forward_path(&ctx, path);
+		if (ret < 0)
+			return -1;
+
+		if (WARN_ON_ONCE(last_dev == ctx.dev))
+			return -1;
+	}
+	path = &stack->path[stack->num_paths++];
+	path->type = DEV_PATH_ETHERNET;
+	path->dev = ctx.dev;
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_fill_forward_path);
+
 /**
  *	__dev_get_by_name	- find a device by its name
  *	@net: the applicable net namespace
-- 
2.20.1


* [PATCH net-next,v2 3/9] net: 8021q: resolve forwarding path for vlan devices
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

Add .ndo_fill_forward_path for vlan devices.

For instance, assuming the following topology:

                   IP forwarding
                  /             \
            eth0.100             eth0
            |
            eth0
            .
            .
            .
           ethX
     ab:cd:ef:ab:cd:ef

For packets in the IP forwarding path going to eth0.100 whose destination
MAC address is ab:cd:ef:ab:cd:ef, dev_fill_forward_path() provides the
following path:

            eth0.100 -> eth0

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: fix sparse warning.
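
As an illustration (not part of this patch), a sketch of how a hypothetical
consumer of dev_fill_forward_path() could inspect the vlan hop recorded by
this ndo; the helper name is made up:

/* For the eth0.100 example above, this reports vlan id 100 and
 * protocol 0x8100 (802.1Q).
 */
static void example_handle_vlan_hop(const struct net_device_path *path)
{
	if (path->type != DEV_PATH_VLAN)
		return;

	pr_info("vlan hop on %s: id %u proto 0x%04x\n",
		path->dev->name, path->vlan.id, ntohs(path->vlan.proto));
}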

 include/linux/netdevice.h |  7 +++++++
 net/8021q/vlan_dev.c      | 15 +++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index b77960621c27..d4263ed5dd79 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -835,11 +835,18 @@ typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
 
 enum net_device_path_type {
 	DEV_PATH_ETHERNET = 0,
+	DEV_PATH_VLAN,
 };
 
 struct net_device_path {
 	enum net_device_path_type	type;
 	const struct net_device		*dev;
+	union {
+		struct {
+			u16		id;
+			__be16		proto;
+		} vlan;
+	};
 };
 
 #define NET_DEVICE_PATH_STACK_MAX	5
diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
index ec8408d1638f..f06a507557f9 100644
--- a/net/8021q/vlan_dev.c
+++ b/net/8021q/vlan_dev.c
@@ -767,6 +767,20 @@ static int vlan_dev_get_iflink(const struct net_device *dev)
 	return real_dev->ifindex;
 }
 
+static int vlan_dev_fill_forward_path(struct net_device_path_ctx *ctx,
+				      struct net_device_path *path)
+{
+	struct vlan_dev_priv *vlan = vlan_dev_priv(ctx->dev);
+
+	path->type = DEV_PATH_VLAN;
+	path->vlan.id = vlan->vlan_id;
+	path->vlan.proto = vlan->vlan_proto;
+	path->dev = ctx->dev;
+	ctx->dev = vlan->real_dev;
+
+	return 0;
+}
+
 static const struct ethtool_ops vlan_ethtool_ops = {
 	.get_link_ksettings	= vlan_ethtool_get_link_ksettings,
 	.get_drvinfo	        = vlan_ethtool_get_drvinfo,
@@ -805,6 +819,7 @@ static const struct net_device_ops vlan_netdev_ops = {
 #endif
 	.ndo_fix_features	= vlan_dev_fix_features,
 	.ndo_get_iflink		= vlan_dev_get_iflink,
+	.ndo_fill_forward_path	= vlan_dev_fill_forward_path,
 };
 
 static void vlan_dev_free(struct net_device *dev)
-- 
2.20.1


* [PATCH net-next,v2 4/9] bridge: resolve forwarding path for bridge devices
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

Add .ndo_fill_forward_path for bridge devices.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes
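
As an illustration (not part of this patch), a sketch of the path that
dev_fill_forward_path() is expected to resolve once this ndo is in place,
assuming the cover letter topology (a VM with MAC address vm_mac behind
bridge port veth1, with br0 as bridge master); the helper name is made up:

static void example_bridge_path(const struct net_device *br0,
				const u8 *vm_mac)
{
	struct net_device_path_stack stack = {};

	if (dev_fill_forward_path(br0, vm_mac, &stack) < 0)
		return;

	/* Expected result for this topology:
	 *   stack.path[0]: type DEV_PATH_BRIDGE,   dev br0
	 *   stack.path[1]: type DEV_PATH_ETHERNET, dev veth1
	 * i.e. the FDB lookup on vm_mac selects the bridge port veth1.
	 */
	WARN_ON(stack.num_paths != 2);
}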

 include/linux/netdevice.h |  1 +
 net/bridge/br_device.c    | 22 ++++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d4263ed5dd79..4cabdbc672d3 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -836,6 +836,7 @@ typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
 enum net_device_path_type {
 	DEV_PATH_ETHERNET = 0,
 	DEV_PATH_VLAN,
+	DEV_PATH_BRIDGE,
 };
 
 struct net_device_path {
diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
index 6f742fee874a..06046a35868d 100644
--- a/net/bridge/br_device.c
+++ b/net/bridge/br_device.c
@@ -391,6 +391,27 @@ static int br_del_slave(struct net_device *dev, struct net_device *slave_dev)
 	return br_del_if(br, slave_dev);
 }
 
+static int br_fill_forward_path(struct net_device_path_ctx *ctx,
+				struct net_device_path *path)
+{
+	struct net_bridge_fdb_entry *f;
+	struct net_bridge *br;
+
+	if (netif_is_bridge_port(ctx->dev))
+		return -1;
+
+	br = netdev_priv(ctx->dev);
+	f = br_fdb_find_rcu(br, ctx->daddr, 0);
+	if (!f || !f->dst)
+		return -1;
+
+	path->type = DEV_PATH_BRIDGE;
+	path->dev = f->dst->br->dev;
+	ctx->dev = f->dst->dev;
+
+	return 0;
+}
+
 static const struct ethtool_ops br_ethtool_ops = {
 	.get_drvinfo		 = br_getinfo,
 	.get_link		 = ethtool_op_get_link,
@@ -425,6 +446,7 @@ static const struct net_device_ops br_netdev_ops = {
 	.ndo_bridge_setlink	 = br_setlink,
 	.ndo_bridge_dellink	 = br_dellink,
 	.ndo_features_check	 = passthru_features_check,
+	.ndo_fill_forward_path	 = br_fill_forward_path,
 };
 
 static struct device_type br_type = {
-- 
2.20.1


* [PATCH net-next,v2 5/9] netfilter: flowtable: use dev_fill_forward_path() to obtain ingress device
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

The ingress device in the tuple is obtained from the route in the reply
direction. Use dev_fill_forward_path() instead to provide the real
ingress device for this flow.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes

 include/net/netfilter/nf_flow_table.h |  6 ++-
 net/netfilter/nf_flow_table_core.c    |  3 +-
 net/netfilter/nft_flow_offload.c      | 75 ++++++++++++++++++++++++++-
 3 files changed, 80 insertions(+), 4 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index 08e779f149ee..ecb84d4358cc 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -112,9 +112,10 @@ struct flow_offload_tuple {
 
 	u8				l3proto;
 	u8				l4proto;
+
+	/* All members above are keys for lookups, see flow_offload_hash(). */
 	u8				dir;
 	enum flow_offload_xmit_type	xmit_type:8;
-
 	u16				mtu;
 
 	struct dst_entry		*dst_cache;
@@ -160,6 +161,9 @@ static inline __s32 nf_flow_timeout_delta(unsigned int timeout)
 
 struct nf_flow_route {
 	struct {
+		struct {
+			u32		ifindex;
+		} in;
 		struct dst_entry	*dst;
 	} tuple[FLOW_OFFLOAD_DIR_MAX];
 };
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index 97f04f244961..66abc7f287a3 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -79,7 +79,6 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 				   enum flow_offload_tuple_dir dir)
 {
 	struct flow_offload_tuple *flow_tuple = &flow->tuplehash[dir].tuple;
-	struct dst_entry *other_dst = route->tuple[!dir].dst;
 	struct dst_entry *dst = route->tuple[dir].dst;
 
 	if (!dst_hold_safe(route->tuple[dir].dst))
@@ -94,7 +93,7 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 		break;
 	}
 
-	flow_tuple->iifidx = other_dst->dev->ifindex;
+	flow_tuple->iifidx = route->tuple[dir].in.ifindex;
 
 	if (dst_xfrm(dst))
 		flow_tuple->xmit_type = FLOW_OFFLOAD_XMIT_XFRM;
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 3a6c84fb2c90..4b476b0a3c88 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -19,6 +19,75 @@ struct nft_flow_offload {
 	struct nft_flowtable	*flowtable;
 };
 
+static int nft_dev_fill_forward_path(const struct nf_flow_route *route,
+				     const struct nf_conn *ct,
+				     enum ip_conntrack_dir dir,
+				     struct net_device_path_stack *stack)
+{
+	const struct dst_entry *dst_cache = route->tuple[dir].dst;
+	const void *daddr = &ct->tuplehash[!dir].tuple.src.u3;
+	unsigned char ha[ETH_ALEN];
+	struct net_device *dev;
+	struct neighbour *n;
+	struct rtable *rt;
+	u8 nud_state;
+
+	rt = (struct rtable *)dst_cache;
+	dev = rt->dst.dev;
+
+	n = dst_neigh_lookup(dst_cache, daddr);
+	if (!n)
+		return -1;
+
+	read_lock_bh(&n->lock);
+	nud_state = n->nud_state;
+	ether_addr_copy(ha, n->ha);
+	read_unlock_bh(&n->lock);
+	neigh_release(n);
+
+	if (!(nud_state & NUD_VALID))
+		return -1;
+
+	return dev_fill_forward_path(dev, ha, stack);
+}
+
+struct nft_forward_info {
+	u32 iifindex;
+};
+
+static int nft_dev_forward_path(struct nf_flow_route *route,
+				const struct nf_conn *ct,
+				enum ip_conntrack_dir dir)
+{
+	struct net_device_path_stack stack = {};
+	struct nft_forward_info info = {};
+	struct net_device_path *path;
+	int i, ret;
+
+	memset(&stack, 0, sizeof(stack));
+
+	ret = nft_dev_fill_forward_path(route, ct, dir, &stack);
+	if (ret < 0)
+		return -1;
+
+	for (i = stack.num_paths - 1; i >= 0; i--) {
+		path = &stack.path[i];
+		switch (path->type) {
+		case DEV_PATH_ETHERNET:
+			info.iifindex = path->dev->ifindex;
+			break;
+		case DEV_PATH_VLAN:
+			return -1;
+		case DEV_PATH_BRIDGE:
+			return -1;
+		}
+	}
+
+	route->tuple[!dir].in.ifindex = info.iifindex;
+
+	return 0;
+}
+
 static int nft_flow_route(const struct nft_pktinfo *pkt,
 			  const struct nf_conn *ct,
 			  struct nf_flow_route *route,
@@ -47,6 +116,10 @@ static int nft_flow_route(const struct nft_pktinfo *pkt,
 	route->tuple[dir].dst		= this_dst;
 	route->tuple[!dir].dst		= other_dst;
 
+	if (nft_dev_forward_path(route, ct, IP_CT_DIR_ORIGINAL) < 0 ||
+	    nft_dev_forward_path(route, ct, IP_CT_DIR_REPLY) < 0)
+		return -ENOENT;
+
 	return 0;
 }
 
@@ -74,8 +147,8 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
 	struct nft_flow_offload *priv = nft_expr_priv(expr);
 	struct nf_flowtable *flowtable = &priv->flowtable->data;
 	struct tcphdr _tcph, *tcph = NULL;
+	struct nf_flow_route route = {};
 	enum ip_conntrack_info ctinfo;
-	struct nf_flow_route route;
 	struct flow_offload *flow;
 	enum ip_conntrack_dir dir;
 	struct nf_conn *ct;
-- 
2.20.1


* [PATCH net-next,v2 6/9] netfilter: flowtable: use dev_fill_forward_path() to obtain egress device
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

The egress device in the tuple is obtained from the route. Use
dev_fill_forward_path() instead to provide the real egress device for
this flow whenever it is available.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 include/net/netfilter/nf_flow_table.h |  4 ++++
 net/netfilter/nf_flow_table_core.c    |  1 +
 net/netfilter/nf_flow_table_ip.c      | 25 +++++++++++++++++++++++--
 net/netfilter/nft_flow_offload.c      |  1 +
 4 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index ecb84d4358cc..fe225e881cc7 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -117,6 +117,7 @@ struct flow_offload_tuple {
 	u8				dir;
 	enum flow_offload_xmit_type	xmit_type:8;
 	u16				mtu;
+	u32				oifidx;
 
 	struct dst_entry		*dst_cache;
 };
@@ -164,6 +165,9 @@ struct nf_flow_route {
 		struct {
 			u32		ifindex;
 		} in;
+		struct {
+			u32		ifindex;
+		} out;
 		struct dst_entry	*dst;
 	} tuple[FLOW_OFFLOAD_DIR_MAX];
 };
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index 66abc7f287a3..99f01f08c550 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -94,6 +94,7 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 	}
 
 	flow_tuple->iifidx = route->tuple[dir].in.ifindex;
+	flow_tuple->oifidx = route->tuple[dir].out.ifindex;
 
 	if (dst_xfrm(dst))
 		flow_tuple->xmit_type = FLOW_OFFLOAD_XMIT_XFRM;
diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index e215c79e6777..92f444db8d9f 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -228,6 +228,15 @@ static int nf_flow_offload_dst_check(struct dst_entry *dst)
 	return 0;
 }
 
+static struct net_device *nf_flow_outdev_lookup(struct net *net, int oifidx,
+						struct net_device *dev)
+{
+	if (oifidx == dev->ifindex)
+		return dev;
+
+	return dev_get_by_index_rcu(net, oifidx);
+}
+
 static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
 				      const struct nf_hook_state *state,
 				      struct dst_entry *dst)
@@ -267,7 +276,6 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 	dir = tuplehash->tuple.dir;
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 	rt = (struct rtable *)flow->tuplehash[dir].tuple.dst_cache;
-	outdev = rt->dst.dev;
 
 	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
 		return NF_ACCEPT;
@@ -286,6 +294,13 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 		return NF_ACCEPT;
 	}
 
+	outdev = nf_flow_outdev_lookup(state->net, tuplehash->tuple.oifidx,
+				       rt->dst.dev);
+	if (!outdev) {
+		flow_offload_teardown(flow);
+		return NF_ACCEPT;
+	}
+
 	if (nf_flow_nat_ip(flow, skb, thoff, dir) < 0)
 		return NF_DROP;
 
@@ -517,7 +532,6 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 	dir = tuplehash->tuple.dir;
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 	rt = (struct rt6_info *)flow->tuplehash[dir].tuple.dst_cache;
-	outdev = rt->dst.dev;
 
 	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
 		return NF_ACCEPT;
@@ -533,6 +547,13 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 		return NF_ACCEPT;
 	}
 
+	outdev = nf_flow_outdev_lookup(state->net, tuplehash->tuple.oifidx,
+				       rt->dst.dev);
+	if (!outdev) {
+		flow_offload_teardown(flow);
+		return NF_ACCEPT;
+	}
+
 	if (skb_try_make_writable(skb, sizeof(*ip6h)))
 		return NF_DROP;
 
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 4b476b0a3c88..6a6633e2ceeb 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -84,6 +84,7 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 	}
 
 	route->tuple[!dir].in.ifindex = info.iifindex;
+	route->tuple[dir].out.ifindex = info.iifindex;
 
 	return 0;
 }
-- 
2.20.1


* [PATCH net-next,v2 7/9] netfilter: flowtable: add direct xmit path
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

Add FLOW_OFFLOAD_XMIT_DIRECT to turn on the direct dev_queue_xmit() path
to transmit ethernet frames. Cache the source and destination hardware
addresses of the flow so that dev_queue_xmit() can be used to transfer
packets.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 include/net/netfilter/nf_flow_table.h |  5 +++++
 net/netfilter/nf_flow_table_core.c    |  2 ++
 net/netfilter/nf_flow_table_ip.c      | 24 ++++++++++++++++++++++++
 net/netfilter/nft_flow_offload.c      | 16 +++++++++++-----
 4 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index fe225e881cc7..1b57d1d1270d 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -92,6 +92,7 @@ enum flow_offload_tuple_dir {
 enum flow_offload_xmit_type {
 	FLOW_OFFLOAD_XMIT_NEIGH		= 0,
 	FLOW_OFFLOAD_XMIT_XFRM,
+	FLOW_OFFLOAD_XMIT_DIRECT,
 };
 
 struct flow_offload_tuple {
@@ -119,6 +120,8 @@ struct flow_offload_tuple {
 	u16				mtu;
 	u32				oifidx;
 
+	u8				h_source[ETH_ALEN];
+	u8				h_dest[ETH_ALEN];
 	struct dst_entry		*dst_cache;
 };
 
@@ -167,6 +170,8 @@ struct nf_flow_route {
 		} in;
 		struct {
 			u32		ifindex;
+			u8		h_source[ETH_ALEN];
+			u8		h_dest[ETH_ALEN];
 		} out;
 		struct dst_entry	*dst;
 	} tuple[FLOW_OFFLOAD_DIR_MAX];
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index 99f01f08c550..725366339b85 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -94,6 +94,8 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 	}
 
 	flow_tuple->iifidx = route->tuple[dir].in.ifindex;
+	memcpy(flow_tuple->h_dest, route->tuple[dir].out.h_dest, ETH_ALEN);
+	memcpy(flow_tuple->h_source, route->tuple[dir].out.h_source, ETH_ALEN);
 	flow_tuple->oifidx = route->tuple[dir].out.ifindex;
 
 	if (dst_xfrm(dst))
diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index 92f444db8d9f..1fa5c67cd914 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -247,6 +247,24 @@ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
 	return NF_STOLEN;
 }
 
+static unsigned int nf_flow_queue_xmit(struct sk_buff *skb,
+				       struct net_device *outdev,
+				       const struct flow_offload_tuple *tuple)
+{
+	struct ethhdr *eth;
+
+	skb->dev = outdev;
+	skb_push(skb, skb->mac_len);
+	skb_reset_mac_header(skb);
+
+	eth = eth_hdr(skb);
+	memcpy(eth->h_source, tuple->h_source, ETH_ALEN);
+	memcpy(eth->h_dest, tuple->h_dest, ETH_ALEN);
+	dev_queue_xmit(skb);
+
+	return NF_STOLEN;
+}
+
 unsigned int
 nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 			const struct nf_hook_state *state)
@@ -325,6 +343,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 		IPCB(skb)->flags = IPSKB_FORWARDED;
 		ret = nf_flow_xmit_xfrm(skb, state, &rt->dst);
 		break;
+	case FLOW_OFFLOAD_XMIT_DIRECT:
+		ret = nf_flow_queue_xmit(skb, outdev, &tuplehash->tuple);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		ret = NF_DROP;
@@ -581,6 +602,9 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 		IP6CB(skb)->flags = IP6SKB_FORWARDED;
 		ret = nf_flow_xmit_xfrm(skb, state, &rt->dst);
 		break;
+	case FLOW_OFFLOAD_XMIT_DIRECT:
+		ret = nf_flow_queue_xmit(skb, outdev, &tuplehash->tuple);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		ret = NF_DROP;
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 6a6633e2ceeb..d440e436cb16 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -21,12 +21,11 @@ struct nft_flow_offload {
 
 static int nft_dev_fill_forward_path(const struct nf_flow_route *route,
 				     const struct nf_conn *ct,
-				     enum ip_conntrack_dir dir,
+				     enum ip_conntrack_dir dir, u8 *dst,
 				     struct net_device_path_stack *stack)
 {
 	const struct dst_entry *dst_cache = route->tuple[dir].dst;
 	const void *daddr = &ct->tuplehash[!dir].tuple.src.u3;
-	unsigned char ha[ETH_ALEN];
 	struct net_device *dev;
 	struct neighbour *n;
 	struct rtable *rt;
@@ -41,18 +40,20 @@ static int nft_dev_fill_forward_path(const struct nf_flow_route *route,
 
 	read_lock_bh(&n->lock);
 	nud_state = n->nud_state;
-	ether_addr_copy(ha, n->ha);
+	ether_addr_copy(dst, n->ha);
 	read_unlock_bh(&n->lock);
 	neigh_release(n);
 
 	if (!(nud_state & NUD_VALID))
 		return -1;
 
-	return dev_fill_forward_path(dev, ha, stack);
+	return dev_fill_forward_path(dev, dst, stack);
 }
 
 struct nft_forward_info {
 	u32 iifindex;
+	u8 h_source[ETH_ALEN];
+	u8 h_dest[ETH_ALEN];
 };
 
 static int nft_dev_forward_path(struct nf_flow_route *route,
@@ -62,11 +63,12 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 	struct net_device_path_stack stack = {};
 	struct nft_forward_info info = {};
 	struct net_device_path *path;
+	unsigned char dst[ETH_ALEN];
 	int i, ret;
 
 	memset(&stack, 0, sizeof(stack));
 
-	ret = nft_dev_fill_forward_path(route, ct, dir, &stack);
+	ret = nft_dev_fill_forward_path(route, ct, dir, dst, &stack);
 	if (ret < 0)
 		return -1;
 
@@ -74,6 +76,8 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 		path = &stack.path[i];
 		switch (path->type) {
 		case DEV_PATH_ETHERNET:
+			memcpy(info.h_dest, path->dev->dev_addr, ETH_ALEN);
+			memcpy(info.h_source, dst, ETH_ALEN);
 			info.iifindex = path->dev->ifindex;
 			break;
 		case DEV_PATH_VLAN:
@@ -84,6 +88,8 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 	}
 
 	route->tuple[!dir].in.ifindex = info.iifindex;
+	memcpy(route->tuple[dir].out.h_dest, info.h_source, ETH_ALEN);
+	memcpy(route->tuple[dir].out.h_source, info.h_dest, ETH_ALEN);
 	route->tuple[dir].out.ifindex = info.iifindex;
 
 	return 0;
-- 
2.20.1


* [PATCH net-next,v2 8/9] netfilter: flowtable: bridge port support
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

Update the hardware destination address to that of the bridge master
device to emulate the bridge forwarding behaviour.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 include/net/netfilter/nf_flow_table.h | 1 +
 net/netfilter/nf_flow_table_core.c    | 4 ++++
 net/netfilter/nft_flow_offload.c      | 6 +++++-
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index 1b57d1d1270d..4ec3f9bb2f32 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -172,6 +172,7 @@ struct nf_flow_route {
 			u32		ifindex;
 			u8		h_source[ETH_ALEN];
 			u8		h_dest[ETH_ALEN];
+			enum flow_offload_xmit_type xmit_type;
 		} out;
 		struct dst_entry	*dst;
 	} tuple[FLOW_OFFLOAD_DIR_MAX];
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index 725366339b85..e80dcabe3668 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -100,6 +100,10 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 
 	if (dst_xfrm(dst))
 		flow_tuple->xmit_type = FLOW_OFFLOAD_XMIT_XFRM;
+	else
+		flow_tuple->xmit_type = route->tuple[dir].out.xmit_type;
+
+	flow_tuple->dst_cache = dst;
 
 	flow_tuple->dst_cache = dst;
 
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index d440e436cb16..9efb5d584290 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -54,6 +54,7 @@ struct nft_forward_info {
 	u32 iifindex;
 	u8 h_source[ETH_ALEN];
 	u8 h_dest[ETH_ALEN];
+	enum flow_offload_xmit_type xmit_type;
 };
 
 static int nft_dev_forward_path(struct nf_flow_route *route,
@@ -83,7 +84,9 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 		case DEV_PATH_VLAN:
 			return -1;
 		case DEV_PATH_BRIDGE:
-			return -1;
+			memcpy(info.h_dest, path->dev->dev_addr, ETH_ALEN);
+			info.xmit_type = FLOW_OFFLOAD_XMIT_DIRECT;
+			break;
 		}
 	}
 
@@ -91,6 +94,7 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 	memcpy(route->tuple[dir].out.h_dest, info.h_source, ETH_ALEN);
 	memcpy(route->tuple[dir].out.h_source, info.h_dest, ETH_ALEN);
 	route->tuple[dir].out.ifindex = info.iifindex;
+	route->tuple[dir].out.xmit_type = info.xmit_type;
 
 	return 0;
 }
-- 
2.20.1


* [PATCH net-next,v2 9/9] netfilter: flowtable: add vlan support
From: Pablo Neira Ayuso @ 2020-10-15 16:30 UTC (permalink / raw)
  To: netfilter-devel; +Cc: davem, netdev, kuba

Add the vlan id and protocol to the flow tuple to uniquely identify flows
from the receive path. Store the vlan id and protocol so they can be set
accordingly on the transmit path. This patch includes support for two
VLAN headers (Q-in-Q).

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: fix sparse warnings.
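
As an illustration (not part of this patch), how nf_flow_tuple_vlan() is
expected to fill the lookup key for a q-in-q packet; the tag values are
assumed for the sake of the example:

/* Assume the outer 802.1ad tag (id 10) has been stripped into the skb
 * vlan acceleration fields and the inner 802.1q tag (id 100) is still
 * in the payload. The tuple key is then filled as:
 *
 *	tuple->in_vlan[0].id    = 10;			outer tag
 *	tuple->in_vlan[0].proto = skb->vlan_proto;	ETH_P_8021AD
 *	tuple->in_vlan[1].id    = 100;			inner tag
 *	tuple->in_vlan[1].proto = skb->protocol;	ETH_P_8021Q
 *
 * On transmission, nf_flow_vlan_push() re-adds the out_vlan[] tags in
 * reverse order before the packet leaves through the egress device.
 */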

 include/net/netfilter/nf_flow_table.h |  21 +++-
 net/netfilter/nf_flow_table_core.c    |  13 +++
 net/netfilter/nf_flow_table_ip.c      | 148 ++++++++++++++++++++++----
 net/netfilter/nft_flow_offload.c      |  23 +++-
 4 files changed, 184 insertions(+), 21 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index 4ec3f9bb2f32..ed7a610d10bc 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -95,6 +95,8 @@ enum flow_offload_xmit_type {
 	FLOW_OFFLOAD_XMIT_DIRECT,
 };
 
+#define NF_FLOW_TABLE_VLAN_MAX	2
+
 struct flow_offload_tuple {
 	union {
 		struct in_addr		src_v4;
@@ -113,13 +115,24 @@ struct flow_offload_tuple {
 
 	u8				l3proto;
 	u8				l4proto;
+	struct {
+		u16			id;
+		__be16			proto;
+	} in_vlan[NF_FLOW_TABLE_VLAN_MAX];
 
 	/* All members above are keys for lookups, see flow_offload_hash(). */
 	u8				dir;
-	enum flow_offload_xmit_type	xmit_type:8;
+	enum flow_offload_xmit_type	xmit_type:6,
+					in_vlan_num:2;
 	u16				mtu;
 	u32				oifidx;
 
+	struct {
+		u16			id;
+		__be16			proto;
+	} out_vlan[NF_FLOW_TABLE_VLAN_MAX];
+	u8				out_vlan_num;
+
 	u8				h_source[ETH_ALEN];
 	u8				h_dest[ETH_ALEN];
 	struct dst_entry		*dst_cache;
@@ -167,11 +180,17 @@ struct nf_flow_route {
 	struct {
 		struct {
 			u32		ifindex;
+			u16		vid[NF_FLOW_TABLE_VLAN_MAX];
+			__be16		vproto[NF_FLOW_TABLE_VLAN_MAX];
+			u8		num_vlans;
 		} in;
 		struct {
 			u32		ifindex;
 			u8		h_source[ETH_ALEN];
 			u8		h_dest[ETH_ALEN];
+			u16		vid[NF_FLOW_TABLE_VLAN_MAX];
+			__be16		vproto[NF_FLOW_TABLE_VLAN_MAX];
+			u8		num_vlans;
 			enum flow_offload_xmit_type xmit_type;
 		} out;
 		struct dst_entry	*dst;
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index e80dcabe3668..c486c9b489e0 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -80,6 +80,7 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 {
 	struct flow_offload_tuple *flow_tuple = &flow->tuplehash[dir].tuple;
 	struct dst_entry *dst = route->tuple[dir].dst;
+	int i;
 
 	if (!dst_hold_safe(route->tuple[dir].dst))
 		return -1;
@@ -98,6 +99,18 @@ static int flow_offload_fill_route(struct flow_offload *flow,
 	memcpy(flow_tuple->h_source, route->tuple[dir].out.h_source, ETH_ALEN);
 	flow_tuple->oifidx = route->tuple[dir].out.ifindex;
 
+	for (i = 0; i < route->tuple[dir].in.num_vlans; i++) {
+		flow_tuple->in_vlan[i].id = route->tuple[dir].in.vid[i];
+		flow_tuple->in_vlan[i].proto = route->tuple[dir].in.vproto[i];
+	}
+	flow_tuple->in_vlan_num = route->tuple[dir].in.num_vlans;
+
+	for (i = 0; i < route->tuple[dir].out.num_vlans; i++) {
+		flow_tuple->out_vlan[i].id = route->tuple[dir].out.vid[i];
+		flow_tuple->out_vlan[i].proto = route->tuple[dir].out.vproto[i];
+	}
+	flow_tuple->out_vlan_num = route->tuple[dir].out.num_vlans;
+
 	if (dst_xfrm(dst))
 		flow_tuple->xmit_type = FLOW_OFFLOAD_XMIT_XFRM;
 	else
diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index 1fa5c67cd914..5bbfc64180c5 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -159,17 +159,35 @@ static bool ip_has_options(unsigned int thoff)
 	return thoff != sizeof(struct iphdr);
 }
 
+static void nf_flow_tuple_vlan(struct sk_buff *skb,
+			       struct flow_offload_tuple *tuple)
+{
+	if (skb_vlan_tag_present(skb)) {
+		tuple->in_vlan[0].id = skb_vlan_tag_get(skb);
+		tuple->in_vlan[0].proto = skb->vlan_proto;
+	}
+	if (skb->protocol == htons(ETH_P_8021Q)) {
+		struct vlan_ethhdr *veth = (struct vlan_ethhdr *)skb_mac_header(skb);
+
+		tuple->in_vlan[1].id = ntohs(veth->h_vlan_TCI);
+		tuple->in_vlan[1].proto = skb->protocol;
+	}
+}
+
 static int nf_flow_tuple_ip(struct sk_buff *skb, const struct net_device *dev,
 			    struct flow_offload_tuple *tuple)
 {
-	unsigned int thoff, hdrsize;
+	unsigned int thoff, hdrsize, offset = 0;
 	struct flow_ports *ports;
 	struct iphdr *iph;
 
-	if (!pskb_may_pull(skb, sizeof(*iph)))
+	if (skb->protocol == htons(ETH_P_8021Q))
+		offset += VLAN_HLEN;
+
+	if (!pskb_may_pull(skb, sizeof(*iph) + offset))
 		return -1;
 
-	iph = ip_hdr(skb);
+	iph = (struct iphdr *)(skb_network_header(skb) + offset);
 	thoff = iph->ihl * 4;
 
 	if (ip_is_fragment(iph) ||
@@ -191,11 +209,11 @@ static int nf_flow_tuple_ip(struct sk_buff *skb, const struct net_device *dev,
 		return -1;
 
 	thoff = iph->ihl * 4;
-	if (!pskb_may_pull(skb, thoff + hdrsize))
+	if (!pskb_may_pull(skb, thoff + hdrsize + offset))
 		return -1;
 
-	iph = ip_hdr(skb);
-	ports = (struct flow_ports *)(skb_network_header(skb) + thoff);
+	iph = (struct iphdr *)(skb_network_header(skb) + offset);
+	ports = (struct flow_ports *)(skb_network_header(skb) + thoff + offset);
 
 	tuple->src_v4.s_addr	= iph->saddr;
 	tuple->dst_v4.s_addr	= iph->daddr;
@@ -204,6 +222,7 @@ static int nf_flow_tuple_ip(struct sk_buff *skb, const struct net_device *dev,
 	tuple->l3proto		= AF_INET;
 	tuple->l4proto		= iph->protocol;
 	tuple->iifidx		= dev->ifindex;
+	nf_flow_tuple_vlan(skb, tuple);
 
 	return 0;
 }
@@ -247,6 +266,67 @@ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
 	return NF_STOLEN;
 }
 
+static bool nf_flow_skb_vlan_protocol(const struct sk_buff *skb, __be16 proto)
+{
+	if (skb->protocol == htons(ETH_P_8021Q)) {
+		struct vlan_ethhdr *veth;
+
+		veth = (struct vlan_ethhdr *)skb_mac_header(skb);
+		if (veth->h_vlan_encapsulated_proto == proto)
+			return true;
+	}
+
+	return false;
+}
+
+static void nf_flow_vlan_pop(struct sk_buff *skb,
+			     struct flow_offload_tuple_rhash *tuplehash)
+{
+	struct vlan_hdr *vlan_hdr;
+	int i;
+
+	for (i = 0; i < tuplehash->tuple.in_vlan_num; i++) {
+		if (skb_vlan_tag_present(skb)) {
+			__vlan_hwaccel_clear_tag(skb);
+			continue;
+		}
+		vlan_hdr = (struct vlan_hdr *)skb->data;
+		__skb_pull(skb, VLAN_HLEN);
+		vlan_set_encap_proto(skb, vlan_hdr);
+		skb_reset_network_header(skb);
+	}
+}
+
+static int __nf_flow_vlan_push(struct sk_buff *skb,
+			       const struct flow_offload_tuple *tuple)
+{
+	int i;
+
+	for (i = tuple->out_vlan_num - 1; i >= 0; i--) {
+		if (skb_vlan_push(skb, tuple->out_vlan[i].proto,
+				  tuple->out_vlan[i].id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
+
+static int nf_flow_vlan_push(struct sk_buff *skb,
+			     const struct flow_offload_tuple *tuple)
+{
+	if (tuple->out_vlan_num > 1)
+		skb_push(skb, skb->mac_len);
+
+	__nf_flow_vlan_push(skb, tuple);
+
+	if (tuple->out_vlan_num > 1) {
+		__skb_pull(skb, skb->mac_len - VLAN_HLEN);
+		skb_reset_network_header(skb);
+	}
+
+	return 0;
+}
+
 static unsigned int nf_flow_queue_xmit(struct sk_buff *skb,
 				       struct net_device *outdev,
 				       const struct flow_offload_tuple *tuple)
@@ -257,6 +337,10 @@ static unsigned int nf_flow_queue_xmit(struct sk_buff *skb,
 	skb_push(skb, skb->mac_len);
 	skb_reset_mac_header(skb);
 
+	if (tuple->out_vlan_num > 0 &&
+	    __nf_flow_vlan_push(skb, tuple) < 0)
+		return NF_DROP;
+
 	eth = eth_hdr(skb);
 	memcpy(eth->h_source, tuple->h_source, ETH_ALEN);
 	memcpy(eth->h_dest, tuple->h_dest, ETH_ALEN);
@@ -279,9 +363,11 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 	unsigned int thoff;
 	struct iphdr *iph;
 	__be32 nexthop;
+	u32 offset = 0;
 	int ret;
 
-	if (skb->protocol != htons(ETH_P_IP))
+	if (skb->protocol != htons(ETH_P_IP) &&
+	    !nf_flow_skb_vlan_protocol(skb, htons(ETH_P_IP)))
 		return NF_ACCEPT;
 
 	if (nf_flow_tuple_ip(skb, state->in, &tuple) < 0)
@@ -298,11 +384,15 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
 		return NF_ACCEPT;
 
-	if (skb_try_make_writable(skb, sizeof(*iph)))
+	if (skb->protocol == htons(ETH_P_8021Q))
+		offset += VLAN_HLEN;
+
+	if (skb_try_make_writable(skb, sizeof(*iph) + offset))
 		return NF_DROP;
 
-	thoff = ip_hdr(skb)->ihl * 4;
-	if (nf_flow_state_check(flow, ip_hdr(skb)->protocol, skb, thoff))
+	iph = (struct iphdr *)(skb_network_header(skb) + offset);
+	thoff = (iph->ihl * 4) + offset;
+	if (nf_flow_state_check(flow, iph->protocol, skb, thoff))
 		return NF_ACCEPT;
 
 	flow_offload_refresh(flow_table, flow);
@@ -319,6 +409,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 		return NF_ACCEPT;
 	}
 
+	nf_flow_vlan_pop(skb, tuplehash);
+	thoff -= offset;
+
 	if (nf_flow_nat_ip(flow, skb, thoff, dir) < 0)
 		return NF_DROP;
 
@@ -331,6 +424,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 
 	switch (tuplehash->tuple.xmit_type) {
 	case FLOW_OFFLOAD_XMIT_NEIGH:
+		if (nf_flow_vlan_push(skb, &tuplehash->tuple) < 0)
+			return NF_DROP;
+
 		skb->dev = outdev;
 		nexthop = rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr);
 		skb_dst_set_noref(skb, &rt->dst);
@@ -484,14 +580,17 @@ static int nf_flow_nat_ipv6(const struct flow_offload *flow,
 static int nf_flow_tuple_ipv6(struct sk_buff *skb, const struct net_device *dev,
 			      struct flow_offload_tuple *tuple)
 {
-	unsigned int thoff, hdrsize;
+	unsigned int thoff, hdrsize, offset = 0;
 	struct flow_ports *ports;
 	struct ipv6hdr *ip6h;
 
-	if (!pskb_may_pull(skb, sizeof(*ip6h)))
+	if (skb->protocol == htons(ETH_P_8021Q))
+		offset += VLAN_HLEN;
+
+	if (!pskb_may_pull(skb, sizeof(*ip6h) + offset))
 		return -1;
 
-	ip6h = ipv6_hdr(skb);
+	ip6h = (struct ipv6hdr *)(skb_network_header(skb) + offset);
 
 	switch (ip6h->nexthdr) {
 	case IPPROTO_TCP:
@@ -508,11 +607,11 @@ static int nf_flow_tuple_ipv6(struct sk_buff *skb, const struct net_device *dev,
 		return -1;
 
 	thoff = sizeof(*ip6h);
-	if (!pskb_may_pull(skb, thoff + hdrsize))
+	if (!pskb_may_pull(skb, thoff + hdrsize + offset))
 		return -1;
 
-	ip6h = ipv6_hdr(skb);
-	ports = (struct flow_ports *)(skb_network_header(skb) + thoff);
+	ip6h = (struct ipv6hdr *)(skb_network_header(skb) + offset);
+	ports = (struct flow_ports *)(skb_network_header(skb) + thoff + offset);
 
 	tuple->src_v6		= ip6h->saddr;
 	tuple->dst_v6		= ip6h->daddr;
@@ -521,6 +620,7 @@ static int nf_flow_tuple_ipv6(struct sk_buff *skb, const struct net_device *dev,
 	tuple->l3proto		= AF_INET6;
 	tuple->l4proto		= ip6h->nexthdr;
 	tuple->iifidx		= dev->ifindex;
+	nf_flow_tuple_vlan(skb, tuple);
 
 	return 0;
 }
@@ -538,9 +638,11 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 	struct net_device *outdev;
 	struct ipv6hdr *ip6h;
 	struct rt6_info *rt;
+	u32 offset = 0;
 	int ret;
 
-	if (skb->protocol != htons(ETH_P_IPV6))
+	if (skb->protocol != htons(ETH_P_IPV6) &&
+	    !nf_flow_skb_vlan_protocol(skb, htons(ETH_P_IPV6)))
 		return NF_ACCEPT;
 
 	if (nf_flow_tuple_ipv6(skb, state->in, &tuple) < 0)
@@ -557,8 +659,11 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
 		return NF_ACCEPT;
 
-	if (nf_flow_state_check(flow, ipv6_hdr(skb)->nexthdr, skb,
-				sizeof(*ip6h)))
+	if (skb->protocol == htons(ETH_P_8021Q))
+		offset += VLAN_HLEN;
+
+	ip6h = (struct ipv6hdr *)(skb_network_header(skb) + offset);
+	if (nf_flow_state_check(flow, ip6h->nexthdr, skb, sizeof(*ip6h)))
 		return NF_ACCEPT;
 
 	flow_offload_refresh(flow_table, flow);
@@ -575,6 +680,8 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 		return NF_ACCEPT;
 	}
 
+	nf_flow_vlan_pop(skb, tuplehash);
+
 	if (skb_try_make_writable(skb, sizeof(*ip6h)))
 		return NF_DROP;
 
@@ -590,6 +697,9 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 
 	switch (tuplehash->tuple.xmit_type) {
 	case FLOW_OFFLOAD_XMIT_NEIGH:
+		if (nf_flow_vlan_push(skb, &tuplehash->tuple) < 0)
+			return NF_DROP;
+
 		skb->dev = outdev;
 		nexthop = rt6_nexthop(rt, &flow->tuplehash[!dir].tuple.src_v6);
 		skb_dst_set_noref(skb, &rt->dst);
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 9efb5d584290..28fc991e06a6 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -51,6 +51,9 @@ static int nft_dev_fill_forward_path(const struct nf_flow_route *route,
 }
 
 struct nft_forward_info {
+	__u16 vid[NF_FLOW_TABLE_VLAN_MAX];
+	__be16 vproto[NF_FLOW_TABLE_VLAN_MAX];
+	u8 num_vlans;
 	u32 iifindex;
 	u8 h_source[ETH_ALEN];
 	u8 h_dest[ETH_ALEN];
@@ -82,7 +85,13 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 			info.iifindex = path->dev->ifindex;
 			break;
 		case DEV_PATH_VLAN:
-			return -1;
+			if (info.num_vlans >= NF_FLOW_TABLE_VLAN_MAX)
+				return -1;
+
+			info.vid[info.num_vlans] = path->vlan.id;
+			info.vproto[info.num_vlans] = path->vlan.proto;
+			info.num_vlans++;
+			break;
 		case DEV_PATH_BRIDGE:
 			memcpy(info.h_dest, path->dev->dev_addr, ETH_ALEN);
 			info.xmit_type = FLOW_OFFLOAD_XMIT_DIRECT;
@@ -96,6 +105,18 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
 	route->tuple[dir].out.ifindex = info.iifindex;
 	route->tuple[dir].out.xmit_type = info.xmit_type;
 
+	for (i = 0; i < info.num_vlans; i++) {
+		route->tuple[!dir].in.vid[i] = info.vid[i];
+		route->tuple[!dir].in.vproto[i] = info.vproto[i];
+	}
+	route->tuple[!dir].in.num_vlans = info.num_vlans;
+
+	for (i = 0; i < info.num_vlans; i++) {
+		route->tuple[dir].out.vid[i] = info.vid[i];
+		route->tuple[dir].out.vproto[i] = info.vproto[i];
+	}
+	route->tuple[dir].out.num_vlans = info.num_vlans;
+
 	return 0;
 }
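
For reference, a rough sketch of how a helper like nf_flow_vlan_push()
could replay the tags recorded above on the transmit side. The field
names (out.num_vlans, out.vproto, out.vid) and the tag ordering are
illustrative only, they are not taken from the patch itself:

	/* Sketch only: field layout and ordering are assumptions,
	 * not the patch's actual definitions.
	 */
	static int nf_flow_vlan_push(struct sk_buff *skb,
				     const struct flow_offload_tuple *tuple)
	{
		int i;

		for (i = 0; i < tuple->out.num_vlans; i++) {
			/* skb_vlan_push() moves any pending hw-accelerated
			 * tag into the payload and records the new one, so
			 * the last tag pushed ends up outermost.
			 */
			if (skb_vlan_push(skb, tuple->out.vproto[i],
					  tuple->out.vid[i]) < 0)
				return -1;
		}

		return 0;
	}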
 
-- 
2.20.1



* Re: [PATCH net-next,v2 0/9] netfilter: flowtable bridge and vlan enhancements
  2020-10-15 16:30 [PATCH net-next,v2 0/9] netfilter: flowtable bridge and vlan enhancements Pablo Neira Ayuso
                   ` (8 preceding siblings ...)
  2020-10-15 16:30 ` [PATCH net-next,v2 9/9] netfilter: flowtable: add vlan support Pablo Neira Ayuso
@ 2020-10-15 19:47 ` Jakub Kicinski
  2020-10-15 23:04   ` Pablo Neira Ayuso
  9 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2020-10-15 19:47 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter-devel, davem, netdev

On Thu, 15 Oct 2020 18:30:29 +0200 Pablo Neira Ayuso wrote:
> The following patchset adds infrastructure to augment the Netfilter
> flowtable fastpath [1] to support for local network topologies that
> combine IP forwarding, bridge and vlan devices.
> 
> A typical scenario that can benefit from this infrastructure is composed
> of several VMs connected to bridge ports where the bridge master device
> 'br0' has an IP address. A DHCP server is also assumed to be running to
> provide connectivity to the VMs. The VMs reach the Internet through
> 'br0' as default gateway, which makes the packet enter the IP forwarding
> path. Then, netfilter is used to NAT the packets before they leave to
> through the wan device.

Hi Pablo, I should have looked at this closer yesterday, but I think it
warrants a little more review than we can afford right now. 

Let's take it after the merge window, sorry!


* Re: [PATCH net-next,v2 0/9] netfilter: flowtable bridge and vlan enhancements
  2020-10-15 19:47 ` [PATCH net-next,v2 0/9] netfilter: flowtable bridge and vlan enhancements Jakub Kicinski
@ 2020-10-15 23:04   ` Pablo Neira Ayuso
  0 siblings, 0 replies; 15+ messages in thread
From: Pablo Neira Ayuso @ 2020-10-15 23:04 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: netfilter-devel, davem, netdev

On Thu, Oct 15, 2020 at 12:47:48PM -0700, Jakub Kicinski wrote:
> On Thu, 15 Oct 2020 18:30:29 +0200 Pablo Neira Ayuso wrote:
> > The following patchset adds infrastructure to augment the Netfilter
> > flowtable fastpath [1] to support for local network topologies that
> > combine IP forwarding, bridge and vlan devices.
> > 
> > A typical scenario that can benefit from this infrastructure is composed
> > of several VMs connected to bridge ports where the bridge master device
> > 'br0' has an IP address. A DHCP server is also assumed to be running to
> > provide connectivity to the VMs. The VMs reach the Internet through
> > 'br0' as default gateway, which makes the packet enter the IP forwarding
> > path. Then, netfilter is used to NAT the packets before they leave to
> > through the wan device.
> 
> Hi Pablo, I should have looked at this closer yesterday, but I think it
> warrants a little more review than we can afford right now. 
> 
> Let's take it after the merge window, sorry!

I understand, I admit the patchset was submitted a bit late.

I have to say that I'm disappointed. I cannot shake the feeling
that there is always a reason to push back on Netfilter stuff.

It's probably not fair to mention this in this case.

It's just a personal perception, so I might be really wrong about it.


* Re: [PATCH net-next,v2 6/9] netfilter: flowtable: use dev_fill_forward_path() to obtain egress device
  2020-10-15 16:30 ` [PATCH net-next,v2 6/9] netfilter: flowtable: use dev_fill_forward_path() to obtain egress device Pablo Neira Ayuso
@ 2020-10-19  9:32   ` Jeremy Sowden
  0 siblings, 0 replies; 15+ messages in thread
From: Jeremy Sowden @ 2020-10-19  9:32 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter-devel, davem, netdev, kuba


On 2020-10-15, at 18:30:35 +0200, Pablo Neira Ayuso wrote:
> The egress device in the tuple is obtained from route. Use
> dev_fill_forward_path() instead to provide the real ingress device for
                                                      ^^^^^^^

Should this be "egress"?

> this flow whenever this is available.
>
> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> ---
> v2: no changes.
>
>  include/net/netfilter/nf_flow_table.h |  4 ++++
>  net/netfilter/nf_flow_table_core.c    |  1 +
>  net/netfilter/nf_flow_table_ip.c      | 25 +++++++++++++++++++++++--
>  net/netfilter/nft_flow_offload.c      |  1 +
>  4 files changed, 29 insertions(+), 2 deletions(-)
>
> diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
> index ecb84d4358cc..fe225e881cc7 100644
> --- a/include/net/netfilter/nf_flow_table.h
> +++ b/include/net/netfilter/nf_flow_table.h
> @@ -117,6 +117,7 @@ struct flow_offload_tuple {
>  	u8				dir;
>  	enum flow_offload_xmit_type	xmit_type:8;
>  	u16				mtu;
> +	u32				oifidx;
>
>  	struct dst_entry		*dst_cache;
>  };
> @@ -164,6 +165,9 @@ struct nf_flow_route {
>  		struct {
>  			u32		ifindex;
>  		} in;
> +		struct {
> +			u32		ifindex;
> +		} out;
>  		struct dst_entry	*dst;
>  	} tuple[FLOW_OFFLOAD_DIR_MAX];
>  };
> diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
> index 66abc7f287a3..99f01f08c550 100644
> --- a/net/netfilter/nf_flow_table_core.c
> +++ b/net/netfilter/nf_flow_table_core.c
> @@ -94,6 +94,7 @@ static int flow_offload_fill_route(struct flow_offload *flow,
>  	}
>
>  	flow_tuple->iifidx = route->tuple[dir].in.ifindex;
> +	flow_tuple->oifidx = route->tuple[dir].out.ifindex;
>
>  	if (dst_xfrm(dst))
>  		flow_tuple->xmit_type = FLOW_OFFLOAD_XMIT_XFRM;
> diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
> index e215c79e6777..92f444db8d9f 100644
> --- a/net/netfilter/nf_flow_table_ip.c
> +++ b/net/netfilter/nf_flow_table_ip.c
> @@ -228,6 +228,15 @@ static int nf_flow_offload_dst_check(struct dst_entry *dst)
>  	return 0;
>  }
>
> +static struct net_device *nf_flow_outdev_lookup(struct net *net, int oifidx,
> +						struct net_device *dev)
> +{
> +	if (oifidx == dev->ifindex)
> +		return dev;
> +
> +	return dev_get_by_index_rcu(net, oifidx);
> +}
> +
>  static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
>  				      const struct nf_hook_state *state,
>  				      struct dst_entry *dst)
> @@ -267,7 +276,6 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
>  	dir = tuplehash->tuple.dir;
>  	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
>  	rt = (struct rtable *)flow->tuplehash[dir].tuple.dst_cache;
> -	outdev = rt->dst.dev;
>
>  	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
>  		return NF_ACCEPT;
> @@ -286,6 +294,13 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
>  		return NF_ACCEPT;
>  	}
>
> +	outdev = nf_flow_outdev_lookup(state->net, tuplehash->tuple.oifidx,
> +				       rt->dst.dev);
> +	if (!outdev) {
> +		flow_offload_teardown(flow);
> +		return NF_ACCEPT;
> +	}
> +
>  	if (nf_flow_nat_ip(flow, skb, thoff, dir) < 0)
>  		return NF_DROP;
>
> @@ -517,7 +532,6 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
>  	dir = tuplehash->tuple.dir;
>  	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
>  	rt = (struct rt6_info *)flow->tuplehash[dir].tuple.dst_cache;
> -	outdev = rt->dst.dev;
>
>  	if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
>  		return NF_ACCEPT;
> @@ -533,6 +547,13 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
>  		return NF_ACCEPT;
>  	}
>
> +	outdev = nf_flow_outdev_lookup(state->net, tuplehash->tuple.oifidx,
> +				       rt->dst.dev);
> +	if (!outdev) {
> +		flow_offload_teardown(flow);
> +		return NF_ACCEPT;
> +	}
> +
>  	if (skb_try_make_writable(skb, sizeof(*ip6h)))
>  		return NF_DROP;
>
> diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
> index 4b476b0a3c88..6a6633e2ceeb 100644
> --- a/net/netfilter/nft_flow_offload.c
> +++ b/net/netfilter/nft_flow_offload.c
> @@ -84,6 +84,7 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
>  	}
>
>  	route->tuple[!dir].in.ifindex = info.iifindex;
> +	route->tuple[dir].out.ifindex = info.iifindex;
>
>  	return 0;
>  }
> --
> 2.20.1

J.



* Re: [PATCH net-next,v2 8/9] netfilter: flowtable: bridge port support
  2020-10-15 16:30 ` [PATCH net-next,v2 8/9] netfilter: flowtable: bridge port support Pablo Neira Ayuso
@ 2020-10-19  9:32   ` Jeremy Sowden
  0 siblings, 0 replies; 15+ messages in thread
From: Jeremy Sowden @ 2020-10-19  9:32 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter-devel, davem, netdev, kuba


On 2020-10-15, at 18:30:37 +0200, Pablo Neira Ayuso wrote:
> Update hardware destination address to the master bridge device to
> emulate the forwarding behaviour.
>
> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> ---
> v2: no changes.
>
>  include/net/netfilter/nf_flow_table.h | 1 +
>  net/netfilter/nf_flow_table_core.c    | 4 ++++
>  net/netfilter/nft_flow_offload.c      | 6 +++++-
>  3 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
> index 1b57d1d1270d..4ec3f9bb2f32 100644
> --- a/include/net/netfilter/nf_flow_table.h
> +++ b/include/net/netfilter/nf_flow_table.h
> @@ -172,6 +172,7 @@ struct nf_flow_route {
>  			u32		ifindex;
>  			u8		h_source[ETH_ALEN];
>  			u8		h_dest[ETH_ALEN];
> +			enum flow_offload_xmit_type xmit_type;
>  		} out;
>  		struct dst_entry	*dst;
>  	} tuple[FLOW_OFFLOAD_DIR_MAX];
> diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
> index 725366339b85..e80dcabe3668 100644
> --- a/net/netfilter/nf_flow_table_core.c
> +++ b/net/netfilter/nf_flow_table_core.c
> @@ -100,6 +100,10 @@ static int flow_offload_fill_route(struct flow_offload *flow,
>
>  	if (dst_xfrm(dst))
>  		flow_tuple->xmit_type = FLOW_OFFLOAD_XMIT_XFRM;
> +	else
> +		flow_tuple->xmit_type = route->tuple[dir].out.xmit_type;
> +
> +	flow_tuple->dst_cache = dst;
>
>  	flow_tuple->dst_cache = dst;

Accidentally duplicated assignment?

> diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
> index d440e436cb16..9efb5d584290 100644
> --- a/net/netfilter/nft_flow_offload.c
> +++ b/net/netfilter/nft_flow_offload.c
> @@ -54,6 +54,7 @@ struct nft_forward_info {
>  	u32 iifindex;
>  	u8 h_source[ETH_ALEN];
>  	u8 h_dest[ETH_ALEN];
> +	enum flow_offload_xmit_type xmit_type;
>  };
>
>  static int nft_dev_forward_path(struct nf_flow_route *route,
> @@ -83,7 +84,9 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
>  		case DEV_PATH_VLAN:
>  			return -1;
>  		case DEV_PATH_BRIDGE:
> -			return -1;
> +			memcpy(info.h_dest, path->dev->dev_addr, ETH_ALEN);
> +			info.xmit_type = FLOW_OFFLOAD_XMIT_DIRECT;
> +			break;
>  		}
>  	}
>
> @@ -91,6 +94,7 @@ static int nft_dev_forward_path(struct nf_flow_route *route,
>  	memcpy(route->tuple[dir].out.h_dest, info.h_source, ETH_ALEN);
>  	memcpy(route->tuple[dir].out.h_source, info.h_dest, ETH_ALEN);
>  	route->tuple[dir].out.ifindex = info.iifindex;
> +	route->tuple[dir].out.xmit_type = info.xmit_type;
>
>  	return 0;
>  }
> --
> 2.20.1

J.



* Re: [PATCH net-next,v2 4/9] bridge: resolve forwarding path for bridge devices
  2020-10-15 16:30 ` [PATCH net-next,v2 4/9] bridge: resolve forwarding path for bridge devices Pablo Neira Ayuso
@ 2020-10-22 10:24   ` Nikolay Aleksandrov
  0 siblings, 0 replies; 15+ messages in thread
From: Nikolay Aleksandrov @ 2020-10-22 10:24 UTC (permalink / raw)
  To: Pablo Neira Ayuso, netfilter-devel; +Cc: davem, netdev, kuba

On Thu, 2020-10-15 at 18:30 +0200, Pablo Neira Ayuso wrote:
> Add .ndo_fill_forward_path for bridge devices.
> 
> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> ---
> v2: no changes
> 
>  include/linux/netdevice.h |  1 +
>  net/bridge/br_device.c    | 22 ++++++++++++++++++++++
>  2 files changed, 23 insertions(+)
> 
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index d4263ed5dd79..4cabdbc672d3 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -836,6 +836,7 @@ typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
>  enum net_device_path_type {
>  	DEV_PATH_ETHERNET = 0,
>  	DEV_PATH_VLAN,
> +	DEV_PATH_BRIDGE,
>  };
>  
>  struct net_device_path {
> diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
> index 6f742fee874a..06046a35868d 100644
> --- a/net/bridge/br_device.c
> +++ b/net/bridge/br_device.c
> @@ -391,6 +391,27 @@ static int br_del_slave(struct net_device *dev, struct net_device *slave_dev)
>  	return br_del_if(br, slave_dev);
>  }
>  
> +static int br_fill_forward_path(struct net_device_path_ctx *ctx,
> +				struct net_device_path *path)
> +{
> +	struct net_bridge_fdb_entry *f;
> +	struct net_bridge *br;
> +
> +	if (netif_is_bridge_port(ctx->dev))
> +		return -1;
> +
> +	br = netdev_priv(ctx->dev);
> +	f = br_fdb_find_rcu(br, ctx->daddr, 0);
> +	if (!f || !f->dst)
> +		return -1;
> +
> +	path->type = DEV_PATH_BRIDGE;
> +	path->dev = f->dst->br->dev;
> +	ctx->dev = f->dst->dev;

Please use READ_ONCE() for f->dst since it can become NULL if the entry
is changed to point to the bridge device itself after the check above. I've had
a patch in my queue that changes the bridge to use WRITE_ONCE() to annotate it
as a lockless read.
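
Something along these lines should do (sketch only, assuming the
WRITE_ONCE() annotations on the bridge side land first):

	f = br_fdb_find_rcu(br, ctx->daddr, 0);
	if (!f)
		return -1;

	/* f->dst may be rewritten concurrently; fetch it once. */
	dst = READ_ONCE(f->dst);
	if (!dst)
		return -1;

	path->type = DEV_PATH_BRIDGE;
	path->dev = dst->br->dev;
	ctx->dev = dst->dev;

with dst declared as a struct net_bridge_port pointer.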

Thanks,
 Nik

> +
> +	return 0;
> +}
> +
>  static const struct ethtool_ops br_ethtool_ops = {
>  	.get_drvinfo		 = br_getinfo,
>  	.get_link		 = ethtool_op_get_link,
> @@ -425,6 +446,7 @@ static const struct net_device_ops br_netdev_ops = {
>  	.ndo_bridge_setlink	 = br_setlink,
>  	.ndo_bridge_dellink	 = br_dellink,
>  	.ndo_features_check	 = passthru_features_check,
> +	.ndo_fill_forward_path	 = br_fill_forward_path,
>  };
>  
>  static struct device_type br_type = {




