* [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing
@ 2018-08-10 12:02 Jesper Dangaard Brouer
  2018-08-10 12:02 ` [bpf-next V2 PATCH 1/2] samples/bpf: add Paul Hsieh's (LGPL 2.1) hash function SuperFastHash Jesper Dangaard Brouer
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Jesper Dangaard Brouer @ 2018-08-10 12:02 UTC (permalink / raw)
  To: netdev, Jesper Dangaard Brouer
  Cc: victor, eric, Daniel Borkmann, Alexei Starovoitov, jhsiao

Background: cpumap moves the SKB allocation out of the driver code,
and instead allocates it on the remote CPU, then invokes the regular
kernel network stack with the newly allocated SKB.

The idea behind the XDP CPU redirect feature is to use XDP as a
load-balancer step in front of the regular kernel network stack.  But
the current sample code does not provide a good example of this.
Part of the reason is that I have implemented this as part of the
Suricata XDP load-balancer.

Given this is the most frequent feature request I get, this patchset
implements the same XDP load-balancing as Suricata does: a symmetric
hash based on the IP-pairs + L4-protocol.

The expected setup for the use-case is to reduce the number of NIC RX
queues via ethtool (as XDP can handle more per core), and via
smp_affinity assign these RX queues to a set of CPUs, which will be
handling RX packets.  The CPUs that run the regular network stack are
supplied to the sample xdp_redirect_cpu tool by specifying
the --cpu option multiple times on the cmdline.
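
A concrete sketch of such a setup (the interface name, queue count,
IRQ numbers and the --dev option are illustrative assumptions; only
the --cpu option is described above):

  # Reduce eth0 to 2 RX queues; XDP runs on the CPUs handling these
  ethtool -L eth0 combined 2
  # Pin the two RX queue IRQs to CPU 0 and CPU 1 (hex bitmask;
  # the IRQ numbers are system specific)
  echo 1 > /proc/irq/120/smp_affinity
  echo 2 > /proc/irq/121/smp_affinity
  # Let CPUs 2-5 run the regular network stack on redirected packets
  ./xdp_redirect_cpu --dev eth0 --cpu 2 --cpu 3 --cpu 4 --cpu 5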

I do note that cpumap SKB creation is not feature complete yet, and
more work is coming.  E.g. given that GRO is not implemented yet,
expect TCP workloads to be slower.  My measurements do indicate UDP
workloads are faster.

---

Jesper Dangaard Brouer (2):
      samples/bpf: add Paul Hsieh's (LGPL 2.1) hash function SuperFastHash
      samples/bpf: xdp_redirect_cpu load balance like Suricata


 samples/bpf/hash_func01.h           |   55 +++++++++++++++++++
 samples/bpf/xdp_redirect_cpu_kern.c |  103 +++++++++++++++++++++++++++++++++++
 samples/bpf/xdp_redirect_cpu_user.c |    4 +
 3 files changed, 160 insertions(+), 2 deletions(-)
 create mode 100644 samples/bpf/hash_func01.h


* [bpf-next V2 PATCH 1/2] samples/bpf: add Paul Hsieh's (LGPL 2.1) hash function SuperFastHash
  2018-08-10 12:02 [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing Jesper Dangaard Brouer
@ 2018-08-10 12:02 ` Jesper Dangaard Brouer
  2018-08-10 12:03 ` [bpf-next V2 PATCH 2/2] samples/bpf: xdp_redirect_cpu load balance like Suricata Jesper Dangaard Brouer
  2018-08-10 14:10 ` [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing Daniel Borkmann
  2 siblings, 0 replies; 4+ messages in thread
From: Jesper Dangaard Brouer @ 2018-08-10 12:02 UTC (permalink / raw)
  To: netdev, Jesper Dangaard Brouer
  Cc: victor, eric, Daniel Borkmann, Alexei Starovoitov, jhsiao

Adjusted the function call API to take an initval. This allows the
API user to set the initial value, as a seed. It could also be used
for feeding in the previous hash, to chain hashes.
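
A minimal usage sketch (buf1/buf2 and the lengths are hypothetical;
the function itself is added below):

	__u32 seed = 42;                            /* arbitrary seed  */
	__u32 h1 = SuperFastHash(buf1, len1, seed); /* seeded hash     */
	__u32 h2 = SuperFastHash(buf2, len2, h1);   /* chain prev hash */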

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 samples/bpf/hash_func01.h |   55 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)
 create mode 100644 samples/bpf/hash_func01.h

diff --git a/samples/bpf/hash_func01.h b/samples/bpf/hash_func01.h
new file mode 100644
index 000000000000..38255812e376
--- /dev/null
+++ b/samples/bpf/hash_func01.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: LGPL-2.1
+ *
+ * Based on Paul Hsieh's (LGPL 2.1) hash function
+ * From: http://www.azillionmonkeys.com/qed/hash.html
+ */
+
+#define get16bits(d) (*((const __u16 *) (d)))
+
+static __always_inline
+__u32 SuperFastHash (const char *data, int len, __u32 initval) {
+	__u32 hash = initval;
+	__u32 tmp;
+	int rem;
+
+	if (len <= 0 || data == NULL) return 0;
+
+	rem = len & 3;
+	len >>= 2;
+
+	/* Main loop */
+#pragma clang loop unroll(full)
+	for (;len > 0; len--) {
+		hash  += get16bits (data);
+		tmp    = (get16bits (data+2) << 11) ^ hash;
+		hash   = (hash << 16) ^ tmp;
+		data  += 2*sizeof (__u16);
+		hash  += hash >> 11;
+	}
+
+	/* Handle end cases */
+	switch (rem) {
+        case 3: hash += get16bits (data);
+                hash ^= hash << 16;
+                hash ^= ((signed char)data[sizeof (__u16)]) << 18;
+                hash += hash >> 11;
+                break;
+        case 2: hash += get16bits (data);
+                hash ^= hash << 11;
+                hash += hash >> 17;
+                break;
+        case 1: hash += (signed char)*data;
+                hash ^= hash << 10;
+                hash += hash >> 1;
+	}
+
+	/* Force "avalanching" of final 127 bits */
+	hash ^= hash << 3;
+	hash += hash >> 5;
+	hash ^= hash << 4;
+	hash += hash >> 17;
+	hash ^= hash << 25;
+	hash += hash >> 6;
+
+	return hash;
+}


* [bpf-next V2 PATCH 2/2] samples/bpf: xdp_redirect_cpu load balance like Suricata
  2018-08-10 12:02 [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing Jesper Dangaard Brouer
  2018-08-10 12:02 ` [bpf-next V2 PATCH 1/2] samples/bpf: add Paul Hsieh's (LGPL 2.1) hash function SuperFastHash Jesper Dangaard Brouer
@ 2018-08-10 12:03 ` Jesper Dangaard Brouer
  2018-08-10 14:10 ` [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing Daniel Borkmann
  2 siblings, 0 replies; 4+ messages in thread
From: Jesper Dangaard Brouer @ 2018-08-10 12:03 UTC (permalink / raw)
  To: netdev, Jesper Dangaard Brouer
  Cc: victor, eric, Daniel Borkmann, Alexei Starovoitov, jhsiao

From: Jesper Dangaard Brouer <brouer@redhat.com>

This implements XDP CPU redirection load-balancing across available
CPUs, based on hashing the IP-pairs + L4-protocol.  This is
equivalent to the xdp-cpu-redirect feature in Suricata, which is
inspired by the Suricata 'ippair' hashing code.

An important property is that the hashing is flow symmetric, meaning
that if the source and destination get swapped, then the selected CPU
will remain the same.  This helps locality by placing both directions
of a flow on the same CPU, in a forwarding/routing scenario.
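
As a quick sketch of why the scheme is symmetric (variable names are
illustrative; the actual code is in the diff below), the addresses
are folded with a commutative add before hashing:

	__u32 key_fwd = iph->saddr + iph->daddr; /* forward direction */
	__u32 key_rev = iph->daddr + iph->saddr; /* reverse direction */
	/* u32 addition commutes (mod 2^32), so key_fwd == key_rev and
	 * SuperFastHash((char *)&key_fwd, 4, INITVAL + proto) selects
	 * the same CPU for both directions of the flow.
	 */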

The hashing INITVAL (15485863, the 10^6th prime number) was chosen
fairly arbitrarily, but experiments with the kernel tree pktgen
scripts (pktgen_sample04_many_flows.sh and
pktgen_sample05_flow_per_thread.sh) showed this improved the
distribution.

This patch also changes the default loaded XDP program to be this
load-balancer.  Based on feedback from different users, this seems to
be the expected behavior of the xdp_redirect_cpu sample.

Link: https://github.com/OISF/suricata/commit/796ec08dd7a63
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 samples/bpf/xdp_redirect_cpu_kern.c |  103 +++++++++++++++++++++++++++++++++++
 samples/bpf/xdp_redirect_cpu_user.c |    4 +
 2 files changed, 105 insertions(+), 2 deletions(-)

diff --git a/samples/bpf/xdp_redirect_cpu_kern.c b/samples/bpf/xdp_redirect_cpu_kern.c
index 8cb703671b04..081ef4bb4fe3 100644
--- a/samples/bpf/xdp_redirect_cpu_kern.c
+++ b/samples/bpf/xdp_redirect_cpu_kern.c
@@ -13,6 +13,7 @@
 
 #include <uapi/linux/bpf.h>
 #include "bpf_helpers.h"
+#include "hash_func01.h"
 
 #define MAX_CPUS 12 /* WARNING - sync with _user.c */
 
@@ -461,6 +462,108 @@ int  xdp_prognum4_ddos_filter_pktgen(struct xdp_md *ctx)
 	return bpf_redirect_map(&cpu_map, cpu_dest, 0);
 }
 
+/* Hashing initval */
+#define INITVAL 15485863
+
+static __always_inline
+u32 get_ipv4_hash_ip_pair(struct xdp_md *ctx, u64 nh_off)
+{
+	void *data_end = (void *)(long)ctx->data_end;
+	void *data     = (void *)(long)ctx->data;
+	struct iphdr *iph = data + nh_off;
+	u32 cpu_hash;
+
+	if (iph + 1 > data_end)
+		return 0;
+
+	cpu_hash = iph->saddr + iph->daddr;
+	cpu_hash = SuperFastHash((char *)&cpu_hash, 4, INITVAL + iph->protocol);
+
+	return cpu_hash;
+}
+
+static __always_inline
+u32 get_ipv6_hash_ip_pair(struct xdp_md *ctx, u64 nh_off)
+{
+	void *data_end = (void *)(long)ctx->data_end;
+	void *data     = (void *)(long)ctx->data;
+	struct ipv6hdr *ip6h = data + nh_off;
+	u32 cpu_hash;
+
+	if (ip6h + 1 > data_end)
+		return 0;
+
+	cpu_hash  = ip6h->saddr.s6_addr32[0] + ip6h->daddr.s6_addr32[0];
+	cpu_hash += ip6h->saddr.s6_addr32[1] + ip6h->daddr.s6_addr32[1];
+	cpu_hash += ip6h->saddr.s6_addr32[2] + ip6h->daddr.s6_addr32[2];
+	cpu_hash += ip6h->saddr.s6_addr32[3] + ip6h->daddr.s6_addr32[3];
+	cpu_hash = SuperFastHash((char *)&cpu_hash, 4, INITVAL + ip6h->nexthdr);
+
+	return cpu_hash;
+}
+
+/* Load-Balance traffic based on hashing IP-addrs + L4-proto.  The
+ * hashing scheme is symmetric, meaning swapping IP src/dest still hit
+ * same CPU.
+ */
+SEC("xdp_cpu_map5_lb_hash_ip_pairs")
+int  xdp_prognum5_lb_hash_ip_pairs(struct xdp_md *ctx)
+{
+	void *data_end = (void *)(long)ctx->data_end;
+	void *data     = (void *)(long)ctx->data;
+	struct ethhdr *eth = data;
+	u8 ip_proto = IPPROTO_UDP;
+	struct datarec *rec;
+	u16 eth_proto = 0;
+	u64 l3_offset = 0;
+	u32 cpu_dest = 0;
+	u32 cpu_idx = 0;
+	u32 *cpu_lookup;
+	u32 *cpu_max;
+	u32 cpu_hash;
+	u32 key = 0;
+
+	/* Count RX packet in map */
+	rec = bpf_map_lookup_elem(&rx_cnt, &key);
+	if (!rec)
+		return XDP_ABORTED;
+	rec->processed++;
+
+	cpu_max = bpf_map_lookup_elem(&cpus_count, &key);
+	if (!cpu_max)
+		return XDP_ABORTED;
+
+	if (!(parse_eth(eth, data_end, &eth_proto, &l3_offset)))
+		return XDP_PASS; /* Just skip */
+
+	/* Hash for IPv4 and IPv6 */
+	switch (eth_proto) {
+	case ETH_P_IP:
+		cpu_hash = get_ipv4_hash_ip_pair(ctx, l3_offset);
+		break;
+	case ETH_P_IPV6:
+		cpu_hash = get_ipv6_hash_ip_pair(ctx, l3_offset);
+		break;
+	case ETH_P_ARP: /* ARP packet handled on CPU idx 0 */
+	default:
+		cpu_hash = 0;
+	}
+
+	/* Choose CPU based on hash */
+	cpu_idx = cpu_hash % *cpu_max;
+
+	cpu_lookup = bpf_map_lookup_elem(&cpus_available, &cpu_idx);
+	if (!cpu_lookup)
+		return XDP_ABORTED;
+	cpu_dest = *cpu_lookup;
+
+	if (cpu_dest >= MAX_CPUS) {
+		rec->issue++;
+		return XDP_ABORTED;
+	}
+
+	return bpf_redirect_map(&cpu_map, cpu_dest, 0);
+}
 
 char _license[] SEC("license") = "GPL";
 
diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
index f6efaefd485b..007710d7c748 100644
--- a/samples/bpf/xdp_redirect_cpu_user.c
+++ b/samples/bpf/xdp_redirect_cpu_user.c
@@ -22,7 +22,7 @@ static const char *__doc__ =
 #define MAX_CPUS 12 /* WARNING - sync with _kern.c */
 
 /* How many xdp_progs are defined in _kern.c */
-#define MAX_PROG 5
+#define MAX_PROG 6
 
 /* Wanted to get rid of bpf_load.h and fake-"libbpf.h" (and instead
  * use bpf/libbpf.h), but cannot as (currently) needed for XDP
@@ -567,7 +567,7 @@ int main(int argc, char **argv)
 	int added_cpus = 0;
 	int longindex = 0;
 	int interval = 2;
-	int prog_num = 0;
+	int prog_num = 5;
 	int add_cpu = -1;
 	__u32 qsize;
 	int opt;


* Re: [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing
  2018-08-10 12:02 [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing Jesper Dangaard Brouer
  2018-08-10 12:02 ` [bpf-next V2 PATCH 1/2] samples/bpf: add Paul Hsieh's (LGPL 2.1) hash function SuperFastHash Jesper Dangaard Brouer
  2018-08-10 12:03 ` [bpf-next V2 PATCH 2/2] samples/bpf: xdp_redirect_cpu load balance like Suricata Jesper Dangaard Brouer
@ 2018-08-10 14:10 ` Daniel Borkmann
  2 siblings, 0 replies; 4+ messages in thread
From: Daniel Borkmann @ 2018-08-10 14:10 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, netdev
  Cc: victor, eric, Daniel Borkmann, Alexei Starovoitov, jhsiao

On 08/10/2018 02:02 PM, Jesper Dangaard Brouer wrote:
> Background: cpumap moves the SKB allocation out of the driver code,
> and instead allocates it on the remote CPU, then invokes the regular
> kernel network stack with the newly allocated SKB.
> 
> The idea behind the XDP CPU redirect feature is to use XDP as a
> load-balancer step in front of the regular kernel network stack.  But
> the current sample code does not provide a good example of this.
> Part of the reason is that I have implemented this as part of the
> Suricata XDP load-balancer.
> 
> Given this is the most frequent feature request I get, this patchset
> implements the same XDP load-balancing as Suricata does: a symmetric
> hash based on the IP-pairs + L4-protocol.
> 
> The expected setup for the use-case is to reduce the number of NIC RX
> queues via ethtool (as XDP can handle more per core), and via
> smp_affinity assign these RX queues to a set of CPUs, which will be
> handling RX packets.  The CPUs that run the regular network stack are
> supplied to the sample xdp_redirect_cpu tool by specifying
> the --cpu option multiple times on the cmdline.
> 
> I do note that cpumap SKB creation is not feature complete yet, and
> more work is coming.  E.g. given that GRO is not implemented yet,
> expect TCP workloads to be slower.  My measurements do indicate UDP
> workloads are faster.

Applied to bpf-next, thanks Jesper!


