* [PATCH bpf-next V2] bpf/lpm_trie: inline longest_prefix_match for fastpath
@ 2024-03-18 13:25 Jesper Dangaard Brouer
2024-03-18 16:07 ` Yonghong Song
2024-03-19 12:50 ` patchwork-bot+netdevbpf
0 siblings, 2 replies; 3+ messages in thread
From: Jesper Dangaard Brouer @ 2024-03-18 13:25 UTC (permalink / raw)
To: bpf, Daniel Borkmann
Cc: Jesper Dangaard Brouer, Alexei Starovoitov, martin.lau, netdev,
bp, kernel-team
The BPF map type LPM (Longest Prefix Match) is used heavily
in production by multiple products that have BPF components.
Perf data shows trie_lookup_elem() and longest_prefix_match()
being part of the kernel's perf top.
For every level in the LPM trie, trie_lookup_elem() calls out
to longest_prefix_match(). The compiler is free to inline this
call, but chooses not to, because other slowpath callers
(which can be invoked via syscall) exist, such as trie_update_elem(),
trie_delete_elem() and trie_get_next_key().
bcc/tools/funccount -Ti 1 'trie_lookup_elem|longest_prefix_match.isra.0'
FUNC COUNT
trie_lookup_elem 664945
longest_prefix_match.isra.0 8101507
Observations on a single, randomly picked machine show a factor
of 12 between the two call counts, i.e. an average of 12 levels
in the trie being searched per lookup.
This patch forces inlining of longest_prefix_match(), but only for
the lookup fastpath, to balance object code size.
In production with AMD CPUs, measuring the function latency of
'trie_lookup_elem' (bcc/tools/funclatency) we see a 7-8% function
latency reduction with this patch applied (to production kernels
6.6 and 6.1). Analyzing perf data, this rather large improvement
is explained by the reduced overhead of the AMD side-channel
mitigation SRSO (Speculative Return Stack Overflow).
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
---
kernel/bpf/lpm_trie.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 050fe1ebf0f7..939620b91c0e 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -155,16 +155,17 @@ static inline int extract_bit(const u8 *data, size_t index)
}
/**
- * longest_prefix_match() - determine the longest prefix
+ * __longest_prefix_match() - determine the longest prefix
* @trie: The trie to get internal sizes from
* @node: The node to operate on
* @key: The key to compare to @node
*
* Determine the longest prefix of @node that matches the bits in @key.
*/
-static size_t longest_prefix_match(const struct lpm_trie *trie,
- const struct lpm_trie_node *node,
- const struct bpf_lpm_trie_key_u8 *key)
+static __always_inline
+size_t __longest_prefix_match(const struct lpm_trie *trie,
+ const struct lpm_trie_node *node,
+ const struct bpf_lpm_trie_key_u8 *key)
{
u32 limit = min(node->prefixlen, key->prefixlen);
u32 prefixlen = 0, i = 0;
@@ -224,6 +225,13 @@ static size_t longest_prefix_match(const struct lpm_trie *trie,
return prefixlen;
}
+static size_t longest_prefix_match(const struct lpm_trie *trie,
+ const struct lpm_trie_node *node,
+ const struct bpf_lpm_trie_key_u8 *key)
+{
+ return __longest_prefix_match(trie, node, key);
+}
+
/* Called from syscall or from eBPF program */
static void *trie_lookup_elem(struct bpf_map *map, void *_key)
{
@@ -245,7 +253,7 @@ static void *trie_lookup_elem(struct bpf_map *map, void *_key)
* If it's the maximum possible prefix for this trie, we have
* an exact match and can return it directly.
*/
- matchlen = longest_prefix_match(trie, node, key);
+ matchlen = __longest_prefix_match(trie, node, key);
if (matchlen == trie->max_prefixlen) {
found = node;
break;
* Re: [PATCH bpf-next V2] bpf/lpm_trie: inline longest_prefix_match for fastpath
2024-03-18 13:25 [PATCH bpf-next V2] bpf/lpm_trie: inline longest_prefix_match for fastpath Jesper Dangaard Brouer
@ 2024-03-18 16:07 ` Yonghong Song
2024-03-19 12:50 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: Yonghong Song @ 2024-03-18 16:07 UTC (permalink / raw)
To: Jesper Dangaard Brouer, bpf, Daniel Borkmann
Cc: Alexei Starovoitov, martin.lau, netdev, bp, kernel-team
On 3/18/24 6:25 AM, Jesper Dangaard Brouer wrote:
> The BPF map type LPM (Longest Prefix Match) is used heavily
> in production by multiple products that have BPF components.
> Perf data shows trie_lookup_elem() and longest_prefix_match()
> being part of the kernel's perf top.
>
> For every level in the LPM trie, trie_lookup_elem() calls out
> to longest_prefix_match(). The compiler is free to inline this
> call, but chooses not to, because other slowpath callers
> (which can be invoked via syscall) exist, such as trie_update_elem(),
> trie_delete_elem() and trie_get_next_key().
>
> bcc/tools/funccount -Ti 1 'trie_lookup_elem|longest_prefix_match.isra.0'
> FUNC COUNT
> trie_lookup_elem 664945
> longest_prefix_match.isra.0 8101507
>
> Observations on a single, randomly picked machine show a factor
> of 12 between the two call counts, i.e. an average of 12 levels
> in the trie being searched per lookup.
>
> This patch forces inlining of longest_prefix_match(), but only for
> the lookup fastpath, to balance object code size.
>
> In production with AMD CPUs, measuring the function latency of
> 'trie_lookup_elem' (bcc/tools/funclatency) we see a 7-8% function
> latency reduction with this patch applied (to production kernels
> 6.6 and 6.1). Analyzing perf data, this rather large improvement
> is explained by the reduced overhead of the AMD side-channel
> mitigation SRSO (Speculative Return Stack Overflow).
>
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
I checked our internal PGO (Profile-Guided Optimization) kernel and
it does exactly what is described above: longest_prefix_match() is
inlined into trie_lookup_elem(), but not into the other callers.
Acked-by: Yonghong Song <yonghong.song@linux.dev>
> ---
> kernel/bpf/lpm_trie.c | 18 +++++++++++++-----
> 1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
> index 050fe1ebf0f7..939620b91c0e 100644
> --- a/kernel/bpf/lpm_trie.c
> +++ b/kernel/bpf/lpm_trie.c
> @@ -155,16 +155,17 @@ static inline int extract_bit(const u8 *data, size_t index)
> }
>
> /**
> - * longest_prefix_match() - determine the longest prefix
> + * __longest_prefix_match() - determine the longest prefix
> * @trie: The trie to get internal sizes from
> * @node: The node to operate on
> * @key: The key to compare to @node
> *
> * Determine the longest prefix of @node that matches the bits in @key.
> */
> -static size_t longest_prefix_match(const struct lpm_trie *trie,
> - const struct lpm_trie_node *node,
> - const struct bpf_lpm_trie_key_u8 *key)
> +static __always_inline
> +size_t __longest_prefix_match(const struct lpm_trie *trie,
> + const struct lpm_trie_node *node,
> + const struct bpf_lpm_trie_key_u8 *key)
> {
> u32 limit = min(node->prefixlen, key->prefixlen);
> u32 prefixlen = 0, i = 0;
> @@ -224,6 +225,13 @@ static size_t longest_prefix_match(const struct lpm_trie *trie,
> return prefixlen;
> }
>
> +static size_t longest_prefix_match(const struct lpm_trie *trie,
> + const struct lpm_trie_node *node,
> + const struct bpf_lpm_trie_key_u8 *key)
> +{
> + return __longest_prefix_match(trie, node, key);
> +}
> +
> /* Called from syscall or from eBPF program */
> static void *trie_lookup_elem(struct bpf_map *map, void *_key)
> {
> @@ -245,7 +253,7 @@ static void *trie_lookup_elem(struct bpf_map *map, void *_key)
> * If it's the maximum possible prefix for this trie, we have
> * an exact match and can return it directly.
> */
> - matchlen = longest_prefix_match(trie, node, key);
> + matchlen = __longest_prefix_match(trie, node, key);
> if (matchlen == trie->max_prefixlen) {
> found = node;
> break;
>
>
>
* Re: [PATCH bpf-next V2] bpf/lpm_trie: inline longest_prefix_match for fastpath
2024-03-18 13:25 [PATCH bpf-next V2] bpf/lpm_trie: inline longest_prefix_match for fastpath Jesper Dangaard Brouer
2024-03-18 16:07 ` Yonghong Song
@ 2024-03-19 12:50 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-03-19 12:50 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: bpf, borkmann, ast, martin.lau, netdev, bp, kernel-team
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:
On Mon, 18 Mar 2024 14:25:26 +0100 you wrote:
> The BPF map type LPM (Longest Prefix Match) is used heavily
> in production by multiple products that have BPF components.
> Perf data shows trie_lookup_elem() and longest_prefix_match()
> being part of the kernel's perf top.
>
> For every level in the LPM trie, trie_lookup_elem() calls out
> to longest_prefix_match(). The compiler is free to inline this
> call, but chooses not to, because other slowpath callers
> (which can be invoked via syscall) exist, such as trie_update_elem(),
> trie_delete_elem() and trie_get_next_key().
>
> [...]
Here is the summary with links:
- [bpf-next,V2] bpf/lpm_trie: inline longest_prefix_match for fastpath
https://git.kernel.org/bpf/bpf-next/c/1a4a0cb7985f
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html