bpf.vger.kernel.org archive mirror
* Re: [PATCH bpf] bpf: Use kmalloc_size_roundup() to adjust size_index
@ 2023-10-01 20:18 Prabhakar Mahadev Lad
  0 siblings, 0 replies; 4+ messages in thread
From: Prabhakar Mahadev Lad @ 2023-10-01 20:18 UTC (permalink / raw)
  To: houtao
  Cc: alexei.starovoitov, andrii, bpf, daniel, haoluo, houtao1,
	john.fastabend, jolsa, kpsingh, linux, martin.lau, nathan, sdf,
	song, yonghong.song, Lad, Prabhakar



Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # For RZ/Five SMARC

Cheers,
Prabhakar


* Re: [PATCH bpf] bpf: Use kmalloc_size_roundup() to adjust size_index
  2023-09-28 10:15 Hou Tao
  2023-09-29 21:30 ` Emil Renner Berthing
@ 2023-09-30 16:50 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 4+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-09-30 16:50 UTC (permalink / raw)
  To: Hou Tao
  Cc: bpf, martin.lau, alexei.starovoitov, andrii, song, haoluo,
	yonghong.song, daniel, kpsingh, sdf, jolsa, john.fastabend,
	nathan, linux, houtao1

Hello:

This patch was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Thu, 28 Sep 2023 18:15:58 +0800 you wrote:
> From: Hou Tao <houtao1@huawei.com>
> 
> Commit d52b59315bf5 ("bpf: Adjust size_index according to the value of
> KMALLOC_MIN_SIZE") uses KMALLOC_MIN_SIZE to adjust size_index, but as
> reported by Nathan, that adjustment is not enough: __kmalloc_minalign()
> also determines the minimal alignment of slab objects, as shown in
> new_kmalloc_cache(), and its value may be greater than KMALLOC_MIN_SIZE
> (e.g., 64 bytes vs 8 bytes under a riscv QEMU VM).
> 
> [...]

Here is the summary with links:
  - [bpf] bpf: Use kmalloc_size_roundup() to adjust size_index
    https://git.kernel.org/bpf/bpf/c/9077fc228f09

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH bpf] bpf: Use kmalloc_size_roundup() to adjust size_index
  2023-09-28 10:15 Hou Tao
@ 2023-09-29 21:30 ` Emil Renner Berthing
  2023-09-30 16:50 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 4+ messages in thread
From: Emil Renner Berthing @ 2023-09-29 21:30 UTC (permalink / raw)
  To: Hou Tao, bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
	Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend, Nathan Chancellor,
	Guenter Roeck, houtao1, Emil Renner Berthing

Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> Commit d52b59315bf5 ("bpf: Adjust size_index according to the value of
> KMALLOC_MIN_SIZE") uses KMALLOC_MIN_SIZE to adjust size_index, but as
> reported by Nathan, that adjustment is not enough: __kmalloc_minalign()
> also determines the minimal alignment of slab objects, as shown in
> new_kmalloc_cache(), and its value may be greater than KMALLOC_MIN_SIZE
> (e.g., 64 bytes vs 8 bytes under a riscv QEMU VM).
>
> Instead of invoking __kmalloc_minalign() in the bpf subsystem to find
> the maximal alignment, just use kmalloc_size_roundup() directly to get
> the corresponding slab object size for each allocation size. If the two
> sizes differ, adjust size_index to select a bpf_mem_cache whose
> unit_size equals the object_size of the underlying slab cache for that
> allocation size.

I applied this to 6.6-rc3: it fixes the warning on my Nezha board
(Allwinner D1) and also boots fine on my VisionFive 2 (JH7110), which
didn't show the error before. I didn't do any testing beyond that, so
for basic boot testing only:

Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>

> Fixes: 822fb26bdb55 ("bpf: Add a hint to allocated objects.")
> Reported-by: Nathan Chancellor <nathan@kernel.org>
> Closes: https://lore.kernel.org/bpf/20230914181407.GA1000274@dev-arch.thelio-3990X/
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
>  kernel/bpf/memalloc.c | 44 +++++++++++++++++++------------------------
>  1 file changed, 19 insertions(+), 25 deletions(-)
>
> diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
> index 1c22b90e754a..06fbb5168482 100644
> --- a/kernel/bpf/memalloc.c
> +++ b/kernel/bpf/memalloc.c
> @@ -958,37 +958,31 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
>  	return !ret ? NULL : ret + LLIST_NODE_SZ;
>  }
>
> -/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
>  static __init int bpf_mem_cache_adjust_size(void)
>  {
> -	unsigned int size, index;
> +	unsigned int size;
>
> -	/* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
> -	 * up-to 256-bytes.
> +	/* Adjusting the indexes in size_index() according to the object_size
> +	 * of underlying slab cache, so bpf_mem_alloc() will select a
> +	 * bpf_mem_cache with unit_size equal to the object_size of
> +	 * the underlying slab cache.
> +	 *
> +	 * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
> +	 * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
>  	 */
> -	size = KMALLOC_MIN_SIZE;
> -	if (size <= 192)
> -		index = size_index[(size - 1) / 8];
> -	else
> -		index = fls(size - 1) - 1;
> -	for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
> -		size_index[(size - 1) / 8] = index;
> +	for (size = 192; size >= 8; size -= 8) {
> +		unsigned int kmalloc_size, index;
>
> -	/* The minimal alignment is 64-bytes, so disable 96-bytes cache and
> -	 * use 128-bytes cache instead.
> -	 */
> -	if (KMALLOC_MIN_SIZE >= 64) {
> -		index = size_index[(128 - 1) / 8];
> -		for (size = 64 + 8; size <= 96; size += 8)
> -			size_index[(size - 1) / 8] = index;
> -	}
> +		kmalloc_size = kmalloc_size_roundup(size);
> +		if (kmalloc_size == size)
> +			continue;
>
> -	/* The minimal alignment is 128-bytes, so disable 192-bytes cache and
> -	 * use 256-bytes cache instead.
> -	 */
> -	if (KMALLOC_MIN_SIZE >= 128) {
> -		index = fls(256 - 1) - 1;
> -		for (size = 128 + 8; size <= 192; size += 8)
> +		if (kmalloc_size <= 192)
> +			index = size_index[(kmalloc_size - 1) / 8];
> +		else
> +			index = fls(kmalloc_size - 1) - 1;
> +		/* Only overwrite if necessary */
> +		if (size_index[(size - 1) / 8] != index)
>  			size_index[(size - 1) / 8] = index;
>  	}
>


* [PATCH bpf] bpf: Use kmalloc_size_roundup() to adjust size_index
@ 2023-09-28 10:15 Hou Tao
  2023-09-29 21:30 ` Emil Renner Berthing
  2023-09-30 16:50 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 4+ messages in thread
From: Hou Tao @ 2023-09-28 10:15 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
	Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
	Stanislav Fomichev, Jiri Olsa, John Fastabend, Nathan Chancellor,
	Guenter Roeck, houtao1

From: Hou Tao <houtao1@huawei.com>

Commit d52b59315bf5 ("bpf: Adjust size_index according to the value of
KMALLOC_MIN_SIZE") uses KMALLOC_MIN_SIZE to adjust size_index, but as
reported by Nathan, that adjustment is not enough: __kmalloc_minalign()
also determines the minimal alignment of slab objects, as shown in
new_kmalloc_cache(), and its value may be greater than KMALLOC_MIN_SIZE
(e.g., 64 bytes vs 8 bytes under a riscv QEMU VM).

Instead of invoking __kmalloc_minalign() in the bpf subsystem to find
the maximal alignment, just use kmalloc_size_roundup() directly to get
the corresponding slab object size for each allocation size. If the two
sizes differ, adjust size_index to select a bpf_mem_cache whose
unit_size equals the object_size of the underlying slab cache for that
allocation size.
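
To make the effect concrete: on a system where the minimum slab
alignment is 64 bytes, a 96-byte request is actually backed by a
128-byte slab object, so the 96-byte slot in size_index must be
redirected to the 128-byte bpf_mem_cache. The following user-space
sketch (not part of the patch) walks the same [8, 192] range that
bpf_mem_cache_adjust_size() adjusts; kmalloc_size_roundup_sim() is a
hypothetical stand-in for the kernel's kmalloc_size_roundup() and
assumes a 64-byte minimum alignment, as on the reported riscv system:

#include <stdio.h>

/* Hypothetical stand-in for kmalloc_size_roundup(): round a request up
 * to the size of the slab object that would actually back it, assuming
 * a 64-byte minimum alignment so that only the 64/128/192-byte caches
 * exist in the [8, 192] range.
 */
static unsigned int kmalloc_size_roundup_sim(unsigned int size)
{
	const unsigned int align = 64;

	return (size + align - 1) / align * align;
}

int main(void)
{
	unsigned int size;

	/* Report every request size whose backing slab object is larger
	 * than the request; those are the size_index slots that need to
	 * be redirected to a bigger bpf_mem_cache.
	 */
	for (size = 192; size >= 8; size -= 8) {
		unsigned int rounded = kmalloc_size_roundup_sim(size);

		if (rounded != size)
			printf("request %3u -> slab object %3u\n",
			       size, rounded);
	}
	return 0;
}

Under that assumption the sketch reports redirections for 8-56, 72-120
and 136-184 bytes, i.e. every size whose dedicated kmalloc cache does
not exist at 64-byte alignment.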

Fixes: 822fb26bdb55 ("bpf: Add a hint to allocated objects.")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Closes: https://lore.kernel.org/bpf/20230914181407.GA1000274@dev-arch.thelio-3990X/
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/memalloc.c | 44 +++++++++++++++++++------------------------
 1 file changed, 19 insertions(+), 25 deletions(-)

diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 1c22b90e754a..06fbb5168482 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -958,37 +958,31 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
 	return !ret ? NULL : ret + LLIST_NODE_SZ;
 }
 
-/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
 static __init int bpf_mem_cache_adjust_size(void)
 {
-	unsigned int size, index;
+	unsigned int size;
 
-	/* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
-	 * up-to 256-bytes.
+	/* Adjusting the indexes in size_index() according to the object_size
+	 * of underlying slab cache, so bpf_mem_alloc() will select a
+	 * bpf_mem_cache with unit_size equal to the object_size of
+	 * the underlying slab cache.
+	 *
+	 * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
+	 * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
 	 */
-	size = KMALLOC_MIN_SIZE;
-	if (size <= 192)
-		index = size_index[(size - 1) / 8];
-	else
-		index = fls(size - 1) - 1;
-	for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
-		size_index[(size - 1) / 8] = index;
+	for (size = 192; size >= 8; size -= 8) {
+		unsigned int kmalloc_size, index;
 
-	/* The minimal alignment is 64-bytes, so disable 96-bytes cache and
-	 * use 128-bytes cache instead.
-	 */
-	if (KMALLOC_MIN_SIZE >= 64) {
-		index = size_index[(128 - 1) / 8];
-		for (size = 64 + 8; size <= 96; size += 8)
-			size_index[(size - 1) / 8] = index;
-	}
+		kmalloc_size = kmalloc_size_roundup(size);
+		if (kmalloc_size == size)
+			continue;
 
-	/* The minimal alignment is 128-bytes, so disable 192-bytes cache and
-	 * use 256-bytes cache instead.
-	 */
-	if (KMALLOC_MIN_SIZE >= 128) {
-		index = fls(256 - 1) - 1;
-		for (size = 128 + 8; size <= 192; size += 8)
+		if (kmalloc_size <= 192)
+			index = size_index[(kmalloc_size - 1) / 8];
+		else
+			index = fls(kmalloc_size - 1) - 1;
+		/* Only overwrite if necessary */
+		if (size_index[(size - 1) / 8] != index)
 			size_index[(size - 1) / 8] = index;
 	}
 
-- 
2.29.2


