* [PATCH v2 bpf-next] bpf/memalloc: Non-atomically allocate freelist during prefill
From: YiFei Zhu @ 2023-07-27 20:18 UTC
To: bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev,
Martin KaFai Lau, Andrii Nakryiko, Hou Tao
In internal testing of test_maps, we sometimes observed failures like:
test_maps: test_maps.c:173: void test_hashmap_percpu(unsigned int, void *):
Assertion `bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0' failed.
where the errno is ENOMEM. After some troubleshooting and enabling
the warnings, we saw:
[ 91.304708] percpu: allocation failed, size=8 align=8 atomic=1, atomic alloc failed, no space left
[ 91.304716] CPU: 51 PID: 24145 Comm: test_maps Kdump: loaded Tainted: G N 6.1.38-smp-DEV #7
[ 91.304719] Hardware name: Google Astoria/astoria, BIOS 0.20230627.0-0 06/27/2023
[ 91.304721] Call Trace:
[ 91.304724] <TASK>
[ 91.304730] [<ffffffffa7ef83b9>] dump_stack_lvl+0x59/0x88
[ 91.304737] [<ffffffffa7ef83f8>] dump_stack+0x10/0x18
[ 91.304738] [<ffffffffa75caa0c>] pcpu_alloc+0x6fc/0x870
[ 91.304741] [<ffffffffa75ca302>] __alloc_percpu_gfp+0x12/0x20
[ 91.304743] [<ffffffffa756785e>] alloc_bulk+0xde/0x1e0
[ 91.304746] [<ffffffffa7566c02>] bpf_mem_alloc_init+0xd2/0x2f0
[ 91.304747] [<ffffffffa7547c69>] htab_map_alloc+0x479/0x650
[ 91.304750] [<ffffffffa751d6e0>] map_create+0x140/0x2e0
[ 91.304752] [<ffffffffa751d413>] __sys_bpf+0x5a3/0x6c0
[ 91.304753] [<ffffffffa751c3ec>] __x64_sys_bpf+0x1c/0x30
[ 91.304754] [<ffffffffa7ef847a>] do_syscall_64+0x5a/0x80
[ 91.304756] [<ffffffffa800009b>] entry_SYSCALL_64_after_hwframe+0x63/0xcd
This makes sense: in atomic context, percpu allocation does not
create new chunks; new chunks are only created in non-atomic
contexts. So if all percpu chunks are full during prefill, the
next unit_alloc would fail immediately with -ENOMEM.
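To illustrate, here is a simplified sketch of the gating in
mm/percpu.c (illustrative only, not verbatim kernel code):

	/* pcpu_alloc() treats anything short of the full GFP_KERNEL
	 * mask as atomic, and an atomic allocation never populates a
	 * new chunk; it can only carve from already-populated ones.
	 */
	bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;

	if (is_atomic) {
		/* search existing populated chunks only; when they
		 * are all full, fail with -ENOMEM
		 */
	} else {
		/* may create a new chunk and populate its pages,
		 * possibly sleeping for reclaim
		 */
	}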
The prefill phase does not actually run in atomic context, so we
can allocate non-atomically with GFP_KERNEL instead of GFP_NOWAIT.
This avoids the immediate -ENOMEM.
Unfortunately, unit_alloc runs in atomic context even for map
element allocation from syscalls, because of rcu_read_lock, so we
cannot apply the same non-atomic workaround in unit_alloc.
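For example, the update path from the syscall looks roughly like
this (a simplified sketch of the usual bpf_map_update_value flow,
not verbatim kernel code):

	/* Even for a BPF_MAP_UPDATE_ELEM syscall, the element is
	 * allocated inside an RCU read-side critical section, so
	 * the allocator must not sleep here.
	 */
	rcu_read_lock();
	err = map->ops->map_update_elem(map, key, value, flags);
	/* -> htab_map_update_elem() -> bpf_mem_cache_alloc()
	 *    -> unit_alloc(), hence GFP_NOWAIT
	 */
	rcu_read_unlock();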
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
---
v1->v2:
- Rebase from bpf to bpf-next
- Dropped second patch and edited commit message to include parts
of original cover letter, and dropped Fixes tag
---
kernel/bpf/memalloc.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 14d9b1a9a4ca..9c49ae53deaf 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -201,12 +201,16 @@ static void add_obj_to_free_list(struct bpf_mem_cache *c, void *obj)
}
/* Mostly runs from irq_work except __init phase. */
-static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
+static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node, bool atomic)
{
struct mem_cgroup *memcg = NULL, *old_memcg;
+ gfp_t gfp;
void *obj;
int i;
+ gfp = __GFP_NOWARN | __GFP_ACCOUNT;
+ gfp |= atomic ? GFP_NOWAIT : GFP_KERNEL;
+
for (i = 0; i < cnt; i++) {
/*
* For every 'c' llist_del_first(&c->free_by_rcu_ttrace); is
@@ -238,7 +242,7 @@ static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
* will allocate from the current numa node which is what we
* want here.
*/
- obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT);
+ obj = __alloc(c, node, gfp);
if (!obj)
break;
add_obj_to_free_list(c, obj);
@@ -429,7 +433,7 @@ static void bpf_mem_refill(struct irq_work *work)
/* irq_work runs on this cpu and kmalloc will allocate
* from the current numa node which is what we want here.
*/
- alloc_bulk(c, c->batch, NUMA_NO_NODE);
+ alloc_bulk(c, c->batch, NUMA_NO_NODE, true);
else if (cnt > c->high_watermark)
free_bulk(c);
@@ -477,7 +481,7 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
* prog won't be doing more than 4 map_update_elem from
* irq disabled region
*/
- alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu));
+ alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
}
/* When size != 0 bpf_mem_cache for each cpu.
--
2.41.0.585.gd2178a4bd4-goog
* Re: [PATCH v2 bpf-next] bpf/memalloc: Non-atomically allocate freelist during prefill
From: Yonghong Song @ 2023-07-27 21:58 UTC
To: YiFei Zhu, bpf
Cc: Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev,
Martin KaFai Lau, Andrii Nakryiko, Hou Tao
On 7/27/23 1:18 PM, YiFei Zhu wrote:
> In internal testing of test_maps, we sometimes observed failures like:
> test_maps: test_maps.c:173: void test_hashmap_percpu(unsigned int, void *):
> Assertion `bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0' failed.
> where the errno is ENOMEM. After some troubleshooting and enabling
> the warnings, we saw:
> [ 91.304708] percpu: allocation failed, size=8 align=8 atomic=1, atomic alloc failed, no space left
> [ 91.304716] CPU: 51 PID: 24145 Comm: test_maps Kdump: loaded Tainted: G N 6.1.38-smp-DEV #7
> [ 91.304719] Hardware name: Google Astoria/astoria, BIOS 0.20230627.0-0 06/27/2023
> [ 91.304721] Call Trace:
> [ 91.304724] <TASK>
> [ 91.304730] [<ffffffffa7ef83b9>] dump_stack_lvl+0x59/0x88
> [ 91.304737] [<ffffffffa7ef83f8>] dump_stack+0x10/0x18
> [ 91.304738] [<ffffffffa75caa0c>] pcpu_alloc+0x6fc/0x870
> [ 91.304741] [<ffffffffa75ca302>] __alloc_percpu_gfp+0x12/0x20
> [ 91.304743] [<ffffffffa756785e>] alloc_bulk+0xde/0x1e0
> [ 91.304746] [<ffffffffa7566c02>] bpf_mem_alloc_init+0xd2/0x2f0
> [ 91.304747] [<ffffffffa7547c69>] htab_map_alloc+0x479/0x650
> [ 91.304750] [<ffffffffa751d6e0>] map_create+0x140/0x2e0
> [ 91.304752] [<ffffffffa751d413>] __sys_bpf+0x5a3/0x6c0
> [ 91.304753] [<ffffffffa751c3ec>] __x64_sys_bpf+0x1c/0x30
> [ 91.304754] [<ffffffffa7ef847a>] do_syscall_64+0x5a/0x80
> [ 91.304756] [<ffffffffa800009b>] entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> This makes sense: in atomic context, percpu allocation does not
> create new chunks; new chunks are only created in non-atomic
> contexts. So if all percpu chunks are full during prefill, the
> next unit_alloc would fail immediately with -ENOMEM.
>
> The prefill phase does not actually run in atomic context, so we
> can allocate non-atomically with GFP_KERNEL instead of GFP_NOWAIT.
> This avoids the immediate -ENOMEM.
>
> Unfortunately, unit_alloc runs in atomic context even for map
> element allocation from syscalls, because of rcu_read_lock, so we
> cannot apply the same non-atomic workaround in unit_alloc.
The above description is not clear to me. Do you mean that
GFP_NOWAIT has to be used in unit_alloc when a bpf program runs
in atomic context? Even if a bpf program runs in non-atomic
context, in most cases the rcu read lock is held for the program,
so GFP_NOWAIT is still needed.
The exception is a sleepable bpf program in non-atomic context,
e.g., a sleepable bpf_iter program or a sleepable fentry program
in non-atomic context, where the unit_alloc is not inside a
bpf_rcu_read_lock region. But this is too complicated and
probably not worthwhile to special-case.
>
> Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Acked-by: Yonghong Song <yhs@fb.com>
> ---
> v1->v2:
> - Rebase from bpf to bpf-next
> - Dropped second patch and edited commit message to include parts
> of original cover letter, and dropped Fixes tag
> ---
> kernel/bpf/memalloc.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
> index 14d9b1a9a4ca..9c49ae53deaf 100644
> --- a/kernel/bpf/memalloc.c
> +++ b/kernel/bpf/memalloc.c
> @@ -201,12 +201,16 @@ static void add_obj_to_free_list(struct bpf_mem_cache *c, void *obj)
> }
>
> /* Mostly runs from irq_work except __init phase. */
> -static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> +static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node, bool atomic)
> {
> struct mem_cgroup *memcg = NULL, *old_memcg;
> + gfp_t gfp;
> void *obj;
> int i;
>
> + gfp = __GFP_NOWARN | __GFP_ACCOUNT;
> + gfp |= atomic ? GFP_NOWAIT : GFP_KERNEL;
> +
> for (i = 0; i < cnt; i++) {
> /*
> * For every 'c' llist_del_first(&c->free_by_rcu_ttrace); is
> @@ -238,7 +242,7 @@ static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> * will allocate from the current numa node which is what we
> * want here.
> */
> - obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT);
> + obj = __alloc(c, node, gfp);
> if (!obj)
> break;
> add_obj_to_free_list(c, obj);
> @@ -429,7 +433,7 @@ static void bpf_mem_refill(struct irq_work *work)
> /* irq_work runs on this cpu and kmalloc will allocate
> * from the current numa node which is what we want here.
> */
> - alloc_bulk(c, c->batch, NUMA_NO_NODE);
> + alloc_bulk(c, c->batch, NUMA_NO_NODE, true);
> else if (cnt > c->high_watermark)
> free_bulk(c);
>
> @@ -477,7 +481,7 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
> * prog won't be doing more than 4 map_update_elem from
> * irq disabled region
> */
> - alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu));
> + alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
> }
>
> /* When size != 0 bpf_mem_cache for each cpu.
* Re: [PATCH v2 bpf-next] bpf/memalloc: Non-atomically allocate freelist during prefill
From: YiFei Zhu @ 2023-07-27 23:46 UTC
To: yonghong.song
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev,
Martin KaFai Lau, Andrii Nakryiko, Hou Tao
On Thu, Jul 27, 2023 at 2:59 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
>
> On 7/27/23 1:18 PM, YiFei Zhu wrote:
> > In internal testing of test_maps, we sometimes observed failures like:
> > test_maps: test_maps.c:173: void test_hashmap_percpu(unsigned int, void *):
> > Assertion `bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0' failed.
> > where the errno is ENOMEM. After some troubleshooting and enabling
> > the warnings, we saw:
> > [ 91.304708] percpu: allocation failed, size=8 align=8 atomic=1, atomic alloc failed, no space left
> > [ 91.304716] CPU: 51 PID: 24145 Comm: test_maps Kdump: loaded Tainted: G N 6.1.38-smp-DEV #7
> > [ 91.304719] Hardware name: Google Astoria/astoria, BIOS 0.20230627.0-0 06/27/2023
> > [ 91.304721] Call Trace:
> > [ 91.304724] <TASK>
> > [ 91.304730] [<ffffffffa7ef83b9>] dump_stack_lvl+0x59/0x88
> > [ 91.304737] [<ffffffffa7ef83f8>] dump_stack+0x10/0x18
> > [ 91.304738] [<ffffffffa75caa0c>] pcpu_alloc+0x6fc/0x870
> > [ 91.304741] [<ffffffffa75ca302>] __alloc_percpu_gfp+0x12/0x20
> > [ 91.304743] [<ffffffffa756785e>] alloc_bulk+0xde/0x1e0
> > [ 91.304746] [<ffffffffa7566c02>] bpf_mem_alloc_init+0xd2/0x2f0
> > [ 91.304747] [<ffffffffa7547c69>] htab_map_alloc+0x479/0x650
> > [ 91.304750] [<ffffffffa751d6e0>] map_create+0x140/0x2e0
> > [ 91.304752] [<ffffffffa751d413>] __sys_bpf+0x5a3/0x6c0
> > [ 91.304753] [<ffffffffa751c3ec>] __x64_sys_bpf+0x1c/0x30
> > [ 91.304754] [<ffffffffa7ef847a>] do_syscall_64+0x5a/0x80
> > [ 91.304756] [<ffffffffa800009b>] entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >
> > This makes sense: in atomic context, percpu allocation does not
> > create new chunks; new chunks are only created in non-atomic
> > contexts. So if all percpu chunks are full during prefill, the
> > next unit_alloc would fail immediately with -ENOMEM.
> >
> > The prefill phase does not actually run in atomic context, so we
> > can allocate non-atomically with GFP_KERNEL instead of GFP_NOWAIT.
> > This avoids the immediate -ENOMEM.
> >
> > Unfortunately, unit_alloc runs in atomic context even for map
> > element allocation from syscalls, because of rcu_read_lock, so we
> > cannot apply the same non-atomic workaround in unit_alloc.
>
> The above description is not clear to me. Do you mean that
> GFP_NOWAIT has to be used in unit_alloc when a bpf program runs
> in atomic context? Even if a bpf program runs in non-atomic
> context, in most cases the rcu read lock is held for the program,
> so GFP_NOWAIT is still needed.
I actually meant that in the BPF_MAP_UPDATE_ELEM syscall, at least
in the hashmap_percpu case, the code path is rcu read locked, so
one cannot do non-atomic allocations even from syscalls. Hmm,
shall I change it to something like this?
GFP_NOWAIT has to be used in unit_alloc when a bpf program runs
in atomic context. Even if a bpf program runs in non-atomic
context, in most cases the rcu read lock is held for the program,
so GFP_NOWAIT is still needed. This is often also the case for
BPF_MAP_UPDATE_ELEM syscalls.
> The exception is a sleepable bpf program in non-atomic context,
> e.g., a sleepable bpf_iter program or a sleepable fentry program
> in non-atomic context, where the unit_alloc is not inside a
> bpf_rcu_read_lock region. But this is too complicated and
> probably not worthwhile to special-case.
Ack.
>
> >
> > Signed-off-by: YiFei Zhu <zhuyifei@google.com>
> Acked-by: Yonghong Song <yhs@fb.com>
>
> > ---
> > v1->v2:
> > - Rebase from bpf to bpf-next
> > - Dropped second patch and edited commit message to include parts
> > of original cover letter, and dropped Fixes tag
> > ---
> > kernel/bpf/memalloc.c | 12 ++++++++----
> > 1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
> > index 14d9b1a9a4ca..9c49ae53deaf 100644
> > --- a/kernel/bpf/memalloc.c
> > +++ b/kernel/bpf/memalloc.c
> > @@ -201,12 +201,16 @@ static void add_obj_to_free_list(struct bpf_mem_cache *c, void *obj)
> > }
> >
> > /* Mostly runs from irq_work except __init phase. */
> > -static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> > +static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node, bool atomic)
> > {
> > struct mem_cgroup *memcg = NULL, *old_memcg;
> > + gfp_t gfp;
> > void *obj;
> > int i;
> >
> > + gfp = __GFP_NOWARN | __GFP_ACCOUNT;
> > + gfp |= atomic ? GFP_NOWAIT : GFP_KERNEL;
> > +
> > for (i = 0; i < cnt; i++) {
> > /*
> > * For every 'c' llist_del_first(&c->free_by_rcu_ttrace); is
> > @@ -238,7 +242,7 @@ static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> > * will allocate from the current numa node which is what we
> > * want here.
> > */
> > - obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT);
> > + obj = __alloc(c, node, gfp);
> > if (!obj)
> > break;
> > add_obj_to_free_list(c, obj);
> > @@ -429,7 +433,7 @@ static void bpf_mem_refill(struct irq_work *work)
> > /* irq_work runs on this cpu and kmalloc will allocate
> > * from the current numa node which is what we want here.
> > */
> > - alloc_bulk(c, c->batch, NUMA_NO_NODE);
> > + alloc_bulk(c, c->batch, NUMA_NO_NODE, true);
> > else if (cnt > c->high_watermark)
> > free_bulk(c);
> >
> > @@ -477,7 +481,7 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
> > * prog won't be doing more than 4 map_update_elem from
> > * irq disabled region
> > */
> > - alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu));
> > + alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
> > }
> >
> > /* When size != 0 bpf_mem_cache for each cpu.
* Re: [PATCH v2 bpf-next] bpf/memalloc: Non-atomically allocate freelist during prefill
From: Yonghong Song @ 2023-07-28 4:06 UTC
To: YiFei Zhu
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev,
Martin KaFai Lau, Andrii Nakryiko, Hou Tao
On 7/27/23 4:46 PM, YiFei Zhu wrote:
> On Thu, Jul 27, 2023 at 2:59 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>
>>
>>
>> On 7/27/23 1:18 PM, YiFei Zhu wrote:
>>> In internal testing of test_maps, we sometimes observed failures like:
>>> test_maps: test_maps.c:173: void test_hashmap_percpu(unsigned int, void *):
>>> Assertion `bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0' failed.
>>> where the errno is ENOMEM. After some troubleshooting and enabling
>>> the warnings, we saw:
>>> [ 91.304708] percpu: allocation failed, size=8 align=8 atomic=1, atomic alloc failed, no space left
>>> [ 91.304716] CPU: 51 PID: 24145 Comm: test_maps Kdump: loaded Tainted: G N 6.1.38-smp-DEV #7
>>> [ 91.304719] Hardware name: Google Astoria/astoria, BIOS 0.20230627.0-0 06/27/2023
>>> [ 91.304721] Call Trace:
>>> [ 91.304724] <TASK>
>>> [ 91.304730] [<ffffffffa7ef83b9>] dump_stack_lvl+0x59/0x88
>>> [ 91.304737] [<ffffffffa7ef83f8>] dump_stack+0x10/0x18
>>> [ 91.304738] [<ffffffffa75caa0c>] pcpu_alloc+0x6fc/0x870
>>> [ 91.304741] [<ffffffffa75ca302>] __alloc_percpu_gfp+0x12/0x20
>>> [ 91.304743] [<ffffffffa756785e>] alloc_bulk+0xde/0x1e0
>>> [ 91.304746] [<ffffffffa7566c02>] bpf_mem_alloc_init+0xd2/0x2f0
>>> [ 91.304747] [<ffffffffa7547c69>] htab_map_alloc+0x479/0x650
>>> [ 91.304750] [<ffffffffa751d6e0>] map_create+0x140/0x2e0
>>> [ 91.304752] [<ffffffffa751d413>] __sys_bpf+0x5a3/0x6c0
>>> [ 91.304753] [<ffffffffa751c3ec>] __x64_sys_bpf+0x1c/0x30
>>> [ 91.304754] [<ffffffffa7ef847a>] do_syscall_64+0x5a/0x80
>>> [ 91.304756] [<ffffffffa800009b>] entry_SYSCALL_64_after_hwframe+0x63/0xcd
>>>
>>> This makes sense: in atomic context, percpu allocation does not
>>> create new chunks; new chunks are only created in non-atomic
>>> contexts. So if all percpu chunks are full during prefill, the
>>> next unit_alloc would fail immediately with -ENOMEM.
>>>
>>> The prefill phase does not actually run in atomic context, so we
>>> can allocate non-atomically with GFP_KERNEL instead of GFP_NOWAIT.
>>> This avoids the immediate -ENOMEM.
>>>
>>> Unfortunately, unit_alloc runs in atomic context even for map
>>> element allocation from syscalls, because of rcu_read_lock, so we
>>> cannot apply the same non-atomic workaround in unit_alloc.
>>
>> The above description is not clear to me. Do you mean that
>> GFP_NOWAIT has to be used in unit_alloc when a bpf program runs
>> in atomic context? Even if a bpf program runs in non-atomic
>> context, in most cases the rcu read lock is held for the program,
>> so GFP_NOWAIT is still needed.
>
> I actually meant that in the BPF_MAP_UPDATE_ELEM syscall, at least
> in the hashmap_percpu case, the code path is rcu read locked, so
> one cannot do non-atomic allocations even from syscalls. Hmm,
> shall I change it
Indeed, some syscall-triggered operations also have the rcu read
lock held for map operations.
> to something like this?
>
> GFP_NOWAIT has to be used in unit_alloc when a bpf program runs
> in atomic context. Even if a bpf program runs in non-atomic
> context, in most cases the rcu read lock is held for the program,
> so GFP_NOWAIT is still needed. This is often also the case for
> BPF_MAP_UPDATE_ELEM syscalls.
LGTM. Thanks.
>
>> The exception is a sleepable bpf program in non-atomic context,
>> e.g., a sleepable bpf_iter program or a sleepable fentry program
>> in non-atomic context, where the unit_alloc is not inside a
>> bpf_rcu_read_lock region. But this is too complicated and
>> probably not worthwhile to special-case.
>
> Ack.
>
>>
>>>
>>> Signed-off-by: YiFei Zhu <zhuyifei@google.com>
>> Acked-by: Yonghong Song <yhs@fb.com>
>>
[...]