* [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
@ 2020-09-14 18:46 Yonghong Song
  2020-09-14 21:28 ` Song Liu
  2020-09-15 15:33 ` Jakub Kicinski
  0 siblings, 2 replies; 10+ messages in thread
From: Yonghong Song @ 2020-09-14 18:46 UTC (permalink / raw)
  To: bpf, netdev
  Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team, Martin KaFai Lau

Currently, we use bucket_lock when traversing bpf_sk_storage_map
elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
and bpf_sk_storage_delete() helpers which may also grab bucket lock,
we do not have a deadlock issue which exists for hashmap when
using bucket_lock ([1]).

If a bucket contains a lot of sockets, during bpf_iter traversing
a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
some undesirable delays. Using rcu_read_lock() is a reasonable
compromise here. Although it may lose some precision, e.g.,
accessing stale sockets, it will not hurt the performance of other
bpf programs.

[1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com

Cc: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
---
 net/core/bpf_sk_storage.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 4a86ea34f29e..a1db5e988d19 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -701,7 +701,7 @@ bpf_sk_storage_map_seq_find_next(struct bpf_iter_seq_sk_storage_map_info *info,
 		if (!selem) {
 			/* not found, unlock and go to the next bucket */
 			b = &smap->buckets[bucket_id++];
-			raw_spin_unlock_bh(&b->lock);
+			rcu_read_unlock();
 			skip_elems = 0;
 			break;
 		}
@@ -715,7 +715,7 @@ bpf_sk_storage_map_seq_find_next(struct bpf_iter_seq_sk_storage_map_info *info,
 
 	for (i = bucket_id; i < (1U << smap->bucket_log); i++) {
 		b = &smap->buckets[i];
-		raw_spin_lock_bh(&b->lock);
+		rcu_read_lock();
 		count = 0;
 		hlist_for_each_entry(selem, &b->list, map_node) {
 			sk_storage = rcu_dereference_raw(selem->local_storage);
@@ -726,7 +726,7 @@ bpf_sk_storage_map_seq_find_next(struct bpf_iter_seq_sk_storage_map_info *info,
 			}
 			count++;
 		}
-		raw_spin_unlock_bh(&b->lock);
+		rcu_read_unlock();
 		skip_elems = 0;
 	}
 
@@ -806,13 +806,10 @@ static void bpf_sk_storage_map_seq_stop(struct seq_file *seq, void *v)
 	struct bpf_local_storage_map *smap;
 	struct bpf_local_storage_map_bucket *b;
 
-	if (!v) {
+	if (!v)
 		(void)__bpf_sk_storage_map_seq_show(seq, v);
-	} else {
-		smap = (struct bpf_local_storage_map *)info->map;
-		b = &smap->buckets[info->bucket_id];
-		raw_spin_unlock_bh(&b->lock);
-	}
+	else
+		rcu_read_unlock();
 }
 
 static int bpf_iter_init_sk_storage_map(void *priv_data,
-- 
2.24.1
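
For readers following along, the locking change in the diff is the standard
RCU read-side pattern. A minimal sketch of the idea (illustrative only, not
the exact bpf_sk_storage.c code; walk_bucket is a made-up helper):

/* Readers traverse the bucket under rcu_read_lock() instead of the
 * bucket spinlock, so concurrent bpf_sk_storage_{get,delete}() are
 * never blocked. Writers still serialize on b->lock and must free
 * elements only after an RCU grace period for this to be safe. */
static void walk_bucket(struct bpf_local_storage_map_bucket *b)
{
	struct bpf_local_storage_elem *selem;

	rcu_read_lock();
	hlist_for_each_entry_rcu(selem, &b->list, map_node) {
		/* selem may belong to a socket that is concurrently
		 * going away -- the "stale sockets" imprecision the
		 * commit message accepts. */
	}
	rcu_read_unlock();
}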



* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
  2020-09-14 18:46 [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator Yonghong Song
@ 2020-09-14 21:28 ` Song Liu
  2020-09-15  5:25   ` Yonghong Song
  2020-09-15 15:33 ` Jakub Kicinski
  1 sibling, 1 reply; 10+ messages in thread
From: Song Liu @ 2020-09-14 21:28 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Kernel Team, Martin KaFai Lau

On Mon, Sep 14, 2020 at 11:47 AM Yonghong Song <yhs@fb.com> wrote:
>
> Currently, we use bucket_lock when traversing bpf_sk_storage_map
> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
> we do not have a deadlock issue which exists for hashmap when
> using bucket_lock ([1]).

The paragraph above describes why we can use bucket_lock, which is more
or less irrelevant to this change. Also, I am not sure why we refer to [1] here.

>
> If a bucket contains a lot of sockets, during bpf_iter traversing
> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
> some undesirable delays. Using rcu_read_lock() is a reasonable

It will be great to include some performance comparison.

> compromise here. Although it may lose some precision, e.g.,
> accessing stale sockets, it will not hurt the performance of other
> bpf programs.
>
> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
>
> Cc: Martin KaFai Lau <kafai@fb.com>
> Signed-off-by: Yonghong Song <yhs@fb.com>

Other than these,

Acked-by: Song Liu <songliubraving@fb.com>

[...]


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
  2020-09-14 21:28 ` Song Liu
@ 2020-09-15  5:25   ` Yonghong Song
  0 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2020-09-15  5:25 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Kernel Team, Martin KaFai Lau



On 9/14/20 2:28 PM, Song Liu wrote:
> On Mon, Sep 14, 2020 at 11:47 AM Yonghong Song <yhs@fb.com> wrote:
>>
>> Currently, we use bucket_lock when traversing bpf_sk_storage_map
>> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
>> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
>> we do not have a deadlock issue which exists for hashmap when
>> using bucket_lock ([1]).
> 
> The paragraph above describes why we can use bucket_lock, which is more
> or less irrelevant to this change. Also, I am not sure why we refer to [1] here.

What I am trying to clarify here is that, unlike [1], we do not have
bugs in the current code.
But I guess "no bugs" is implicit, so I probably do not need to say
anything. I will drop the above paragraph in the next revision.

> 
>>
>> If a bucket contains a lot of sockets, during bpf_iter traversing
>> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
>> some undesirable delays. Using rcu_read_lock() is a reasonable
> 
> It will be great to include some performance comparison.

Sure. Let me try to collect some statistics.

> 
>> compromise here. Although it may lose some precision, e.g.,
>> accessing stale sockets, it will not hurt the performance of other
>> bpf programs.
>>
>> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
>>
>> Cc: Martin KaFai Lau <kafai@fb.com>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
> 
> Other than these,
> 
> Acked-by: Song Liu <songliubraving@fb.com>
> 
> [...]
> 


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
  2020-09-14 18:46 [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator Yonghong Song
  2020-09-14 21:28 ` Song Liu
@ 2020-09-15 15:33 ` Jakub Kicinski
  2020-09-15 17:35   ` Yonghong Song
  1 sibling, 1 reply; 10+ messages in thread
From: Jakub Kicinski @ 2020-09-15 15:33 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	Martin KaFai Lau

On Mon, 14 Sep 2020 11:46:30 -0700 Yonghong Song wrote:
> Currently, we use bucket_lock when traversing bpf_sk_storage_map
> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
> we do not have a deadlock issue which exists for hashmap when
> using bucket_lock ([1]).
> 
> If a bucket contains a lot of sockets, during bpf_iter traversing
> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
> some undesirable delays. Using rcu_read_lock() is a reasonable
> compromise here. Although it may lose some precision, e.g.,
> accessing stale sockets, it will not hurt the performance of other
> bpf programs.
> 
> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
> 
> Cc: Martin KaFai Lau <kafai@fb.com>
> Signed-off-by: Yonghong Song <yhs@fb.com>

Sparse is not happy about it. Could you add some annotations, perhaps?

include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock
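
For reference, the usual way to address such context-imbalance warnings is
to annotate the functions that enter or leave the critical section with
__acquires()/__releases(). A minimal sketch -- the placement on these two
callbacks is an assumption, not the final patch:

static void *bpf_sk_storage_map_seq_start(struct seq_file *seq, loff_t *pos)
	__acquires(RCU)
{
	rcu_read_lock();
	/* ... locate and return the first element ... */
	return NULL;
}

static void bpf_sk_storage_map_seq_stop(struct seq_file *seq, void *v)
	__releases(RCU)
{
	/* ... */
	rcu_read_unlock();
}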


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
  2020-09-15 15:33 ` Jakub Kicinski
@ 2020-09-15 17:35   ` Yonghong Song
  2020-09-15 17:40     ` Jakub Kicinski
  0 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2020-09-15 17:35 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	Martin KaFai Lau



On 9/15/20 8:33 AM, Jakub Kicinski wrote:
> On Mon, 14 Sep 2020 11:46:30 -0700 Yonghong Song wrote:
>> Currently, we use bucket_lock when traversing bpf_sk_storage_map
>> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
>> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
>> we do not have a deadlock issue which exists for hashmap when
>> using bucket_lock ([1]).
>>
>> If a bucket contains a lot of sockets, during bpf_iter traversing
>> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
>> some undesirable delays. Using rcu_read_lock() is a reasonable
>> compromise here. Although it may lose some precision, e.g.,
>> accessing stale sockets, it will not hurt the performance of other
>> bpf programs.
>>
>> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
>>
>> Cc: Martin KaFai Lau <kafai@fb.com>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
> 
> Sparse is not happy about it. Could you add some annotations, perhaps?
> 
> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock

Okay, I will try.

On my system, sparse is unhappy and core dumped....

/data/users/yhs/work/net-next/include/linux/string.h:12:38: error: too 
many errors
/bin/sh: line 1: 2710132 Segmentation fault      (core dumped) sparse 
-D__linux__ -Dlinux -D__STDC__ -Dunix
-D__unix__ -Wbitwise -Wno-return-void -Wno-unknown-attribute 
-D__x86_64__ --arch=x86 -mlittle-endian -m64 -W
p,-MMD,net/core/.bpf_sk_storage.o.d -nostdinc -isystem
...
/data/users/yhs/work/net-next/net/core/bpf_sk_storage.c
make[3]: *** [net/core/bpf_sk_storage.o] Error 139
make[3]: *** Deleting file `net/core/bpf_sk_storage.o'

-bash-4.4$ rpm -qf /bin/sparse
sparse-0.5.2-1.el7.x86_64
-bash-4.4$


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
  2020-09-15 17:35   ` Yonghong Song
@ 2020-09-15 17:40     ` Jakub Kicinski
  2020-09-15 18:56       ` Yonghong Song
  0 siblings, 1 reply; 10+ messages in thread
From: Jakub Kicinski @ 2020-09-15 17:40 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	Martin KaFai Lau

On Tue, 15 Sep 2020 10:35:50 -0700 Yonghong Song wrote:
> On 9/15/20 8:33 AM, Jakub Kicinski wrote:
> > On Mon, 14 Sep 2020 11:46:30 -0700 Yonghong Song wrote:  
> >> Currently, we use bucket_lock when traversing bpf_sk_storage_map
> >> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
> >> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
> >> we do not have a deadlock issue which exists for hashmap when
> >> using bucket_lock ([1]).
> >>
> >> If a bucket contains a lot of sockets, during bpf_iter traversing
> >> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
> >> some undesirable delays. Using rcu_read_lock() is a reasonable
> >> compromise here. Although it may lose some precision, e.g.,
> >> accessing stale sockets, it will not hurt the performance of other
> >> bpf programs.
> >>
> >> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
> >>
> >> Cc: Martin KaFai Lau <kafai@fb.com>
> >> Signed-off-by: Yonghong Song <yhs@fb.com>  
> > 
> > Sparse is not happy about it. Could you add some annotations, perhaps?
> > 
> > include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
> > include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock  
> 
> Okay, I will try.
> 
> On my system, sparse is unhappy and core dumped....
> 
> /data/users/yhs/work/net-next/include/linux/string.h:12:38: error: too 
> many errors
> /bin/sh: line 1: 2710132 Segmentation fault      (core dumped) sparse 
> -D__linux__ -Dlinux -D__STDC__ -Dunix
> -D__unix__ -Wbitwise -Wno-return-void -Wno-unknown-attribute 
> -D__x86_64__ --arch=x86 -mlittle-endian -m64 -W
> p,-MMD,net/core/.bpf_sk_storage.o.d -nostdinc -isystem
> ...
> /data/users/yhs/work/net-next/net/core/bpf_sk_storage.c
> make[3]: *** [net/core/bpf_sk_storage.o] Error 139
> make[3]: *** Deleting file `net/core/bpf_sk_storage.o'
> 
> -bash-4.4$ rpm -qf /bin/sparse
> sparse-0.5.2-1.el7.x86_64
> -bash-4.4$

I think you need to build from source, sadly :(

https://git.kernel.org/pub/scm//devel/sparse/sparse.git
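
A typical build-from-source sequence looks like this (prefix and the final
check target are illustrative; the clone URL is normalized):

	git clone https://git.kernel.org/pub/scm/devel/sparse/sparse.git
	cd sparse
	make
	make install PREFIX=$HOME	# installs the sparse binary to $HOME/bin
	# then re-run the checker on the file in question:
	make C=1 net/core/bpf_sk_storage.o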


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
  2020-09-15 17:40     ` Jakub Kicinski
@ 2020-09-15 18:56       ` Yonghong Song
  2020-09-15 19:03         ` Alexei Starovoitov
  0 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2020-09-15 18:56 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, netdev, Alexei Starovoitov, Daniel Borkmann, kernel-team,
	Martin KaFai Lau



On 9/15/20 10:40 AM, Jakub Kicinski wrote:
> On Tue, 15 Sep 2020 10:35:50 -0700 Yonghong Song wrote:
>> On 9/15/20 8:33 AM, Jakub Kicinski wrote:
>>> On Mon, 14 Sep 2020 11:46:30 -0700 Yonghong Song wrote:
>>>> Currently, we use bucket_lock when traversing bpf_sk_storage_map
>>>> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
>>>> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
>>>> we do not have a deadlock issue which exists for hashmap when
>>>> using bucket_lock ([1]).
>>>>
>>>> If a bucket contains a lot of sockets, during bpf_iter traversing
>>>> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
>>>> some undesirable delays. Using rcu_read_lock() is a reasonable
>>>> compromise here. Although it may lose some precision, e.g.,
>>>> accessing stale sockets, it will not hurt the performance of other
>>>> bpf programs.
>>>>
>>>> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
>>>>
>>>> Cc: Martin KaFai Lau <kafai@fb.com>
>>>> Signed-off-by: Yonghong Song <yhs@fb.com>
>>>
>>> Sparse is not happy about it. Could you add some annotations, perhaps?
>>>
>>> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
>>> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock
>>
>> Okay, I will try.
>>
>> On my system, sparse is unhappy and core dumped....
>>
>> /data/users/yhs/work/net-next/include/linux/string.h:12:38: error: too
>> many errors
>> /bin/sh: line 1: 2710132 Segmentation fault      (core dumped) sparse
>> -D__linux__ -Dlinux -D__STDC__ -Dunix
>> -D__unix__ -Wbitwise -Wno-return-void -Wno-unknown-attribute
>> -D__x86_64__ --arch=x86 -mlittle-endian -m64 -W
>> p,-MMD,net/core/.bpf_sk_storage.o.d -nostdinc -isystem
>> ...
>> /data/users/yhs/work/net-next/net/core/bpf_sk_storage.c
>> make[3]: *** [net/core/bpf_sk_storage.o] Error 139
>> make[3]: *** Deleting file `net/core/bpf_sk_storage.o'
>>
>> -bash-4.4$ rpm -qf /bin/sparse
>> sparse-0.5.2-1.el7.x86_64
>> -bash-4.4$
> 
> I think you need to build from source, sadly :(
> 
> https://git.kernel.org/pub/scm//devel/sparse/sparse.git

Indeed, building sparse from source works. After adding some
__releases(RCU) and __acquires(RCU), I now have:
   context imbalance in 'bpf_sk_storage_map_seq_find_next' - different lock contexts for basic block
I may need to restructure the code to please sparse...
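
That warning usually means some basic block is reachable both with and
without the lock held, e.g. when a loop unlocks and breaks on one path.
A generic illustration of the shape sparse dislikes and a friendlier
alternative (not the actual bpf_sk_storage.c control flow):

	/* Sparse-unfriendly: code after the loop is reached with RCU
	 * held on fall-through but released on the break path. */
	rcu_read_lock();
	list_for_each_entry(e, head, node) {
		if (match(e)) {
			rcu_read_unlock();
			break;
		}
	}

	/* Friendlier: one unlock point on every path. (The iterator
	 * must keep RCU held across ->show(), so the real fix needs
	 * more restructuring than this.) */
	rcu_read_lock();
	list_for_each_entry(e, head, node) {
		if (match(e)) {
			found = e;
			break;
		}
	}
	rcu_read_unlock();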

> 


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
  2020-09-15 18:56       ` Yonghong Song
@ 2020-09-15 19:03         ` Alexei Starovoitov
  0 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2020-09-15 19:03 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Jakub Kicinski, bpf, Network Development, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Martin KaFai Lau

On Tue, Sep 15, 2020 at 11:56 AM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 9/15/20 10:40 AM, Jakub Kicinski wrote:
> > On Tue, 15 Sep 2020 10:35:50 -0700 Yonghong Song wrote:
> >> On 9/15/20 8:33 AM, Jakub Kicinski wrote:
> >>> On Mon, 14 Sep 2020 11:46:30 -0700 Yonghong Song wrote:
> >>>> Currently, we use bucket_lock when traversing bpf_sk_storage_map
> >>>> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
> >>>> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
> >>>> we do not have a deadlock issue which exists for hashmap when
> >>>> using bucket_lock ([1]).
> >>>>
> >>>> If a bucket contains a lot of sockets, during bpf_iter traversing
> >>>> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
> >>>> some undesirable delays. Using rcu_read_lock() is a reasonable
> >>>> compromise here. Although it may lose some precision, e.g.,
> >>>> accessing stale sockets, it will not hurt the performance of other
> >>>> bpf programs.
> >>>>
> >>>> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
> >>>>
> >>>> Cc: Martin KaFai Lau <kafai@fb.com>
> >>>> Signed-off-by: Yonghong Song <yhs@fb.com>
> >>>
> >>> Sparse is not happy about it. Could you add some annotations, perhaps?
> >>>
> >>> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
> >>> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock
> >>
> >> Okay, I will try.
> >>
> >> On my system, sparse is unhappy and core dumped....
> >>
> >> /data/users/yhs/work/net-next/include/linux/string.h:12:38: error: too
> >> many errors
> >> /bin/sh: line 1: 2710132 Segmentation fault      (core dumped) sparse
> >> -D__linux__ -Dlinux -D__STDC__ -Dunix
> >> -D__unix__ -Wbitwise -Wno-return-void -Wno-unknown-attribute
> >> -D__x86_64__ --arch=x86 -mlittle-endian -m64 -W
> >> p,-MMD,net/core/.bpf_sk_storage.o.d -nostdinc -isystem
> >> ...
> >> /data/users/yhs/work/net-next/net/core/bpf_sk_storage.c
> >> make[3]: *** [net/core/bpf_sk_storage.o] Error 139
> >> make[3]: *** Deleting file `net/core/bpf_sk_storage.o'
> >>
> >> -bash-4.4$ rpm -qf /bin/sparse
> >> sparse-0.5.2-1.el7.x86_64
> >> -bash-4.4$
> >
> > I think you need to build from source, sadly :(
> >
> > https://git.kernel.org/pub/scm//devel/sparse/sparse.git
>
> Indeed, building sparse from source works. After adding some
> __releases(RCU) and __acquires(RCU), I now have:
>    context imbalance in 'bpf_sk_storage_map_seq_find_next' - different lock contexts for basic block
> I may need to restructure the code to please sparse...

I don't think sparse can handle such things even with all the annotations.
I wouldn't spend too much time on it.


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
@ 2020-09-15 10:53 kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2020-09-15 10:53 UTC (permalink / raw)
  To: kbuild

[-- Attachment #1: Type: text/plain, Size: 7356 bytes --]

CC: kbuild-all@lists.01.org
In-Reply-To: <20200914184630.1048718-1-yhs@fb.com>
References: <20200914184630.1048718-1-yhs@fb.com>
TO: Yonghong Song <yhs@fb.com>
TO: bpf@vger.kernel.org
TO: netdev@vger.kernel.org
CC: Alexei Starovoitov <ast@kernel.org>
CC: Daniel Borkmann <daniel@iogearbox.net>
CC: kernel-team@fb.com
CC: Martin KaFai Lau <kafai@fb.com>

Hi Yonghong,

I love your patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/0day-ci/linux/commits/Yonghong-Song/bpf-using-rcu_read_lock-for-bpf_sk_storage_map-iterator/20200915-024727
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
:::::: branch date: 16 hours ago
:::::: commit date: 16 hours ago
config: i386-randconfig-s001-20200914 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.2-191-g10164920-dirty
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

   net/core/bpf_sk_storage.c: note: in included file (through include/linux/rculist.h):
>> include/linux/rcupdate.h:686:9: sparse: sparse: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
   include/linux/rcupdate.h:686:9: sparse: sparse: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock

# https://github.com/0day-ci/linux/commit/a9b8e045577010eb33328af8e1a226f29c432afd
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Yonghong-Song/bpf-using-rcu_read_lock-for-bpf_sk_storage_map-iterator/20200915-024727
git checkout a9b8e045577010eb33328af8e1a226f29c432afd
vim +/bpf_sk_storage_map_seq_find_next +686 include/linux/rcupdate.h

^1da177e4c3f41 Linus Torvalds      2005-04-16  638  
^1da177e4c3f41 Linus Torvalds      2005-04-16  639  /*
^1da177e4c3f41 Linus Torvalds      2005-04-16  640   * So where is rcu_write_lock()?  It does not exist, as there is no
^1da177e4c3f41 Linus Torvalds      2005-04-16  641   * way for writers to lock out RCU readers.  This is a feature, not
^1da177e4c3f41 Linus Torvalds      2005-04-16  642   * a bug -- this property is what provides RCU's performance benefits.
^1da177e4c3f41 Linus Torvalds      2005-04-16  643   * Of course, writers must coordinate with each other.  The normal
^1da177e4c3f41 Linus Torvalds      2005-04-16  644   * spinlock primitives work well for this, but any other technique may be
^1da177e4c3f41 Linus Torvalds      2005-04-16  645   * used as well.  RCU does not care how the writers keep out of each
^1da177e4c3f41 Linus Torvalds      2005-04-16  646   * others' way, as long as they do so.
^1da177e4c3f41 Linus Torvalds      2005-04-16  647   */
3d76c082907e8f Paul E. McKenney    2009-09-28  648  
3d76c082907e8f Paul E. McKenney    2009-09-28  649  /**
ca5ecddfa8fcbd Paul E. McKenney    2010-04-28  650   * rcu_read_unlock() - marks the end of an RCU read-side critical section.
3d76c082907e8f Paul E. McKenney    2009-09-28  651   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  652   * In most situations, rcu_read_unlock() is immune from deadlock.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  653   * However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock()
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  654   * is responsible for deboosting, which it does via rt_mutex_unlock().
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  655   * Unfortunately, this function acquires the scheduler's runqueue and
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  656   * priority-inheritance spinlocks.  This means that deadlock could result
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  657   * if the caller of rcu_read_unlock() already holds one of these locks or
ec84b27f9b3b56 Anna-Maria Gleixner 2018-05-25  658   * any lock that is ever acquired while holding them.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  659   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  660   * That said, RCU readers are never priority boosted unless they were
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  661   * preempted.  Therefore, one way to avoid deadlock is to make sure
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  662   * that preemption never happens within any RCU read-side critical
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  663   * section whose outermost rcu_read_unlock() is called with one of
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  664   * rt_mutex_unlock()'s locks held.  Such preemption can be avoided in
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  665   * a number of ways, for example, by invoking preempt_disable() before
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  666   * critical section's outermost rcu_read_lock().
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  667   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  668   * Given that the set of locks acquired by rt_mutex_unlock() might change
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  669   * at any time, a somewhat more future-proofed approach is to make sure
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  670   * that that preemption never happens within any RCU read-side critical
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  671   * section whose outermost rcu_read_unlock() is called with irqs disabled.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  672   * This approach relies on the fact that rt_mutex_unlock() currently only
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  673   * acquires irq-disabled locks.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  674   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  675   * The second of these two approaches is best in most situations,
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  676   * however, the first approach can also be useful, at least to those
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  677   * developers willing to keep abreast of the set of locks acquired by
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  678   * rt_mutex_unlock().
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  679   *
3d76c082907e8f Paul E. McKenney    2009-09-28  680   * See rcu_read_lock() for more information.
3d76c082907e8f Paul E. McKenney    2009-09-28  681   */
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  682  static inline void rcu_read_unlock(void)
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  683  {
f78f5b90c4ffa5 Paul E. McKenney    2015-06-18  684  	RCU_LOCKDEP_WARN(!rcu_is_watching(),
bde23c6892878e Heiko Carstens      2012-02-01  685  			 "rcu_read_unlock() used illegally while idle");
bc33f24bdca8b6 Paul E. McKenney    2009-08-22 @686  	__release(RCU);
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  687  	__rcu_read_unlock();
d24209bb689e2c Paul E. McKenney    2015-01-21  688  	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  689  }
^1da177e4c3f41 Linus Torvalds      2005-04-16  690  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 34566 bytes --]


* Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
@ 2020-09-15 10:09 kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2020-09-15 10:09 UTC (permalink / raw)
  To: kbuild

[-- Attachment #1: Type: text/plain, Size: 7357 bytes --]

CC: kbuild-all@lists.01.org
In-Reply-To: <20200914184630.1048718-1-yhs@fb.com>
References: <20200914184630.1048718-1-yhs@fb.com>
TO: Yonghong Song <yhs@fb.com>
TO: bpf@vger.kernel.org
TO: netdev@vger.kernel.org
CC: Alexei Starovoitov <ast@kernel.org>
CC: Daniel Borkmann <daniel@iogearbox.net>
CC: kernel-team@fb.com
CC: Martin KaFai Lau <kafai@fb.com>

Hi Yonghong,

I love your patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/0day-ci/linux/commits/Yonghong-Song/bpf-using-rcu_read_lock-for-bpf_sk_storage_map-iterator/20200915-024727
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
:::::: branch date: 15 hours ago
:::::: commit date: 15 hours ago
config: x86_64-randconfig-s021-20200914 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.2-191-g10164920-dirty
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

   net/core/bpf_sk_storage.c:703:25: sparse: sparse: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
   net/core/bpf_sk_storage.c: note: in included file (through include/linux/rculist.h):
>> include/linux/rcupdate.h:686:9: sparse: sparse: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock

# https://github.com/0day-ci/linux/commit/a9b8e045577010eb33328af8e1a226f29c432afd
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Yonghong-Song/bpf-using-rcu_read_lock-for-bpf_sk_storage_map-iterator/20200915-024727
git checkout a9b8e045577010eb33328af8e1a226f29c432afd
vim +/bpf_sk_storage_map_seq_stop +686 include/linux/rcupdate.h

^1da177e4c3f41 Linus Torvalds      2005-04-16  638  
^1da177e4c3f41 Linus Torvalds      2005-04-16  639  /*
^1da177e4c3f41 Linus Torvalds      2005-04-16  640   * So where is rcu_write_lock()?  It does not exist, as there is no
^1da177e4c3f41 Linus Torvalds      2005-04-16  641   * way for writers to lock out RCU readers.  This is a feature, not
^1da177e4c3f41 Linus Torvalds      2005-04-16  642   * a bug -- this property is what provides RCU's performance benefits.
^1da177e4c3f41 Linus Torvalds      2005-04-16  643   * Of course, writers must coordinate with each other.  The normal
^1da177e4c3f41 Linus Torvalds      2005-04-16  644   * spinlock primitives work well for this, but any other technique may be
^1da177e4c3f41 Linus Torvalds      2005-04-16  645   * used as well.  RCU does not care how the writers keep out of each
^1da177e4c3f41 Linus Torvalds      2005-04-16  646   * others' way, as long as they do so.
^1da177e4c3f41 Linus Torvalds      2005-04-16  647   */
3d76c082907e8f Paul E. McKenney    2009-09-28  648  
3d76c082907e8f Paul E. McKenney    2009-09-28  649  /**
ca5ecddfa8fcbd Paul E. McKenney    2010-04-28  650   * rcu_read_unlock() - marks the end of an RCU read-side critical section.
3d76c082907e8f Paul E. McKenney    2009-09-28  651   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  652   * In most situations, rcu_read_unlock() is immune from deadlock.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  653   * However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock()
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  654   * is responsible for deboosting, which it does via rt_mutex_unlock().
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  655   * Unfortunately, this function acquires the scheduler's runqueue and
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  656   * priority-inheritance spinlocks.  This means that deadlock could result
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  657   * if the caller of rcu_read_unlock() already holds one of these locks or
ec84b27f9b3b56 Anna-Maria Gleixner 2018-05-25  658   * any lock that is ever acquired while holding them.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  659   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  660   * That said, RCU readers are never priority boosted unless they were
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  661   * preempted.  Therefore, one way to avoid deadlock is to make sure
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  662   * that preemption never happens within any RCU read-side critical
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  663   * section whose outermost rcu_read_unlock() is called with one of
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  664   * rt_mutex_unlock()'s locks held.  Such preemption can be avoided in
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  665   * a number of ways, for example, by invoking preempt_disable() before
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  666   * critical section's outermost rcu_read_lock().
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  667   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  668   * Given that the set of locks acquired by rt_mutex_unlock() might change
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  669   * at any time, a somewhat more future-proofed approach is to make sure
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  670   * that that preemption never happens within any RCU read-side critical
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  671   * section whose outermost rcu_read_unlock() is called with irqs disabled.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  672   * This approach relies on the fact that rt_mutex_unlock() currently only
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  673   * acquires irq-disabled locks.
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  674   *
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  675   * The second of these two approaches is best in most situations,
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  676   * however, the first approach can also be useful, at least to those
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  677   * developers willing to keep abreast of the set of locks acquired by
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  678   * rt_mutex_unlock().
f27bc4873fa8b7 Paul E. McKenney    2014-05-04  679   *
3d76c082907e8f Paul E. McKenney    2009-09-28  680   * See rcu_read_lock() for more information.
3d76c082907e8f Paul E. McKenney    2009-09-28  681   */
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  682  static inline void rcu_read_unlock(void)
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  683  {
f78f5b90c4ffa5 Paul E. McKenney    2015-06-18  684  	RCU_LOCKDEP_WARN(!rcu_is_watching(),
bde23c6892878e Heiko Carstens      2012-02-01  685  			 "rcu_read_unlock() used illegally while idle");
bc33f24bdca8b6 Paul E. McKenney    2009-08-22 @686  	__release(RCU);
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  687  	__rcu_read_unlock();
d24209bb689e2c Paul E. McKenney    2015-01-21  688  	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
bc33f24bdca8b6 Paul E. McKenney    2009-08-22  689  }
^1da177e4c3f41 Linus Torvalds      2005-04-16  690  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 35510 bytes --]

