From: Yonghong Song <yhs@fb.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: <bpf@vger.kernel.org>, <netdev@vger.kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>, <kernel-team@fb.com>,
	Martin KaFai Lau <kafai@fb.com>
Subject: Re: [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator
Date: Tue, 15 Sep 2020 10:35:50 -0700	[thread overview]
Message-ID: <bc24e2da-33e5-e13c-8fe0-1e24c2a5a579@fb.com> (raw)
In-Reply-To: <20200915083327.7e98cf2d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>



On 9/15/20 8:33 AM, Jakub Kicinski wrote:
> On Mon, 14 Sep 2020 11:46:30 -0700 Yonghong Song wrote:
>> Currently, we use bucket_lock when traversing bpf_sk_storage_map
>> elements. Since bpf_iter programs cannot use bpf_sk_storage_get()
>> and bpf_sk_storage_delete() helpers which may also grab bucket lock,
>> we do not have a deadlock issue which exists for hashmap when
>> using bucket_lock ([1]).
>>
>> If a bucket contains a lot of sockets, during bpf_iter traversing
>> a bucket, concurrent bpf_sk_storage_{get,delete}() may experience
>> some undesirable delays. Using rcu_read_lock() is a reasonable
>> compromise here. Although it may lose some precision, e.g., it may
>> access stale sockets, it will not hurt the performance of other
>> bpf programs.
>>
>> [1] https://lore.kernel.org/bpf/20200902235341.2001534-1-yhs@fb.com
>>
>> Cc: Martin KaFai Lau <kafai@fb.com>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
> 
> Sparse is not happy about it. Could you add some annotations, perhaps?
> 
> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_find_next' - unexpected unlock
> include/linux/rcupdate.h:686:9: warning: context imbalance in 'bpf_sk_storage_map_seq_stop' - unexpected unlock

Okay, I will try.
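
For reference, sparse tracks lock context per function, and here the
rcu_read_lock()/rcu_read_unlock() pair is split across the iterator
callbacks (bpf_sk_storage_map_seq_find_next()/_seq_stop()), which sparse
flags as a context imbalance. The usual way to quiet this in net/ seq_file
code is the __acquires()/__releases() annotation pattern. A minimal sketch
of that pattern (simplified signatures and placeholder bodies, not
necessarily what the final patch will do):

static void *bpf_sk_storage_map_seq_start(struct seq_file *seq, loff_t *pos)
	__acquires(RCU)
{
	/* Tell sparse that the RCU read-side critical section is
	 * intentionally left open when this function returns.
	 */
	rcu_read_lock();
	/* ... locate and return the first element (placeholder) ... */
	return NULL;
}

static void bpf_sk_storage_map_seq_stop(struct seq_file *seq, void *v)
	__releases(RCU)
{
	/* Matching annotation: the critical section opened earlier in the
	 * iteration is closed here.
	 */
	rcu_read_unlock();
}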

On my system, sparse itself is unhappy and core dumped:

/data/users/yhs/work/net-next/include/linux/string.h:12:38: error: too many errors
/bin/sh: line 1: 2710132 Segmentation fault      (core dumped) sparse
  -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ -Wbitwise
  -Wno-return-void -Wno-unknown-attribute -D__x86_64__ --arch=x86
  -mlittle-endian -m64 -Wp,-MMD,net/core/.bpf_sk_storage.o.d -nostdinc -isystem
  ...
  /data/users/yhs/work/net-next/net/core/bpf_sk_storage.c
make[3]: *** [net/core/bpf_sk_storage.o] Error 139
make[3]: *** Deleting file `net/core/bpf_sk_storage.o'

-bash-4.4$ rpm -qf /bin/sparse
sparse-0.5.2-1.el7.x86_64
-bash-4.4$
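
For context, the change under discussion boils down to walking a bucket's
element list under rcu_read_lock() instead of holding the bucket spinlock.
A simplified sketch of that traversal (struct and field names are
approximate, not the exact code in the patch):

	struct bpf_sk_storage_elem *selem;

	rcu_read_lock();
	hlist_for_each_entry_rcu(selem, &bucket->list, map_node) {
		/*
		 * Visit selem. A concurrent bpf_sk_storage_delete() may
		 * remove entries while we walk, so stale sockets can be
		 * observed, but bpf_sk_storage_{get,delete}() are never
		 * delayed by the iterator holding bucket_lock.
		 */
	}
	rcu_read_unlock();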

Thread overview: 10+ messages
2020-09-14 18:46 [PATCH bpf-next] bpf: using rcu_read_lock for bpf_sk_storage_map iterator Yonghong Song
2020-09-14 21:28 ` Song Liu
2020-09-15  5:25   ` Yonghong Song
2020-09-15 15:33 ` Jakub Kicinski
2020-09-15 17:35   ` Yonghong Song [this message]
2020-09-15 17:40     ` Jakub Kicinski
2020-09-15 18:56       ` Yonghong Song
2020-09-15 19:03         ` Alexei Starovoitov
2020-09-15 10:09 kernel test robot
2020-09-15 10:53 kernel test robot
