From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: Hou Tao <houtao@huaweicloud.com>
Cc: bpf@vger.kernel.org, Martin KaFai Lau <martin.lau@linux.dev>,
Andrii Nakryiko <andrii@kernel.org>, Song Liu <song@kernel.org>,
Hao Luo <haoluo@google.com>, Yonghong Song <yhs@fb.com>,
Daniel Borkmann <daniel@iogearbox.net>,
KP Singh <kpsingh@kernel.org>,
Stanislav Fomichev <sdf@google.com>, Jiri Olsa <jolsa@kernel.org>,
John Fastabend <john.fastabend@gmail.com>,
"Paul E . McKenney" <paulmck@kernel.org>,
rcu@vger.kernel.org, houtao1@huawei.com
Subject: Re: [PATCH bpf-next v6 5/5] selftests/bpf: Add benchmark for bpf memory allocator
Date: Mon, 19 Jun 2023 13:35:43 -0700 [thread overview]
Message-ID: <20230619203543.sb3pqx62uxqnucuo@MacBook-Pro-8.local> (raw)
In-Reply-To: <20230613080921.1623219-6-houtao@huaweicloud.com>
On Tue, Jun 13, 2023 at 04:09:21PM +0800, Hou Tao wrote:
> +
> +static void htab_mem_notify_wait_producer(pthread_barrier_t *notify)
The notify_wait and wait_notify names are confusing.
The first one is doing map_update and the second is doing map_delete, right?
Just name them that?
> +{
> + while (true) {
> + (void)syscall(__NR_getpgid);
> + /* Notify for start */
the comment is confusing too.
Maybe /* Notify map_deleter that map_updates are done */ ?
> + pthread_barrier_wait(notify);
> + /* Wait for completion */
and /* Wait for deletions to complete */ ?
> + pthread_barrier_wait(notify);
> + }
> +}
> +
> +static void htab_mem_wait_notify_producer(pthread_barrier_t *notify)
> +{
> + while (true) {
> + /* Wait for start */
> + pthread_barrier_wait(notify);
> + (void)syscall(__NR_getpgid);
> + /* Notify for completion */
Similar here. Maybe /* Notify map_updater that deletions are done */ ?
> + pthread_barrier_wait(notify);
> + }
> +}
> +static int write_htab(unsigned int i, struct update_ctx *ctx, unsigned int flags)
> +{
> + if (ctx->from >= MAX_ENTRIES)
> + return 1;
It can never be hit, right?
Remove it then?
> +
> + bpf_map_update_elem(&htab, &ctx->from, zeroed_value, flags);
Please add an error check.
I think the update/delete notification logic is correct, but it could be
silently broken: update(BPF_NOEXIST) could be returning an error in one thread
while map_delete_elem is failing in another, and the benchmark would still
report numbers.
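Something like this sketch (err_cnt is a hypothetical global counter, not in
the patch, but exporting a failure count would give user space something to
assert on; the unreachable MAX_ENTRIES check is dropped as suggested above):

```c
/* Sketch only: err_cnt is illustrative, not from the patch.
 * In BPF programs bpf_map_update_elem() returns 0 on success or a
 * negative errno on failure.
 */
static int write_htab(unsigned int i, struct update_ctx *ctx, unsigned int flags)
{
	long err;

	err = bpf_map_update_elem(&htab, &ctx->from, zeroed_value, flags);
	if (err)
		/* count failures instead of silently continuing */
		__sync_fetch_and_add(&err_cnt, 1);
	ctx->from += ctx->step;

	return 0;
}
```

User space can then read err_cnt after the run and fail the benchmark if it is
non-zero, instead of trusting that every update and delete succeeded.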
> + ctx->from += ctx->step;
> +
> + return 0;
> +}
> +
> +static int overwrite_htab(unsigned int i, struct update_ctx *ctx)
> +{
> + return write_htab(i, ctx, 0);
> +}
> +
> +static int newwrite_htab(unsigned int i, struct update_ctx *ctx)
> +{
> + return write_htab(i, ctx, BPF_NOEXIST);
> +}
> +
> +static int del_htab(unsigned int i, struct update_ctx *ctx)
> +{
> + if (ctx->from >= MAX_ENTRIES)
> + return 1;
This check can never be hit either. Delete?
> +
> + bpf_map_delete_elem(&htab, &ctx->from);
and add an error check here as well.
> + ctx->from += ctx->step;
> +
> + return 0;
> +}
> +
> +SEC("?tp/syscalls/sys_enter_getpgid")
> +int overwrite(void *ctx)
> +{
> + struct update_ctx update;
> +
> + update.from = bpf_get_smp_processor_id();
> + update.step = nr_thread;
> + bpf_loop(64, overwrite_htab, &update, 0);
> + __sync_fetch_and_add(&op_cnt, 1);
> + return 0;
> +}
> +
> +SEC("?tp/syscalls/sys_enter_getpgid")
> +int batch_add_batch_del(void *ctx)
> +{
> + struct update_ctx update;
> +
> + update.from = bpf_get_smp_processor_id();
> + update.step = nr_thread;
> + bpf_loop(64, overwrite_htab, &update, 0);
> +
> + update.from = bpf_get_smp_processor_id();
> + bpf_loop(64, del_htab, &update, 0);
> +
> + __sync_fetch_and_add(&op_cnt, 2);
> + return 0;
> +}
> +
> +SEC("?tp/syscalls/sys_enter_getpgid")
> +int add_del_on_diff_cpu(void *ctx)
> +{
> + struct update_ctx update;
> + unsigned int from;
> +
> + from = bpf_get_smp_processor_id();
> + update.from = from / 2;
why extra 'from' variable? Just combine above two lines.
> + update.step = nr_thread / 2;
> +
> + if (from & 1)
> + bpf_loop(64, newwrite_htab, &update, 0);
> + else
> + bpf_loop(64, del_htab, &update, 0);
I think it's cleaner to split this into two bpf programs.
Do update(NOEXIST) in one triggered by sys_enter_getpgid
and do delete_elem() in another triggered by a different syscall.
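i.e. roughly something like this (a sketch only; sys_enter_getrlimit as the
second tracepoint is an arbitrary pick, the add_only/del_only names are made
up, and user space would need to issue the second syscall from the deleter
threads and adjust the from/step partitioning to match how threads are pinned):

```c
SEC("?tp/syscalls/sys_enter_getpgid")
int add_only(void *ctx)
{
	struct update_ctx update;

	update.from = bpf_get_smp_processor_id();
	update.step = nr_thread;
	bpf_loop(64, newwrite_htab, &update, 0);
	__sync_fetch_and_add(&op_cnt, 1);
	return 0;
}

SEC("?tp/syscalls/sys_enter_getrlimit")
int del_only(void *ctx)
{
	struct update_ctx update;

	update.from = bpf_get_smp_processor_id();
	update.step = nr_thread;
	bpf_loop(64, del_htab, &update, 0);
	__sync_fetch_and_add(&op_cnt, 1);
	return 0;
}
```

That way each program does one thing, there is no parity branch, and the
benchmark can attribute ops to adders and deleters separately.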
> +
> + __sync_fetch_and_add(&op_cnt, 1);
> + return 0;
> +}
> --
> 2.29.2
>