From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: Hou Tao <houtao@huaweicloud.com>
Cc: bpf@vger.kernel.org, Martin KaFai Lau <martin.lau@linux.dev>,
	Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>, Song Liu <song@kernel.org>,
	Hao Luo <haoluo@google.com>, Yonghong Song <yhs@fb.com>,
	Daniel Borkmann <daniel@iogearbox.net>,
	KP Singh <kpsingh@kernel.org>,
	Stanislav Fomichev <sdf@google.com>, Jiri Olsa <jolsa@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	rcu@vger.kernel.org, houtao1@huawei.com
Subject: Re: [RFC bpf-next v2 4/4] bpf: Introduce BPF_MA_REUSE_AFTER_RCU_GP
Date: Wed, 26 Apr 2023 21:24:01 -0700
Message-ID: <20230427042401.iavewtqx2x3yjepq@dhcp-172-26-102-232.dhcp.thefacebook.com>
In-Reply-To: <d8608bed-57de-ae92-f8c2-45df998123e5@huaweicloud.com>

On Sun, Apr 23, 2023 at 03:41:05PM +0800, Hou Tao wrote:
> >>
> >> (3) reuse-after-rcu-gp bpf memory allocator
> > that's the one you're implementing below, right?
> Right.
> >
> >> | name                | loop (k/s) | average memory (MiB) | peak memory (MiB) |
> >> | --                  | --         | --                   | --                |
> >> | no_op               | 1276       | 0.96                 | 1.00              |
> >> | overwrite           | 15.66      | 25.00                | 33.07             |
> >> | batch_add_batch_del | 10.32      | 18.84                | 22.64             |
> >> | add_del_on_diff_cpu | 13.00      | 550.50               | 748.74            |
> >>
> >> (4) free-after-rcu-gp bpf memory allocator (free directly through call_rcu)
> > What do you mean? htab uses bpf_ma, but does call_rcu() before doing bpf_mem_free()?
> No, there is no call_rcu() before bpf_mem_free(). bpf_mem_free() in the
> free-after-rcu-gp flavor does call_rcu() in batch to free these elements back
> to the slab subsystem directly. The elements in this flavor of bpf_ma are not
> safe for access from a sleepable program unless bpf_rcu_read_{lock,unlock}() is used.
> 
> But I think using call_rcu() to invoke bpf_mem_free() is a good candidate for
> comparison, and I saw that bpf_cpumask does that, so I modified the bpf hash
> table to do a similar thing and pasted the benchmark result. As can be seen
> from the result, the memory usage of that flavor is much bigger than both
> reuse-after-rcu-gp and free-after-rcu-gp:
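
For reference, a minimal sketch of the call_rcu() -> bpf_mem_free() pattern
described in the quote above, modeled on what bpf_cpumask does (the quoted
benchmark table itself does not appear in this reply). The wrapper struct,
allocator variable, and function names here are hypothetical, not taken from
the RFC patches:

#include <linux/rcupdate.h>
#include <linux/container_of.h>
#include <linux/bpf_mem_alloc.h>

struct elem_wrapper {
	struct rcu_head rcu;		/* consumed by call_rcu() */
	/* ... element payload ... */
};

/* Assumed to have been set up elsewhere with bpf_mem_alloc_init() */
static struct bpf_mem_alloc demo_ma;

static void elem_free_rcu(struct rcu_head *rcu)
{
	struct elem_wrapper *e = container_of(rcu, struct elem_wrapper, rcu);

	/* Runs only after a full RCU grace period has elapsed, so no RCU
	 * reader can still hold a pointer to the element.
	 */
	bpf_mem_free(&demo_ma, e);
}

static void elem_free_deferred(struct elem_wrapper *e)
{
	/* Defer the actual free until readers are guaranteed to be done. */
	call_rcu(&e->rcu, elem_free_rcu);
}

Since every element stays allocated until its own grace period elapses, many
elements can be pending at once under heavy update churn, which is consistent
with the higher memory usage reported for this flavor.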

I don't follow what exactly you're doing and what you're measuring.
Please provide patches for both reuse-after-rcu-gp and free-after-rcu-gp so we
can have a meaningful conversation.
Right now we're stuck on what the bench tool is actually measuring.

> >> +		if (try_queue_work && !work_pending(&c->reuse_work)) {
> >> +			/* Use reuse_cb_in_progress to indicate there is an
> >> +			 * in-flight reuse kworker or reuse RCU callback.
> >> +			 */
> >> +			atomic_inc(&c->reuse_cb_in_progress);
> >> +			/* queue_work() returns false if already queued */
> >> +			if (!queue_work(bpf_ma_wq, &c->reuse_work))
> > how many kthreads are spawned by wq in the peak?
> I think it depends on the number of bpf_ma instances. Because bpf_ma_wq is a
> per-CPU workqueue, there is at most one worker per CPU for each bpf_ma. The
> current limit on the number of active workers on each CPU is 256, but it is
> customizable through the alloc_workqueue() API.
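
For context, a sketch of how such a per-CPU workqueue could be created; the
workqueue name, flags, and max_active value are assumptions rather than a
quote from the RFC patch:

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *bpf_ma_wq;

static int __init bpf_ma_wq_init(void)
{
	/* Without WQ_UNBOUND this is a per-CPU workqueue. A max_active of 0
	 * selects the default limit (WQ_DFL_ACTIVE, i.e. 256) of in-flight
	 * work items per CPU; passing a small explicit value here instead
	 * would cap worker concurrency.
	 */
	bpf_ma_wq = alloc_workqueue("bpf_ma", WQ_MEM_RECLAIM, 0);
	return bpf_ma_wq ? 0 : -ENOMEM;
}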

Which means that on an 8-CPU system there will be 8 * 256 kthreads?
That's a lot. Please provide num_of_all_threads before/after/at_peak during the bench.

Please trim your replies. Mailers like mutt have a hard time navigating long quotes.


Thread overview: 17+ messages
2023-04-08 14:18 [RFC bpf-next v2 0/4] Introduce BPF_MA_REUSE_AFTER_RCU_GP Hou Tao
2023-04-08 14:18 ` [RFC bpf-next v2 1/4] selftests/bpf: Add benchmark for bpf memory allocator Hou Tao
2023-04-22  2:59   ` Alexei Starovoitov
2023-04-23  1:55     ` Hou Tao
2023-04-27  4:20       ` Alexei Starovoitov
2023-04-27 13:46         ` Paul E. McKenney
2023-04-28  6:13           ` Hou Tao
2023-04-28  2:16         ` Hou Tao
2023-04-23  8:03     ` Hou Tao
2023-04-08 14:18 ` [RFC bpf-next v2 2/4] bpf: Factor out a common helper free_all() Hou Tao
2023-04-08 14:18 ` [RFC bpf-next v2 3/4] bpf: Pass bitwise flags to bpf_mem_alloc_init() Hou Tao
2023-04-08 14:18 ` [RFC bpf-next v2 4/4] bpf: Introduce BPF_MA_REUSE_AFTER_RCU_GP Hou Tao
2023-04-22  3:12   ` Alexei Starovoitov
2023-04-23  7:41     ` Hou Tao
2023-04-27  4:24       ` Alexei Starovoitov [this message]
2023-04-28  2:24         ` Hou Tao
2023-04-21  6:23 ` [RFC bpf-next v2 0/4] " Hou Tao
