From: Anton Protopopov <aspsk@isovalent.com>
To: Hou Tao <houtao@huaweicloud.com>
Cc: Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	John Fastabend <john.fastabend@gmail.com>,
	Andrii Nakryiko <andrii@kernel.org>,
	Martin KaFai Lau <martin.lau@linux.dev>,
	Song Liu <song@kernel.org>, Yonghong Song <yhs@fb.com>,
	KP Singh <kpsingh@kernel.org>,
	Stanislav Fomichev <sdf@google.com>, Hao Luo <haoluo@google.com>,
	Jiri Olsa <jolsa@kernel.org>,
	bpf@vger.kernel.org
Subject: Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
Date: Wed, 5 Jul 2023 15:34:08 +0000	[thread overview]
Message-ID: <ZKWNcE5emIW9X1O1@zh-lab-node-5> (raw)
In-Reply-To: <525f2690-f367-6296-8dde-0138ba8aa42f@huaweicloud.com>

On Wed, Jul 05, 2023 at 11:03:25AM +0800, Hou Tao wrote:
> Hi,
> 
> On 7/4/2023 11:02 PM, Anton Protopopov wrote:
> > On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
> >> Hi,
> >>
> >> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> >>> Add a new map test, map_percpu_stats.c, which checks the correctness of
> >>> a map's percpu element counters.  For each supported map type the test
> >>> upserts a number of elements, checks the correctness of the counters,
> >>> then deletes all the elements and checks that the counters' sum drops
> >>> back to zero.
> >>>
> >>> The following map types are tested:
> >>>
> >>>     * BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
> >>>     * BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
> >>>     * BPF_MAP_TYPE_HASH,
> >>>     * BPF_MAP_TYPE_PERCPU_HASH,
> >>>     * BPF_MAP_TYPE_LRU_HASH
> >>>     * BPF_MAP_TYPE_LRU_PERCPU_HASH
> >> A test for BPF_MAP_TYPE_HASH_OF_MAPS is also needed.
> We could also exercise the test for the LRU map with BPF_F_NO_COMMON_LRU.

Thanks, added.

> >
> SNIP
> >>> diff --git a/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> >>> new file mode 100644
> >>> index 000000000000..5b45af230368
> >>> --- /dev/null
> >>> +++ b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> >>> @@ -0,0 +1,336 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +/* Copyright (c) 2023 Isovalent */
> >>> +
> >>> +#include <errno.h>
> >>> +#include <unistd.h>
> >>> +#include <pthread.h>
> >>> +
> >>> +#include <bpf/bpf.h>
> >>> +#include <bpf/libbpf.h>
> >>> +
> >>> +#include <bpf_util.h>
> >>> +#include <test_maps.h>
> >>> +
> >>> +#include "map_percpu_stats.skel.h"
> >>> +
> >>> +#define MAX_ENTRIES 16384
> >>> +#define N_THREADS 37
> >> Why are 37 threads needed here? Would a smaller number of threads work as well?
> > This was used to evict more elements from LRU maps when they are full.
> 
> I see. But in my understanding, for the global LRU list, eviction (the
> invocation of htab_lru_map_delete_node) is only possible when the number
> of free elements is less than LOCAL_FREE_TARGET (128) * nr_running_cpus.
> Currently the number of free elements is 1000, as defined in __test(),
> and the number of vCPUs is 8 in my local VM setup (BPF CI also uses 8
> vCPUs), so it is hard to trigger eviction because 8 * 128 is roughly
> equal to 1000. I suggest decreasing the number of free elements to 512
> and the number of threads to 8, or adjusting the number of running
> threads and free elements according to the number of online CPUs.

Yes, makes sense. I've changed the test to use 8 threads and an offset of 512.

Thread overview: 20+ messages
2023-06-30  8:25 [v3 PATCH bpf-next 0/6] bpf: add percpu stats for bpf_map Anton Protopopov
2023-06-30  8:25 ` [v3 PATCH bpf-next 1/6] bpf: add percpu stats for bpf_map elements insertions/deletions Anton Protopopov
2023-06-30  8:25 ` [v3 PATCH bpf-next 2/6] bpf: add a new kfunc to return current bpf_map elements count Anton Protopopov
2023-06-30  8:25 ` [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps Anton Protopopov
2023-07-04 13:56   ` Hou Tao
2023-07-04 14:34     ` Anton Protopopov
2023-07-06  2:01       ` Hou Tao
2023-07-06 12:25         ` Anton Protopopov
2023-07-06 12:30           ` Hou Tao
2023-06-30  8:25 ` [v3 PATCH bpf-next 4/6] bpf: make preloaded map iterators to display map elements count Anton Protopopov
2023-06-30  8:25 ` [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats Anton Protopopov
2023-07-04 14:41   ` Hou Tao
2023-07-04 15:02     ` Anton Protopopov
2023-07-04 15:23       ` Anton Protopopov
2023-07-04 15:49         ` Anton Protopopov
2023-07-05  0:46         ` Hou Tao
2023-07-05 15:41           ` Anton Protopopov
2023-07-05  3:03       ` Hou Tao
2023-07-05 15:34         ` Anton Protopopov [this message]
2023-06-30  8:25 ` [v3 PATCH bpf-next 6/6] selftests/bpf: check that ->elem_count is non-zero for the hash map Anton Protopopov
