From: Dave Marchevsky <davemarchevsky@fb.com>
To: <bpf@vger.kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@kernel.org>,
Kernel Team <kernel-team@fb.com>,
Dave Marchevsky <davemarchevsky@fb.com>
Subject: [PATCH v2 bpf-next 9/9] [DONOTAPPLY] Revert "selftests/bpf: Disable newly-added refcounted_kptr_races test"
Date: Thu, 1 Jun 2023 19:26:47 -0700
Message-ID: <20230602022647.1571784-10-davemarchevsky@fb.com>
In-Reply-To: <20230602022647.1571784-1-davemarchevsky@fb.com>
This patch reverts the previous patch's disabling of the
refcounted_kptr_races selftest. It is included in the series so that
BPF CI can run the test. This patch should not be applied; followups
fixing the remaining bpf_refcount issues will re-enable the test.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
.../bpf/prog_tests/refcounted_kptr.c | 100 ++++++++++++++++++
1 file changed, 100 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c b/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
index 6a53f304f3e4..e7fcc1dd8864 100644
--- a/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/prog_tests/refcounted_kptr.c
@@ -18,3 +18,103 @@ void test_refcounted_kptr_fail(void)
{
RUN_TESTS(refcounted_kptr_fail);
}
+
+static void force_cpu(pthread_t thread, int cpunum)
+{
+ cpu_set_t cpuset;
+ int err;
+
+ CPU_ZERO(&cpuset);
+ CPU_SET(cpunum, &cpuset);
+ err = pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
+ if (!ASSERT_OK(err, "pthread_setaffinity_np"))
+ return;
+}
+
+static struct refcounted_kptr *skel;
+
+static void *run_unstash_acq_ref(void *unused)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+ long ret, unstash_acq_ref_fd;
+ force_cpu(pthread_self(), 1);
+
+ unstash_acq_ref_fd = bpf_program__fd(skel->progs.unstash_add_and_acquire_refcount);
+
+ ret = bpf_prog_test_run_opts(unstash_acq_ref_fd, &opts);
+ ASSERT_EQ(opts.retval, 0, "unstash_add_and_acquire_refcount retval");
+ ASSERT_EQ(skel->bss->ref_check_3, 2, "ref_check_3");
+ ASSERT_EQ(skel->bss->ref_check_4, 1, "ref_check_4");
+ ASSERT_EQ(skel->bss->ref_check_5, 0, "ref_check_5");
+ pthread_exit((void *)ret);
+}
+
+void test_refcounted_kptr_races(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+ int ref_acq_lock_fd, ref_acq_unlock_fd, rem_node_lock_fd;
+ int add_stash_fd, remove_tree_fd;
+ pthread_t thread_id;
+ int ret;
+
+ force_cpu(pthread_self(), 0);
+ skel = refcounted_kptr__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "refcounted_kptr__open_and_load"))
+ return;
+
+ add_stash_fd = bpf_program__fd(skel->progs.add_refcounted_node_to_tree_and_stash);
+ remove_tree_fd = bpf_program__fd(skel->progs.remove_refcounted_node_from_tree);
+ ref_acq_lock_fd = bpf_program__fd(skel->progs.unsafe_ref_acq_lock);
+ ref_acq_unlock_fd = bpf_program__fd(skel->progs.unsafe_ref_acq_unlock);
+ rem_node_lock_fd = bpf_program__fd(skel->progs.unsafe_rem_node_lock);
+
+ ret = bpf_prog_test_run_opts(rem_node_lock_fd, &opts);
+ if (!ASSERT_OK(ret, "rem_node_lock"))
+ return;
+
+ ret = bpf_prog_test_run_opts(ref_acq_lock_fd, &opts);
+ if (!ASSERT_OK(ret, "ref_acq_lock"))
+ return;
+
+ ret = bpf_prog_test_run_opts(add_stash_fd, &opts);
+ if (!ASSERT_OK(ret, "add_stash"))
+ return;
+ if (!ASSERT_OK(opts.retval, "add_stash retval"))
+ return;
+
+ ret = pthread_create(&thread_id, NULL, &run_unstash_acq_ref, NULL);
+ if (!ASSERT_OK(ret, "pthread_create"))
+ goto cleanup;
+
+ force_cpu(thread_id, 1);
+
+ /* This program executes before unstash_acq_ref's refcount_acquire;
+ * unstash_acq_ref can then proceed once the unsafe unlock below runs
+ */
+ ret = bpf_prog_test_run_opts(remove_tree_fd, &opts);
+ if (!ASSERT_OK(ret, "remove_tree"))
+ goto cleanup;
+
+ ret = bpf_prog_test_run_opts(ref_acq_unlock_fd, &opts);
+ if (!ASSERT_OK(ret, "ref_acq_unlock"))
+ goto cleanup;
+
+ ret = pthread_join(thread_id, NULL);
+ if (!ASSERT_OK(ret, "pthread_join"))
+ goto cleanup;
+
+ refcounted_kptr__destroy(skel);
+ return;
+cleanup:
+ bpf_prog_test_run_opts(ref_acq_unlock_fd, &opts);
+ refcounted_kptr__destroy(skel);
+ return;
+}
--
2.34.1
Thread overview:
2023-06-02 2:26 [PATCH v2 bpf-next 0/9] bpf_refcount followups (part 1) Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 1/9] [DONOTAPPLY] Revert "bpf: Disable bpf_refcount_acquire kfunc calls until race conditions are fixed" Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 2/9] bpf: Set kptr_struct_meta for node param to list and rbtree insert funcs Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 3/9] bpf: Fix __bpf_{list,rbtree}_add's beginning-of-node calculation Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 4/9] bpf: Make bpf_refcount_acquire fallible for non-owning refs Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 5/9] [DONOTAPPLY] bpf: Allow KF_DESTRUCTIVE-flagged kfuncs to be called under spinlock Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 6/9] [DONOTAPPLY] selftests/bpf: Add unsafe lock/unlock and refcount_read kfuncs to bpf_testmod Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 7/9] [DONOTAPPLY] selftests/bpf: Add test exercising bpf_refcount_acquire race condition Dave Marchevsky
2023-06-02 2:26 ` [PATCH v2 bpf-next 8/9] [DONOTAPPLY] selftests/bpf: Disable newly-added refcounted_kptr_races test Dave Marchevsky
2023-06-02 2:26 ` Dave Marchevsky [this message]
2023-06-05 20:30 ` [PATCH v2 bpf-next 0/9] bpf_refcount followups (part 1) patchwork-bot+netdevbpf