From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
To: Yucong Sun <fallentree@fb.com>
Cc: Andrii Nakryiko <andrii@kernel.org>,
	sunyucong@gmail.com, bpf <bpf@vger.kernel.org>
Subject: Re: [RFC 0/1] add parallelism to test_progs
Date: Mon, 30 Aug 2021 21:03:23 -0700
Message-ID: <CAEf4BzYK8=dwrTvV1c=+zC6cxPe7STE+k2MPDokMurKs0cHwGQ@mail.gmail.com>
In-Reply-To: <20210827231307.3787723-1-fallentree@fb.com>

On Fri, Aug 27, 2021 at 4:13 PM Yucong Sun <fallentree@fb.com> wrote:
>
> This patch adds an optional "-p" flag to test_progs to run tests in
> multiple processes, speeding up the tests.
>
> Example:
>
> time ./test_progs
> real    5m51.393s
> user    0m4.695s
> sys    5m48.055s
>
> time ./test_progs -p 16 (on an 8-core VM)
> real    3m45.673s
> user    0m4.434s
> sys    5m47.465s
>
> The feedback area I'm looking for :
>
>   1. Some tests are taking too long to run (for example,
>   bpf_verif_scale/pyperf* takes almost 80% of the total runtime). If we
>   need a work-stealing pool mechanism, it would be a bigger change.

Seems like you did just a static assignment based on worker number and
test number in this RFC. I think that's way too simplistic to work
well in practice. But I don't think we need a work-stealing queue
either (or any explicit queue at all).

I'd rather go with a simple client/server model, where the server is
the main process which does all the coordination. It would "dispense"
tasks to each forked worker one by one, wait for that test to complete,
and accumulate the test's output in a per-worker temporary buffer. If we
are running in verbose mode or a test failed, output the accumulated
logs. If not verbose and the test succeeded, just emit a summary with
the test name and an OK message and discard the accumulated output. I
think we can easily extend this to support running multiple sub-tests
on *different* workers, "breaking up" and scaling that bpf_verif_scale
test nicely. But that could be a pretty easy step #2 after the whole
client/server machinery is set up.
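
Roughly something like this on the server side for handling one
completed test (a hand-wavy sketch; struct test_result, report_result()
and the env.verbose flag are names I made up for illustration, not
existing test_progs code):

#include <stdbool.h>
#include <stdio.h>

/* all names below are illustrative only */
static struct {
	bool verbose;
} env; /* stand-in for whatever verbosity flag test_progs already has */

struct test_result {
	int test_num;
	int error_cnt;
	int skip_cnt;
	char log[64 * 1024]; /* test's output, captured on the worker */
};

static void report_result(const struct test_result *res, const char *name)
{
	/* dump the full log only if asked to or if the test failed ... */
	if (env.verbose || res->error_cnt)
		fprintf(stdout, "%s", res->log);

	/* ... otherwise just the one-line summary; the log is discarded */
	fprintf(stdout, "#%d %s:%s\n", res->test_num, name,
		res->error_cnt ? "FAIL" : (res->skip_cnt ? "SKIP" : "OK"));
}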

Look into Unix domain sockets (UDS). But not the SOCK_STREAM kind,
rather SOCK_DGRAM. UDS allows establishing a bi-directional connection
between server and worker. And it preserves packet boundaries, so you
don't have the TCP stream problem of delineating boundaries of logical
packets. And it preserves ordering between packets. All great
properties. With this we can set up client/server communication with a
very simple protocol:

1. Server sends a "RUN_TEST" command, specifying the number of the test
for the worker to execute.
2. Worker sends back a "TEST_COMPLETED" command with the test number,
the test result (success, failure, skipped), and, optionally, the
console output.
3. Repeat #1-#2 as many times as needed.
4. Server sends a "SHUTDOWN" command and the worker exits.

(Well, we probably need a bit more flexibility to report sub-test
successes, so maybe the worker will have two possible messages,
SUBTEST_COMPLETED and TEST_COMPLETED, or something along those lines;
a rough sketch of the messages is below.)
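
For example (just illustrative; the enum/struct names and the fixed-size
log buffer are arbitrary choices): since each send()/recv() on a
SOCK_DGRAM socket moves exactly one datagram, a fixed-size message
struct is all the framing we need:

#include <sys/socket.h>
#include <sys/types.h>

enum msg_type {
	MSG_RUN_TEST,          /* server -> worker */
	MSG_TEST_COMPLETED,    /* worker -> server */
	MSG_SUBTEST_COMPLETED, /* worker -> server */
	MSG_SHUTDOWN,          /* server -> worker */
};

struct msg {
	enum msg_type type;
	union {
		struct {
			int test_num;
		} run_test;
		struct {
			int test_num;
			int sub_succ_cnt;
			int error_cnt;
			int skip_cnt;
			char log[8192]; /* console output, may be truncated */
		} test_done;
	};
};

/* one datagram == one message, so no extra framing or length prefix */
static int send_msg(int sock, const struct msg *m)
{
	ssize_t n = send(sock, m, sizeof(*m), 0);

	return n == (ssize_t)sizeof(*m) ? 0 : -1;
}

static int recv_msg(int sock, struct msg *m)
{
	ssize_t n = recv(sock, m, sizeof(*m), 0);

	return n == (ssize_t)sizeof(*m) ? 0 : -1;
}

The two ends could simply come from a socketpair(AF_UNIX, SOCK_DGRAM, 0,
sv) call done right before fork()ing each worker, so there is no need to
bind any path on the filesystem.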

On the server side, we can use as suboptimal and simplistic a locking
scheme as we like to coordinate everything. It's probably simplest to
have a thread per worker that takes a global lock to grab the next
test to run (just i++, but under lock), and just remembers all the
statuses (and error outputs, for dumping failed tests' details).
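
Something along these lines (sketch only; total_test_cnt,
run_test_on_worker() and send_shutdown() are placeholders for whatever
the real code ends up calling them):

#include <pthread.h>
#include <stdbool.h>

extern int total_test_cnt;                        /* placeholder */
void run_test_on_worker(int sock, int test_idx);  /* placeholder */
void send_shutdown(int sock);                     /* placeholder */

static pthread_mutex_t current_test_lock = PTHREAD_MUTEX_INITIALIZER;
static int current_test_idx; /* index of the next test to hand out */

/* one of these threads runs per worker, on the server side */
static void *dispatch(void *arg)
{
	int sock = *(int *)arg; /* server end of this worker's socket */
	int test_idx;

	while (true) {
		/* the "just i++, but under lock" part */
		pthread_mutex_lock(&current_test_lock);
		test_idx = current_test_idx++;
		pthread_mutex_unlock(&current_test_lock);

		if (test_idx >= total_test_cnt)
			break;

		/* send RUN_TEST, block until TEST_COMPLETED comes back,
		 * and stash the status and log into a per-test slot for
		 * the final summary
		 */
		run_test_on_worker(sock, test_idx);
	}

	send_shutdown(sock); /* SHUTDOWN message, then reap the worker */
	return NULL;
}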

Some refactoring will be needed to make the existing code work in both
non-parallelized and parallelized modes with a minimal amount of
changes, but this seems simple enough.

>
>   2. The test output from all workers is currently interleaved,
>   making it harder to read. One option would be to redirect all
>   outputs to pipes and have the main process collect and print them
>   in sequence as each worker finishes, but that would make it harder
>   to see real-time progress.

Yeah, I don't think that's acceptable. The good news is that we needed
some extra machinery to hold onto test output until the very end for
error summary reporting anyway.

>
>   3. If the main process wants to collect test results from workers,
>   I plan to have each worker write a stats file to /tmp, or I can use
>   IPC; any preference?

See above, I think UDS is the way to go.

>
>   4. Some tests would fail if run in parallel; I think we would need
>   to pin some tasks onto worker 0.

Yeah, we can mark such tests with some special naming convention
(e.g., rename them to test_blahblah_noparallel) and run them sequentially.
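
E.g., something as simple as this check on the test name (the suffix
and the helper name are just illustrative, not a decided convention):

#include <stdbool.h>
#include <string.h>

/* tests matching the naming convention are held back and run
 * sequentially (e.g., on worker 0 or on the main process) after the
 * parallel pass
 */
static bool test_is_serial(const char *name)
{
	const char *suffix = "_noparallel"; /* whatever convention we pick */
	size_t name_len = strlen(name), suffix_len = strlen(suffix);

	return name_len >= suffix_len &&
	       strcmp(name + name_len - suffix_len, suffix) == 0;
}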

>
> Yucong Sun (1):
>   selftests/bpf: Add parallelism to test_progs
>
>  tools/testing/selftests/bpf/test_progs.c | 94 ++++++++++++++++++++++--
>  tools/testing/selftests/bpf/test_progs.h |  3 +
>  2 files changed, 91 insertions(+), 6 deletions(-)
>
> --
> 2.30.2
>
