* "make check-acceptance" takes way too long
From: Peter Maydell @ 2021-07-30 15:12 UTC
  To: QEMU Developers
  Cc: Alex Bennée, Daniel P. Berrange, Philippe Mathieu-Daudé,
	Cleber Rosa

"make check-acceptance" takes way way too long. I just did a run
on an arm-and-aarch64-targets-only debug build and it took over
half an hour, and this despite it skipping or cancelling 26 out
of 58 tests!

I think that ~10 minutes runtime is reasonable. 30 is not;
ideally no individual test would take more than a minute or so.

Here's the output showing where the time went. The first two tests take
more than 10 minutes *each*. I think a good start would be to find
a way of testing what they're testing that is less heavyweight.

 (01/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv2:
PASS (629.74 s)
 (02/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
PASS (628.75 s)
 (03/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_kvm:
CANCEL: kvm accelerator does not seem to be available (1.18 s)
 (04/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt:
PASS (3.53 s)
 (05/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_aarch64_xlnx_versal_virt:
PASS (41.13 s)
 (06/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_virt:
PASS (5.22 s)
 (07/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_emcraft_sf2:
PASS (18.88 s)
 (08/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_raspi2_uart0:
PASS (11.30 s)
 (09/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_raspi2_initrd:
PASS (22.66 s)
 (10/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_exynos4210_initrd:
PASS (31.89 s)
 (11/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_cubieboard_initrd:
PASS (27.86 s)
 (12/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_cubieboard_sata:
PASS (27.19 s)
 (13/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_quanta_gsj:
SKIP: Test might timeout
 (14/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_quanta_gsj_initrd:
PASS (22.53 s)
 (15/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi:
PASS (4.86 s)
 (16/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_initrd:
PASS (39.85 s)
 (17/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_sd:
PASS (53.57 s)
 (18/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_bionic_20_08:
SKIP: storage limited
 (19/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_uboot_netbsd9:
SKIP: storage limited
 (20/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_aarch64_raspi3_atf:
PASS (1.50 s)
 (21/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_vexpressa9:
PASS (10.74 s)
 (22/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_ast2400_palmetto_openbmc_v2_9_0:
PASS (39.43 s)
 (23/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_ast2500_romulus_openbmc_v2_9_0:
PASS (54.01 s)
 (24/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_ast2600_debian:
PASS (40.60 s)
 (25/58) tests/acceptance/boot_xen.py:BootXen.test_arm64_xen_411_and_dom0:
PASS (20.22 s)
 (26/58) tests/acceptance/boot_xen.py:BootXen.test_arm64_xen_414_and_dom0:
PASS (17.37 s)
 (27/58) tests/acceptance/boot_xen.py:BootXen.test_arm64_xen_415_and_dom0:
PASS (23.82 s)
 (28/58) tests/acceptance/empty_cpu_model.py:EmptyCPUModel.test:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (29/58) tests/acceptance/info_usernet.py:InfoUsernet.test_hostfwd:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (30/58) tests/acceptance/machine_arm_canona1100.py:CanonA1100Machine.test_arm_canona1100:
PASS (0.20 s)
 (31/58) tests/acceptance/machine_arm_integratorcp.py:IntegratorMachine.test_integratorcp_console:
SKIP: untrusted code
 (32/58) tests/acceptance/machine_arm_integratorcp.py:IntegratorMachine.test_framebuffer_tux_logo:
SKIP: Python NumPy not installed
 (33/58) tests/acceptance/machine_arm_n8x0.py:N8x0Machine.test_n800:
SKIP: untrusted code
 (34/58) tests/acceptance/machine_arm_n8x0.py:N8x0Machine.test_n810:
SKIP: untrusted code
 (35/58) tests/acceptance/migration.py:Migration.test_migration_with_tcp_localhost:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (36/58) tests/acceptance/migration.py:Migration.test_migration_with_unix:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (37/58) tests/acceptance/migration.py:Migration.test_migration_with_exec:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (38/58) tests/acceptance/multiprocess.py:Multiprocess.test_multiprocess_aarch64:
CANCEL: kvm accelerator does not seem to be available (0.06 s)
 (39/58) tests/acceptance/replay_kernel.py:ReplayKernelNormal.test_aarch64_virt:
PASS (19.59 s)
 (40/58) tests/acceptance/replay_kernel.py:ReplayKernelNormal.test_arm_virt:
PASS (28.73 s)
 (41/58) tests/acceptance/replay_kernel.py:ReplayKernelNormal.test_arm_cubieboard_initrd:
PASS (52.00 s)
 (42/58) tests/acceptance/replay_kernel.py:ReplayKernelNormal.test_arm_vexpressa9:
PASS (25.69 s)
 (43/58) tests/acceptance/reverse_debugging.py:ReverseDebugging_AArch64.test_aarch64_virt:
PASS (2.16 s)
 (44/58) tests/acceptance/smmu.py:SMMU.test_smmu_noril:
CANCEL: kvm accelerator does not seem to be available (0.90 s)
 (45/58) tests/acceptance/smmu.py:SMMU.test_smmu_noril_passthrough:
CANCEL: kvm accelerator does not seem to be available (0.70 s)
 (46/58) tests/acceptance/smmu.py:SMMU.test_smmu_noril_nostrict:
CANCEL: kvm accelerator does not seem to be available (1.02 s)
 (47/58) tests/acceptance/smmu.py:SMMU.test_smmu_ril:
CANCEL: kvm accelerator does not seem to be available (0.68 s)
 (48/58) tests/acceptance/smmu.py:SMMU.test_smmu_ril_passthrough:
CANCEL: kvm accelerator does not seem to be available (0.98 s)
 (49/58) tests/acceptance/smmu.py:SMMU.test_smmu_ril_nostrict:
CANCEL: kvm accelerator does not seem to be available (1.00 s)
 (50/58) tests/acceptance/tcg_plugins.py:PluginKernelNormal.test_aarch64_virt_insn:
PASS (12.19 s)
 (51/58) tests/acceptance/tcg_plugins.py:PluginKernelNormal.test_aarch64_virt_insn_icount:
PASS (12.35 s)
 (52/58) tests/acceptance/tcg_plugins.py:PluginKernelNormal.test_aarch64_virt_mem_icount:
PASS (10.21 s)
 (53/58) tests/acceptance/version.py:Version.test_qmp_human_info_version:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (54/58) tests/acceptance/virtio_check_params.py:VirtioMaxSegSettingsCheck.test_machine_types:
SKIP: break multi-arch CI
 (55/58) tests/acceptance/vnc.py:Vnc.test_no_vnc:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (56/58) tests/acceptance/vnc.py:Vnc.test_no_vnc_change_password:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (57/58) tests/acceptance/vnc.py:Vnc.test_change_password_requires_a_password:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
 (58/58) tests/acceptance/vnc.py:Vnc.test_change_password:
CANCEL: No QEMU binary defined or found in the build tree (0.00 s)
RESULTS    : PASS 32 | ERROR 0 | FAIL 0 | SKIP 8 | WARN 0 | INTERRUPT 0 | CANCEL 18
JOB TIME   : 1967.44 s

thanks
-- PMM



* Re: "make check-acceptance" takes way too long
From: Philippe Mathieu-Daudé @ 2021-07-30 15:41 UTC
  To: Peter Maydell, Daniel P. Berrange, QEMU Developers
  Cc: Alex Bennée, Wainer dos Santos Moschetta, Willian Rampazzo,
	Cleber Rosa

On 7/30/21 5:12 PM, Peter Maydell wrote:
> "make check-acceptance" takes way way too long. I just did a run
> on an arm-and-aarch64-targets-only debug build and it took over
> half an hour, and this despite it skipping or cancelling 26 out
> of 58 tests!
> 
> I think that ~10 minutes runtime is reasonable. 30 is not;
> ideally no individual test would take more than a minute or so.
> 
> Here's the output showing where the time went. The first two tests take
> more than 10 minutes *each*. I think a good start would be to find
> a way of testing what they're testing that is less heavyweight.

IIRC at the KVM Forum BoF we suggested a test shouldn't take more than
60 sec. But that was borderline for some tests, so we talked about
allowing 90-120 sec, with anything longer needing to be discussed and
documented.

However, it was never documented or enforced.

This seems to match my memory:

$ git grep 'timeout =' tests/acceptance/
tests/acceptance/avocado_qemu/__init__.py:440:    timeout = 900
tests/acceptance/boot_linux_console.py:99:    timeout = 90
tests/acceptance/boot_xen.py:26:    timeout = 90
tests/acceptance/linux_initrd.py:27:    timeout = 300
tests/acceptance/linux_ssh_mips_malta.py:26:    timeout = 150 # Not for 'configure --enable-debug --enable-debug-tcg'
tests/acceptance/machine_arm_canona1100.py:18:    timeout = 90
tests/acceptance/machine_arm_integratorcp.py:34:    timeout = 90
tests/acceptance/machine_arm_n8x0.py:20:    timeout = 90
tests/acceptance/machine_avr6.py:25:    timeout = 5
tests/acceptance/machine_m68k_nextcube.py:30:    timeout = 15
tests/acceptance/machine_microblaze.py:14:    timeout = 90
tests/acceptance/machine_mips_fuloong2e.py:18:    timeout = 60
tests/acceptance/machine_mips_loongson3v.py:18:    timeout = 60
tests/acceptance/machine_mips_malta.py:38:    timeout = 30
tests/acceptance/machine_ppc.py:14:    timeout = 90
tests/acceptance/machine_rx_gdbsim.py:22:    timeout = 30
tests/acceptance/machine_s390_ccw_virtio.py:24:    timeout = 120
tests/acceptance/machine_sparc64_sun4u.py:20:    timeout = 90
tests/acceptance/machine_sparc_leon3.py:15:    timeout = 60
tests/acceptance/migration.py:27:    timeout = 10
tests/acceptance/ppc_prep_40p.py:18:    timeout = 60
tests/acceptance/replay_kernel.py:34:    timeout = 120
tests/acceptance/replay_kernel.py:357:    timeout = 180
tests/acceptance/reverse_debugging.py:33:    timeout = 10
tests/acceptance/tcg_plugins.py:24:    timeout = 120
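
For reference, these are class attributes on the Avocado test classes,
so documenting/enforcing a limit is just a declaration; a minimal
sketch (class and test names are made up):

  from avocado_qemu import Test

  class MyBoardBoot(Test):
      # Avocado aborts the test if it runs longer than this (seconds)
      timeout = 90

      def test_boot(self):
          ...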

> 
>  (01/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv2:
> PASS (629.74 s)
>  (02/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
> PASS (628.75 s)
>  (03/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_kvm:
> CANCEL: kvm accelerator does not seem to be available (1.18 s)

We could restrict these to one of the project's runners (probably x86)
with something like:

  @skipUnless(os.getenv('X86_64_RUNNER_AVAILABLE'), '...')
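
Fleshed out on one of the slow tests, that would look roughly like this
(a sketch: the environment variable is just the suggestion above, and
it mirrors how the existing "storage limited" skips use
AVOCADO_ALLOW_LARGE_STORAGE):

  import os

  from avocado import skipUnless
  from avocado_qemu import LinuxTest

  class BootLinuxAarch64(LinuxTest):

      @skipUnless(os.getenv('X86_64_RUNNER_AVAILABLE'),
                  'heavyweight test, only run on a dedicated runner')
      def test_virt_tcg_gicv2(self):
          ...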

>  (15/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi:
> PASS (4.86 s)
>  (16/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_initrd:
> PASS (39.85 s)
>  (17/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_sd:
> PASS (53.57 s)
>  (18/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_bionic_20_08:
> SKIP: storage limited
>  (19/58) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_uboot_netbsd9:
> SKIP: storage limited

I've been thinking about restricting them to my sdmmc tree, but if I
don't send pull requests I won't be testing them, and I won't catch
others introducing regressions. They do respect the 60 sec limit.

We could restrict some jobs to a maintainer's fork namespace, track
the mainstream master branch, and either run the pipelines whenever
master is updated or on a regular schedule
(https://docs.gitlab.com/ee/ci/pipelines/schedules.html),
but then if the maintainer becomes busy / idle / inactive we similarly
won't catch regressions in mainstream.

Anyway, Daniel already studied the problem and sent an RFC, but we
ignored it:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg761087.html

Maybe worth continuing the discussion there?



* Re: "make check-acceptance" takes way too long
From: Peter Maydell @ 2021-07-30 15:42 UTC
  To: QEMU Developers
  Cc: Alex Bennée, Daniel P. Berrange, Philippe Mathieu-Daudé,
	Cleber Rosa

On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
>
> "make check-acceptance" takes way way too long. I just did a run
> on an arm-and-aarch64-targets-only debug build and it took over
> half an hour, and this despite it skipping or cancelling 26 out
> of 58 tests!
>
> I think that ~10 minutes runtime is reasonable. 30 is not;
> ideally no individual test would take more than a minute or so.

Side note, can check-acceptance run multiple tests in parallel?
Running 3 or 4 at once would also improve the runtime...

-- PMM



* Re: "make check-acceptance" takes way too long
From: Cleber Rosa @ 2021-07-30 22:04 UTC
  To: Peter Maydell
  Cc: Alex Bennée, Daniel P. Berrange, QEMU Developers,
	Philippe Mathieu-Daudé

On Fri, Jul 30, 2021 at 11:43 AM Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
> >
> > "make check-acceptance" takes way way too long. I just did a run
> > on an arm-and-aarch64-targets-only debug build and it took over
> > half an hour, and this despite it skipping or cancelling 26 out
> > of 58 tests!
> >
> > I think that ~10 minutes runtime is reasonable. 30 is not;
> > ideally no individual test would take more than a minute or so.
>
> Side note, can check-acceptance run multiple tests in parallel?

Yes, it can, but it's not currently enabled to do so; I'm planning to
enable it.  As a matter of fact, yesterday I was trying out Avocado's
parallel-capable runner on a GitLab CI pipeline[1] and it went well.

> Running 3 or 4 at once would also improve the runtime...
>

About the time savings, on my own machine I see good results.  On a
build with only the x86_64 target, the parallel execution gets me:

$ avocado run -t arch:x86_64 --filter-by-tags-include-empty \
    --filter-by-tags-include-empty-key --test-runner=nrunner \
    --nrunner-max-parallel-tasks=4 tests/acceptance/
...
RESULTS    : PASS 37 | ERROR 0 | FAIL 0 | SKIP 6 | WARN 5 | INTERRUPT 0 | CANCEL 0
...
JOB TIME   : 244.59 s

While the serial execution gets me:

$ avocado run -t arch:x86_64 --filter-by-tags-include-empty \
    --filter-by-tags-include-empty-key tests/acceptance/
...
RESULTS    : PASS 37 | ERROR 0 | FAIL 0 | SKIP 6 | WARN 5 | INTERRUPT 0 | CANCEL 0
...
JOB TIME   : 658.65 s

But the environment on GitLab CI is fluid, and I bet there's already
some level of overcommit of (at least) CPUs there.  The only pipeline
I ran there with tests running in parallel[2] resulted in some jobs
with improvements, and others with regressions, in runtime.
Additionally, lack of adequate resources can make more tests time out,
and thus give false negatives.

Anyway, my current plan is to allow users to configure the
parallelization level on their machines, while slowly and steadily
experimenting with what can safely improve the runtime on GitLab CI.

BTW, another **very** sweet spot, which I have experimented with
before, is letting Avocado run the acceptance tests and the iotests in
parallel, because they compete for pretty much different resources.
But that's a matter for another round.

> -- PMM
>

Best regards,
- Cleber.

[1] https://gitlab.com/cleber.gnu/qemu/-/pipelines/344471529
[2] https://gitlab.com/cleber.gnu/qemu/-/pipelines/345082239




* Re: "make check-acceptance" takes way too long
From: Thomas Huth @ 2021-07-31 06:39 UTC
  To: qemu-devel

On 31/07/2021 00.04, Cleber Rosa wrote:
> On Fri, Jul 30, 2021 at 11:43 AM Peter Maydell <peter.maydell@linaro.org> wrote:
>>
>> On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
>>>
>>> "make check-acceptance" takes way way too long. I just did a run
>>> on an arm-and-aarch64-targets-only debug build and it took over
>>> half an hour, and this despite it skipping or cancelling 26 out
>>> of 58 tests!
>>>
>>> I think that ~10 minutes runtime is reasonable. 30 is not;
>>> ideally no individual test would take more than a minute or so.
>>
>> Side note, can check-acceptance run multiple tests in parallel?
> 
> Yes, it can, but it's not currently enabled to do so; I'm planning to
> enable it.  As a matter of fact, yesterday I was trying out Avocado's
> parallel-capable runner on a GitLab CI pipeline[1] and it went well.

Was this one of the shared GitLab CI runners? ... well, those feature only a
single CPU, so the run was likely not very different from a serial run.

> But the environment on GitLab CI is fluid, and I bet there's already
> some level of overcommit of (at least) CPUs there.  The only pipeline
> I ran there with tests running in parallel[2] resulted in some jobs
> with improvements, and others with regressions, in runtime.
> Additionally, lack of adequate resources can make more tests time out,
> and thus give false negatives.

It certainly does not make sense to enable parallel tests for the shared 
runners there.

  Thomas




* Re: "make check-acceptance" takes way too long
From: Cleber Rosa @ 2021-07-31 17:58 UTC
  To: Thomas Huth; +Cc: qemu-devel

On Sat, Jul 31, 2021 at 2:40 AM Thomas Huth <thuth@redhat.com> wrote:
>
> On 31/07/2021 00.04, Cleber Rosa wrote:
> > On Fri, Jul 30, 2021 at 11:43 AM Peter Maydell <peter.maydell@linaro.org> wrote:
> >>
> >> On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
> >>>
> >>> "make check-acceptance" takes way way too long. I just did a run
> >>> on an arm-and-aarch64-targets-only debug build and it took over
> >>> half an hour, and this despite it skipping or cancelling 26 out
> >>> of 58 tests!
> >>>
> >>> I think that ~10 minutes runtime is reasonable. 30 is not;
> >>> ideally no individual test would take more than a minute or so.
> >>
> >> Side note, can check-acceptance run multiple tests in parallel?
> >
> > Yes, it can, but it's not currently enabled to do so; I'm planning to
> > enable it.  As a matter of fact, yesterday I was trying out Avocado's
> > parallel-capable runner on a GitLab CI pipeline[1] and it went well.
>
> Was this one of the shared GitLab CI runners? ... well, those feature only a
> single CPU, so the run was likely not very different from a serial run.
>

Yes, the two pipeline executions I referred to were run in the shared
GitLab CI runners.  I was testing two things:

1. Possible caveats/issues with the parallel Avocado runner (AKA
"nrunner") and the Acceptance tests (first pipeline linked, with "max
parallel tasks" set to 1)
2. Any possible gains/losses with running with "max parallel tasks"
set to 2 (second pipeline linked)

> > But the environment on GitLab CI is fluid, and I bet there's already
> > some level of overcommit of (at least) CPUs there.  The only pipeline
> > I ran there with tests running in parallel[2] resulted in some jobs
> > with improvements, and others with regressions, in runtime.
> > Additionally, lack of adequate resources can make more tests time out,
> > and thus give false negatives.
>
> It certainly does not make sense to enable parallel tests for the shared
> runners there.
>
>   Thomas
>
>

There could be gains in scenario #2 if there's considerable I/O wait
in some tests.  That's why I mentioned that previous experiments mixing
the acceptance tests with the iotests were very interesting.  But
you're right: with only the acceptance tests, which are mostly CPU
bound, there was no clear gain.

Best,
- Cleber.




* Re: "make check-acceptance" takes way too long
From: Alex Bennée @ 2021-07-31 18:41 UTC
  To: Peter Maydell
  Cc: Philippe Mathieu-Daudé,
	Daniel P. Berrange, QEMU Developers, Cleber Rosa


Peter Maydell <peter.maydell@linaro.org> writes:

> "make check-acceptance" takes way way too long. I just did a run
> on an arm-and-aarch64-targets-only debug build and it took over
> half an hour, and this despite it skipping or cancelling 26 out
> of 58 tests!
>
> I think that ~10 minutes runtime is reasonable. 30 is not;
> ideally no individual test would take more than a minute or so.
>
> Here's the output showing where the time went. The first two tests take
> more than 10 minutes *each*. I think a good start would be to find
> a way of testing what they're testing that is less heavyweight.
>
>  (01/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv2:
> PASS (629.74 s)
>  (02/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
> PASS (628.75 s)
>  (03/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_kvm:
> CANCEL: kvm accelerator does not seem to be available (1.18 s)

For these tests, which purport to exercise the various GIC
configurations, I think we would be much better served by running
kvm-unit-tests, which at least tries to exercise all the features
rather than relying on the side effect of booting an entire OS.
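
Something like this, perhaps (a sketch, assuming an aarch64 cross
toolchain; the GIC tests are in the "gic" group in arm/unittests.cfg,
and the QEMU path is a placeholder):

  git clone https://gitlab.com/kvm-unit-tests/kvm-unit-tests.git
  cd kvm-unit-tests
  ./configure --arch=arm64 --cross-prefix=aarch64-linux-gnu-
  make
  # run just the GIC group against the freshly built QEMU, under TCG
  QEMU=/path/to/qemu-system-aarch64 ACCEL=tcg ./run_tests.sh -g gic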



-- 
Alex Bennée



* Re: "make check-acceptance" takes way too long
From: Peter Maydell @ 2021-07-31 20:32 UTC
  To: Alex Bennée
  Cc: Philippe Mathieu-Daudé,
	Daniel P. Berrange, QEMU Developers, Cleber Rosa

On Sat, 31 Jul 2021 at 19:43, Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Peter Maydell <peter.maydell@linaro.org> writes:
>
> > "make check-acceptance" takes way way too long. I just did a run
> > on an arm-and-aarch64-targets-only debug build and it took over
> > half an hour, and this despite it skipping or cancelling 26 out
> > of 58 tests!
> >
> > I think that ~10 minutes runtime is reasonable. 30 is not;
> > ideally no individual test would take more than a minute or so.
> >
> > Here's the output showing where the time went. The first two tests take
> > more than 10 minutes *each*. I think a good start would be to find
> > a way of testing what they're testing that is less heavyweight.
> >
> >  (01/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv2:
> > PASS (629.74 s)
> >  (02/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
> > PASS (628.75 s)
> >  (03/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_kvm:
> > CANCEL: kvm accelerator does not seem to be available (1.18 s)
>
> For these tests, which purport to exercise the various GIC
> configurations, I think we would be much better served by running
> kvm-unit-tests, which at least tries to exercise all the features
> rather than relying on the side effect of booting an entire OS.

I think "can we boot Linux via UEFI?" is worth testing, as is
"can we boot Linux and do at least some stuff in userspace?"
(there's a lot of TCG that doesn't get exercised by pure kernel boot).
We just need to find a guest OS that isn't so overweight it takes 10
minutes...

-- PMM



* Re: "make check-acceptance" takes way too long
From: Daniel P. Berrangé @ 2021-08-02 08:38 UTC
  To: Peter Maydell
  Cc: Philippe Mathieu-Daudé,
	Alex Bennée, QEMU Developers, Cleber Rosa

On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
> "make check-acceptance" takes way way too long. I just did a run
> on an arm-and-aarch64-targets-only debug build and it took over
> half an hour, and this despite it skipping or cancelling 26 out
> of 58 tests!
> 
> I think that ~10 minutes runtime is reasonable. 30 is not;
> ideally no individual test would take more than a minute or so.
> 
> Here's the output showing where the time went. The first two tests take
> more than 10 minutes *each*. I think a good start would be to find
> a way of testing what they're testing that is less heavyweight.

While there is certainly value in testing with a real world "full" guest
OS, I think it is overkill as the default setup. I reckon we would get
80-90% of the value, by making our own test image repo, containing minimal
kernel builds for each machine/target combo we need, together with a tiny
initrd containing busybox. This could easily be made to boot in 1 second,
which would make 'make check-acceptance' waaaaay faster, considering how
many times we boot a guest. This would also solve our problem that we're
pointing to URLs to download these giant images, and those URLs
frequently break.

If we want the reassurance of running a full guest OS, we could wire
that up as 'make check-acceptance FULL_OS=1' and then set it up as a
nightly CI job to run post-merge as a sanity check, where speed does
not matter.
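
For illustration, the per-target recipe really is tiny; a sketch,
assuming a statically linked busybox and a matching kernel Image
(paths are placeholders):

  # assemble a minimal initrd: busybox plus a stub /init
  mkdir -p initrd/bin
  cp /path/to/busybox-static initrd/bin/busybox
  ln -s busybox initrd/bin/sh
  printf '#!/bin/sh\necho boot OK\nexec /bin/sh\n' > initrd/init
  chmod +x initrd/init
  (cd initrd && find . | cpio -o -H newc | gzip) > initrd.gz

  # no disk image or firmware needed for a plain boot test
  qemu-system-aarch64 -M virt -cpu max -nographic \
      -kernel Image -initrd initrd.gz -append "console=ttyAMA0"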


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
From: Alex Bennée @ 2021-08-02 12:47 UTC
  To: Daniel P. Berrangé
  Cc: Peter Maydell, Philippe Mathieu-Daudé, QEMU Developers, Cleber Rosa


Daniel P. Berrangé <berrange@redhat.com> writes:

> On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
>> "make check-acceptance" takes way way too long. I just did a run
>> on an arm-and-aarch64-targets-only debug build and it took over
>> half an hour, and this despite it skipping or cancelling 26 out
>> of 58 tests!
>> 
>> I think that ~10 minutes runtime is reasonable. 30 is not;
>> ideally no individual test would take more than a minute or so.
>> 
>> Here's the output showing where the time went. The first two tests take
>> more than 10 minutes *each*. I think a good start would be to find
>> a way of testing what they're testing that is less heavyweight.
>
> While there is certainly value in testing with a real world "full" guest
> OS, I think it is overkill as the default setup. I reckon we would get
> 80-90% of the value, by making our own test image repo, containing minimal
> kernel builds for each machine/target combo we need, together with a tiny
> initrd containing busybox. This could easily be made to boot in 1 second,
> which would make 'make check-acceptance' waaaaay faster, considering how
> many times we boot a guest. This would also solve our problem that we're
> pointing to URLs to download these giant images, and those URLs
> frequently break.

It's been discussed before, but previously the worry has been the hassle
of maintaining such images, along with such tediousness as ensuring GPL
compliance. We've outsourced that problem to upstream.

That said, we've got test jobs that run from our QEMU advent calendars,
and I previously added some for Xen testing from a stable Linaro file
server.

> If we want the reassurance of running a full guest OS, we could wire
> that up as 'make check-acceptance FULL_OS=1' and then set it up as a
> nightly CI job to run post-merge as a sanity check, where speed does
> not matter.
>
>
> Regards,
> Daniel


-- 
Alex Bennée



* Re: "make check-acceptance" takes way too long
From: Alex Bennée @ 2021-08-02 12:55 UTC
  To: Daniel P. Berrangé
  Cc: Peter Maydell, Philippe Mathieu-Daudé, QEMU Developers, Cleber Rosa


Daniel P. Berrangé <berrange@redhat.com> writes:

> On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
>> "make check-acceptance" takes way way too long. I just did a run
>> on an arm-and-aarch64-targets-only debug build and it took over
>> half an hour, and this despite it skipping or cancelling 26 out
>> of 58 tests!
>> 
>> I think that ~10 minutes runtime is reasonable. 30 is not;
>> ideally no individual test would take more than a minute or so.
>> 
>> Here's the output showing where the time went. The first two tests take
>> more than 10 minutes *each*. I think a good start would be to find
>> a way of testing what they're testing that is less heavyweight.
>
> While there is certainly value in testing with a real world "full" guest
> OS, I think it is overkill as the default setup. I reckon we would get
> 80-90% of the value, by making our own test image repo, containing minimal
> kernel builds for each machine/target combo we need, together with a tiny
> initrd containing busybox.

Also, another minor wrinkle for this test: because we are booting via
firmware, we need a proper disk image with a bootloader and the rest of
it, which involves more faff than a simple kernel+initrd (which is my
go-to format for the local zoo of testing images I have).

-- 
Alex Bennée



* Re: "make check-acceptance" takes way too long
From: Daniel P. Berrangé @ 2021-08-02 12:59 UTC
  To: Alex Bennée
  Cc: Peter Maydell, Philippe Mathieu-Daudé, QEMU Developers, Cleber Rosa

On Mon, Aug 02, 2021 at 01:47:37PM +0100, Alex Bennée wrote:
> 
> Daniel P. Berrangé <berrange@redhat.com> writes:
> 
> > On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
> >> "make check-acceptance" takes way way too long. I just did a run
> >> on an arm-and-aarch64-targets-only debug build and it took over
> >> half an hour, and this despite it skipping or cancelling 26 out
> >> of 58 tests!
> >> 
> >> I think that ~10 minutes runtime is reasonable. 30 is not;
> >> ideally no individual test would take more than a minute or so.
> >> 
> >> Here's the output showing where the time went. The first two tests take
> >> more than 10 minutes *each*. I think a good start would be to find
> >> a way of testing what they're testing that is less heavyweight.
> >
> > While there is certainly value in testing with a real world "full" guest
> > OS, I think it is overkill as the default setup. I reckon we would get
> > 80-90% of the value, by making our own test image repo, containing minimal
> > kernel builds for each machine/target combo we need, together with a tiny
> > initrd containing busybox. This could easily be made to boot in 1 second,
> > which would make 'make check-acceptance' waaaaay faster, considering how
> > many times we boot a guest. This would also solve our problem that we're
> > pointing to URLs to download these giant images, and those URLs
> > frequently break.
> 
> It's been discussed before, but previously the worry has been the hassle
> of maintaining such images, along with such tediousness as ensuring GPL
> compliance. We've outsourced that problem to upstream.

I don't recall discussing that directly - only discussions around
hosting images / files from other distros on our own infra, which
does indeed create a compliance burden.

This is why I suggested /strictly/ nothing more than kernel+busybox
built from source ourselves, probably using Debian cross compilers.

The busybox stuff would only need to be built once per architecture.
The kernel would potentially need more builds to cope with machine
board specific configs. We would not need to continually track new
releases - we can fix on specific kernel + busybox versions for as
long as they cope with the targets/archs we need.
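
The per-board kernel builds are mechanical once the configs are
chosen; e.g. for arm64, roughly:

  # cross-build a minimal arm64 kernel with Debian's cross compiler
  apt install gcc-aarch64-linux-gnu
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image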

I'd expect it all to be done in a GitLab repo, with a CI job to
publish the results and never any manual builds, so that we ensure
license compliance.

Of course the main problem is someone doing the leg work to get
such a system up & running initially to prove the concept.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
From: Peter Maydell @ 2021-08-02 13:00 UTC
  To: Alex Bennée
  Cc: Philippe Mathieu-Daudé, Daniel P. Berrangé,
	QEMU Developers, Cleber Rosa

On Mon, 2 Aug 2021 at 13:57, Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Daniel P. Berrangé <berrange@redhat.com> writes:
>
> > On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
> >> "make check-acceptance" takes way way too long. I just did a run
> >> on an arm-and-aarch64-targets-only debug build and it took over
> >> half an hour, and this despite it skipping or cancelling 26 out
> >> of 58 tests!
> >>
> >> I think that ~10 minutes runtime is reasonable. 30 is not;
> >> ideally no individual test would take more than a minute or so.
> >>
> >> Here's the output showing where the time went. The first two tests take
> >> more than 10 minutes *each*. I think a good start would be to find
> >> a way of testing what they're testing that is less heavyweight.
> >
> > While there is certainly value in testing with a real world "full" guest
> > OS, I think it is overkill as the default setup. I reckon we would get
> > 80-90% of the value, by making our own test image repo, containing minimal
> > kernel builds for each machine/target combo we need, together with a tiny
> > initrd containing busybox.
>
> Also, another minor wrinkle for this test: because we are booting via
> firmware, we need a proper disk image with a bootloader and the rest of
> it, which involves more faff than a simple kernel+initrd (which is my
> go-to format for the local zoo of testing images I have).

If you look at the log which has timestamps for the output, UEFI
takes some extra time but it's not too awful. The real timesink is
when it gets into userspace and systemd starts everything including
the kitchen sink.

-- PMM



* Re: "make check-acceptance" takes way too long
From: Daniel P. Berrangé @ 2021-08-02 13:00 UTC
  To: Alex Bennée
  Cc: Peter Maydell, Philippe Mathieu-Daudé, QEMU Developers, Cleber Rosa

On Mon, Aug 02, 2021 at 01:55:44PM +0100, Alex Bennée wrote:
> 
> Daniel P. Berrangé <berrange@redhat.com> writes:
> 
> > On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
> >> "make check-acceptance" takes way way too long. I just did a run
> >> on an arm-and-aarch64-targets-only debug build and it took over
> >> half an hour, and this despite it skipping or cancelling 26 out
> >> of 58 tests!
> >> 
> >> I think that ~10 minutes runtime is reasonable. 30 is not;
> >> ideally no individual test would take more than a minute or so.
> >> 
> >> Here's the output showing where the time went. The first two tests take
> >> more than 10 minutes *each*. I think a good start would be to find
> >> a way of testing what they're testing that is less heavyweight.
> >
> > While there is certainly value in testing with a real world "full" guest
> > OS, I think it is overkill as the default setup. I reckon we would get
> > 80-90% of the value, by making our own test image repo, containing minimal
> > kernel builds for each machine/target combo we need, together with a tiny
> > initrd containing busybox.
> 
> Also, another minor wrinkle for this test: because we are booting via
> firmware, we need a proper disk image with a bootloader and the rest of
> it, which involves more faff than a simple kernel+initrd (which is my
> go-to format for the local zoo of testing images I have).

Ok, so that would require a bootloader build too, which is likely going
to be arch specific, so probably the most tedious part.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
From: Daniel P. Berrangé @ 2021-08-02 13:04 UTC
  To: Peter Maydell
  Cc: Philippe Mathieu-Daudé,
	Alex Bennée, QEMU Developers, Cleber Rosa

On Mon, Aug 02, 2021 at 02:00:19PM +0100, Peter Maydell wrote:
> On Mon, 2 Aug 2021 at 13:57, Alex Bennée <alex.bennee@linaro.org> wrote:
> >
> >
> > Daniel P. Berrangé <berrange@redhat.com> writes:
> >
> > > On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
> > >> "make check-acceptance" takes way way too long. I just did a run
> > >> on an arm-and-aarch64-targets-only debug build and it took over
> > >> half an hour, and this despite it skipping or cancelling 26 out
> > >> of 58 tests!
> > >>
> > >> I think that ~10 minutes runtime is reasonable. 30 is not;
> > >> ideally no individual test would take more than a minute or so.
> > >>
> > >> Here's the output showing where the time went. The first two tests take
> > >> more than 10 minutes *each*. I think a good start would be to find
> > >> a way of testing what they're testing that is less heavyweight.
> > >
> > > While there is certainly value in testing with a real world "full" guest
> > > OS, I think it is overkill as the default setup. I reckon we would get
> > > 80-90% of the value, by making our own test image repo, containing minimal
> > > kernel builds for each machine/target combo we need, together with a tiny
> > > initrd containing busybox.
> >
> > Also, another minor wrinkle for this test: because we are booting via
> > firmware, we need a proper disk image with a bootloader and the rest of
> > it, which involves more faff than a simple kernel+initrd (which is my
> > go-to format for the local zoo of testing images I have).
> 
> If you look at the log which has timestamps for the output, UEFI
> takes some extra time but it's not too awful. The real timesink is
> when it gets into userspace and systemd starts everything including
> the kitchen sink.

Is it possible to pass the "s" kernel arg to systemd, to tell it to
boot in single-user mode so it skips most of userspace, while still
providing a useful test scenario in much less time?
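
(E.g., something along these lines for a direct-kernel boot; systemd
accepts "s", "single", or the explicit systemd.unit=rescue.target on
the kernel command line. File names here are placeholders:)

  qemu-system-aarch64 -M virt -cpu max -nographic \
      -kernel vmlinuz -initrd initrd.img \
      -append "console=ttyAMA0 root=/dev/vda2 s"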


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
From: Thomas Huth @ 2021-08-02 13:25 UTC
  To: Daniel P. Berrangé, Peter Maydell
  Cc: Cleber Rosa, Alex Bennée, Philippe Mathieu-Daudé,
	QEMU Developers

On 02/08/2021 15.04, Daniel P. Berrangé wrote:
> On Mon, Aug 02, 2021 at 02:00:19PM +0100, Peter Maydell wrote:
>> On Mon, 2 Aug 2021 at 13:57, Alex Bennée <alex.bennee@linaro.org> wrote:
>>>
>>>
>>> Daniel P. Berrangé <berrange@redhat.com> writes:
>>>
>>>> On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
>>>>> "make check-acceptance" takes way way too long. I just did a run
>>>>> on an arm-and-aarch64-targets-only debug build and it took over
>>>>> half an hour, and this despite it skipping or cancelling 26 out
>>>>> of 58 tests!
>>>>>
>>>>> I think that ~10 minutes runtime is reasonable. 30 is not;
>>>>> ideally no individual test would take more than a minute or so.
>>>>>
>>>>> Here's the output showing where the time went. The first two tests take
>>>>> more than 10 minutes *each*. I think a good start would be to find
>>>>> a way of testing what they're testing that is less heavyweight.
>>>>
>>>> While there is certainly value in testing with a real world "full" guest
>>>> OS, I think it is overkill as the default setup. I reckon we would get
>>>> 80-90% of the value, by making our own test image repo, containing minimal
>>>> kernel builds for each machine/target combo we need, together with a tiny
>>>> initrd containing busybox.
>>>
>>> Also, another minor wrinkle for this test: because we are booting via
>>> firmware, we need a proper disk image with a bootloader and the rest of
>>> it, which involves more faff than a simple kernel+initrd (which is my
>>> go-to format for the local zoo of testing images I have).
>>
>> If you look at the log which has timestamps for the output, UEFI
>> takes some extra time but it's not too awful. The real timesink is
>> when it gets into userspace and systemd starts everything including
>> the kitchen sink.
> 
> Is it possible to pass the "s" kernel arg to systemd, to tell it to
> boot in single-user mode so it skips most of userspace, while still
> providing a useful test scenario in much less time?

FWIW, we're doing something similar in
tests/acceptance/machine_s390_ccw_virtio.py already: the Debian job uses
"BOOT_DEBUG=3" to drop into a debug shell early, where we can already do
all the necessary tests, and the Fedora-based job does the same with
"rd.rescue".  Additionally, the Fedora job decompresses its initrd on
the host, which is way faster than doing that via TCG in the guest.
Both tricks saved us a significant amount of time in these jobs.
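
Roughly, the two tricks are (a sketch; file names are placeholders):

  # 1) decompress the initrd on the host instead of inside TCG
  xz -dk initrd.img.xz        # or zcat, depending on the compression

  # 2) stop in an early debug/rescue shell instead of a full userspace
  qemu-system-s390x -nographic \
      -kernel kernel.img -initrd initrd.img \
      -append "console=sclp0 BOOT_DEBUG=3"  # Debian; Fedora: rd.rescue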

  Thomas




* Re: "make check-acceptance" takes way too long
From: Thomas Huth @ 2021-08-02 13:27 UTC
  To: Daniel P. Berrangé, Alex Bennée
  Cc: Peter Maydell, Cleber Rosa, Philippe Mathieu-Daudé, QEMU Developers

On 02/08/2021 15.00, Daniel P. Berrangé wrote:
> On Mon, Aug 02, 2021 at 01:55:44PM +0100, Alex Bennée wrote:
>>
>> Daniel P. Berrangé <berrange@redhat.com> writes:
>>
>>> On Fri, Jul 30, 2021 at 04:12:27PM +0100, Peter Maydell wrote:
>>>> "make check-acceptance" takes way way too long. I just did a run
>>>> on an arm-and-aarch64-targets-only debug build and it took over
>>>> half an hour, and this despite it skipping or cancelling 26 out
>>>> of 58 tests!
>>>>
>>>> I think that ~10 minutes runtime is reasonable. 30 is not;
>>>> ideally no individual test would take more than a minute or so.
>>>>
>>>> Here's the output showing where the time went. The first two tests take
>>>> more than 10 minutes *each*. I think a good start would be to find
>>>> a way of testing what they're testing that is less heavyweight.
>>>
>>> While there is certainly value in testing with a real world "full" guest
>>> OS, I think it is overkill as the default setup. I reckon we would get
>>> 80-90% of the value, by making our own test image repo, containing minimal
>>> kernel builds for each machine/target combo we need, together with a tiny
>>> initrd containing busybox.
>>
>> Also, another minor wrinkle for this test: because we are booting via
>> firmware, we need a proper disk image with a bootloader and the rest of
>> it, which involves more faff than a simple kernel+initrd (which is my
>> go-to format for the local zoo of testing images I have).
> 
> Ok, so that would require a bootloader build too, which is likely going
> to be arch specific, so probably the most tedious part.

Maybe we could use buildroot for this. I've used buildroot for my images in 
the QEMU Advent Calendar, and it was really a great help. See also:

  http://people.redhat.com/~thuth/blog/general/2019/01/28/buildroot.html
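
Buildroot already ships qemu_*_defconfigs, so a minimal image is
roughly (a sketch):

  git clone https://git.buildroot.net/buildroot
  cd buildroot
  make qemu_aarch64_virt_defconfig   # kernel + busybox rootfs for -M virt
  make
  ls output/images/                  # Image, rootfs.ext4, ...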

  Thomas




* Re: "make check-acceptance" takes way too long
From: Gerd Hoffmann @ 2021-08-02 13:43 UTC
  To: Thomas Huth
  Cc: Peter Maydell, Daniel P. Berrangé,
	QEMU Developers, Philippe Mathieu-Daudé,
	Cleber Rosa, Alex Bennée

  Hi,

> > Ok, so that would require a bootloader build too, which is likely going
> > to be arch specific, so probably the most tedious part.
> 
> Maybe we could use buildroot for this. I've used buildroot for my images in
> the QEMU Advent Calendar, and it was really a great help. See also:
> 
>  http://people.redhat.com/~thuth/blog/general/2019/01/28/buildroot.html

/me played with buildroot too: https://gitlab.com/kraxel/br-kraxel

A systemd userspace does indeed take much time, but buildroot allows
you to build with busybox instead.  You can also easily include more
stuff, like pciutils, to run tests.

Bootloader support in buildroot varies a lot depending on the
architecture.  Modern standard platforms (x86, arm) are no problem, but
for older/exotic platforms (sparc, for example) you can't easily
generate bootable disk images.

take care,
  Gerd




* Re: "make check-acceptance" takes way too long
From: Cleber Rosa @ 2021-08-02 22:55 UTC
  To: Peter Maydell
  Cc: Daniel P. Berrange, Alex Bennée, QEMU Developers,
	Philippe Mathieu-Daudé

On Sat, Jul 31, 2021 at 4:33 PM Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Sat, 31 Jul 2021 at 19:43, Alex Bennée <alex.bennee@linaro.org> wrote:
> >
> >
> > Peter Maydell <peter.maydell@linaro.org> writes:
> >
> > > "make check-acceptance" takes way way too long. I just did a run
> > > on an arm-and-aarch64-targets-only debug build and it took over
> > > half an hour, and this despite it skipping or cancelling 26 out
> > > of 58 tests!
> > >
> > > I think that ~10 minutes runtime is reasonable. 30 is not;
> > > ideally no individual test would take more than a minute or so.
> > >
> > > Here's the output showing where the time went. The first two tests take
> > > more than 10 minutes *each*. I think a good start would be to find
> > > a way of testing what they're testing that is less heavyweight.
> > >
> > >  (01/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv2:
> > > PASS (629.74 s)
> > >  (02/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
> > > PASS (628.75 s)
> > >  (03/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_kvm:
> > > CANCEL: kvm accelerator does not seem to be available (1.18 s)
> >
> > For these tests, which purport to exercise the various GIC
> > configurations, I think we would be much better served by running
> > kvm-unit-tests, which at least tries to exercise all the features
> > rather than relying on the side effect of booting an entire OS.
>
> I think "can we boot Linux via UEFI?" is worth testing, as is
> "can we boot Linux and do at least some stuff in userspace?"
> (there's a lot of TCG that doesn't get exercised by pure kernel boot).
> We just need to find a guest OS that isn't so overweight it takes 10
> minutes...
>
> -- PMM
>

I think using alternative guests is absolutely the way to go here.  I
had that in mind in the past, so much so that I made sure to include
cirros[1] as one of the supported images[2] in avocado.utils.vmimage
(used in these tests above).  These tests are based on the LinuxTest
class[3], and they support the distro[4] and distro_version[5]
parameters.
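
For reference, fetching one of those supported images through
avocado.utils.vmimage looks roughly like this (a sketch; the parameter
values are just examples):

  from avocado.utils import vmimage

  # downloads (and caches) the image, returning a local copy to boot
  image = vmimage.get(name='CirrOS', version='0.5.2', arch='aarch64')
  print(image.path)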

But cirros doesn't ship with a fully capable cloud-init package, so I
deferred supporting it in avocado.utils.cloudinit, and thus supporting
cirros in those tests.  I gave that idea another try, and the results
are encouraging, with a reduction in runtime by almost a factor of 6.
On my system I get:

$ avocado run -p distro=fedora -p distro_version=31 \
    tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3
 (1/1) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
PASS (165.48 s)

And with cirros:

$ avocado run -p distro=cirros -p distro_version=0.5.2 \
    tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3
(1/1) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
PASS (28.80 s)

I'll work on posting the bits needed to have this working out of the
box, but it'll require new code on the Avocado side too (tentatively
targeting version 91.0).

Regards,
- Cleber.

[1] https://github.com/cirros-dev/cirros
[2] https://avocado-framework.readthedocs.io/en/90.0/guides/writer/libs/vmimage.html#supported-images
[3] https://qemu-project.gitlab.io/qemu/devel/testing.html#the-avocado-qemu-linuxtest-base-test-class
[4] https://qemu-project.gitlab.io/qemu/devel/testing.html#distro
[5] https://qemu-project.gitlab.io/qemu/devel/testing.html#distro-version




* Re: "make check-acceptance" takes way too long
  2021-07-30 15:12 "make check-acceptance" takes way too long Peter Maydell
                   ` (3 preceding siblings ...)
  2021-08-02  8:38 ` Daniel P. Berrangé
@ 2022-01-20 15:13 ` Peter Maydell
  2022-01-20 15:35   ` Philippe Mathieu-Daudé via
  2022-01-21  7:56   ` Thomas Huth
  2022-02-15 18:14 ` Alex Bennée
  5 siblings, 2 replies; 46+ messages in thread
From: Peter Maydell @ 2022-01-20 15:13 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Alex Bennée, Daniel P. Berrange, Philippe Mathieu-Daudé,
	Cleber Rosa

On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
>
> "make check-acceptance" takes way way too long. I just did a run
> on an arm-and-aarch64-targets-only debug build and it took over
> half an hour, and this despite it skipping or cancelling 26 out
> of 58 tests!
>
> I think that ~10 minutes runtime is reasonable. 30 is not;
> ideally no individual test would take more than a minute or so.
>
> Output saying where the time went. The first two tests take
> more than 10 minutes *each*. I think a good start would be to find
> a way of testing what they're testing that is less heavyweight.

Does anybody have some time to look at this? It makes
'check-acceptance' almost unusable for testing fixes locally...

-- PMM



* Re: "make check-acceptance" takes way too long
  2022-01-20 15:13 ` Peter Maydell
@ 2022-01-20 15:35   ` Philippe Mathieu-Daudé via
  2022-01-21  7:56   ` Thomas Huth
  1 sibling, 0 replies; 46+ messages in thread
From: Philippe Mathieu-Daudé via @ 2022-01-20 15:35 UTC (permalink / raw)
  To: Peter Maydell, QEMU Developers
  Cc: Daniel P. Berrange, Cleber Rosa, Alex Bennée, Beraldo Leal

Cc'ing Beraldo

On 20/1/22 16:13, Peter Maydell wrote:
> On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
>>
>> "make check-acceptance" takes way way too long. I just did a run
>> on an arm-and-aarch64-targets-only debug build and it took over
>> half an hour, and this despite it skipping or cancelling 26 out
>> of 58 tests!
>>
>> I think that ~10 minutes runtime is reasonable. 30 is not;
>> ideally no individual test would take more than a minute or so.
>>
>> Output saying where the time went. The first two tests take
>> more than 10 minutes *each*. I think a good start would be to find
>> a way of testing what they're testing that is less heavyweight.
> 
> Does anybody have some time to look at this? It makes
> 'check-acceptance' almost unusable for testing fixes locally...
> 
> -- PMM



* Re: "make check-acceptance" takes way too long
  2022-01-20 15:13 ` Peter Maydell
  2022-01-20 15:35   ` Philippe Mathieu-Daudé via
@ 2022-01-21  7:56   ` Thomas Huth
  2022-01-21 10:50     ` Markus Armbruster
  1 sibling, 1 reply; 46+ messages in thread
From: Thomas Huth @ 2022-01-21  7:56 UTC (permalink / raw)
  To: Peter Maydell, QEMU Developers
  Cc: Daniel P. Berrange, Beraldo Leal, Philippe Mathieu-Daudé,
	Wainer dos Santos Moschetta, Cleber Rosa, Alex Bennée

On 20/01/2022 16.13, Peter Maydell wrote:
> On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
>>
>> "make check-acceptance" takes way way too long. I just did a run
>> on an arm-and-aarch64-targets-only debug build and it took over
>> half an hour, and this despite it skipping or cancelling 26 out
>> of 58 tests!
>>
>> I think that ~10 minutes runtime is reasonable. 30 is not;
>> ideally no individual test would take more than a minute or so.
>>
>> Output saying where the time went. The first two tests take
>> more than 10 minutes *each*. I think a good start would be to find
>> a way of testing what they're testing that is less heavyweight.
> 
> Does anybody have some time to look at this? It makes
> 'check-acceptance' almost unusable for testing fixes locally...

We could start using the "SPEED" environment variable there, too, just like 
we already do in the qtests, so that slow tests only get executed with
SPEED=slow or SPEED=thorough ...?
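
Roughly something like this (just a sketch, reusing the LinuxTest base
class mentioned elsewhere in the thread; the names and the exact gating
condition are made up, not actual QEMU code):

  import os
  from unittest import skipUnless

  from avocado_qemu import LinuxTest  # base test class, as used by boot_linux.py

  class BootLinuxAarch64(LinuxTest):

      @skipUnless(os.getenv('SPEED') in ('slow', 'thorough'),
                  'runtime limited, set SPEED=slow to enable')
      def test_virt_tcg_gicv3(self):
          self.launch_and_wait()  # placeholder for the real test body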

  Thomas




* Re: "make check-acceptance" takes way too long
  2022-01-21  7:56   ` Thomas Huth
@ 2022-01-21 10:50     ` Markus Armbruster
  2022-01-21 11:33       ` Peter Maydell
  0 siblings, 1 reply; 46+ messages in thread
From: Markus Armbruster @ 2022-01-21 10:50 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Peter Maydell, Daniel P. Berrange, Beraldo Leal,
	Philippe Mathieu-Daudé,
	Wainer dos Santos Moschetta, QEMU Developers, Cleber Rosa,
	Alex Bennée

Thomas Huth <thuth@redhat.com> writes:

> On 20/01/2022 16.13, Peter Maydell wrote:
>> On Fri, 30 Jul 2021 at 16:12, Peter Maydell <peter.maydell@linaro.org> wrote:
>>>
>>> "make check-acceptance" takes way way too long. I just did a run
>>> on an arm-and-aarch64-targets-only debug build and it took over
>>> half an hour, and this despite it skipping or cancelling 26 out
>>> of 58 tests!
>>>
>>> I think that ~10 minutes runtime is reasonable. 30 is not;
>>> ideally no individual test would take more than a minute or so.
>>>
>>> Output saying where the time went. The first two tests take
>>> more than 10 minutes *each*. I think a good start would be to find
>>> a way of testing what they're testing that is less heavyweight.
>>
>> Does anybody have some time to look at this? It makes
>> 'check-acceptance' almost unusable for testing fixes locally...
>
> We could start using the "SPEED" environment variable there, too, just
> like we already do in the qtests, so that slow tests only get
> executed with SPEED=slow or SPEED=thorough ...?

No objection, but it's no replacement for looking into why these tests
are so slow.

The #1 reason for things being slow is not giving a damn :)




* Re: "make check-acceptance" takes way too long
  2022-01-21 10:50     ` Markus Armbruster
@ 2022-01-21 11:33       ` Peter Maydell
  2022-01-21 12:23         ` Alex Bennée
  0 siblings, 1 reply; 46+ messages in thread
From: Peter Maydell @ 2022-01-21 11:33 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Thomas Huth, Daniel P. Berrange, Beraldo Leal,
	Philippe Mathieu-Daudé,
	Wainer dos Santos Moschetta, QEMU Developers, Cleber Rosa,
	Alex Bennée

On Fri, 21 Jan 2022 at 10:50, Markus Armbruster <armbru@redhat.com> wrote:
> No objection, but it's no replacement for looking into why these tests
> are so slow.
>
> The #1 reason for things being slow is not giving a damn :)

See previous messages in the thread -- the test starts a
full-fat guest OS including UEFI boot, and it takes forever to
get to the login prompt because systemd is starting everything
including the kitchen sink.

-- PMM



* Re: "make check-acceptance" takes way too long
  2022-01-21 11:33       ` Peter Maydell
@ 2022-01-21 12:23         ` Alex Bennée
  2022-01-21 12:41           ` Thomas Huth
  2022-01-21 15:21           ` Daniel P. Berrangé
  0 siblings, 2 replies; 46+ messages in thread
From: Alex Bennée @ 2022-01-21 12:23 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Thomas Huth, Daniel P. Berrange, Beraldo Leal, QEMU Developers,
	Markus Armbruster, Wainer dos Santos Moschetta,
	Philippe Mathieu-Daudé,
	Cleber Rosa


Peter Maydell <peter.maydell@linaro.org> writes:

> On Fri, 21 Jan 2022 at 10:50, Markus Armbruster <armbru@redhat.com> wrote:
>> No objection, but it's no replacement for looking into why these tests
>> are so slow.
>>
>> The #1 reason for things being slow is not giving a damn :)
>
> See previous messages in the thread -- the test starts a
> full-fat guest OS including UEFI boot, and it takes forever to
> get to the login prompt because systemd is starting everything
> including the kitchen sink.

There has to be a half-way house between booting a kernel until it fails
to find a rootfs and running a full Ubuntu distro. Maybe just asking
systemd to reach "rescue.target" would be enough to show the disks are
up and userspace works.
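
For instance, with a direct kernel boot the target can be requested on
the kernel command line (a rough sketch; the file names are
placeholders):

  qemu-system-aarch64 -M virt -cpu max -nographic \
      -kernel vmlinuz -initrd initrd.img \
      -drive file=disk.img,if=virtio \
      -append "console=ttyAMA0 root=/dev/vda systemd.unit=rescue.target"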

Running the EFI firmware is probably useful coverage but I'm not sure
how one passes command line args to the guest in that approach? Do we
need to set a magic EFI variable?

>
> -- PMM


-- 
Alex Bennée



* Re: "make check-acceptance" takes way too long
  2022-01-21 12:23         ` Alex Bennée
@ 2022-01-21 12:41           ` Thomas Huth
  2022-01-21 15:21           ` Daniel P. Berrangé
  1 sibling, 0 replies; 46+ messages in thread
From: Thomas Huth @ 2022-01-21 12:41 UTC (permalink / raw)
  To: Alex Bennée, Peter Maydell
  Cc: Daniel P. Berrange, Beraldo Leal, QEMU Developers,
	Philippe Mathieu-Daudé,
	Wainer dos Santos Moschetta, Markus Armbruster, Cleber Rosa

On 21/01/2022 13.23, Alex Bennée wrote:
> 
> Peter Maydell <peter.maydell@linaro.org> writes:
> 
>> On Fri, 21 Jan 2022 at 10:50, Markus Armbruster <armbru@redhat.com> wrote:
>>> No objection, but it's no replacement for looking into why these tests
>>> are so slow.
>>>
>>> The #1 reason for things being slow is not giving a damn :)
>>
>> See previous messages in the thread -- the test starts a
>> full-fat guest OS including UEFI boot, and it takes forever to
>> get to the login prompt because systemd is starting everything
>> including the kitchen sink.
> 
> There has to be a half-way house between booting a kernel until it fails
> to find a rootfs and running a full Ubuntu distro. Maybe just asking
> systemd to reach "rescue.target" would be enough to show the disks are
> up and userspace works.

In case it helps: We're already doing that in
tests/avocado/machine_s390_ccw_virtio.py: For the Debian kernel, booting
with BOOT_DEBUG=3 worked out pretty well, and for the Fedora kernel
"rd.rescue" did the job.  Also, unpacking the Fedora ramdisk on the host
proved to be quite a bit faster than letting the guest unpack the
ramdisk on its own.

  Thomas




* Re: "make check-acceptance" takes way too long
  2022-01-21 12:23         ` Alex Bennée
  2022-01-21 12:41           ` Thomas Huth
@ 2022-01-21 15:21           ` Daniel P. Berrangé
  2022-01-25  9:20             ` Gerd Hoffmann
  2022-02-01  5:29             ` Cleber Rosa
  1 sibling, 2 replies; 46+ messages in thread
From: Daniel P. Berrangé @ 2022-01-21 15:21 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, QEMU Developers,
	Markus Armbruster, Wainer dos Santos Moschetta,
	Philippe Mathieu-Daudé,
	Cleber Rosa

On Fri, Jan 21, 2022 at 12:23:23PM +0000, Alex Bennée wrote:
> 
> Peter Maydell <peter.maydell@linaro.org> writes:
> 
> > On Fri, 21 Jan 2022 at 10:50, Markus Armbruster <armbru@redhat.com> wrote:
> >> No objection, but it's no replacement for looking into why these tests
> >> are so slow.
> >>
> >> The #1 reason for things being slow is not giving a damn :)
> >
> > See previous messages in the thread -- the test starts a
> > full-fat guest OS including UEFI boot, and it takes forever to
> > get to the login prompt because systemd is starting everything
> > including the kitchen sink.
> 
> There has to be a half-way house between booting a kernel until it fails
> to find a rootfs and running a full Ubuntu distro. Maybe just asking
> systemd to reach "rescue.target" would be enough to show the disks are
> up and userspace works.

Booting up full OS distros is useful, but at the same time I feel it
is too much to expect developers to do on any kind of
regular basis.

Ideally some decent amount of acceptance testing could be a standard
part of the 'make check', but that's impossible as long as we're
downloading large disk images or booting things that are very slow,
especially so with TCG.

IMHO the ideal scenario would be for us to have a kernel, initrd
containing just busybox tools for the key arch targets we care
about. Those could be used with direct kernel boot or stuffed
into a disk image. Either way, they would boot in ~1 second,
even with TCG, and would be able to execute simple shell scripts
to test a decent amount of QEMU functionality.
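
A hedged sketch of what building such an image could look like (the
kernel here is a placeholder, and busybox needs to be a static build):

  mkdir -p initrd/bin initrd/dev initrd/proc initrd/sys
  cp "$(command -v busybox)" initrd/bin/
  printf '%s\n' '#!/bin/busybox sh' \
      '/bin/busybox --install -s /bin' \
      'mount -t proc none /proc && mount -t sysfs none /sys' \
      'echo "guest userspace is up"  # simple functional checks go here' \
      'poweroff -f' > initrd/init
  chmod +x initrd/init
  (cd initrd && find . | cpio -o -H newc | gzip) > tiny-initrd.gz
  qemu-system-x86_64 -nographic -kernel vmlinuz \
      -initrd tiny-initrd.gz -append "console=ttyS0 rdinit=/init"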

It wouldn't eliminate the need to test with full OS, but it
would let us have some acceptance testing run as standard with
'make check' in a decently fast time.  It would then be less
critical if the more thorough full OS tests were somewhat
slower than we'd like. We could just leave those as a scheduled
job to run overnight post-merge. If they do detect any problems
post-merge, then write a dedicated test scenario to replicate it
under the minimal kernel/initrd acceptance test so it'll be
caught pre-merge in future.

> Running the EFI firmware is probably useful coverage but I'm not sure
> how one passes command line args to the guest in that approach? Do we
> need to set a magic EFI variable?

Same as with SeaBIOS - if you're booting off the guest disk then it's
the grub.conf that controls this; if you're booting with a direct
kernel from the host, then the QEMU command line.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
  2022-01-21 15:21           ` Daniel P. Berrangé
@ 2022-01-25  9:20             ` Gerd Hoffmann
  2022-02-01  6:31               ` Stefano Brivio
  2022-02-01 11:06               ` Kashyap Chamarthy
  2022-02-01  5:29             ` Cleber Rosa
  1 sibling, 2 replies; 46+ messages in thread
From: Gerd Hoffmann @ 2022-01-25  9:20 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, Markus Armbruster,
	Wainer dos Santos Moschetta, QEMU Developers, Cleber Rosa,
	Alex Bennée, Philippe Mathieu-Daudé

  Hi,

> IMHO the ideal scenario would be for us to have a kernel, initrd
> containing just busybox tools for the key arch targets we care
> about. Those could be used with direct kernel boot or stuffed
> into a disk image. Either way, they would boot in ~1 second,
> even with TCG, and would be able to execute simple shell scripts
> to test a decent amount of QEMU functionality.

I have some test images based on buildroot which are essentially that.
https://gitlab.com/kraxel/br-kraxel/

Still a significant download, but much smaller than a full fedora or
ubuntu cloud image and it boots much faster too.  Not down to only one
second though.

take care,
  Gerd




* Re: "make check-acceptance" takes way too long
  2022-01-21 15:21           ` Daniel P. Berrangé
  2022-01-25  9:20             ` Gerd Hoffmann
@ 2022-02-01  5:29             ` Cleber Rosa
  2022-02-01 17:01               ` Daniel P. Berrangé
  1 sibling, 1 reply; 46+ messages in thread
From: Cleber Rosa @ 2022-02-01  5:29 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, Markus Armbruster,
	Wainer dos Santos Moschetta, QEMU Developers, Alex Bennée,
	Philippe Mathieu-Daudé

On Fri, Jan 21, 2022 at 10:22 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Fri, Jan 21, 2022 at 12:23:23PM +0000, Alex Bennée wrote:
> >
> > Peter Maydell <peter.maydell@linaro.org> writes:
> >
> > > On Fri, 21 Jan 2022 at 10:50, Markus Armbruster <armbru@redhat.com> wrote:
> > >> No objection, but it's no replacement for looking into why these tests
> > >> are so slow.
> > >>
> > >> The #1 reason for things being slow is not giving a damn :)
> > >
> > > See previous messages in the thread -- the test starts a
> > > full-fat guest OS including UEFI boot, and it takes forever to
> > > get to the login prompt because systemd is starting everything
> > > including the kitchen sink.
> >
> > There has to be a half-way house between booting a kernel until it fails
> > to find a rootfs and running a full Ubuntu distro. Maybe just asking
> > systemd to reach "rescue.target" would be enough to show the disks are
> > up and userspace works.
>
> Booting up full OS distros is useful, but at the same time I feel it
> is too much to expect developers to do on any kind of
> regular basis.
>

Agreed.  The solution IMO can be as simple as having different "test
job profiles".

> Ideally some decent amount of acceptance testing could be a standard
> part of the 'make check', but that's impossible as long as we're
> downloading large disk images or booting things that are very slow,
> especially so with TCG.
>
> IMHO the ideal scenario would be for us to have a kernel, initrd
> containing just busybox tools for the key arch targets we care
> about. Those could be used with direct kernel boot or stuffed
> into a disk image. Either way, they would boot in ~1 second,
> even with TCG, and would be able to execute simple shell scripts
> to test a decent amount of QEMU functionality.
>

I see different use cases here:

A) Testing that QEMU can boot a full distro

For testing purposes, the more different subsystems the "boot" process
depends on, the better.  Currently the "boot_linux.py" tests require the
entire guest boot to complete, with networking configured and exercised.

B) Using something as a base OS for scripts (tests) to run on it

Here's where there's the most benefit in having a more lightweight distro
(or kernel + initrd).  But this requirement will also come in different
"optimal" sizes for different people.  Some of the existing tests require
not only a Fedora system, but a given version that has given capabilities.

For a sustainable, framework-like solution, tests should be able to determine
the guest they need with minimal setup from test writers[1].  If a Fedora-like
system is not needed, maybe a lightweight system like CirrOS[2] is enough.
CirrOS, unfortunately, cannot be used today as the distro in most of the
acceptance tests because the cloud-init mechanism used to configure the
networking is not currently supported, although there have been discussions
about implementing it[3].

> It wouldn't eliminate the need to test with full OS, but it
> would let us have some acceptance testing run as standard with
> 'make check' in a decently fast time.  It would then be less
> critical if the more thorough full OS tests were somewhat
> slower than we'd like. We could just leave those as a scheduled
> job to run overnight post-merge. If they do detect any problems
> post-merge, then write a dedicated test scenario to replicate it
> under the minimal kernel/initrd acceptance test so it'll be
> caught pre-merge in future.
>

Assuming this is about "Testing that QEMU can boot a full distro", I wouldn't
try to solve the problem by making the distro so slim that it becomes an
unrealistic system.

IMO the deal breaker with regard to test time can be solved more cheaply by
having and using KVM wherever these tests run, and not running them by
default otherwise.  With the tagging mechanism we should be able to set a
condition such as: "If using TCG, exclude tests that boot a full-blown
distro.  If using KVM, do not be picky about what gets booted".  Resulting
in something like:

$ avocado list -t accel:tcg,boots:-distro -t accel:kvm \
    ~/src/qemu/tests/avocado/{boot_linux.py,boot_linux_console.py}
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux.py:BootLinuxX8664.test_pc_i440fx_kvm
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux.py:BootLinuxX8664.test_pc_q35_kvm
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux.py:BootLinuxAarch64.test_virt_kvm
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_aarch64_xlnx_versal_virt
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_virt
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_emcraft_sf2
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_raspi2_uart0
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_exynos4210_initrd
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_cubieboard_initrd
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_cubieboard_sata
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_quanta_gsj
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_quanta_gsj_initrd
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_initrd
avocado-instrumented /home/cleber/src/qemu/tests/avocado/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_sd

Does that sound like something appropriate?

BTW, on the topic of "Using something as a base OS for scripts (tests) to run
on it", another possibility when using a full-blown OS would be to save its
initialized state and load it into memory for each test, saving the guest
boot time.  This should of course be done at the framework level and be
transparent to tests.
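
One possible mechanism for that (a hedged illustration only, using
QEMU's built-in qcow2 snapshots; the disk and snapshot names are
placeholders, and a real framework would hide all of this):

  # boot once, then save the initialized state from the QEMU monitor:
  #   (qemu) savevm booted
  # later runs resume from the snapshot instead of booting again:
  qemu-system-x86_64 -accel kvm -m 1G -nographic \
      -drive file=guest.qcow2,if=virtio -loadvm booted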

Best,
- Cleber.

[1] https://avocado-framework.readthedocs.io/en/94.0/guides/writer/libs/vmimage.html#supported-images
[2] https://launchpad.net/cirros
[3] https://github.com/cirros-dev/cirros/issues/67




* Re: "make check-acceptance" takes way too long
  2022-01-25  9:20             ` Gerd Hoffmann
@ 2022-02-01  6:31               ` Stefano Brivio
  2022-02-01  7:49                 ` Gerd Hoffmann
  2022-02-01  9:06                 ` Daniel P. Berrangé
  2022-02-01 11:06               ` Kashyap Chamarthy
  1 sibling, 2 replies; 46+ messages in thread
From: Stefano Brivio @ 2022-02-01  6:31 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Cleber Rosa, Alex Bennée,
	Philippe Mathieu-Daudé

Hi,

On Tue, 25 Jan 2022 10:20:11 +0100
Gerd Hoffmann <kraxel@redhat.com> wrote:

>   Hi,
> 
> > IMHO the ideal scenario would be for us to have a kernel, initrd
> > containing just busybox tools for the key arch targets we care
> > about. Those could be used with direct kernel boot or stuffed
> > into a disk image. Either way, they would boot in ~1 second,
> > even with TCG, and would be able to execute simple shell scripts
> > to test a decent amount of QEMU functionality.  
> 
> I have some test images based on buildroot which are essentially that.
> https://gitlab.com/kraxel/br-kraxel/
> 
> Still a significant download, but much smaller than a full fedora or
> ubuntu cloud image and it boots much faster too.  Not down to only one
> second though.

I'm not sure you can recycle something from it, but my (ugly) approach
to make this fast (for a different purpose -- I'm using qemu to run
tests in guests, not testing qemu) is to build an initramfs by copying
the host binaries I need (a shell, ip, jq) and recursively sourcing
libraries using ldd (I guess I mentioned it's ugly).
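
Roughly like this, as a sketch of the ldd trick only (mbuto below is
the real implementation):

  # copy a host binary plus every library ldd resolves for it
  # into an initramfs staging directory
  copy_with_libs() {
      for bin in "$@"; do
          install -D "$(command -v "$bin")" "initrd/bin/$bin"
          ldd "$(command -v "$bin")" |
              awk '$2 == "=>" && $3 ~ /^\// { print $3 } $1 ~ /^\// { print $1 }' |
              while read -r lib; do install -D "$lib" "initrd$lib"; done
      done
  }
  copy_with_libs sh ip jq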

No downloads, systemd, dracut, etc., guest boots in half a second
(x86_64 on x86_64, KVM -- no idea with TCG). Host kernel with a few
modules packed and loaded by a custom init script.

If you're interested, you can see it in operation at 3:11:17 (ah, the
sarcasm) of: https://passt.top/passt/about/#continuous-integration
(click on the "udp/pasta" anchor below, it's a few seconds in), or in
slow motion at 0:51 of https://passt.top/passt/about/#passt_2.

It's basically:

  git clone https://mbuto.lameexcu.se/mbuto/ && cd mbuto
  ./mbuto -c lz4 -p passt -f img # Profiles define sets of binaries
  ${qemu} -kernel /boot/vmlinuz-$(uname -r) -initrd img

-- 
Stefano




* Re: "make check-acceptance" takes way too long
  2022-02-01  6:31               ` Stefano Brivio
@ 2022-02-01  7:49                 ` Gerd Hoffmann
  2022-02-01  9:06                 ` Daniel P. Berrangé
  1 sibling, 0 replies; 46+ messages in thread
From: Gerd Hoffmann @ 2022-02-01  7:49 UTC (permalink / raw)
  To: Stefano Brivio
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Cleber Rosa, Alex Bennée,
	Philippe Mathieu-Daudé

  Hi,

> I'm not sure you can recycle something from it, but my (ugly) approach
> to make this fast (for a different purpose -- I'm using qemu to run
> tests in guests, not testing qemu) is to build an initramfs by copying
> the host binaries I need (a shell, ip, jq) and recursively sourcing
> libraries using ldd (I guess I mentioned it's ugly).

By design it's limited to the host architecture, but it might be good
enough depending on what you want to test ...

> No downloads, systemd, dracut, etc., guest boots in half a second
> (x86_64 on x86_64, KVM -- no idea with TCG). Host kernel with a few
> modules packed and loaded by a custom init script.

I've simply used dracut for that in the past.  Recursively sourcing
libraries is one of the things it does, so I didn't have to code it up
myself that way.  It used to work pretty well.

But these days dracut doesn't want to give you a shell prompt without
asking for a password beforehand, which is annoying if all you want to
do is run some simple tests, and there was no easy way to turn that
off last time I checked ...

take care,
  Gerd




* Re: "make check-acceptance" takes way too long
  2022-02-01  6:31               ` Stefano Brivio
  2022-02-01  7:49                 ` Gerd Hoffmann
@ 2022-02-01  9:06                 ` Daniel P. Berrangé
  2022-02-01 10:27                   ` Stefano Brivio
  1 sibling, 1 reply; 46+ messages in thread
From: Daniel P. Berrangé @ 2022-02-01  9:06 UTC (permalink / raw)
  To: Stefano Brivio
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, Markus Armbruster,
	Wainer dos Santos Moschetta, QEMU Developers, Gerd Hoffmann,
	Cleber Rosa, Alex Bennée, Philippe Mathieu-Daudé

On Tue, Feb 01, 2022 at 07:31:39AM +0100, Stefano Brivio wrote:
> Hi,
> 
> On Tue, 25 Jan 2022 10:20:11 +0100
> Gerd Hoffmann <kraxel@redhat.com> wrote:
> 
> >   Hi,
> > 
> > > IMHO the ideal scenario would be for us to have a kernel, initrd
> > > containing just busybox tools for the key arch targets we care
> > > about. Those could be used with direct kernel boot or stuffed
> > > into a disk image. Either way, they would boot in ~1 second,
> > > even with TCG, and would be able to execute simple shell scripts
> > > to test a decent amount of QEMU functionality.  
> > 
> > I have some test images based on buildroot which are essentially that.
> > https://gitlab.com/kraxel/br-kraxel/
> > 
> > Still a significant download, but much smaller than a full fedora or
> > ubuntu cloud image and it boots much faster too.  Not down to only one
> > second though.
> 
> I'm not sure you can recycle something from it, but my (ugly) approach
> to make this fast (for a different purpose -- I'm using qemu to run
> tests in guests, not testing qemu) is to build an initramfs by copying
> the host binaries I need (a shell, ip, jq) and recursively sourcing
> libraries using ldd (I guess I mentioned it's ugly).
> 
> No downloads, systemd, dracut, etc., guest boots in half a second
> (x86_64 on x86_64, KVM -- no idea with TCG). Host kernel with a few
> modules packed and loaded by a custom init script.

That is such a good idea, that it is exactly what I do too :-)

  https://gitlab.com/berrange/tiny-vm-tools/-/blob/master/make-tiny-image.py

it works incredibly well for the simple case of host-arch==guest-arch.

It could be made to work for foreign arch easily enough - just need
to have a foreign chroot lying around somewhere you can point it
to.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
  2022-02-01  9:06                 ` Daniel P. Berrangé
@ 2022-02-01 10:27                   ` Stefano Brivio
  2022-02-01 11:17                     ` Alex Bennée
  0 siblings, 1 reply; 46+ messages in thread
From: Stefano Brivio @ 2022-02-01 10:27 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, Markus Armbruster,
	Wainer dos Santos Moschetta, QEMU Developers, Gerd Hoffmann,
	Cleber Rosa, Alex Bennée, Philippe Mathieu-Daudé

On Tue, 1 Feb 2022 09:06:25 +0000
Daniel P. Berrangé <berrange@redhat.com> wrote:

> On Tue, Feb 01, 2022 at 07:31:39AM +0100, Stefano Brivio wrote:
> > Hi,
> > 
> > On Tue, 25 Jan 2022 10:20:11 +0100
> > Gerd Hoffmann <kraxel@redhat.com> wrote:
> >   
> > >   Hi,
> > >   
> > > > IMHO the ideal scenario would be for us to have a kernel, initrd
> > > > containing just busybox tools for the key arch targets we care
> > > > about. Those could be used with direct kernel boot or stuffed
> > > > into a disk image. Either way, they would boot in ~1 second,
> > > > even with TCG, and would be able to execute simple shell scripts
> > > > to test a decent amount of QEMU functionality.    
> > > 
> > > I have some test images based on buildroot which are essentially that.
> > > https://gitlab.com/kraxel/br-kraxel/
> > > 
> > > Still a significant download, but much smaller than a full fedora or
> > > ubuntu cloud image and it boots much faster too.  Not down to only one
> > > second though.  
> > 
> > I'm not sure you can recycle something from it, but my (ugly) approach
> > to make this fast (for a different purpose -- I'm using qemu to run
> > tests in guests, not testing qemu) is to build an initramfs by copying
> > the host binaries I need (a shell, ip, jq) and recursively sourcing
> > libraries using ldd (I guess I mentioned it's ugly).
> > 
> > No downloads, systemd, dracut, etc., guest boots in half a second
> > (x86_64 on x86_64, KVM -- no idea with TCG). Host kernel with a few
> > modules packed and loaded by a custom init script.  
> 
> That is such a good idea, that it is exactly what I do too :-)
> 
>   https://gitlab.com/berrange/tiny-vm-tools/-/blob/master/make-tiny-image.py
> 
> it works incredibly well for the simple case of host-arch==guest-arch.

Ah-ha, I feel better now. ;)

> It could be made to work for foreign arch easily enough - just need
> to have a foreign chroot lying around somewhere you can point it
> to.

By the way, stage3 archives from:

	https://www.gentoo.org/downloads/#other-arches

get quite close to it ...no kernel binaries though.

-- 
Stefano




* Re: "make check-acceptance" takes way too long
  2022-01-25  9:20             ` Gerd Hoffmann
  2022-02-01  6:31               ` Stefano Brivio
@ 2022-02-01 11:06               ` Kashyap Chamarthy
  2022-02-01 15:54                 ` Cleber Rosa
  1 sibling, 1 reply; 46+ messages in thread
From: Kashyap Chamarthy @ 2022-02-01 11:06 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Cleber Rosa, Alex Bennée,
	Philippe Mathieu-Daudé

On Tue, Jan 25, 2022 at 10:20:11AM +0100, Gerd Hoffmann wrote:
>   Hi,
> 
> > IMHO the ideal scenario would be for us to have a kernel, initrd
> > containing just busybox tools for the key arch targets we care
> > about. Those could be used with direct kernel boot or stuffed
> > into a disk image. Either way, they would boot in ~1 second,
> > even with TCG, and would be able to execute simple shell scripts
> > to test a decent amount of QEMU functionality.
> 
> I have some test images based on buildroot which are essentially that.
> https://gitlab.com/kraxel/br-kraxel/
> 
> Still a significant download, but much smaller than a full fedora or
> ubuntu cloud image and it boots much faster too.  Not down to only one
> second though.

Any objection to using CirrOS[1] images for boot-testing?  FWIW,
OpenStack upstream CI has been booting thousands of guests each day with
these for many years now.  It boots quickly, and also satisfies one of
Peter's other requirements: AArch64 images.

A downside of CirrOS is that it doesn't have a package manager, so
installing custom packages is a PITA.  The main use case of CirrOS
images is boot-testing only.

To make booting even quicker with CirrOS, disable the "metadata
service lookup" (it is queried 20 times at boot).  This can be done
trivially by making the following change to /etc/cirros-init/config
(in the disk image):

    - DATASOURCE_LIST="nocloud configdrive ec2"
    + DATASOURCE_LIST="nocloud"


[1] https://download.cirros-cloud.net/0.5.2/

        * * *

Another alternative that satisfies Peter's main requirements seems to be
Alpine Linux:

(1) It has a small footprint -- under 50MB;
(2) It supports x86 _and_ AArch64; and
(3) It has a proper package management system.


-- 
/kashyap




* Re: "make check-acceptance" takes way too long
  2022-02-01 10:27                   ` Stefano Brivio
@ 2022-02-01 11:17                     ` Alex Bennée
  2022-02-01 16:01                       ` Cleber Rosa
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Bennée @ 2022-02-01 11:17 UTC (permalink / raw)
  To: Stefano Brivio
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Gerd Hoffmann, Cleber Rosa,
	Philippe Mathieu-Daudé


Stefano Brivio <sbrivio@redhat.com> writes:

> On Tue, 1 Feb 2022 09:06:25 +0000
> Daniel P. Berrangé <berrange@redhat.com> wrote:
>
>> On Tue, Feb 01, 2022 at 07:31:39AM +0100, Stefano Brivio wrote:
>> > Hi,
>> > 
>> > On Tue, 25 Jan 2022 10:20:11 +0100
>> > Gerd Hoffmann <kraxel@redhat.com> wrote:
>> >   
>> > >   Hi,
>> > >   
>> > > > IMHO the ideal scenario would be for us to have a kernel, initrd
>> > > > containing just busybox tools for the key arch targets we care
>> > > > about. Those could be used with direct kernel boot or stuffed
>> > > > into a disk image. Either way, they would boot in ~1 second,
>> > > > even with TCG, and would be able to execute simple shell scripts
>> > > > to test a decent amount of QEMU functionality.    
>> > > 
>> > > I have some test images based on buildroot which are essentially that.
>> > > https://gitlab.com/kraxel/br-kraxel/
>> > > 
>> > > Still a significant download, but much smaller than a full fedora or
>> > > ubuntu cloud image and it boots much faster too.  Not down to only one
>> > > second though.  
>> > 
>> > I'm not sure you can recycle something from it, but my (ugly) approach
>> > to make this fast (for a different purpose -- I'm using qemu to run
>> > tests in guests, not testing qemu) is to build an initramfs by copying
>> > the host binaries I need (a shell, ip, jq) and recursively sourcing
>> > libraries using ldd (I guess I mentioned it's ugly).
>> > 
>> > No downloads, systemd, dracut, etc., guest boots in half a second
>> > (x86_64 on x86_64, KVM -- no idea with TCG). Host kernel with a few
>> > modules packed and loaded by a custom init script.  
>> 
>> That is such a good idea, that it is exactly what I do too :-)
>> 
>>   https://gitlab.com/berrange/tiny-vm-tools/-/blob/master/make-tiny-image.py
>> 
>> it works incredibly well for the simple case of host-arch==guest-arch.
>
> Ah-ha, I feel better now. ;)
>
>> It could be made to work for foreign arch easily enough - just need
>> to have a foreign chroot lying around somewhere you can point it
>> to.
>
> By the way, stage3 archives from:
>
> 	https://www.gentoo.org/downloads/#other-arches
>
> get quite close to it ...no kernel binaries though.

We have up to now tried really hard as a project to avoid building and
hosting our own binaries to avoid theoretical* GPL compliance issues.
This is why we've ended up relying so much on distros to build and host
binaries we can use. Most QEMU developers have their own personal zoo of
kernels and userspaces which they use for testing. I use custom kernels
with a buildroot user space in initramfs for example. We even use the
qemu advent calendar for a number of our avocado tests but we basically
push responsibility for GPL compliance to the individual developers in
that case.

*theoretical insofar as I suspect most people would be happy with a
reference to an upstream repo/commit and .config even if that is not to
the letter of the "offer of source code" required for true compliance.

-- 
Alex Bennée



* Re: "make check-acceptance" takes way too long
  2022-02-01 11:06               ` Kashyap Chamarthy
@ 2022-02-01 15:54                 ` Cleber Rosa
  0 siblings, 0 replies; 46+ messages in thread
From: Cleber Rosa @ 2022-02-01 15:54 UTC (permalink / raw)
  To: Kashyap Chamarthy
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Gerd Hoffmann, Alex Bennée,
	Philippe Mathieu-Daudé

On Tue, Feb 1, 2022 at 6:07 AM Kashyap Chamarthy <kchamart@redhat.com> wrote:
>
> On Tue, Jan 25, 2022 at 10:20:11AM +0100, Gerd Hoffmann wrote:
> >   Hi,
> >
> > > IMHO the ideal scenario would be for us to have a kernel, initrd
> > > containing just busybox tools for the key arch targets we care
> > > about. Those could be used with direct kernel boot or stuffed
> > > into a disk image. Either way, they would boot in ~1 second,
> > > even with TCG, and would be able to execute simple shell scripts
> > > to test a decent amount of QEMU functionality.
> >
> > I have some test images based on buildroot which are essentially that.
> > https://gitlab.com/kraxel/br-kraxel/
> >
> > Still a significant download, but much smaller than a full fedora or
> > ubuntu cloud image and it boots much faster too.  Not down to only one
> > second though.
>
> Any objection to using CirrOS[1] images for boot-testing?  FWIW,
> OpenStack upstream CI has been booting thousands of guests each day with
> these for many years now.  It boots quickly, and also satisfies one of
> Peter's other requirements: AArch64 images.
>

Even though I strongly support CirrOS (see my reply to Dan), I strongly
object to using it as the only OS in "boot tests" (that is, tests that QEMU
can fully boot a system).  The reason is that actual functional coverage is
reduced and detached from most real-world scenarios (I'm not aware of
CirrOS, Alpine and similar distros being used significantly in real-world
workloads).

This is the reasoning behind tests such as
"tests/avocado/boot_linux.py:BootLinuxX8664.test_pc_q35_kvm", which takes ~12
seconds to run on my 4-year-old laptop.

Depending on what one considers a booted system to be, the existing approach
of booting only a kernel / initrd, as in
"tests/avocado/boot_linux_console.py:BootLinuxConsole.test_x86_64_pc", is
also valid.  That takes around 0.4 seconds with KVM and ~2 seconds with TCG
on my system.

> A downside of CirrOS is that it doesn't have a package manager, so
> installing custom packages is a PITA.  The main use case of CirrOS
> images is boot-testing only.
>
> To make booting even quicker with CirrOS, disable the "metadata
> service lookup" (it is queried 20 times at boot).  This can be done
> trivially by making the following change to /etc/cirros-init/config
> (in the disk image):
>
>     - DATASOURCE_LIST="nocloud configdrive ec2"
>     + DATASOURCE_LIST="nocloud"
>

That's a good tip!

If CirrOS had better support for "nocloud"[1], the existing boot tests could
transparently use it.  For instance, you can currently do this:

$ ./tests/venv/bin/avocado vmimage get --distro=ubuntu --distro-version=20.04
The image was downloaded:
Provider Version Architecture File
ubuntu   20.04   amd64        /home/cleber/avocado/data/cache/by_location/ca6ab0fdb5d175bbf3dfc3d070511559f6eab449/ubuntu-20.04-server-cloudimg-amd64.img

$ ./tests/venv/bin/avocado run -p distro=ubuntu -p distro_version=20.04 \
    tests/avocado/boot_linux.py:BootLinuxX8664.test_pc_q35_kvm

The "-p distro=cirros" option works, but only up to downloading/preparing
the image; the lack of proper support for cloud-init/nocloud then breaks it.
I would be a bit reluctant to add another family of tests, or a third way of
dealing with guests, just because they implement custom behavior for
something that is supposed to be so standard at this point
(cloud-init / nocloud).

Regards,
- Cleber.

[1] https://github.com/cirros-dev/cirros/issues/67




* Re: "make check-acceptance" takes way too long
  2022-02-01 11:17                     ` Alex Bennée
@ 2022-02-01 16:01                       ` Cleber Rosa
  2022-02-01 16:19                         ` Daniel P. Berrangé
  2022-02-01 17:59                         ` Cédric Le Goater
  0 siblings, 2 replies; 46+ messages in thread
From: Cleber Rosa @ 2022-02-01 16:01 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Stefano Brivio, Gerd Hoffmann,
	Philippe Mathieu-Daudé

On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>
> We have up to now tried really hard as a project to avoid building and
> hosting our own binaries to avoid theoretical* GPL compliance issues.
> This is why we've ended up relying so much on distros to build and host
> binaries we can use. Most QEMU developers have their own personal zoo of
> kernels and userspaces which they use for testing. I use custom kernels
> with a buildroot user space in initramfs for example. We even use the
> qemu advent calendar for a number of our avocado tests but we basically
> push responsibility for GPL compliance to the individual developers in
> that case.
>
> *theoretical insofar as I suspect most people would be happy with a
> reference to an upstream repo/commit and .config even if that is not to
> the letter of the "offer of source code" required for true compliance.
>

Yes, it'd be fine (great, really!) if a lightweight distro (or
kernels/initrd) were to be maintained and identified as an "official"
QEMU pick.  Putting the binaries in the source tree, though, brings all
sorts of compliance issues.

The downloading of the images at test "setup time" is still a better approach,
given that tests will simply skip if the download is not possible.

- Cleber.




* Re: "make check-acceptance" takes way too long
  2022-02-01 16:01                       ` Cleber Rosa
@ 2022-02-01 16:19                         ` Daniel P. Berrangé
  2022-02-01 17:47                           ` Cleber Rosa
  2022-02-01 17:59                         ` Cédric Le Goater
  1 sibling, 1 reply; 46+ messages in thread
From: Daniel P. Berrangé @ 2022-02-01 16:19 UTC (permalink / raw)
  To: Cleber Rosa
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, Markus Armbruster,
	Wainer dos Santos Moschetta, QEMU Developers, Stefano Brivio,
	Gerd Hoffmann, Alex Bennée, Philippe Mathieu-Daudé

On Tue, Feb 01, 2022 at 11:01:43AM -0500, Cleber Rosa wrote:
> On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée <alex.bennee@linaro.org> wrote:
> >
> > We have up to now tried really hard as a project to avoid building and
> > hosting our own binaries to avoid theoretical* GPL compliance issues.
> > This is why we've ended up relying so much on distros to build and host
> > binaries we can use. Most QEMU developers have their own personal zoo of
> > kernels and userspaces which they use for testing. I use custom kernels
> > with a buildroot user space in initramfs for example. We even use the
> > qemu advent calendar for a number of our avocado tests but we basically
> > push responsibility for GPL compliance to the individual developers in
> > that case.
> >
> > > *theoretical insofar as I suspect most people would be happy with a
> > reference to an upstream repo/commit and .config even if that is not to
> > the letter of the "offer of source code" required for true compliance.
> >
> 
> > Yes, it'd be fine (great, really!) if a lightweight distro (or
> > kernels/initrd) were to be maintained and identified as an "official"
> > QEMU pick.  Putting the binaries in the source tree, though, brings all
> > sorts of compliance issues.

All that's really needed is to have the source + build recipes
in a separate git repo. A pipeline can build them periodically
and publish artifacts, which QEMU can then consume in its pipeline.
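
Sketched in GitLab CI terms (the repository layout, job and image names
are invented here, and buildroot's qemu_aarch64_virt_defconfig is just
one example target):

  # .gitlab-ci.yml in the hypothetical images repository
  build-tiny-images:
    image: buildroot/base
    script:
      - make qemu_aarch64_virt_defconfig
      - make
    artifacts:
      paths:
        - output/images/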

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
  2022-02-01  5:29             ` Cleber Rosa
@ 2022-02-01 17:01               ` Daniel P. Berrangé
  2022-02-01 17:59                 ` Cleber Rosa
  0 siblings, 1 reply; 46+ messages in thread
From: Daniel P. Berrangé @ 2022-02-01 17:01 UTC (permalink / raw)
  To: Cleber Rosa
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, QEMU Developers,
	Wainer dos Santos Moschetta, Markus Armbruster, Alex Bennée,
	Philippe Mathieu-Daudé

On Tue, Feb 01, 2022 at 12:29:56AM -0500, Cleber Rosa wrote:
> 
> Assuming this is about "Testing that QEMU can boot a full distro", I wouldn't
> try to solve the problem by making the distro so slim that it becomes an
> unrealistic system.

At a high level, our goal with acceptance (integration) testing is of
course to make sure that QEMU is correctly emulating a full virtual
machine, such that we have confidence that it can run real-world
operating systems.

There are a number of approaches to achieve that with varying
tradeoffs.

  - Testing with very specific tailored environments, running
    very specific userspace tools and minimal kernel setup.

    This can give us a pretty decent amount of coverage of
    the core features of the emulated environment in a tightly
    controlled amount of wallclock time. When it fails it ought
    to be relatively easy to understand and debug.

    The downside is that the QEMU code paths it hits are
    going to be fairly static.


  - Testing with arbitrary execution of real world OS images.

    I think of this as a bit of a scattergun approach. We're not
    trying to tightly control what runs; we actually want it
    to run a lot of arbitrarily complex and unusual stuff.

    This is going to be time consuming and is likely to have
    higher false positive failure rates. It is worthwhile
    because it is going to find the edge cases that you simply
    won't detect any other way, because you can't even imagine
    the problems that you're trying to uncover until you uncover
    them by accident with a real OS workload.

    It is kinda like fuzzing QEMU with an entire OS :-)


Both of these approaches are valid/complementary and we should
want to have both.

Any test suite is only going to find bugs though if it is
actually executed.

As a contributor though the former is stuff I'm likely to be
willing to run myself before sending patches, while the latter
is stuff I'm just always going to punt to merge testing infra.

We want to be wary of leaving too much to be caught at
merge-test time, because that puts a significant burden on the
person responsible for merging code in QEMU.  We need our
contributors to be motivated to run as much testing as possible
ahead of submitting patches.

> IMO the deal breaker with regard to test time can be solved more cheaply by
> having and using KVM wherever these tests run, and not running them by
> default otherwise.  With the tagging mechanism we should be able to set a
> condition such as: "If using TCG, exclude tests that boot a full-blown
> distro.  If using KVM, do not be picky about what gets booted".  Resulting
> in something like:

> Does that sound like something appropriate?

Depends on whether you only care about KVM or not. From the POV of QEMU
community CI, I think it is valid to want to test TCG functionality.


> BTW, on the topic of "Using something as a base OS for scripts (tests) to run
> on it", another possibility when using a full-blown OS would be to save its
> initialized state and load it into memory for each test, saving the guest
> boot time.  This should of course be done at the framework level and be
> transparent to tests.

There is *massive* virtue in simplicity & predictability for testing.

Building more complex infrastructure to pre-initialize caches with
techniques like saving running OS state is clever, but it is
certainly not simple or predictable. When that kind of stuff goes
wrong, whoever gets to debug it is going to have a really bad day.

This can be worth doing if there's no other viable approach to achieve
the desired end goal. I don't think that's the case for our integration
testing needs in QEMU though. There's masses of scope for us to explore
testing with minimal tailored guest images/environments, before we need
to resort to building more complex optimization strategies.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: "make check-acceptance" takes way too long
  2022-02-01 16:19                         ` Daniel P. Berrangé
@ 2022-02-01 17:47                           ` Cleber Rosa
  2022-02-01 18:03                             ` Alex Bennée
  2022-02-01 18:35                             ` Stefano Brivio
  0 siblings, 2 replies; 46+ messages in thread
From: Cleber Rosa @ 2022-02-01 17:47 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, Markus Armbruster,
	Wainer dos Santos Moschetta, QEMU Developers, Stefano Brivio,
	Gerd Hoffmann, Alex Bennée, Philippe Mathieu-Daudé

On Tue, Feb 1, 2022 at 11:20 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Tue, Feb 01, 2022 at 11:01:43AM -0500, Cleber Rosa wrote:
> > On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée <alex.bennee@linaro.org> wrote:
> > >
> > > We have up to now tried really hard as a project to avoid building and
> > > hosting our own binaries to avoid theoretical* GPL compliance issues.
> > > This is why we've ended up relying so much on distros to build and host
> > > binaries we can use. Most QEMU developers have their own personal zoo of
> > > kernels and userspaces which they use for testing. I use custom kernels
> > > with a buildroot user space in initramfs for example. We even use the
> > > qemu advent calendar for a number of our avocado tests but we basically
> > > push responsibility for GPL compliance to the individual developers in
> > > that case.
> > >
> > > *theoretical insofar as I suspect most people would be happy with a
> > > reference to an upstream repo/commit and .config even if that is not to
> > > the letter of the "offer of source code" required for true compliance.
> > >
> >
> > Yes, it'd be fine (great, really!) if a lightweight distro (or
> > kernels/initrd) were to be maintained and identified as an "official"
> > QEMU pick.  Putting the binaries in the source tree, though, brings all
> > sorts of compliance issues.
>
> All that's really needed is to have the source + build recipes
> in a separate git repo. A pipeline can build them periodically
> and publish artifacts, which QEMU can then consume in its pipeline.
>

I get your point, but then to acquire the artifacts one needs to:

1. depend on the CI system to deploy the artifacts in subsequent job
stages (a limitation IMO), OR
2. if outside the CI, implement a download/cache mechanism for those
artifacts, which gets us back to the previous point, only with a
different distro/kernel+initrd.

With that, the value proposition has to be in the characteristics of the
distro/kernel+initrd itself. It has to have enough differentiation to
justify the development/maintenance work, as opposed to using existing
ones.

FWIW, my non-scientific boot timings on my 3+ year-old machine:

* CirrOS x86_64+KVM: ~2 seconds
* CirrOS aarch64+TCG: ~20 seconds
* Fedora kernel+initrd aarch64+TCG
(tests/avocado/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt):
~1 second

I would imagine that CirrOS aarch64+KVM on an adequate system would perform
similarly to CirrOS x86_64+KVM.  We can develop/maintain a slimmer distro,
and/or set the default test workloads where they perform best.  The
development cost of the latter is quite small.  I've added a missing bit to
the filtering capabilities in Avocado[1] and will send a proposal to QEMU
along these lines.

Regards,
- Cleber.

[1] https://github.com/avocado-framework/avocado/pull/5245




* Re: "make check-acceptance" takes way too long
  2022-02-01 16:01                       ` Cleber Rosa
  2022-02-01 16:19                         ` Daniel P. Berrangé
@ 2022-02-01 17:59                         ` Cédric Le Goater
  1 sibling, 0 replies; 46+ messages in thread
From: Cédric Le Goater @ 2022-02-01 17:59 UTC (permalink / raw)
  To: Cleber Rosa, Alex Bennée
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Stefano Brivio, Gerd Hoffmann,
	Philippe Mathieu-Daudé

On 2/1/22 17:01, Cleber Rosa wrote:
> On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>>
>> We have up to now tried really hard as a project to avoid building and
>> hosting our own binaries to avoid theoretical* GPL compliance issues.
>> This is why we've ended up relying so much on distros to build and host
>> binaries we can use. Most QEMU developers have their own personal zoo of
>> kernels and userspaces which they use for testing. I use custom kernels
>> with a buildroot user space in initramfs for example. We even use the
>> qemu advent calendar for a number of our avocado tests but we basically
>> push responsibility for GPL compliance to the individual developers in
>> that case.
>>
>> *theoretical insofar as I suspect most people would be happy with a
>> reference to an upstream repo/commit and .config even if that is not to
>> the letter of the "offer of source code" required for true compliance.
>>
> 
> Yes, it'd be fine (great, really!) if a lightweight distro (or
> kernels/initrd) were to be maintained and identified as an "official"
> QEMU pick.  Putting the binaries in the source tree, though, brings all
> sorts of compliance issues.

FWIW, predating Avocado, I have been using Linux+buildroot images for
PPC, running a simple "boot-net-login-poweroff" script for each
machine/CPU combination QEMU can test (a rough sketch of the idea
follows the results below):


ref405ep : Linux /init login DONE (PASSED)
bamboo : Linux /init net login DONE (PASSED)
sam460ex : Linux Linux /init net login DONE (PASSED)
g3beige-604 : FW Linux Linux /init net login DONE (PASSED)
g3beige-g3 : FW Linux Linux /init net login DONE (PASSED)
mac99-g4 : FW Linux Linux /init net login DONE (PASSED)
mac99-7447 : FW Linux Linux /init net login DONE (PASSED)
mac99-7448 : FW Linux Linux /init net login DONE (PASSED)
mac99-7450 : FW Linux Linux /init net login DONE (PASSED)
mpc8544ds : Linux /init net login DONE (PASSED)
e500mc : Linux /init net login DONE (PASSED)
40p : FW login DONE (PASSED) # this one is a special case
e5500 : Linux /init net login DONE (PASSED)
e6500 : Linux /init net login DONE (PASSED)
g5-32 : FW Linux Linux /init net login DONE (PASSED)
g5-64 : FW Linux Linux /init net login DONE (PASSED)
pseries-970 : FW Linux Linux /init net login DONE (PASSED)
pseries-970mp : FW Linux Linux /init net login DONE (PASSED)
pseries-POWER5+ : FW Linux Linux /init net login DONE (PASSED)
pseries : FW Linux Linux /init net login DONE (PASSED)
pseriesle8 : FW Linux Linux /init net login DONE (PASSED)
pseriesle9 : FW Linux Linux /init net login DONE (PASSED)
pseriesle10 : FW Linux Linux /init net login DONE (PASSED)
powernv8 : FW Linux /init net login DONE (PASSED)
powernv9 : FW Linux /init net login DONE (PASSED)
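
A rough sketch of the idea, driving the serial console with pexpect;
the machine, image paths and login below are placeholders, not the
actual script or images:

  import pexpect

  # Boot one board, watch the console for the expected boot stages,
  # log in, then power off cleanly.
  child = pexpect.spawn(
      "qemu-system-ppc -M bamboo -kernel vmlinux "
      "-initrd rootfs.cpio -nographic", timeout=120)
  for stage in ["Linux version", "/init", "login:"]:
      child.expect(stage)          # mirrors the stages in the lines above
  child.sendline("root")
  child.expect("# ")
  child.sendline("poweroff")
  child.expect(pexpect.EOF)
  print("bamboo : Linux /init login DONE (PASSED)")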

Images are here :

   https://github.com/legoater/qemu-ppc-boot/tree/main/buildroot

Buildroot has a test suite using QEMU, and they have been nice enough
to take new QEMU PPC boards.

Thanks,

C.


> 
> The downloading of the images at test "setup time" is still a better approach,
> given that tests will simply skip if the download is not possible.
> 
> - Cleber.
> 
> 




* Re: "make check-acceptance" takes way too long
  2022-02-01 17:01               ` Daniel P. Berrangé
@ 2022-02-01 17:59                 ` Cleber Rosa
  0 siblings, 0 replies; 46+ messages in thread
From: Cleber Rosa @ 2022-02-01 17:59 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Peter Maydell, Thomas Huth, Beraldo Leal, QEMU Developers,
	Wainer dos Santos Moschetta, Markus Armbruster, Alex Bennée,
	Philippe Mathieu-Daudé

On Tue, Feb 1, 2022 at 12:01 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Tue, Feb 01, 2022 at 12:29:56AM -0500, Cleber Rosa wrote:
> >
> > Assuming this is about "Testing that QEMU can boot a full distro", I
> > wouldn't try to solve the problem by making the distro so slim that
> > it gets to the point of becoming an unrealistic system.
>
> At a high level our goal with acceptance (integration) testing is of
> course to make sure that QEMU is correctly emulating a full virtual
> machine, such that we have confidence that it can run real world
> operating systems.
>
> There are a number of approaches to achieve that with varying
> tradeoffs.
>
>   - Testing with very specific tailored environments, running
>     very specific userspace tools and minimal kernel setup.
>
>     This can give us a pretty decent amount of coverage of
>     the core features of the emulated environment in a tightly
>     controlled amount of wallclock time. When it fails it ought
>     to be relatively easy to understand and debug.
>
>     The downside is that the QEMU code paths it hits are
>     going to be fairly static.
>
>
>   - Testing with arbitrary execution of real world OS images.
>
>     I think of this as a bit of a scattergun approach. We're not
>     trying to tightly control what runs; we actually want it
>     to run a lot of arbitrarily complex and unusual stuff.
>
>     This is going to be time consuming and is likely to have
>     higher false positive failure rates. It is worthwhile
>     because it is going to find the edge cases that you simply
>     won't detect any other way, because you can't even imagine
>     the problems that you're trying to uncover until you uncover
>     them by accident with a real OS workload.
>
>     It is kinda like fuzzing QEMU with an entire OS :-)
>
>
> Both of these approaches are valid/complementary and we should
> want to have both.
>

Agreed.

> Any test suite is only going to find bugs though if it is
> actually executed.
>
> As a contributor though the former is stuff I'm likely to be
> willing to run myself before sending patches, while the latter
> is stuff I'm just always going to punt to merge testing infra.
>
> We want to be wary of leaving too much to be caught at time
> of merge tests, because that puts a significant burden on the
> person responsible for merging code in QEMU.  We need our
> contributors to be motivated to run as much testing as possible
> ahead of submitting patches.
>
> > IMO the deal breaker with regard to test time can be solved more cheaply by
> > having and using KVM where these tests will run, and not running them by
> > default otherwise.  With the tagging mechanism we should be able to set a
> > condition such as: "If using TCG, exclude tests that boot a full blown distro.
> > If using KVM, do not criticize what gets booted".  Resulting in something
> > like:
>
> > Does that sound like something appropriate?
>
> Depends whether you only care about KVM or not. From the POV of QEMU
> community CI, I think it is valid to want to test TCG functionality.
>
>

Maybe I wasn't clear enough.  I am suggesting that tests using TCG do
not run by default (on a "make check-avocado") if, and only if, they
are booting a complete OS.  That would bring the time to run "make
check-avocado" down to a fifth of its current time.

And to be clear, there are a *lot* of tests running TCG, but they
happen to boot kernel+initrd by default, so we're not abandoning
TCG at all.

Also, we can have another target, or an option as suggested by others in
this thread, where those lengthy TCG-based full distro boot tests get
to run.

> > BTW, on the topic of "Using something as a base OS for scripts (tests) to run
> > on it", another possibility for using full blown OS would be to save
> > their initialized
> > state, and load it to memory for each test, saving the guest boot time.  This
> > should of course be done at the framework level and transparent to tests.
>
> There is *massive* virtue in simplicity & predictability for testing.
>
> Building more complex infrastructure to pre-initialize caches with
> techniques like saving running OS state is clever, but it is
> certainly not simple or predictable. When that kind of stuff goes
> wrong, whoever gets to debug it is going to have a really bad day.
>
> This can be worth doing if there's no other viable approach to achieve
> the desired end goal. I don't think that's the case for our integration
> testing needs in QEMU though. There's masses of scope for us to explore
> testing with minimal tailored guest images/environments, before we need
> to resort to building more complex optimization strategies.
>

I'm aware of that, and second it.  Avocado-VT tests transitioned from a
model where VMs would, by default, be reused across tests, to "start
every VM from scratch".  But users can still opt in to the "reuse VM"
model if they feel the tradeoff is worth it (a sketch of what that
could look like follows).
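
As a sketch of what that opt-in could look like in QEMU terms, using
the real savevm/loadvm internal snapshots driven over QMP; the socket
path and snapshot tag are placeholders, and this assumes QEMU was
started with "-qmp unix:/tmp/qmp.sock,server,nowait" and a qcow2 disk
(which internal snapshots require):

  import json, socket

  def hmp(cmd, path="/tmp/qmp.sock"):
      # Send one HMP command through QMP's human-monitor-command.
      s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      s.connect(path)
      s.recv(4096)                                  # QMP greeting banner
      for msg in ({"execute": "qmp_capabilities"},
                  {"execute": "human-monitor-command",
                   "arguments": {"command-line": cmd}}):
          s.sendall((json.dumps(msg) + "\r\n").encode())
          reply = s.recv(4096)
      s.close()
      return reply

  hmp("savevm booted")   # once, after the guest has fully booted
  hmp("loadvm booted")   # then at the start of each test, skipping the boot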

Best regards!
- Cleber




* Re: "make check-acceptance" takes way too long
  2022-02-01 17:47                           ` Cleber Rosa
@ 2022-02-01 18:03                             ` Alex Bennée
  2022-02-01 19:04                               ` Cleber Rosa
  2022-02-01 18:35                             ` Stefano Brivio
  1 sibling, 1 reply; 46+ messages in thread
From: Alex Bennée @ 2022-02-01 18:03 UTC (permalink / raw)
  To: Cleber Rosa
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Stefano Brivio, Gerd Hoffmann,
	Philippe Mathieu-Daudé


Cleber Rosa <crosa@redhat.com> writes:

> On Tue, Feb 1, 2022 at 11:20 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
>>
>> On Tue, Feb 01, 2022 at 11:01:43AM -0500, Cleber Rosa wrote:
>> > On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>> > >
>> > > We have up to now tried really hard as a project to avoid building and
>> > > hosting our own binaries to avoid theoretical* GPL compliance issues.
>> > > This is why we've ended up relying so much on distros to build and host
>> > > binaries we can use. Most QEMU developers have their own personal zoo of
>> > > kernels and userspaces which they use for testing. I use custom kernels
>> > > with a buildroot user space in initramfs for example. We even use the
>> > > qemu advent calendar for a number of our avocado tests but we basically
>> > > push responsibility for GPL compliance to the individual developers in
>> > > that case.
>> > >
>> > > *theoretical in so far I suspect most people would be happy with a
>> > > reference to an upstream repo/commit and .config even if that is not to
>> > > the letter of the "offer of source code" required for true compliance.
>> > >
>> >
>> > Yes, it'd be fine (great, really!) if a lightweight distro (or
>> > kernels/initrd) were to be maintained and identified as an "official"
>> > QEMU pick.  Putting the binaries in the source tree, though, brings
>> > all sorts of compliance issues.
>>
>> All that's really needed is to have the source + build recipes
>> in a separate git repo. A pipeline can build them periodically
>> and publish artifacts, which QEMU can then consume in its pipeline.
>>
>
> I get your point, but then to acquire the artifacts one needs to:
>
> 1. depend on the CI system to deploy the artifacts in subsequent job
> stages (a limitation IMO), OR
> 2. if outside the CI, implement a download/cache mechanism for those
> artifacts, which gets us back to the previous point, only with a
> different distro/kernel+initrd.
>
> With that, the value proposition has to lie in the characteristics of
> the distro/kernel+initrd itself. It has to offer enough differentiation
> to justify the development/maintenance work, as opposed to using
> existing ones.
>
> FWIW, my non-scientific tests booting on my 3+ year-old machine:
>
> * CirrOS x86_64+KVM: ~2 seconds
> * CirrOS aarch64+TCG: ~20 seconds
> * Fedora kernel+initrd aarch64+TCG
> (tests/avocado/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt):
> ~1 second
>
> I would imagine that CirrOS aarch64+KVM on an adequate system would be
> similar to the CirrOS x86_64+KVM.  We can develop/maintain a slimmer
> distro, and/or set the default test workloads where they perform the
> best.  The development cost of the latter is quite small.  I've added
> a missing bit to the filtering capabilities in Avocado[1] and will
> send a proposal to QEMU along these lines.

FWIW the bit I'm interested in for the slow test in question here is
that it does a full boot through the EDK2 firmware (EL3->EL2->EL1). I'm
not overly concerned about what gets run in userspace, as long as
something runs that shows EL0 can execute and task switching works. I
suspect most of the userspace startup of a full distro basically just
ends up testing the same code paths over and over again.

>
> Regards,
> - Cleber.
>
> [1] https://github.com/avocado-framework/avocado/pull/5245


-- 
Alex Bennée



* Re: "make check-acceptance" takes way too long
  2022-02-01 17:47                           ` Cleber Rosa
  2022-02-01 18:03                             ` Alex Bennée
@ 2022-02-01 18:35                             ` Stefano Brivio
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Brivio @ 2022-02-01 18:35 UTC (permalink / raw)
  To: Cleber Rosa
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Gerd Hoffmann, Alex Bennée,
	Philippe Mathieu-Daudé

On Tue, 1 Feb 2022 12:47:48 -0500
Cleber Rosa <crosa@redhat.com> wrote:

> On Tue, Feb 1, 2022 at 11:20 AM Daniel P. Berrangé
> <berrange@redhat.com> wrote:
> >
> > On Tue, Feb 01, 2022 at 11:01:43AM -0500, Cleber Rosa wrote:  
> > > On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée
> > > <alex.bennee@linaro.org> wrote:  
> > > >
> > > > We have up to now tried really hard as a project to avoid
> > > > building and hosting our own binaries to avoid theoretical* GPL
> > > > compliance issues. This is why we've ended up relying so much
> > > > on distros to build and host binaries we can use. Most QEMU
> > > > developers have their own personal zoo of kernels and
> > > > userspaces which they use for testing. I use custom kernels
> > > > with a buildroot user space in initramfs for example. We even
> > > > use the qemu advent calendar for a number of our avocado tests
> > > > but we basically push responsibility for GPL compliance to the
> > > > individual developers in that case.
> > > >
> > > > *theoretical in so far I suspect most people would be happy
> > > > with a reference to an upstream repo/commit and .config even if
> > > > that is not to the letter of the "offer of source code"
> > > > required for true compliance. 
> > >
> > > Yes, it'd be fine (great, really!) if a lightweight distro (or
> > > kernels/initrd) were to be maintained and identified as an
> > > "official" QEMU pick.  Putting the binaries in the source tree,
> > > though, brings all sorts of compliance issues.
> >
> > All that's really needed is to have the source + build recipes
> > in a separate git repo. A pipeline can build them periodically
> > and publish artifacts, which QEMU can then consume in its pipeline.
> >  
> 
> I get your point, but then to acquire the artifacts one needs to:
> 
> 1. depend on the CI system to deploy the artifacts in subsequent job
> stages (a limitation IMO), OR
> 2. if outside the CI, implement a download/cache mechanism for those
> artifacts, which gets us back to the previous point, only with a
> different distro/kernel+initrd.
> 
> With that, the value proposition has to lie in the characteristics of
> the distro/kernel+initrd itself. It has to offer enough differentiation
> to justify the development/maintenance work, as opposed to using
> existing ones.
>
> FWIW, my non-scientific tests booting on my 3+ year-old machine:
>
> * CirrOS x86_64+KVM: ~2 seconds
> * CirrOS aarch64+TCG: ~20 seconds
> * Fedora kernel+initrd aarch64+TCG
> (tests/avocado/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt):
> ~1 second
> 
> I would imagine that CirrOS aarch64+KVM on an adequate system would be
> similar to the CirrOS x86_64+KVM.  We can develop/maintain a slimmer
> distro, and/or set the default test workloads where they perform the
> best.  The development cost of the latter is quite small.  I've added
> a missing bit to the filtering capabilities in Avocado[1] and will
> send a proposal to QEMU along these lines.

I'm not sure how boot/download times compare (I haven't measured) with
CirrOS or Fedora, but when I recently needed a quick test on SPARC (TCG),
something along these lines worked quite reliably for me:

  # Grab the minimal installer ISO plus a stage3 (base userspace) tarball
  wget https://bouncer.gentoo.org/fetch/root/all/releases/sparc/autobuilds/20220129T013513Z/install-sparc64-minimal-20220129T013513Z.iso
  wget https://bouncer.gentoo.org/fetch/root/all/releases/sparc/autobuilds/20220129T013513Z/stage3-sparc64-20220129T013513Z.tar.xz
  xz -d stage3-sparc64-20220129T013513Z.tar.xz
  # Pack the stage3 tarball into a disk image (virt-make-fs is from libguestfs)
  virt-make-fs stage3-sparc64-20220129T013513Z.tar sparc.img
  # Boot the installer with the stage3 image attached as the hard disk
  qemu-system-sparc64 -m 2048 -cdrom ../install-sparc64-minimal-20220129T013513Z.iso -boot d -hda sparc.img -net nic,model=sunhme ...

...same approach worked easily with ppc and aarch64. I quickly
considered Alpine (smaller downloads), but it doesn't offer chroot
environments for as many architectures.

I guess the unique thing about "source-based" distributions is that
somewhat uncommon architectures are less likely to disappear, because
there is no burden of maintaining the full set of binary packages.

-- 
Stefano




* Re: "make check-acceptance" takes way too long
  2022-02-01 18:03                             ` Alex Bennée
@ 2022-02-01 19:04                               ` Cleber Rosa
  0 siblings, 0 replies; 46+ messages in thread
From: Cleber Rosa @ 2022-02-01 19:04 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Peter Maydell, Thomas Huth, Daniel P. Berrangé,
	Beraldo Leal, Markus Armbruster, Wainer dos Santos Moschetta,
	QEMU Developers, Stefano Brivio, Gerd Hoffmann,
	Philippe Mathieu-Daudé

On Tue, Feb 1, 2022 at 1:06 PM Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Cleber Rosa <crosa@redhat.com> writes:
>
> > On Tue, Feb 1, 2022 at 11:20 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
> >>
> >> On Tue, Feb 01, 2022 at 11:01:43AM -0500, Cleber Rosa wrote:
> >> > On Tue, Feb 1, 2022 at 6:25 AM Alex Bennée <alex.bennee@linaro.org> wrote:
> >> > >
> >> > > We have up to now tried really hard as a project to avoid building and
> >> > > hosting our own binaries to avoid theoretical* GPL compliance issues.
> >> > > This is why we've ended up relying so much on distros to build and host
> >> > > binaries we can use. Most QEMU developers have their own personal zoo of
> >> > > kernels and userspaces which they use for testing. I use custom kernels
> >> > > with a buildroot user space in initramfs for example. We even use the
> >> > > qemu advent calendar for a number of our avocado tests but we basically
> >> > > push responsibility for GPL compliance to the individual developers in
> >> > > that case.
> >> > >
> >> > > *theoretical in so far I suspect most people would be happy with a
> >> > > reference to an upstream repo/commit and .config even if that is not to
> >> > > the letter of the "offer of source code" required for true compliance.
> >> > >
> >> >
> >> > Yes, it'd be fine (great, really!) if a lightweight distro (or
> >> > kernels/initrd) were to be maintained and identified as an "official"
> >> > QEMU pick.  Putting the binaries in the source tree, though, brings
> >> > all sorts of compliance issues.
> >>
> >> All that's really needed is to have the source + build recipes
> >> in a separate git repo. A pipeline can build them periodically
> >> and publish artifacts, which QEMU can then consume in its pipeline.
> >>
> >
> > I get your point, but then to acquire the artifacts one needs to:
> >
> > 1. depend on the CI system to deploy the artifacts in subsequent job
> > stages (a limitation IMO), OR
> > 2. if outside the CI, implement a download/cache mechanism for those
> > artifacts, which gets us back to the previous point, only with a
> > different distro/kernel+initrd.
> >
> > With that, the value proposition has to lie in the characteristics of
> > the distro/kernel+initrd itself. It has to offer enough differentiation
> > to justify the development/maintenance work, as opposed to using
> > existing ones.
> >
> > FWIW, my non-scientific tests booting on my 3+ year-old machine:
> >
> > * CirrOS x86_64+KVM: ~2 seconds
> > * CirrOS aarch64+TCG: ~20 seconds
> > * Fedora kernel+initrd aarch64+TCG
> > (tests/avocado/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt):
> > ~1 second
> >
> > I would imagine that CirrOS aarch64+KVM on an adequate system would be
> > similar to the CirrOS x86_64+KVM.  We can develop/maintain a slimmer
> > distro, and/or set the default test workloads where they perform the
> > best.  The development cost of the latter is quite small.  I've added
> > a missing bit to the filtering capabilities in Avocado[1] and will
> > send a proposal to QEMU along these lines.
>
> FWIW the bit I'm interested in for the slow test in question here is
> that it does a full boot through the EDK2 firmware (EL3->EL2->EL1). I'm
> not overly concerned about what gets run in userspace, as long as
> something runs that shows EL0 can execute and task switching works. I
> suspect most of the userspace startup of a full distro basically just
> ends up testing the same code paths over and over again.
>

That's an interesting point.

Does that mean that, if you are able to determine a condition showing
the boot has progressed far enough, you would consider the test a
success?  I mean, that's what the "boot_linux_console.py" tests do: they
find a known pattern in the console, and do not care about what happens
next.

The same could be done with the "full blown distro boot" tests
(boot_linux.py). They could be made configurable to consider anything a
"successful boot", not just a "login prompt" or a "fully initialized and
cloud-init configured system".  We can reuse most of the same code, and
add configurable conditions for different test cases, along the lines of
the sketch below.
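
A minimal sketch of such a configurable condition;
wait_for_console_pattern() is the existing helper in
tests/avocado/avocado_qemu, but the base class name, kernel URL and
console message here are assumptions that may differ per QEMU version:

  from avocado_qemu import QemuSystemTest, wait_for_console_pattern

  KERNEL_URL = 'https://example.org/vmlinuz'   # placeholder asset

  class BootProgress(QemuSystemTest):
      def test_boot_reaches_userspace(self):
          kernel = self.fetch_asset(KERNEL_URL)
          self.vm.set_console()
          self.vm.add_args('-kernel', kernel)
          self.vm.launch()
          # "Far enough" is just a console pattern: stop as soon as the
          # kernel hands over to userspace instead of waiting for login.
          wait_for_console_pattern(self, 'Run /init as init process')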

Does that make sense?

- Cleber.




* Re: "make check-acceptance" takes way too long
  2021-07-30 15:12 "make check-acceptance" takes way too long Peter Maydell
                   ` (4 preceding siblings ...)
  2022-01-20 15:13 ` Peter Maydell
@ 2022-02-15 18:14 ` Alex Bennée
  5 siblings, 0 replies; 46+ messages in thread
From: Alex Bennée @ 2022-02-15 18:14 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Richard Henderson, Philippe Mathieu-Daudé,
	Daniel P. Berrange, QEMU Developers, Cleber Rosa


Peter Maydell <peter.maydell@linaro.org> writes:

> "make check-acceptance" takes way way too long. I just did a run
> on an arm-and-aarch64-targets-only debug build and it took over
> half an hour, and this despite it skipping or cancelling 26 out
> of 58 tests!
>
> I think that ~10 minutes runtime is reasonable. 30 is not;
> ideally no individual test would take more than a minute or so.
>
> Output saying where the time went. The first two tests take
> more than 10 minutes *each*. I think a good start would be to find
> a way of testing what they're testing that is less heavyweight.
>
>  (01/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv2:
> PASS (629.74 s)
>  (02/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
> PASS (628.75 s)

So I've done some digging and tried some alternative images, but I'm
running into two things:

 - -cpu max is slow without pauth-impdef=on (i.e. -cpu max,pauth-impdef=on)
 - for some reason the distro cloud images cause 2 orders of magnitude more TB
   invalidates

For example a very simple Alpine boot:

  Translation buffer state:
  gen code size       810926227/1073659904
  TB count            1514678
  TB avg target size  17 max=2048 bytes
  TB avg host size    292 bytes (expansion ratio: 16.8)
  cross page TB count 0 (0%)
  direct jump count   1035828 (68%) (2 jumps=772419 50%)
  TB hash buckets     439751/524288 (83.88% head buckets used)
  TB hash occupancy   42.96% avg chain occ. Histogram: [0,10)%|▄▁█▁▁▇▁▅▁▂|[90,100]%
  TB hash avg chain   1.056 buckets. Histogram: 1|█▁  ▁▁|10

  Statistics:
  TB flush count      0
  TB invalidate count 550632
  TLB full flushes    0
  TLB partial flushes 1488833
  TLB elided flushes  12085180
  [TCG profiler not compiled]

which unsurprisingly has this at the top of the perf profile:

  20.17%  qemu-system-aar  qemu-system-aarch64      [.] do_tb_phys_invalidate   
   3.60%  qemu-system-aar  qemu-system-aarch64      [.] helper_lookup_tb_ptr
   
Versus my Debian Bullseye testing image (with all of systemd):

  Translation buffer state:
  gen code size       899208739/1073577984
  TB count            1599725
  TB avg target size  18 max=2048 bytes
  TB avg host size    318 bytes (expansion ratio: 17.2)
  cross page TB count 0 (0%)
  direct jump count   1067312 (66%) (2 jumps=826284 51%)
  TB hash buckets     816402/1048576 (77.86% head buckets used)
  TB hash occupancy   36.57% avg chain occ. Histogram: [0,10)%|▅ █  ▆▁▃▁▂|[90,100]%
  TB hash avg chain   1.027 buckets. Histogram: 1|█▁▁  ▁|9

  Statistics:
  TB flush count      0
  TB invalidate count 7763
  TLB full flushes    0
  TLB partial flushes 1066791
  TLB elided flushes  973569
  [TCG profiler not compiled]

with a more reasonable balance:

   4.21%  qemu-system-aar  qemu-system-aarch64         [.] get_phys_addr_lpae
   4.16%  qemu-system-aar  qemu-system-aarch64         [.] helper_lookup_tb_ptr

I'm open to ideas as to what might cause that.

-- 
Alex Bennée



end of thread

Thread overview: 46+ messages
2021-07-30 15:12 "make check-acceptance" takes way too long Peter Maydell
2021-07-30 15:41 ` Philippe Mathieu-Daudé
2021-07-30 15:42 ` Peter Maydell
2021-07-30 22:04   ` Cleber Rosa
2021-07-31  6:39     ` Thomas Huth
2021-07-31 17:58       ` Cleber Rosa
2021-07-31 18:41 ` Alex Bennée
2021-07-31 20:32   ` Peter Maydell
2021-08-02 22:55     ` Cleber Rosa
2021-08-02  8:38 ` Daniel P. Berrangé
2021-08-02 12:47   ` Alex Bennée
2021-08-02 12:59     ` Daniel P. Berrangé
2021-08-02 12:55   ` Alex Bennée
2021-08-02 13:00     ` Peter Maydell
2021-08-02 13:04       ` Daniel P. Berrangé
2021-08-02 13:25         ` Thomas Huth
2021-08-02 13:00     ` Daniel P. Berrangé
2021-08-02 13:27       ` Thomas Huth
2021-08-02 13:43         ` Gerd Hoffmann
2022-01-20 15:13 ` Peter Maydell
2022-01-20 15:35   ` Philippe Mathieu-Daudé via
2022-01-21  7:56   ` Thomas Huth
2022-01-21 10:50     ` Markus Armbruster
2022-01-21 11:33       ` Peter Maydell
2022-01-21 12:23         ` Alex Bennée
2022-01-21 12:41           ` Thomas Huth
2022-01-21 15:21           ` Daniel P. Berrangé
2022-01-25  9:20             ` Gerd Hoffmann
2022-02-01  6:31               ` Stefano Brivio
2022-02-01  7:49                 ` Gerd Hoffmann
2022-02-01  9:06                 ` Daniel P. Berrangé
2022-02-01 10:27                   ` Stefano Brivio
2022-02-01 11:17                     ` Alex Bennée
2022-02-01 16:01                       ` Cleber Rosa
2022-02-01 16:19                         ` Daniel P. Berrangé
2022-02-01 17:47                           ` Cleber Rosa
2022-02-01 18:03                             ` Alex Bennée
2022-02-01 19:04                               ` Cleber Rosa
2022-02-01 18:35                             ` Stefano Brivio
2022-02-01 17:59                         ` Cédric Le Goater
2022-02-01 11:06               ` Kashyap Chamarthy
2022-02-01 15:54                 ` Cleber Rosa
2022-02-01  5:29             ` Cleber Rosa
2022-02-01 17:01               ` Daniel P. Berrangé
2022-02-01 17:59                 ` Cleber Rosa
2022-02-15 18:14 ` Alex Bennée
