* [xen-unstable test] 181558: regressions - FAIL
@ 2023-06-23 15:04 osstest service owner
From: osstest service owner @ 2023-06-23 15:04 UTC (permalink / raw)
To: xen-devel
flight 181558 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181558/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail REGR. vs. 181545
Tests which are failing intermittently (not blocking):
test-amd64-amd64-xl 20 guest-localmigrate/x10 fail in 181552 pass in 181558
test-armhf-armhf-xl-vhd 13 guest-start fail pass in 181552
test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 181552
Tests which did not succeed, but are not blocking:
test-armhf-armhf-xl-vhd 14 migrate-support-check fail in 181552 never pass
test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 181552 never pass
test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail like 181528
test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 181545
test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 181545
test-armhf-armhf-libvirt 16 saverestore-support-check fail like 181545
test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 181545
test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 181545
test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 181545
test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 181545
test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 181545
test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 181545
test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail like 181545
test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 181545
test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 181545
test-amd64-i386-libvirt-xsm 15 migrate-support-check fail never pass
test-amd64-i386-xl-pvshim 14 guest-start fail never pass
test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
test-amd64-i386-libvirt 15 migrate-support-check fail never pass
test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 15 migrate-support-check fail never pass
test-arm64-arm64-xl-credit1 15 migrate-support-check fail never pass
test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-credit1 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-xsm 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-credit2 15 migrate-support-check fail never pass
test-arm64-arm64-xl-credit2 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
test-arm64-arm64-xl 15 migrate-support-check fail never pass
test-arm64-arm64-xl 16 saverestore-support-check fail never pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
test-amd64-i386-libvirt-raw 14 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
test-arm64-arm64-libvirt-raw 14 migrate-support-check fail never pass
test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail never pass
test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail never pass
test-arm64-arm64-xl-vhd 14 migrate-support-check fail never pass
test-arm64-arm64-xl-vhd 15 saverestore-support-check fail never pass
test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass
test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
test-armhf-armhf-xl 15 migrate-support-check fail never pass
test-armhf-armhf-xl 16 saverestore-support-check fail never pass
test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass
test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail never pass
version targeted for testing:
xen 5c84f1f636981dab5341e84aaba8d4dd00bbc2cb
baseline version:
xen 7a25a1501ca941c3e01b0c4e624ace05417f1587
Last test of basis 181545 2023-06-22 01:52:10 Z 1 days
Testing same since 181552 2023-06-22 13:08:10 Z 1 days 2 attempts
------------------------------------------------------------
People who touched revisions under test:
Alistair Francis <alistair.francis@wdc.com>
Andrew Cooper <andrew.cooper3@citrix.com>
Anthony PERARD <anthony.perard@citrix.com>
Henry Wang <Henry.Wang@arm.com>
Jan Beulich <jbeulich@suse.com>
Jiamei Xie <jiamei.xie@arm.com>
Julien Grall <jgrall@amazon.com>
Michal Orzel <michal.orzel@amd.com>
Oleksii Kurochko <oleksii.kurochko@gmail.com>
Roger Pau Monné <roger.pau@citrix.com>
Shawn Anastasio <sanastasio@raptorengineering.com>
Stefano Stabellini <sstabellini@kernel.org>
Stefano Stabellini <stefano.stabellini@amd.com>
jobs:
build-amd64-xsm pass
build-arm64-xsm pass
build-i386-xsm pass
build-amd64-xtf pass
build-amd64 pass
build-arm64 pass
build-armhf pass
build-i386 pass
build-amd64-libvirt pass
build-arm64-libvirt pass
build-armhf-libvirt pass
build-i386-libvirt pass
build-amd64-prev pass
build-i386-prev pass
build-amd64-pvops pass
build-arm64-pvops pass
build-armhf-pvops pass
build-i386-pvops pass
test-xtf-amd64-amd64-1 pass
test-xtf-amd64-amd64-2 pass
test-xtf-amd64-amd64-3 pass
test-xtf-amd64-amd64-4 pass
test-xtf-amd64-amd64-5 pass
test-amd64-amd64-xl pass
test-amd64-coresched-amd64-xl pass
test-arm64-arm64-xl pass
test-armhf-armhf-xl pass
test-amd64-i386-xl pass
test-amd64-coresched-i386-xl pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-debianhvm-i386-xsm fail
test-amd64-i386-xl-qemut-debianhvm-i386-xsm pass
test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm pass
test-amd64-i386-xl-qemuu-debianhvm-i386-xsm pass
test-amd64-amd64-libvirt-xsm pass
test-arm64-arm64-libvirt-xsm pass
test-amd64-i386-libvirt-xsm pass
test-amd64-amd64-xl-xsm pass
test-arm64-arm64-xl-xsm pass
test-amd64-i386-xl-xsm pass
test-amd64-amd64-qemuu-nested-amd fail
test-amd64-amd64-xl-pvhv2-amd pass
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-dom0pvh-xl-amd pass
test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
test-amd64-i386-xl-qemut-debianhvm-amd64 pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-freebsd10-amd64 pass
test-amd64-amd64-qemuu-freebsd11-amd64 pass
test-amd64-amd64-qemuu-freebsd12-amd64 pass
test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
test-amd64-i386-xl-qemuu-ovmf-amd64 pass
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 fail
test-amd64-i386-xl-qemuu-win7-amd64 fail
test-amd64-amd64-xl-qemut-ws16-amd64 fail
test-amd64-i386-xl-qemut-ws16-amd64 fail
test-amd64-amd64-xl-qemuu-ws16-amd64 fail
test-amd64-i386-xl-qemuu-ws16-amd64 fail
test-armhf-armhf-xl-arndale pass
test-amd64-amd64-examine-bios pass
test-amd64-i386-examine-bios pass
test-amd64-amd64-xl-credit1 pass
test-arm64-arm64-xl-credit1 pass
test-armhf-armhf-xl-credit1 pass
test-amd64-amd64-xl-credit2 pass
test-arm64-arm64-xl-credit2 pass
test-armhf-armhf-xl-credit2 pass
test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict pass
test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict pass
test-amd64-amd64-examine pass
test-arm64-arm64-examine pass
test-armhf-armhf-examine pass
test-amd64-i386-examine pass
test-amd64-i386-freebsd10-i386 pass
test-amd64-amd64-qemuu-nested-intel pass
test-amd64-amd64-xl-pvhv2-intel pass
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel pass
test-amd64-amd64-dom0pvh-xl-intel pass
test-amd64-amd64-libvirt pass
test-armhf-armhf-libvirt pass
test-amd64-i386-libvirt pass
test-amd64-amd64-livepatch pass
test-amd64-i386-livepatch pass
test-amd64-amd64-migrupgrade pass
test-amd64-i386-migrupgrade pass
test-amd64-amd64-xl-multivcpu pass
test-armhf-armhf-xl-multivcpu pass
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-libvirt-pair pass
test-amd64-i386-libvirt-pair pass
test-amd64-amd64-xl-pvshim pass
test-amd64-i386-xl-pvshim fail
test-amd64-amd64-pygrub pass
test-armhf-armhf-libvirt-qcow2 pass
test-amd64-amd64-xl-qcow2 fail
test-arm64-arm64-libvirt-raw pass
test-armhf-armhf-libvirt-raw pass
test-amd64-i386-libvirt-raw pass
test-amd64-amd64-xl-rtds pass
test-armhf-armhf-xl-rtds pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-amd64-xl-shadow pass
test-amd64-i386-xl-shadow pass
test-arm64-arm64-xl-thunderx pass
test-amd64-amd64-examine-uefi pass
test-amd64-i386-examine-uefi pass
test-amd64-amd64-libvirt-vhd fail
test-arm64-arm64-xl-vhd pass
test-armhf-armhf-xl-vhd fail
test-amd64-i386-xl-vhd pass
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
Not pushing.
(No revision log; it would be 412 lines long.)
* QEMU assert (was: [xen-unstable test] 181558: regressions - FAIL)
From: Roger Pau Monné @ 2023-06-28 12:31 UTC (permalink / raw)
To: Anthony PERARD; +Cc: Jan Beulich, qemu-devel
On Fri, Jun 23, 2023 at 03:04:21PM +0000, osstest service owner wrote:
> flight 181558 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/181558/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
> test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail REGR. vs. 181545
The test failing here is hitting the assert in qemu_cond_signal() as
called by worker_thread():
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff740b535 in __GI_abort () at abort.c:79
#2 0x00007ffff740b40f in __assert_fail_base (fmt=0x7ffff756cef0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55555614abcb "cond->initialized",
file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198, function=<optimized out>) at assert.c:92
#3 0x00007ffff74191a2 in __GI___assert_fail (assertion=0x55555614abcb "cond->initialized", file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198,
function=0x55555614ad80 <__PRETTY_FUNCTION__.17104> "qemu_cond_signal") at assert.c:101
#4 0x0000555555f1c8d2 in qemu_cond_signal (cond=0x7fffb800db30) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:198
#5 0x0000555555f36973 in worker_thread (opaque=0x7fffb800dab0) at ../qemu-xen-dir-remote/util/thread-pool.c:129
#6 0x0000555555f1d1d2 in qemu_thread_start (args=0x7fffb8000b20) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:505
#7 0x00007ffff75b0fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#8 0x00007ffff74e206f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
I've been trying to figure out how it can get into such a state, but so
far I've had no luck. I'm not a QEMU expert, so it's probably better if
someone else could handle this.
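For context, QEMU's POSIX thread wrappers guard every condition variable with an `initialized` flag in debug builds, and the backtrace above is that guard firing inside qemu_cond_signal(). The following is a simplified sketch of the pattern, not QEMU's actual source (names and the non-aborting return value are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Simplified sketch of qemu-thread-posix's debug guard: each condvar
 * carries an initialized flag, and signalling asserts it.  A signal on
 * a never-initialized or already-destroyed condvar is what produces
 * the "Assertion `cond->initialized' failed" abort in the backtrace. */
typedef struct SketchCond {
    pthread_cond_t cond;
    bool initialized;
} SketchCond;

static void sketch_cond_init(SketchCond *c)
{
    pthread_cond_init(&c->cond, NULL);
    c->initialized = true;
}

static void sketch_cond_destroy(SketchCond *c)
{
    assert(c->initialized);
    c->initialized = false;   /* any later signal now trips the guard */
    pthread_cond_destroy(&c->cond);
}

/* Returns false where the real qemu_cond_signal() would abort. */
static bool sketch_cond_signal(SketchCond *c)
{
    if (!c->initialized) {
        return false;
    }
    pthread_cond_signal(&c->cond);
    return true;
}
```

So the assert is telling us the worker signalled a condvar that was either never initialized or had already been torn down.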
In the failures I've seen, and the reproduction I have, the assert
triggers in the QEMU dom0 instance responsible for locally-attaching
the disk to dom0 in order to run pygrub.
This is also with QEMU 7.2, as testing with upstream QEMU is blocked
ATM; there's a chance it has already been fixed upstream.
Thanks, Roger.
* Re: QEMU assert (was: [xen-unstable test] 181558: regressions - FAIL)
From: Roger Pau Monné @ 2023-06-28 13:30 UTC (permalink / raw)
To: xen-devel; +Cc: Anthony PERARD, Jan Beulich, qemu-devel
Dropped xen-devel, adding back.
On Wed, Jun 28, 2023 at 02:31:39PM +0200, Roger Pau Monné wrote:
> On Fri, Jun 23, 2023 at 03:04:21PM +0000, osstest service owner wrote:
> > flight 181558 xen-unstable real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/181558/
> >
> > Regressions :-(
> >
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> > test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail REGR. vs. 181545
>
> The test failing here is hitting the assert in qemu_cond_signal() as
> called by worker_thread():
>
> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1 0x00007ffff740b535 in __GI_abort () at abort.c:79
> #2 0x00007ffff740b40f in __assert_fail_base (fmt=0x7ffff756cef0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55555614abcb "cond->initialized",
> file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198, function=<optimized out>) at assert.c:92
> #3 0x00007ffff74191a2 in __GI___assert_fail (assertion=0x55555614abcb "cond->initialized", file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198,
> function=0x55555614ad80 <__PRETTY_FUNCTION__.17104> "qemu_cond_signal") at assert.c:101
> #4 0x0000555555f1c8d2 in qemu_cond_signal (cond=0x7fffb800db30) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:198
> #5 0x0000555555f36973 in worker_thread (opaque=0x7fffb800dab0) at ../qemu-xen-dir-remote/util/thread-pool.c:129
> #6 0x0000555555f1d1d2 in qemu_thread_start (args=0x7fffb8000b20) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:505
> #7 0x00007ffff75b0fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
> #8 0x00007ffff74e206f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>
> I've been trying to figure out how it can get in such state, but so
> far I had no luck. I'm not a QEMU expert, so it's probably better if
> someone else could handle this.
>
> In the failures I've seen, and the reproduction I have, the assert
> triggers in the QEMU dom0 instance responsible for locally-attaching
> the disk to dom0 in order to run pygrub.
>
> This is also with QEMU 7.2, as testing with upstream QEMU is blocked
> ATM, so there's a chance it has already been fixed upstream.
>
> Thanks, Roger.
>
* Re: QEMU assert (was: [xen-unstable test] 181558: regressions - FAIL)
From: Anthony PERARD via @ 2023-07-04 9:37 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: Jan Beulich, qemu-devel, xen-devel
On Wed, Jun 28, 2023 at 02:31:39PM +0200, Roger Pau Monné wrote:
> On Fri, Jun 23, 2023 at 03:04:21PM +0000, osstest service owner wrote:
> > flight 181558 xen-unstable real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/181558/
> >
> > Regressions :-(
> >
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> > test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail REGR. vs. 181545
>
> The test failing here is hitting the assert in qemu_cond_signal() as
> called by worker_thread():
>
> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1 0x00007ffff740b535 in __GI_abort () at abort.c:79
> #2 0x00007ffff740b40f in __assert_fail_base (fmt=0x7ffff756cef0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55555614abcb "cond->initialized",
> file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198, function=<optimized out>) at assert.c:92
> #3 0x00007ffff74191a2 in __GI___assert_fail (assertion=0x55555614abcb "cond->initialized", file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198,
> function=0x55555614ad80 <__PRETTY_FUNCTION__.17104> "qemu_cond_signal") at assert.c:101
> #4 0x0000555555f1c8d2 in qemu_cond_signal (cond=0x7fffb800db30) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:198
> #5 0x0000555555f36973 in worker_thread (opaque=0x7fffb800dab0) at ../qemu-xen-dir-remote/util/thread-pool.c:129
> #6 0x0000555555f1d1d2 in qemu_thread_start (args=0x7fffb8000b20) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:505
> #7 0x00007ffff75b0fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
> #8 0x00007ffff74e206f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>
> I've been trying to figure out how it can get in such state, but so
> far I had no luck. I'm not a QEMU expert, so it's probably better if
> someone else could handle this.
>
> In the failures I've seen, and the reproduction I have, the assert
> triggers in the QEMU dom0 instance responsible for locally-attaching
> the disk to dom0 in order to run pygrub.
>
> This is also with QEMU 7.2, as testing with upstream QEMU is blocked
> ATM, so there's a chance it has already been fixed upstream.
>
> Thanks, Roger.
So, I've run a test with the latest QEMU and I can still reproduce the
issue. The test also fails with QEMU 7.1.0.
But QEMU 7.0 seems to pass the test, even with a start-stop loop of 200
iterations. So I'll try to find out whether something changed in that
range, or why the thread pool ends up not being initialised properly.
Cheers,
--
Anthony PERARD
* Re: QEMU assert (was: [xen-unstable test] 181558: regressions - FAIL)
From: Roger Pau Monné @ 2023-07-04 9:56 UTC (permalink / raw)
To: Anthony PERARD; +Cc: Jan Beulich, qemu-devel, xen-devel
On Tue, Jul 04, 2023 at 10:37:38AM +0100, Anthony PERARD wrote:
> On Wed, Jun 28, 2023 at 02:31:39PM +0200, Roger Pau Monné wrote:
> > On Fri, Jun 23, 2023 at 03:04:21PM +0000, osstest service owner wrote:
> > > flight 181558 xen-unstable real [real]
> > > http://logs.test-lab.xenproject.org/osstest/logs/181558/
> > >
> > > Regressions :-(
> > >
> > > Tests which did not succeed and are blocking,
> > > including tests which could not be run:
> > > test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail REGR. vs. 181545
> >
> > The test failing here is hitting the assert in qemu_cond_signal() as
> > called by worker_thread():
> >
> > #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> > #1 0x00007ffff740b535 in __GI_abort () at abort.c:79
> > #2 0x00007ffff740b40f in __assert_fail_base (fmt=0x7ffff756cef0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55555614abcb "cond->initialized",
> > file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198, function=<optimized out>) at assert.c:92
> > #3 0x00007ffff74191a2 in __GI___assert_fail (assertion=0x55555614abcb "cond->initialized", file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198,
> > function=0x55555614ad80 <__PRETTY_FUNCTION__.17104> "qemu_cond_signal") at assert.c:101
> > #4 0x0000555555f1c8d2 in qemu_cond_signal (cond=0x7fffb800db30) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:198
> > #5 0x0000555555f36973 in worker_thread (opaque=0x7fffb800dab0) at ../qemu-xen-dir-remote/util/thread-pool.c:129
> > #6 0x0000555555f1d1d2 in qemu_thread_start (args=0x7fffb8000b20) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:505
> > #7 0x00007ffff75b0fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
> > #8 0x00007ffff74e206f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> >
> > I've been trying to figure out how it can get in such state, but so
> > far I had no luck. I'm not a QEMU expert, so it's probably better if
> > someone else could handle this.
> >
> > In the failures I've seen, and the reproduction I have, the assert
> > triggers in the QEMU dom0 instance responsible for locally-attaching
> > the disk to dom0 in order to run pygrub.
> >
> > This is also with QEMU 7.2, as testing with upstream QEMU is blocked
> > ATM, so there's a chance it has already been fixed upstream.
> >
> > Thanks, Roger.
>
> So, I've run a test with the latest QEMU and I can still reproduce the
> issue. The test also fails with QEMU 7.1.0.
>
> But, QEMU 7.0 seems to pass the test, even with a start-stop loop of 200
> iteration. So I'll try to find out if something change in that range.
> Or try to find out why would the thread pool be not initialised
> properly.
Thanks for looking into this.
There is a set of changes from Paolo Bonzini:
232e9255478f thread-pool: remove stopping variable
900fa208f506 thread-pool: replace semaphore with condition variable
3c7b72ddca9c thread-pool: optimize scheduling of completion bottom half
Those landed in 7.1 and seem like possible candidates.
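To make the suspicion concrete: reworking the pool's completion signalling can introduce an ordering hazard where the pool owner destroys the condvar while a straggling worker is still between finishing a request and signalling completion. This is a hypothetical sketch of that hazard and the ordering a safe teardown must enforce; it is not QEMU's actual thread-pool code, and all names are invented:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical pool: one condvar guarded by an initialized flag (as in
 * QEMU's debug wrappers) plus a count of workers that may still signal. */
typedef struct Pool {
    pthread_cond_t request_cond;
    bool cond_initialized;
    int active_workers;
} Pool;

/* Returns false where QEMU would abort on assert(cond->initialized). */
static bool pool_worker_signal(Pool *p)
{
    if (!p->cond_initialized) {
        return false;
    }
    pthread_cond_signal(&p->request_cond);
    return true;
}

static void pool_worker_exit(Pool *p)
{
    p->active_workers--;
}

/* Safe teardown: refuse to destroy the condvar while any worker might
 * still signal it.  Destroying first and letting a straggler signal
 * afterwards is exactly the state the cond->initialized assert catches. */
static bool pool_free(Pool *p)
{
    if (p->active_workers > 0) {
        return false;   /* caller must drain workers first */
    }
    p->cond_initialized = false;
    pthread_cond_destroy(&p->request_cond);
    return true;
}
```

If one of those commits relaxed the "drain before destroy" ordering, a late worker_thread() signal on the freed pool would match the observed abort.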
Roger.
* Re: QEMU assert (was: [xen-unstable test] 181558: regressions - FAIL)
From: Anthony PERARD via @ 2023-07-14 15:34 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: Jan Beulich, qemu-devel, xen-devel
On Tue, Jul 04, 2023 at 11:56:54AM +0200, Roger Pau Monné wrote:
> On Tue, Jul 04, 2023 at 10:37:38AM +0100, Anthony PERARD wrote:
> > On Wed, Jun 28, 2023 at 02:31:39PM +0200, Roger Pau Monné wrote:
> > > On Fri, Jun 23, 2023 at 03:04:21PM +0000, osstest service owner wrote:
> > > > flight 181558 xen-unstable real [real]
> > > > http://logs.test-lab.xenproject.org/osstest/logs/181558/
> > > >
> > > > Regressions :-(
> > > >
> > > > Tests which did not succeed and are blocking,
> > > > including tests which could not be run:
> > > > test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail REGR. vs. 181545
> > >
> > > The test failing here is hitting the assert in qemu_cond_signal() as
> > > called by worker_thread():
> > >
> > > #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> > > #1 0x00007ffff740b535 in __GI_abort () at abort.c:79
> > > #2 0x00007ffff740b40f in __assert_fail_base (fmt=0x7ffff756cef0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55555614abcb "cond->initialized",
> > > file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198, function=<optimized out>) at assert.c:92
> > > #3 0x00007ffff74191a2 in __GI___assert_fail (assertion=0x55555614abcb "cond->initialized", file=0x55555614ab88 "../qemu-xen-dir-remote/util/qemu-thread-posix.c", line=198,
> > > function=0x55555614ad80 <__PRETTY_FUNCTION__.17104> "qemu_cond_signal") at assert.c:101
> > > #4 0x0000555555f1c8d2 in qemu_cond_signal (cond=0x7fffb800db30) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:198
> > > #5 0x0000555555f36973 in worker_thread (opaque=0x7fffb800dab0) at ../qemu-xen-dir-remote/util/thread-pool.c:129
> > > #6 0x0000555555f1d1d2 in qemu_thread_start (args=0x7fffb8000b20) at ../qemu-xen-dir-remote/util/qemu-thread-posix.c:505
> > > #7 0x00007ffff75b0fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
> > > #8 0x00007ffff74e206f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> > >
> > > I've been trying to figure out how it can get in such state, but so
> > > far I had no luck. I'm not a QEMU expert, so it's probably better if
> > > someone else could handle this.
> > >
> > > In the failures I've seen, and the reproduction I have, the assert
> > > triggers in the QEMU dom0 instance responsible for locally-attaching
> > > the disk to dom0 in order to run pygrub.
> > >
> > > This is also with QEMU 7.2, as testing with upstream QEMU is blocked
> > > ATM, so there's a chance it has already been fixed upstream.
> > >
> > > Thanks, Roger.
> >
> > So, I've run a test with the latest QEMU and I can still reproduce the
> > issue. The test also fails with QEMU 7.1.0.
> >
> > But, QEMU 7.0 seems to pass the test, even with a start-stop loop of 200
> > iteration. So I'll try to find out if something change in that range.
> > Or try to find out why would the thread pool be not initialised
> > properly.
>
> Thanks for looking into this.
>
> There are a set of changes from Paolo Bonzini:
>
> 232e9255478f thread-pool: remove stopping variable
> 900fa208f506 thread-pool: replace semaphore with condition variable
> 3c7b72ddca9c thread-pool: optimize scheduling of completion bottom half
>
> That landed in 7.1 that seem like possible candidates.
I think I've figured out the issue. I've sent a patch:
https://lore.kernel.org/qemu-devel/20230714152720.5077-1-anthony.perard@citrix.com/
I did run osstest with this patch, with 200 iterations of stop/start,
and saw no more instances of the qemu for dom0 disappearing. The one
issue I did find is osstest being unable to ssh to the guest, which
seems to have started; the qemu for dom0 is also still running.
The report, while it still exists, is at:
http://logs.test-lab.xenproject.org/osstest/logs/181785/
Cheers,
--
Anthony PERARD