* [xen-4.6-testing test] 124820: trouble: blocked/broken/fail/pass
@ 2018-06-30 16:47 osstest service owner
From: osstest service owner @ 2018-06-30 16:47 UTC
To: xen-devel, osstest-admin
flight 124820 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/124820/
Failures and problems with tests :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-armhf-pvops <job status> broken
build-armhf-pvops 4 host-install(4) broken REGR. vs. 124551
Tests which are failing intermittently (not blocking):
test-amd64-amd64-xl-qemuu-debianhvm-amd64 13 guest-saverestore fail in 124785 pass in 124820
test-amd64-amd64-xl-rtds 15 guest-saverestore fail pass in 124785
Tests which did not succeed, but are not blocking:
test-armhf-armhf-libvirt 1 build-check(1) blocked n/a
test-armhf-armhf-xl 1 build-check(1) blocked n/a
test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a
test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a
test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a
test-armhf-armhf-xl-rtds 1 build-check(1) blocked n/a
test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a
test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a
test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a
test-armhf-armhf-xl-xsm 1 build-check(1) blocked n/a
test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a
test-xtf-amd64-amd64-3 50 xtf/test-hvm64-lbr-tsx-vmentry fail in 124785 like 124292
test-xtf-amd64-amd64-5 50 xtf/test-hvm64-lbr-tsx-vmentry fail in 124785 like 124469
test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail in 124785 like 124551
test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail in 124785 like 124551
test-armhf-armhf-libvirt 14 saverestore-support-check fail in 124785 like 124551
test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 124785 like 124551
test-armhf-armhf-xl-xsm 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-xsm 14 saverestore-support-check fail in 124785 never pass
test-armhf-armhf-xl 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl 14 saverestore-support-check fail in 124785 never pass
test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail in 124785 never pass
test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail in 124785 never pass
test-armhf-armhf-libvirt-raw 12 migrate-support-check fail in 124785 never pass
test-armhf-armhf-libvirt 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-arndale 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-arndale 14 saverestore-support-check fail in 124785 never pass
test-armhf-armhf-xl-credit2 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-credit2 14 saverestore-support-check fail in 124785 never pass
test-armhf-armhf-xl-rtds 13 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 124785 never pass
test-armhf-armhf-xl-vhd 12 migrate-support-check fail in 124785 never pass
test-armhf-armhf-xl-vhd 13 saverestore-support-check fail in 124785 never pass
test-xtf-amd64-amd64-4 50 xtf/test-hvm64-lbr-tsx-vmentry fail like 124292
test-xtf-amd64-amd64-2 50 xtf/test-hvm64-lbr-tsx-vmentry fail like 124393
test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail like 124551
test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail like 124551
test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 124551
test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 124551
test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 124551
test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 124551
test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail like 124551
test-xtf-amd64-amd64-2 37 xtf/test-hvm32pae-memop-seg fail never pass
test-xtf-amd64-amd64-2 52 xtf/test-hvm64-memop-seg fail never pass
test-xtf-amd64-amd64-2 77 xtf/test-pv32pae-xsa-194 fail never pass
test-xtf-amd64-amd64-4 37 xtf/test-hvm32pae-memop-seg fail never pass
test-xtf-amd64-amd64-1 37 xtf/test-hvm32pae-memop-seg fail never pass
test-xtf-amd64-amd64-4 52 xtf/test-hvm64-memop-seg fail never pass
test-xtf-amd64-amd64-1 52 xtf/test-hvm64-memop-seg fail never pass
test-xtf-amd64-amd64-4 77 xtf/test-pv32pae-xsa-194 fail never pass
test-xtf-amd64-amd64-1 77 xtf/test-pv32pae-xsa-194 fail never pass
test-xtf-amd64-amd64-3 37 xtf/test-hvm32pae-memop-seg fail never pass
test-xtf-amd64-amd64-5 37 xtf/test-hvm32pae-memop-seg fail never pass
test-xtf-amd64-amd64-3 52 xtf/test-hvm64-memop-seg fail never pass
test-xtf-amd64-amd64-5 52 xtf/test-hvm64-memop-seg fail never pass
test-xtf-amd64-amd64-3 77 xtf/test-pv32pae-xsa-194 fail never pass
test-xtf-amd64-amd64-5 77 xtf/test-pv32pae-xsa-194 fail never pass
test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
test-amd64-i386-libvirt 13 migrate-support-check fail never pass
test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
version targeted for testing:
xen 598a375f5230d91ac88e76a9f4b4dde4a62a4c5b
baseline version:
xen 542f711567a3f1891cb75187eeaf5cce3f7d6893
Last test of basis 124551 2018-06-21 20:59:18 Z 8 days
Testing same since 124785 2018-06-28 10:36:07 Z 2 days 2 attempts
------------------------------------------------------------
People who touched revisions under test:
Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich <jbeulich@suse.com>
Juergen Gross <jgross@suse.com>
jobs:
build-amd64-xsm pass
build-armhf-xsm pass
build-i386-xsm pass
build-amd64-xtf pass
build-amd64 pass
build-armhf pass
build-i386 pass
build-amd64-libvirt pass
build-armhf-libvirt pass
build-i386-libvirt pass
build-amd64-prev pass
build-i386-prev pass
build-amd64-pvops pass
build-armhf-pvops broken
build-i386-pvops pass
build-amd64-rumprun pass
build-i386-rumprun pass
test-xtf-amd64-amd64-1 pass
test-xtf-amd64-amd64-2 pass
test-xtf-amd64-amd64-3 pass
test-xtf-amd64-amd64-4 pass
test-xtf-amd64-amd64-5 pass
test-amd64-amd64-xl pass
test-armhf-armhf-xl blocked
test-amd64-i386-xl pass
test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-amd64-libvirt-xsm pass
test-armhf-armhf-libvirt-xsm blocked
test-amd64-i386-libvirt-xsm pass
test-amd64-amd64-xl-xsm pass
test-armhf-armhf-xl-xsm blocked
test-amd64-i386-xl-xsm pass
test-amd64-amd64-qemuu-nested-amd fail
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
test-amd64-i386-xl-qemut-debianhvm-amd64 pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-freebsd10-amd64 pass
test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
test-amd64-i386-xl-qemuu-ovmf-amd64 pass
test-amd64-amd64-rumprun-amd64 pass
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 fail
test-amd64-i386-xl-qemuu-win7-amd64 fail
test-amd64-amd64-xl-qemut-ws16-amd64 fail
test-amd64-i386-xl-qemut-ws16-amd64 fail
test-amd64-amd64-xl-qemuu-ws16-amd64 fail
test-amd64-i386-xl-qemuu-ws16-amd64 fail
test-armhf-armhf-xl-arndale blocked
test-amd64-amd64-xl-credit2 pass
test-armhf-armhf-xl-credit2 blocked
test-armhf-armhf-xl-cubietruck blocked
test-amd64-i386-freebsd10-i386 pass
test-amd64-i386-rumprun-i386 pass
test-amd64-amd64-xl-qemut-win10-i386 fail
test-amd64-i386-xl-qemut-win10-i386 fail
test-amd64-amd64-xl-qemuu-win10-i386 fail
test-amd64-i386-xl-qemuu-win10-i386 fail
test-amd64-amd64-qemuu-nested-intel pass
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel pass
test-amd64-amd64-libvirt pass
test-armhf-armhf-libvirt blocked
test-amd64-i386-libvirt pass
test-amd64-amd64-migrupgrade pass
test-amd64-i386-migrupgrade pass
test-amd64-amd64-xl-multivcpu pass
test-armhf-armhf-xl-multivcpu blocked
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-libvirt-pair fail
test-amd64-i386-libvirt-pair fail
test-amd64-amd64-amd64-pvgrub pass
test-amd64-amd64-i386-pvgrub pass
test-amd64-amd64-pygrub pass
test-amd64-amd64-xl-qcow2 pass
test-armhf-armhf-libvirt-raw blocked
test-amd64-i386-xl-raw pass
test-amd64-amd64-xl-rtds fail
test-armhf-armhf-xl-rtds blocked
test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-amd64-xl-shadow pass
test-amd64-i386-xl-shadow pass
test-amd64-amd64-libvirt-vhd pass
test-armhf-armhf-xl-vhd blocked
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
broken-job build-armhf-pvops broken
broken-step build-armhf-pvops host-install(4)
Not pushing.
------------------------------------------------------------
commit 598a375f5230d91ac88e76a9f4b4dde4a62a4c5b
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Jun 28 12:28:21 2018 +0200
x86/HVM: don't cause #NM to be raised in Xen
The changes for XSA-267 did not touch management of CR0.TS for HVM
guests. In fully eager mode this bit should never be set when
respective vCPU-s are active, or else hvmemul_get_fpu() might leave it
wrongly set, leading to #NM in hypervisor context.
{svm,vmx}_enter() and {svm,vmx}_fpu_dirty_intercept() become unreachable
this way. Explicit {svm,vmx}_fpu_leave() invocations need to be guarded
now.
With no CR0.TS management necessary in fully eager mode, there's also no
need anymore to intercept #NM.
Reported-by: Charles Arnold <carnold@suse.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
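The rule this commit enforces can be illustrated with a minimal C model (hypothetical names and struct, not Xen's real code): in fully eager FPU mode, CR0.TS must never be set for an active vCPU, so the lazy fpu_leave path has to be guarded.

```c
#include <assert.h>
#include <stdbool.h>

#define X86_CR0_TS 0x8u  /* Task Switched: when set, the next FPU insn raises #NM */

/* Hypothetical model of a vCPU's FPU bookkeeping, not Xen's real struct. */
struct vcpu_model {
    unsigned long cr0;
    bool fully_eager_fpu;  /* XSA-267 mitigation: FPU state is always loaded */
};

/*
 * Sketch of the guarded fpu_leave: with fully eager management there is
 * no lazy state to arm #NM for, so setting CR0.TS would only risk a
 * spurious #NM in hypervisor context (e.g. via hvmemul_get_fpu()).
 */
static void fpu_leave_model(struct vcpu_model *v)
{
    if (v->fully_eager_fpu)
        return;             /* guard added by the patch: nothing to lazily save */
    v->cr0 |= X86_CR0_TS;   /* lazy mode: arm #NM for the next FPU use */
}
```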
commit b7b7c4df2d251b1feba217939ea0b618094a48c2
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Jun 28 12:27:56 2018 +0200
x86/EFI: further correct FPU state handling around runtime calls
We must not leave a vCPU with CR0.TS clear when it is not in fully eager
mode and has not touched non-lazy state. Instead of adding a 3rd
invocation of stts() to vcpu_restore_fpu_eager(), consolidate all of
them into a single one done at the end of the function.
Rename the function at the same time to better reflect its purpose, as
the patch touches all of its occurrences anyway.
The new function parameter is not really well named, but
"need_stts_if_not_fully_eager" seemed excessive to me.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
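The consolidation described above can be sketched in a few lines of C (hypothetical names; a model of the logic, not Xen's implementation): rather than three scattered stts() invocations, the decision is made once at the end of the restore path, and a fully eager vCPU never gets CR0.TS set.

```c
#include <assert.h>
#include <stdbool.h>

#define CR0_TS 0x8u

struct vcpu_m {
    unsigned long cr0;
    bool fully_eager;   /* FPU state always loaded while the vCPU runs */
};

/*
 * Sketch of the consolidated logic: the caller says whether TS would be
 * needed ("need_stts_if_not_fully_eager" in the commit's words), and a
 * single decision at the end of the function applies it -- never for a
 * fully eager vCPU, whose state is live and must not trigger #NM.
 */
static void restore_fpu_nonlazy(struct vcpu_m *v, bool need_stts)
{
    /* ... restore of xsave components would happen here ... */
    if (v->fully_eager)
        need_stts = false;
    if (need_stts)
        v->cr0 |= CR0_TS;   /* the one stts() at function end */
}
```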
commit ba7d0117ab535280e2b6821aa6d323053ac6b266
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Jun 28 12:27:34 2018 +0200
x86/EFI: fix FPU state handling around runtime calls
There are two issues. First, the nonlazy xstates were never restored
after returning from the runtime call.
Second, with the fully_eager_fpu mitigation for XSA-267 / LazyFPU, the
unilateral stts() is no longer correct, and hits an assertion later when
a lazy state restore tries to occur for a fully eager vcpu.
Fix both of these issues by calling vcpu_restore_fpu_eager(). As EFI
runtime services can be used in the idle context, the idle assertion
needs to move until after the fully_eager_fpu check.
Introduce a "curr" local variable and replace other uses of "current"
at the same time.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Juergen Gross <jgross@suse.com>
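Both bugs and their fixes can be captured in a small C model of the runtime-call exit path (hypothetical struct and names, not Xen's actual code): reload the non-lazy xstates the firmware may have clobbered, and set TS only for lazily managed vCPUs, since a fully eager vCPU with TS set would later trip the lazy-restore assertion.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of FPU state around an EFI runtime service call. */
struct fpu_state {
    bool nonlazy_loaded;  /* are the non-lazy xstates currently live? */
    bool ts_set;          /* is CR0.TS armed? */
    bool fully_eager;
};

/*
 * Sketch of the corrected exit path after returning from firmware:
 * bug 1 was never reloading non-lazy state; bug 2 was an unconditional
 * stts() that is wrong for fully eager vCPUs.
 */
static void efi_rs_leave_model(struct fpu_state *s)
{
    s->nonlazy_loaded = true;     /* bug 1 fix: restore non-lazy xstates */
    s->ts_set = !s->fully_eager;  /* bug 2 fix: no unilateral stts() */
}
```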
commit a54480404a72e2b6c28b41a654d9bd7551e77fe6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu Jun 28 12:26:54 2018 +0200
x86: Refine checks in #DB handler for faulting conditions
One of the fixes for XSA-260 (c/s 75d6828bc2 "x86/traps: Fix handling of #DB
exceptions in hypervisor context") added some safety checks to help avoid
livelocks of #DB faults.
While a General Detect #DB exception does have fault semantics, hardware
clears %dr7.gd on entry to the handler, meaning that it is actually safe to
return to. Furthermore, %dr6.bd is guest controlled and sticky (never cleared
by hardware). A malicious PV guest can therefore trigger the fatal_trap() and
crash Xen.
Instruction breakpoints are more tricky. The breakpoint match bits in %dr6
are not sticky, but the Intel manual warns that they may be set for
non-enabled breakpoints, so add a breakpoint enabled check.
Beyond that, because of the restriction on the linear addresses PV guests can
set, and the fault (rather than trap) nature of instruction breakpoints
(i.e. can't be deferred by a MovSS shadow), there should be no way to
encounter an instruction breakpoint in Xen context. However, for extra
robustness, deal with this situation by clearing the breakpoint configuration,
rather than crashing.
This is XSA-265
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
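The "breakpoint enabled check" mentioned above amounts to the following test, sketched here in C with the standard DR6/DR7 bit layout (macro names are illustrative): a B<n> match bit in %dr6 only indicates a genuine instruction-breakpoint hit when breakpoint n is actually enabled in %dr7, because the manuals allow stale B<n> bits for disabled breakpoints.

```c
#include <assert.h>
#include <stdbool.h>

#define DR6_B(n)   (1u << (n))        /* %dr6: breakpoint n matched */
#define DR7_EN(n)  (3u << (2 * (n)))  /* %dr7: L<n> | G<n> enable bits */

/*
 * Sketch of the refined check: honour a %dr6 match bit only if the
 * corresponding breakpoint is locally or globally enabled in %dr7.
 */
static bool real_insn_breakpoint(unsigned int dr6, unsigned int dr7,
                                 unsigned int n)
{
    return (dr6 & DR6_B(n)) && (dr7 & DR7_EN(n));
}
```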
commit 2642b56ea54917c43ac03cb95b53f7dadf5c2ad6
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Jun 28 12:26:25 2018 +0200
x86/mm: don't bypass preemption checks
While unlikely, it is not impossible for a multi-vCPU guest to leverage
bypasses of preemption checks to drive Xen into an unbounded loop.
This is XSA-264.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
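The pattern the patch restores is Xen's usual hypercall-continuation shape, sketched below with invented names and a modelled work loop (not the real page-table code): long-running work must periodically yield back with a restart indication instead of looping to completion, or a multi-vCPU guest can pin a pCPU inside Xen.

```c
#include <assert.h>

#define ERESTART 85  /* illustrative: "re-enter this hypercall later" */

/*
 * Sketch of a preemptible work loop: process units of work, but bail
 * out with -ERESTART after a bounded batch so the hypercall can be
 * continued later, keeping time spent in Xen bounded.
 */
static int process_with_preemption(unsigned int *done, unsigned int total,
                                   unsigned int preempt_after)
{
    unsigned int batch = 0;
    while (*done < total) {
        ++*done;                   /* one unit of (modelled) work */
        if (++batch == preempt_after)
            return -ERESTART;      /* preemption check fired */
    }
    return 0;
}
```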
commit 03938ba0131520df0b05d9d4e5d8bb1cff526f73
Author: Jan Beulich <jbeulich@suse.com>
Date: Thu Jun 28 12:25:43 2018 +0200
x86: correct default_xen_spec_ctrl calculation
Even with opt_msr_sc_{pv,hvm} both false we should set up the variable
as usual, to ensure proper one-time setup during boot and CPU bringup.
This then also brings the code in line with the comment immediately
ahead of the printk() being modified saying "irrespective of guests".
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d6239f64713df819278bf048446d3187c6ac4734
master date: 2018-05-29 12:38:52 +0200
(qemu changes not included)
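The shape of that last fix, sketched with hypothetical names (the real calculation involves more inputs): the default MSR_SPEC_CTRL value is computed unconditionally from hardware capabilities, "irrespective of guests", while the opt_msr_sc_* options only control guest visibility elsewhere.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch: the per-boot default is derived from hardware features alone;
 * the guest-facing opt_msr_sc_{pv,hvm} knobs are deliberately ignored
 * here so one-time setup happens during boot and CPU bringup either way.
 */
static unsigned int default_spec_ctrl(bool hw_has_ibrs, bool opt_msr_sc_pv,
                                      bool opt_msr_sc_hvm)
{
    unsigned int val = 0;
    (void)opt_msr_sc_pv;   /* guest visibility only; not consulted here */
    (void)opt_msr_sc_hvm;
    if (hw_has_ibrs)
        val |= 1u;         /* SPEC_CTRL.IBRS (bit 0) */
    return val;
}
```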
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel