* [xen-4.3-testing test] 63948: regressions - FAIL
From: osstest service owner @ 2015-11-10 17:59 UTC
  To: xen-devel, osstest-admin


flight 63948 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail like 63212

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)               blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install     fail never pass
 build-i386-rumpuserxen        6 xen-build                    fail   never pass
 build-amd64-rumpuserxen       6 xen-build                    fail   never pass
 test-amd64-i386-migrupgrade 21 guest-migrate/src_host/dst_host fail never pass
 test-armhf-armhf-libvirt-qcow2  6 xen-boot                     fail never pass
 test-armhf-armhf-xl-arndale   6 xen-boot                     fail   never pass
 test-armhf-armhf-libvirt      6 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install      fail never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2   6 xen-boot                     fail   never pass
 test-armhf-armhf-libvirt-raw  6 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu  6 xen-boot                     fail  never pass
 test-armhf-armhf-xl-cubietruck  6 xen-boot                     fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-vhd       6 xen-boot                     fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl           6 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 21 leak-check/check        fail never pass

version targeted for testing:
 xen                  e875e0e5fcc5912f71422b53674a97e5c0ae77be
baseline version:
 xen                  85ca813ec23c5a60680e4a13777dad530065902b

Last test of basis    63212  2015-10-22 10:03:01 Z   19 days
Failing since         63360  2015-10-29 13:39:04 Z   12 days    9 attempts
Testing same since    63381  2015-10-30 18:44:54 Z   10 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumpuserxen                                      fail
 build-i386-rumpuserxen                                       fail
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          fail
 test-amd64-i386-xl                                           pass
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail
 test-amd64-amd64-rumpuserxen-amd64                           blocked
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-armhf-armhf-xl-arndale                                  fail
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  fail
 test-armhf-armhf-xl-cubietruck                               fail
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumpuserxen-i386                             blocked
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 fail
 test-amd64-i386-migrupgrade                                  fail
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                fail
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-pv                                          pass
 test-amd64-i386-pv                                           pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-xl-raw                                       pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      fail
 test-amd64-i386-xend-qemut-winxpsp3                          fail
 test-amd64-amd64-xl-qemut-winxpsp3                           pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit e875e0e5fcc5912f71422b53674a97e5c0ae77be
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Wed Oct 21 16:18:30 2015 +0100

    libxl: adjust PoD target by memory fudge, too

    PoD guests need to balloon at least as far as required by PoD, or risk
    crashing.  Currently they don't necessarily know what the right value
    is, because our memory accounting is (at the very least) confusing.

    Apply the memory limit fudge factor to the in-hypervisor PoD memory
    target, too.  This will increase the size of the guest's PoD cache by
    the fudge factor LIBXL_MAXMEM_CONSTANT (currently 1Mby).  This ensures
    that even with a slightly-off balloon driver, the guest will be
    stable even under memory pressure.

    There are two call sites of xc_domain_set_pod_target that need fixing:

    The one in libxl_set_memory_target is straightforward.

    The one in xc_hvm_build_x86.c:setup_guest is more awkward.  Simply
    setting the PoD target differently does not work because the various
    amounts of memory during domain construction no longer match up.
    Instead, we adjust the guest memory target in xenstore (but only for
    PoD guests).

    This introduces a 1Mby discrepancy between the balloon target of a PoD
    guest at boot, and the target set by an apparently-equivalent `xl
    mem-set' (or similar) later.  This approach is low-risk for a security
    fix but we need to fix this up properly in xen.git#staging and
    probably also in stable trees.

    This is XSA-153.

    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 56fb5fd62320eb40a7517206f9706aa9188d6f7b)
    (cherry picked from commit 423d2cd814e8460d5ea8bd191a770f3c48b3947c)

    Conflicts:
    	tools/libxl/libxl_dom.c
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 73b70e3c5d59e63126c890068ee0cbf8a2a3b640)
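
A minimal sketch of the libxl-side adjustment described above (not the
actual patch: the helper name is illustrative, and error handling plus
the xenstore-side change for the setup_guest call site are omitted;
xc_domain_set_pod_target() is the real libxc call):

    #include <xenctrl.h>

    /* Apply the 1MiB fudge (libxl's LIBXL_MAXMEM_CONSTANT, in KiB) to
     * the PoD target before handing it to the hypervisor. */
    static int set_pod_target_fudged(xc_interface *xch, uint32_t domid,
                                     uint64_t target_memkb)
    {
        /* /4 converts KiB into the 4KiB pages the hypercall expects. */
        uint64_t target_pages = (target_memkb + 1024) / 4;

        return xc_domain_set_pod_target(xch, domid, target_pages,
                                        NULL, NULL, NULL);
    }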

commit 9f359c61e3927f94bb280ffb200155dd20465fda
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:28:33 2015 +0100

    x86: rate-limit logging in do_xen{oprof,pmu}_op()

    Some of the sub-ops are accessible to all guests, and hence should be
    rate-limited. In the xenoprof case, just like for XSA-146, include them
    only in debug builds. Since the vPMU code is rather new, allow them to
    be always present, but downgrade them to (rate limited) guest messages.

    This is CVE-2015-7971 / XSA-152.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 95e7415843b94c346e5ba8682665f508f220e04b
    master date: 2015-10-29 13:37:19 +0100
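
For reference, the two logging idioms the message distinguishes, as a
sketch (the format strings and the "op" variable are illustrative, not
the patch hunks): gdprintk() compiles away outside debug builds, while
the XENLOG_G_* levels mark rate-limited, guest-attributed messages.

    /* xenoprof case: present in debug builds only. */
    gdprintk(XENLOG_WARNING, "invalid xenoprof op %d\n", op);

    /* vPMU case: always present, but guest-level and rate-limited. */
    printk(XENLOG_G_WARNING "invalid xenpmu op %d\n", op);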

commit 7f28d311a80e9d33d8270d6fb7b949dd4eef37f0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:28:06 2015 +0100

    xenoprof: free domain's vcpu array

    This was overlooked in fb442e2171 ("x86_64: allow more vCPU-s per
    guest").

    This is CVE-2015-7969 / XSA-151.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 6e97c4b37386c2d09e09e9b5d5d232e37728b960
    master date: 2015-10-29 13:36:52 +0100

commit 2d330c121eff67f4828dc8536180986e0dfdf14b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 29 14:27:44 2015 +0100

    x86/PoD: Eager sweep for zeroed pages

    Based on the contents of a guest's physical address space,
    p2m_pod_emergency_sweep() could degrade into a linear memcmp() from 0 to
    max_gfn, which runs non-preemptibly.

    As p2m_pod_emergency_sweep() runs behind the scenes in a number of contexts,
    making it preemptible is not feasible.

    Instead, a different approach is taken.  Recently-populated pages are eagerly
    checked for reclamation, which amortises the p2m_pod_emergency_sweep()
    operation across each p2m_pod_demand_populate() operation.

    Note that in the case that a 2M superpage can't be reclaimed as a superpage,
    it is shattered if 4K pages of zeros can be reclaimed.  This is unfortunate
    but matches the previous behaviour, and is required to avoid regressions
    (domain crash from PoD exhaustion) with VMs configured close to the limit.

    This is CVE-2015-7970 / XSA-150.

    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 101ce53266866144e724ed593173bc4098b300b9
    master date: 2015-10-29 13:36:25 +0100
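
A toy illustration of the amortisation idea (not the Xen code;
page_is_zeroed() and reclaim_page() stand in for the real p2m work):
remember recently populated pages in a small ring and scan only those
on each demand-populate, so the per-call work stays bounded instead of
degrading into a linear sweep to max_gfn.

    #include <stdbool.h>

    #define POD_HISTORY 128

    struct pod_history {
        unsigned long ring[POD_HISTORY];
        unsigned int count;
    };

    extern bool page_is_zeroed(unsigned long gfn);
    extern void reclaim_page(unsigned long gfn);

    static void pod_record_populate(struct pod_history *h, unsigned long gfn)
    {
        h->ring[h->count++ % POD_HISTORY] = gfn;
    }

    static void pod_eager_sweep(struct pod_history *h)
    {
        /* Bounded work: at most POD_HISTORY checks per call. */
        unsigned int n = h->count < POD_HISTORY ? h->count : POD_HISTORY;

        for (unsigned int i = 0; i < n; i++)
            if (page_is_zeroed(h->ring[i]))
                reclaim_page(h->ring[i]);
    }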

commit d7f7c5c6559ac3fa52dba5d8fe952b2c00f962db
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:27:02 2015 +0100

    free domain's vcpu array

    This was overlooked in fb442e2171 ("x86_64: allow more vCPU-s per
    guest").

    This is CVE-2015-7969 / XSA-149.

    Reported-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: d46896ebbb23f3a9fef2eb6066ae614fd1acfd96
    master date: 2015-10-29 13:35:40 +0100

commit 3be91e6c200af155a1badefc5945008c8da12ce7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:24:40 2015 +0100

    x86: guard against undue super page PTE creation

    When optional super page support got added (commit bd1cd81d64 "x86: PV
    support for hugepages"), two adjustments were missed: mod_l2_entry()
    needs to consider the PSE and RW bits when deciding whether to use the
    fast path, and the PSE bit must not be removed from L2_DISALLOW_MASK
    unconditionally.

    This is CVE-2015-7835 / XSA-148.

    Reported-by: "栾尚聪(好风)" <shangcong.lsc@alibaba-inc.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    master commit: fe360c90ea13f309ef78810f1a2b92f2ae3b30b8
    master date: 2015-10-29 13:35:07 +0100
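
Going by the description, the fast-path fix in mod_l2_entry() is
roughly of the following shape (a sketch reconstructed from the commit
message, not the verified patch hunk; l2e_has_changed() is the real
macro):

    /* Before, only _PAGE_PRESENT was considered when deciding whether
     * the update may take the flags-only fast path; now PSE and RW
     * changes force the slow, fully revalidating path too. */
    if ( !l2e_has_changed(ol2e, nl2e,
                          _PAGE_PRESENT | _PAGE_RW | _PAGE_PSE) )
    {
        /* ... fast path ... */
    }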

commit fb02dec2d06a2dd104682973a21375795e344e25
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Thu Oct 29 14:24:17 2015 +0100

    arm: handle races between relinquish_memory and free_domheap_pages

    Primarily this means XENMEM_decrease_reservation from a toolstack
    domain.

    Unlike x86 we have no requirement right now to queue such pages onto
    a separate list; if we hit this race then the other code has already
    fully accepted responsibility for freeing this page, and therefore
    there is no more for relinquish_memory to do.

    This is CVE-2015-7814 / XSA-147.

    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Reviewed-by: Julien Grall <julien.grall@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 1ef01396fdff88b1c3331a09ca5c69619b90f4ea
    master date: 2015-10-29 13:34:17 +0100

commit e06f1c36d36260b7d82f8563f1a6f226160d4b23
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:23:35 2015 +0100

    AMD Vi: fix HPET ID check

    Cherry picked from commit 2ca9fbd739 ("AMD IOMMU: allocate IRTE entries
    instead of using a static mapping") mainly to fix build with gcc 5.x.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>

commit 39698f92e4185afdd956e9af6888923c27728875
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Thu Oct 29 14:10:31 2015 +0100

    xen: common: Use unbounded array for symbols_offset.

    Using a singleton array causes gcc5 to report:
    symbols.c: In function 'symbols_lookup':
    symbols.c:128:359: error: array subscript is above array bounds [-Werror=array-bounds]
    symbols.c:136:176: error: array subscript is above array bounds [-Werror=array-bounds]

    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: 3f82ea62826d4eb06002d8dba475bafcc454b845
    master date: 2015-03-20 12:02:03 +0000
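
The pattern at issue, in miniature (identifier as in symbols.c; the
"before" line is the shape gcc 5 objects to):

    extern const unsigned int symbols_offsets[1];  /* before: [i > 0] trips -Warray-bounds */
    extern const unsigned int symbols_offsets[];   /* after: incomplete type, no bound to check */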
========================================



* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Jan Beulich @ 2015-11-11 10:58 UTC
  To: osstest-admin; +Cc: xen-devel

>>> On 10.11.15 at 18:59, <osstest-admin@xenproject.org> wrote:
> flight 63948 xen-4.3-testing real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/63948/ 
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212

This having failed for quite some time, I've finally looked more closely
and found

Nov 10 14:36:16.949051 (XEN) vmce.c:88: PV restore: unsupported MCA capabilities 0x1000802 for d1:v0 (supported: 0)

to be the reason for the EPERM here

xc: error: Couldn't set extended vcpu0 info (1 = Operation not permitted): Internal error

Taking apart the value, it is MCG_SER_P | MCG_TES_P (the low 8 bits
get masked out anyway), which is in line with 4.2's GUEST_MCG_CAP.
Hence I would guess that previous successful runs of this test would
have been on Intel systems only; I can't see how this test would ever
succeed on AMD ones. Considering that 4.3 is out of maintenance, I
think the only reasonable change to avoid endless failure here is to
limit this test to Intel systems for this version.
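
(For anyone checking the arithmetic, a small standalone decomposition
of the logged value; bit positions follow the architectural
IA32_MCG_CAP layout, and the constant names are used here as
assumptions:)

    #include <stdio.h>
    #include <stdint.h>

    #define MCG_BANKCNT_MASK 0xffULL      /* bits 7:0 -- bank count */
    #define MCG_TES_P        (1ULL << 11) /* threshold-based error status */
    #define MCG_SER_P        (1ULL << 24) /* software error recovery */

    int main(void)
    {
        uint64_t caps = 0x1000802;

        printf("banks=%llu tes_p=%d ser_p=%d\n",
               (unsigned long long)(caps & MCG_BANKCNT_MASK),
               !!(caps & MCG_TES_P), !!(caps & MCG_SER_P));
        /* banks=2 tes_p=1 ser_p=1: MCG_SER_P | MCG_TES_P plus a bank
         * count in the low 8 bits, as described above. */
        return 0;
    }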

Jan


* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Ian Campbell @ 2015-11-11 11:25 UTC
  To: Jan Beulich, osstest-admin; +Cc: xen-devel

On Wed, 2015-11-11 at 03:58 -0700, Jan Beulich wrote:
> > > > On 10.11.15 at 18:59, <osstest-admin@xenproject.org> wrote:
> > flight 63948 xen-4.3-testing real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/63948/ 
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212
> 
> This having failed for quite some time, I've finally looked more closely
> and found
> 
> Nov 10 14:36:16.949051 (XEN) vmce.c:88: PV restore: unsupported MCA capabilities 0x1000802 for d1:v0 (supported: 0)
> 
> to be the reason for the EPERM here
> 
> xc: error: Couldn't set extended vcpu0 info (1 = Operation not permitted): Internal error
> 
> Taking apart the value, it is MCG_SER_P | MCG_TES_P (the low 8 bits
> get masked out anyway), which is in line with 4.2's GUEST_MCG_CAP.
> Hence I would guess that previous successful runs of this test would
> have been on Intel systems only; I can't see how this test would ever
> succeed on AMD ones.

FWIW you can find the history of any given test at a URL like:
http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-migrupgrade/xen-4.3-testing.html

Figuring out the arch of the machines is a bit of a faff, especially since
some of the relevant logs no longer exist. From
http://logs.test-lab.xenproject.org/osstest/results/host/huxelrebe0.html
http://logs.test-lab.xenproject.org/osstest/results/host/godello0.html
http://logs.test-lab.xenproject.org/osstest/results/host/pinot0.html

I found recent logs which confirm (via the serial log):
Huxelrebe:
    CPU0: Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz stepping 03
Godello:
    CPU0: Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz stepping 03
Pinot:
    CPU0: AMD Opteron(tm) Processor 3350 HE               stepping 00

With Huxelrebe passing and Godello and Pinot failing, this doesn't seem to
correlate with your investigation, though.

>  Considering that 4.3 is out of maintenance, I
> think the only reasonable change to avoid endless failure here is to
> limit this test to Intel systems for this version.

Aside from the above I don't think osstest is currently aware of the vendor
of the processors (although I can certainly think of several reasons it
should be).

But given this is a new test case I would be happy, I think, to restrict it
to only go back as far as the earliest release which was in maintenance at
the time the test was introduced (August this year), or maybe (if something
just dropped out of maintenance recently) just the ones maintained today
(since it took a while for the test case to "bed in" and be made working on
some of the older ones). FWIW the last related fix I see in osstest was
early October.

Ian.


* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Jan Beulich @ 2015-11-11 11:35 UTC
  To: Ian Campbell; +Cc: xen-devel, osstest-admin

>>> On 11.11.15 at 12:25, <ian.campbell@citrix.com> wrote:
> On Wed, 2015-11-11 at 03:58 -0700, Jan Beulich wrote:
>> > > > On 10.11.15 at 18:59, <osstest-admin@xenproject.org> wrote:
>> > flight 63948 xen-4.3-testing real [real]
>> > http://logs.test-lab.xenproject.org/osstest/logs/63948/ 
>> > 
>> > Regressions :-(
>> > 
>> > Tests which did not succeed and are blocking,
>> > including tests which could not be run:
>> >  test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212
>> 
>> This having failed for quite some time, I've finally looked more closely
>> and found
>> 
>> Nov 10 14:36:16.949051 (XEN) vmce.c:88: PV restore: unsupported MCA capabilities 0x1000802 for d1:v0 (supported: 0)
>> 
>> to be the reason for the EPERM here
>> 
>> xc: error: Couldn't set extended vcpu0 info (1 = Operation not permitted): Internal error
>> 
>> Taking apart the value, it is MCG_SER_P | MCG_TES_P (the low 8 bits
>> get masked out anyway), which is in line with 4.2's GUEST_MCG_CAP.
>> Hence I would guess that previous successful runs of this test would
>> have been on Intel systems only; I can't see how this test would ever
>> succeed on AMD ones.
> 
> FWIW you can find the history of any given test at a URL like:
> http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-migrupgrade/xen-4.3-testing.html
> 
> Figuring out the arch of the machines is a bit of a faff, especially since
> some of the relevant logs no longer exist. From
> http://logs.test-lab.xenproject.org/osstest/results/host/huxelrebe0.html 
> http://logs.test-lab.xenproject.org/osstest/results/host/godello0.html 
> http://logs.test-lab.xenproject.org/osstest/results/host/pinot0.html 
> 
> I found recent logs which confirm (via the serial log):
> Huxelrebe:
>     CPU0: Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz stepping 03
> Godello:
>     CPU0: Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz stepping 03
> Pinot:
>     CPU0: AMD Opteron(tm) Processor 3350 HE               stepping 00
> 
> With Huxelrebe passing and godello and pinot failing this doesn't seem to
> correlate with your investigation though.

Looking at that history page you point me to, I see passes on
godello.

>>  Considering that 4.3 is out of maintenance, I
>> think the only reasonable change to avoid endless failure here is to
>> limit this test to Intel systems for this version.
> 
> Aside from the above I don't think osstest is currently aware of the vendor
> of the processors (although I can certainly think of several reasons it
> should be).
> 
> But given this is a new test case I would be happy, I think, to restrict it
> to only go back as far as the earliest release which was in maintenance at
> the time the test was introduced (August this year), or maybe (if something
> just dropped out of maintenance recently) just the ones maintained today
> (since it took a while for the test case to "bed in" and be made working on
> some of the older ones). FWIW the last related fix I see in osstest was
> early October.

That would be fine too.

Jan


* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Ian Campbell @ 2015-11-11 11:51 UTC
  To: Jan Beulich, Ian Jackson; +Cc: xen-devel, osstest-admin

On Wed, 2015-11-11 at 04:35 -0700, Jan Beulich wrote:
> > > > On 11.11.15 at 12:25, <ian.campbell@citrix.com> wrote:
> > On Wed, 2015-11-11 at 03:58 -0700, Jan Beulich wrote:
> > > > > > On 10.11.15 at 18:59, <osstest-admin@xenproject.org> wrote:
> > > > flight 63948 xen-4.3-testing real [real]
> > > > http://logs.test-lab.xenproject.org/osstest/logs/63948/ 
> > > > 
> > > > Regressions :-(
> > > > 
> > > > Tests which did not succeed and are blocking,
> > > > including tests which could not be run:
> > > >  test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212
> > > 
> > > This having failed for quite some time, I've finally looked more
> > > closely
> > > and found
> > > 
> > > Nov 10 14:36:16.949051 (XEN) vmce.c:88: PV restore: unsupported MCA capabilities 0x1000802 for d1:v0 (supported: 0)
> > > 
> > > to be the reason for the EPERM here
> > > 
> > > xc: error: Couldn't set extended vcpu0 info (1 = Operation not permitted): Internal error
> > > 
> > > Taking apart the value, it is MCG_SER_P | MCG_TES_P (the low 8 bits
> > > get masked out anyway), which is in line with 4.2's GUEST_MCG_CAP.
> > > Hence I would guess that previous successful runs of this test would
> > > have been on Intel systems only; I can't see how this test would ever
> > > succeed on AMD ones.
> > 
> > FWIW you can find the history of any given test at a URL like:
> > http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-migrupgrade/xen-4.3-testing.html
> > 
> > Figuring out the arch of the machines is a bit of a faff, especially since
> > some of the relevant logs no longer exist. From
> > http://logs.test-lab.xenproject.org/osstest/results/host/huxelrebe0.html
> > http://logs.test-lab.xenproject.org/osstest/results/host/godello0.html
> > http://logs.test-lab.xenproject.org/osstest/results/host/pinot0.html
> > 
> > I found recent logs which confirm (via the serial log):
> > Huxelrebe:
> >     CPU0: Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz stepping 03
> > Godello:
> >     CPU0: Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz stepping 03
> > Pinot:
> >     CPU0: AMD Opteron(tm) Processor 3350 HE               stepping 00
> > 
> > With Huxelrebe passing and Godello and Pinot failing, this doesn't seem
> > to correlate with your investigation, though.
> 
> Looking at that history page you point me to, I see passes on
> godello.

Uh, yes, two brainfarts on my part: first in mixing up that result,
second in inverting which one I thought you were saying worked vs didn't.
Sorry.

> 
> > >  Considering that 4.3 is out of maintenance, I
> > > think the only reasonable change to avoid endless failure here is to
> > > limit this test to Intel systems for this version.
> > 
> > Aside from the above I don't think osstest is currently aware of the vendor
> > of the processors (although I can certainly think of several reasons it
> > should be).
> > 
> > But given this is a new test case I would be happy, I think, to restrict it
> > to only go back as far as the earliest release which was in maintenance at
> > the time the test was introduced (August this year), or maybe (if something
> > just dropped out of maintenance recently) just the ones maintained today
> > (since it took a while for the test case to "bed in" and be made working on
> > some of the older ones). FWIW the last related fix I see in osstest was
> > early October.
> 
> That would be fine too.

I'll wait for Ian to have an opinion before doing anything.

Ian.


* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Ian Jackson @ 2015-11-11 15:30 UTC
  To: Ian Campbell; +Cc: xen-devel, osstest-admin, Jan Beulich

Ian Campbell writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
> Aside from the above I don't think osstest is currently aware of the vendor
> of the processors (although I can certainly think of several reasons it
> should be).

osstest is so aware.  There are hostflags `hvm-amd' and `hvm-intel',
currently used for host allocation for the *hvm*amd* and *hvm*intel*
tests.

> But given this is a new test case I would be happy, I think, to restrict it
> to only go back as far as the earliest release which was in maintenance at
> the time the test was introduced (August this year), or maybe (if something
> just dropped out of maintenance recently) just the ones maintained today
> (since it took a while for the test case to "bed in" and be made working on
> some of the older ones). FWIW the last related fix I see in osstest was
> early October.

I think this is probably best.

Ian.


* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Ian Campbell @ 2015-11-11 15:34 UTC
  To: Ian Jackson; +Cc: xen-devel, osstest-admin, Jan Beulich

On Wed, 2015-11-11 at 15:30 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
> > Aside from the above I don't think osstest is currently aware of the vendor
> > of the processors (although I can certainly think of several reasons it
> > should be).
> 
> osstest is so aware.  There are hostflags `hvm-amd' and `hvm-intel',
> currently used for host allocation for the *hvm*amd* and *hvm*intel*
> tests.

Ah yes, my mistake. (I vaguely thought that, and then convinced myself
otherwise somehow)

> > But given this is a new test case I would be happy, I think, to restrict it
> > to only go back as far as the earliest release which was in maintenance at
> > the time the test was introduced (August this year), or maybe (if something
> > just dropped out of maintenance recently) just the ones maintained today
> > (since it took a while for the test case to "bed in" and be made working on
> > some of the older ones). FWIW the last related fix I see in osstest was
> > early October.
> 
> I think this is probably best.

OK.

Ian.



* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Ian Jackson @ 2015-11-11 15:58 UTC
  To: Ian Campbell, Jan Beulich, osstest-admin, xen-devel

Ian Jackson writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
> Ian Campbell writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
> > But given this is a new test case I would be happy, I think, to restrict it
> > to only go back as far as the earliest release which was in maintenance at
> > the time the test was introduced (August this year), or maybe (if something
> > just dropped out of maintenance recently) just the ones maintained today
> > (since it took a while for the test case to "bed in" and be made working on
> > some of the older ones). FWIW the last related fix I see in osstest was
> > early October.
> 
> I think this is probably best.

Having discussed this on irc with Ian C, I think I have changed my
mind.  Or maybe I have, at least.

Do we know why this cross-version migration test fails when
within-version migrations of both 4.2 and 4.3 succeed ?  Can things
easily be fixed ?

I would rather not simply drop this test unless we know it's actually
hard to investigate or fix.  After all migration is one of the ways
people solve the problem `we are running a far too old version'.

Also we don't know (ATM) whether 4.3->4.4 works.  4.4->4.5 does.

Ian.


* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Jan Beulich @ 2015-11-11 16:17 UTC
  To: Ian Jackson; +Cc: xen-devel, Ian Campbell, osstest-admin

>>> On 11.11.15 at 16:58, <Ian.Jackson@eu.citrix.com> wrote:
> Ian Jackson writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
>> Ian Campbell writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
>> > But given this is a new test case I would be happy, I think, to restrict it
>> > to only go back as far as the earliest release which was in maintenance at
>> > the time the test was introduced (August this year), or maybe (if something
>> > just dropped out of maintenance recently) just the ones maintained today
>> > (since it took a while for the test case to "bed in" and be made working on
>> > some of the older ones). FWIW the last related fix I see in osstest was
>> > early October.
>> 
>> I think this is probably best.
> 
> Having discussed this on irc with Ian C, I think I have changed my
> mind.  Or maybe have, at least.
> 
> Do we know why this cross-version migration test fails when
> within-version migrations of both 4.2 and 4.3 succeed ?

As said in the original reply to the test report, this is due to vMCE
(validly) rejecting the incoming feature set. On 4.2, regardless of
CPU vendor, certain features got advertised to the guest. On 4.3
that advertisement became CPU vendor specific.

> Can things easily be fixed ?

Not sure. We could cheat and accept the known bogus value. But
should we even consider such on a no longer maintained branch?
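
One hypothetical shape for such a cheat (purely illustrative, nothing
like this was applied; apart from MCG_TES_P/MCG_SER_P the names are
made up, with "caps" the incoming value and "supported" the host-side
mask from the log line above):

    /* Hypothetical: on restore, tolerate exactly the bogus value 4.2
     * advertised, clamping it instead of returning -EPERM. */
    if ( (caps & ~0xffULL) == (MCG_SER_P | MCG_TES_P) )
        caps &= supported | 0xffULL;
    else if ( caps & ~(supported | 0xffULL) )
        return -EPERM;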

Jan


* Re: [xen-4.3-testing test] 63948: regressions - FAIL
From: Ian Campbell @ 2015-11-11 16:21 UTC
  To: Ian Jackson, Jan Beulich, osstest-admin, xen-devel

On Wed, 2015-11-11 at 15:58 +0000, Ian Jackson wrote:
> Also we don't know (ATM) whether 4.3->4.4 works.  4.4->4.5 does.

Are you basing that on 
http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-migrupgrade/xen-4.4-testing.html
and 
http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-migrupgrade/xen-4.5-testing.html

respectively?

Plus:

http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-i386-migrupgrade/xen-4.4-testing.html
and
http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-i386-migrupgrade/xen-4.5-testing.html

Of course.

Ian.




* Re: [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages]
From: Ian Jackson @ 2015-11-11 16:54 UTC
  To: Ian Campbell, Jan Beulich; +Cc: xen-devel, osstest-admin

Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
> On 11.11.15 at 16:58, <Ian.Jackson@eu.citrix.com> wrote:
> > Do we know why this cross-version migration test fails when
> > within-version migrations of both 4.2 and 4.3 succeed ?
> 
> As said in the original reply to the test report, this is due to vMCE
> (validly) rejecting the incoming feature set. On 4.2, regardless of
> CPU vendor, certain features got advertised to the guest. On 4.3
> that advertisement became CPU vendor specific.

And 4.3 checks the incoming advertisement, I see.

> > Can things easily be fixed ?
> 
> Not sure. We could cheat and accept the known bogus value. But
> should we even consider such on a no longer maintained branch?

I wonder how easy it would be to filter out the wrong advertisement,
during outgoing migration, in the toolstack in 4.2.  I appreciate that
4.2 is long-dead.  So maybe we should just leave it.  Users can shut
down and restart their guests.

Ian Campbell writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL"):
> On Wed, 2015-11-11 at 15:58 +0000, Ian Jackson wrote:
> > Also we don't know (ATM) whether 4.3->4.4 works.  4.4->4.5 does.

From what Jan says, at least this specific bug does not affect
4.3->4.4.

> Are you basing that on 
> http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-migrupgrade/xen-4.4-testing.html
> and 
> http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-migrupgrade/xen-4.5-testing.html

Yes.

> Plus:
> [i386]

Yes.

I think we should consider introducing
  test-amd64-<something>-hvm-migrupgrade-xsm

Ian.


* [xen-4.3-testing test] 64287: regressions - FAIL
From: osstest service owner @ 2015-11-15 23:19 UTC
  To: xen-devel, osstest-admin


flight 64287 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/64287/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail like 63212

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)               blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install     fail never pass
 build-i386-rumpuserxen        6 xen-build                    fail   never pass
 build-amd64-rumpuserxen       6 xen-build                    fail   never pass
 test-amd64-i386-migrupgrade 21 guest-migrate/src_host/dst_host fail never pass
 test-armhf-armhf-libvirt-qcow2  6 xen-boot                     fail never pass
 test-armhf-armhf-libvirt      6 xen-boot                     fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2   6 xen-boot                     fail   never pass
 test-armhf-armhf-libvirt-raw  6 xen-boot                     fail   never pass
 test-armhf-armhf-xl-arndale   6 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu  6 xen-boot                     fail  never pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install      fail never pass
 test-armhf-armhf-xl-cubietruck  6 xen-boot                     fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-vhd       6 xen-boot                     fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl           6 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 21 leak-check/check        fail never pass

version targeted for testing:
 xen                  fd1e4cc4d1d100337931b6f6dc50ed0b9766968a
baseline version:
 xen                  85ca813ec23c5a60680e4a13777dad530065902b

Last test of basis    63212  2015-10-22 10:03:01 Z   24 days
Failing since         63360  2015-10-29 13:39:04 Z   17 days   12 attempts
Testing same since    64090  2015-11-10 18:01:42 Z    5 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumpuserxen                                      fail
 build-i386-rumpuserxen                                       fail
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          fail
 test-amd64-i386-xl                                           pass
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail
 test-amd64-amd64-rumpuserxen-amd64                           blocked
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-armhf-armhf-xl-arndale                                  fail
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  fail
 test-armhf-armhf-xl-cubietruck                               fail
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumpuserxen-i386                             blocked
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 fail
 test-amd64-i386-migrupgrade                                  fail
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                fail
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-pv                                          pass
 test-amd64-i386-pv                                           pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-xl-raw                                       pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      fail
 test-amd64-i386-xend-qemut-winxpsp3                          fail
 test-amd64-amd64-xl-qemut-winxpsp3                           pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fd1e4cc4d1d100337931b6f6dc50ed0b9766968a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 12:23:43 2015 +0100

    x86/HVM: always intercept #AC and #DB

    Both being benign exceptions, and both being possible to get triggered
    by exception delivery, this is required to prevent a guest from locking
    up a CPU (resulting from no other VM exits occurring once getting into
    such a loop).

    The specific scenarios:

    1) #AC may be raised during exception delivery if the handler is set to
    be a ring-3 one by a 32-bit guest, and the stack is misaligned.

    This is CVE-2015-5307 / XSA-156.

    Reported-by: Benjamin Serebrin <serebrin@google.com>

    2) #DB may be raised during exception delivery when a breakpoint got
    placed on a data structure involved in delivering the exception. This
    can result in an endless loop when a 64-bit guest uses a non-zero IST
    for the vector 1 IDT entry, but even without use of IST the time it
    takes until a contributory fault would get raised (results depending
    on the handler) may be quite long.

    This is CVE-2015-8104 / XSA-156.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: bd2239d9fa975a1ee5bcd27c218ae042cd0a57bc
    master date: 2015-11-10 12:03:08 +0100
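
In exception-bitmap terms the change amounts to the following sketch
(the idea, not the VMX/SVM-specific patch hunks; "bitmap" stands for
the per-vCPU exception-intercept mask, and the vector numbers are
architectural):

    #define TRAP_debug            1   /* #DB */
    #define TRAP_alignment_check 17   /* #AC */

    /* Keep both vectors intercepted unconditionally, so delivering
     * either always causes a VM exit and the loops described above
     * cannot keep the CPU busy without the hypervisor noticing. */
    bitmap |= (1U << TRAP_debug) | (1U << TRAP_alignment_check);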

commit e875e0e5fcc5912f71422b53674a97e5c0ae77be
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Wed Oct 21 16:18:30 2015 +0100

    libxl: adjust PoD target by memory fudge, too

    PoD guests need to balloon at least as far as required by PoD, or risk
    crashing.  Currently they don't necessarily know what the right value
    is, because our memory accounting is (at the very least) confusing.

    Apply the memory limit fudge factor to the in-hypervisor PoD memory
    target, too.  This will increase the size of the guest's PoD cache by
    the fudge factor LIBXL_MAXMEM_CONSTANT (currently 1Mby).  This ensures
    that even with a slightly-off balloon driver, the guest will be
    stable even under memory pressure.

    There are two call sites of xc_domain_set_pod_target that need fixing:

    The one in libxl_set_memory_target is straightforward.

    The one in xc_hvm_build_x86.c:setup_guest is more awkward.  Simply
    setting the PoD target differently does not work because the various
    amounts of memory during domain construction no longer match up.
    Instead, we adjust the guest memory target in xenstore (but only for
    PoD guests).

    This introduces a 1Mby discrepancy between the balloon target of a PoD
    guest at boot, and the target set by an apparently-equivalent `xl
    mem-set' (or similar) later.  This approach is low-risk for a security
    fix but we need to fix this up properly in xen.git#staging and
    probably also in stable trees.

    This is XSA-153.

    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 56fb5fd62320eb40a7517206f9706aa9188d6f7b)
    (cherry picked from commit 423d2cd814e8460d5ea8bd191a770f3c48b3947c)

    Conflicts:
    	tools/libxl/libxl_dom.c
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 73b70e3c5d59e63126c890068ee0cbf8a2a3b640)

commit 9f359c61e3927f94bb280ffb200155dd20465fda
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:28:33 2015 +0100

    x86: rate-limit logging in do_xen{oprof,pmu}_op()

    Some of the sub-ops are accessible to all guests, and hence should be
    rate-limited. In the xenoprof case, just like for XSA-146, include them
    only in debug builds. Since the vPMU code is rather new, allow them to
    be always present, but downgrade them to (rate limited) guest messages.

    This is CVE-2015-7971 / XSA-152.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 95e7415843b94c346e5ba8682665f508f220e04b
    master date: 2015-10-29 13:37:19 +0100

commit 7f28d311a80e9d33d8270d6fb7b949dd4eef37f0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:28:06 2015 +0100

    xenoprof: free domain's vcpu array

    This was overlooked in fb442e2171 ("x86_64: allow more vCPU-s per
    guest").

    This is CVE-2015-7969 / XSA-151.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 6e97c4b37386c2d09e09e9b5d5d232e37728b960
    master date: 2015-10-29 13:36:52 +0100

commit 2d330c121eff67f4828dc8536180986e0dfdf14b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 29 14:27:44 2015 +0100

    x86/PoD: Eager sweep for zeroed pages

    Based on the contents of a guest's physical address space,
    p2m_pod_emergency_sweep() could degrade into a linear memcmp() from 0 to
    max_gfn, which runs non-preemptibly.

    As p2m_pod_emergency_sweep() runs behind the scenes in a number of contexts,
    making it preemptible is not feasible.

    Instead, a different approach is taken.  Recently-populated pages are eagerly
    checked for reclamation, which amortises the p2m_pod_emergency_sweep()
    operation across each p2m_pod_demand_populate() operation.

    Note that in the case that a 2M superpage can't be reclaimed as a superpage,
    it is shattered if 4K pages of zeros can be reclaimed.  This is unfortunate
    but matches the previous behaviour, and is required to avoid regressions
    (domain crash from PoD exhaustion) with VMs configured close to the limit.

    This is CVE-2015-7970 / XSA-150.

    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 101ce53266866144e724ed593173bc4098b300b9
    master date: 2015-10-29 13:36:25 +0100

commit d7f7c5c6559ac3fa52dba5d8fe952b2c00f962db
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:27:02 2015 +0100

    free domain's vcpu array

    This was overlooked in fb442e2171 ("x86_64: allow more vCPU-s per
    guest").

    This is CVE-2015-7969 / XSA-149.

    Reported-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: d46896ebbb23f3a9fef2eb6066ae614fd1acfd96
    master date: 2015-10-29 13:35:40 +0100

commit 3be91e6c200af155a1badefc5945008c8da12ce7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:24:40 2015 +0100

    x86: guard against undue super page PTE creation

    When optional super page support got added (commit bd1cd81d64 "x86: PV
    support for hugepages"), two adjustments were missed: mod_l2_entry()
    needs to consider the PSE and RW bits when deciding whether to use the
    fast path, and the PSE bit must not be removed from L2_DISALLOW_MASK
    unconditionally.

    This is CVE-2015-7835 / XSA-148.

    Reported-by: "栾尚聪(好风)" <shangcong.lsc@alibaba-inc.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    master commit: fe360c90ea13f309ef78810f1a2b92f2ae3b30b8
    master date: 2015-10-29 13:35:07 +0100

commit fb02dec2d06a2dd104682973a21375795e344e25
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Thu Oct 29 14:24:17 2015 +0100

    arm: handle races between relinquish_memory and free_domheap_pages

    Primarily this means XENMEM_decrease_reservation from a toolstack
    domain.

    Unlike x86 we have no requirement right now to queue such pages onto
    a separate list; if we hit this race then the other code has already
    fully accepted responsibility for freeing this page, and therefore
    there is no more for relinquish_memory to do.

    This is CVE-2015-7814 / XSA-147.

    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Reviewed-by: Julien Grall <julien.grall@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 1ef01396fdff88b1c3331a09ca5c69619b90f4ea
    master date: 2015-10-29 13:34:17 +0100

commit e06f1c36d36260b7d82f8563f1a6f226160d4b23
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:23:35 2015 +0100

    AMD Vi: fix HPET ID check

    Cherry picked from commit 2ca9fbd739 ("AMD IOMMU: allocate IRTE entries
    instead of using a static mapping") mainly to fix build with gcc 5.x.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>

commit 39698f92e4185afdd956e9af6888923c27728875
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Thu Oct 29 14:10:31 2015 +0100

    xen: common: Use unbounded array for symbols_offset.

    Using a singleton array causes gcc5 to report:
    symbols.c: In function 'symbols_lookup':
    symbols.c:128:359: error: array subscript is above array bounds [-Werror=array-bounds]
    symbols.c:136:176: error: array subscript is above array bounds [-Werror=array-bounds]

    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: 3f82ea62826d4eb06002d8dba475bafcc454b845
    master date: 2015-03-20 12:02:03 +0000
========================================



* Re: [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] [and 1 more messages]
From: Ian Jackson @ 2015-11-16 16:00 UTC
  To: Ian Campbell, Jan Beulich, xen-devel

Ian Jackson writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages]"):
> I wonder how easy it would be to filter out the wrong advertisement,
> during outgoing migration, in the toolstack in 4.2.  I appreciate that
> 4.2 is long-dead.  So maybe we should just leave it.  Users can shut
> down and restart their guests.

For now I think the easiest fix is:

osstest service owner writes ("[xen-4.3-testing test] 64287: regressions - FAIL"):
> flight 64287 xen-4.3-testing real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/64287/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212

Based on this, which is the only regression in 64287, we could force
push this:

> version targeted for testing:
>  xen                  fd1e4cc4d1d100337931b6f6dc50ed0b9766968a
> baseline version:
>  xen                  85ca813ec23c5a60680e4a13777dad530065902b
> 
> Last test of basis    63212  2015-10-22 10:03:01 Z   24 days
> Failing since         63360  2015-10-29 13:39:04 Z   17 days   12 attempts
> Testing same since    64090  2015-11-10 18:01:42 Z    5 days    3 attempts

Ian.

* Re: [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] [and 1 more messages]
  2015-11-16 16:00               ` [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] " Ian Jackson
@ 2015-11-16 16:11                 ` Jan Beulich
  2015-11-16 16:14                   ` Ian Jackson
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2015-11-16 16:11 UTC (permalink / raw)
  To: Ian Jackson; +Cc: xen-devel, Ian Campbell

>>> On 16.11.15 at 17:00, <Ian.Jackson@eu.citrix.com> wrote:
> Ian Jackson writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - 
> FAIL [and 1 more messages]"):
>> I wonder how easy it would be to filter out the wrong advertisement,
>> during outgoing migration, in the toolstack in 4.2.  I appreciate that
>> 4.2 is long-dead.  So maybe we should just leave it.  Users can shut
>> down and restart their guests.
> 
> For now I think the easiest fix is:
> 
> osstest service owner writes ("[xen-4.3-testing test] 64287: regressions - FAIL"):
>> flight 64287 xen-4.3-testing real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/64287/ 
>> 
>> Regressions :-(
>> 
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs. 63212
> 
> Based on this, which is the only regression in 64287, we could force
> push this:
> 
>> version targeted for testing:
>>  xen                  fd1e4cc4d1d100337931b6f6dc50ed0b9766968a
>> baseline version:
>>  xen                  85ca813ec23c5a60680e4a13777dad530065902b
>> 
>> Last test of basis    63212  2015-10-22 10:03:01 Z   24 days
>> Failing since         63360  2015-10-29 13:39:04 Z   17 days   12 attempts
>> Testing same since    64090  2015-11-10 18:01:42 Z    5 days    3 attempts

Well, yes, but wouldn't it re-occur once it managed to run on an
Intel box again?

Jan

* Re: [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] [and 1 more messages]
  2015-11-16 16:11                 ` Jan Beulich
@ 2015-11-16 16:14                   ` Ian Jackson
  2015-11-16 16:24                     ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Ian Jackson @ 2015-11-16 16:14 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Ian Campbell

Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] [and 1 more messages]"):
> On 16.11.15 at 17:00, <Ian.Jackson@eu.citrix.com> wrote:
> > Based on this, which is the only regression in 64287, we could force
> > push this:
...
> Well, yes, but wouldn't it re-occur once it managed to run on an
> Intel box again?

Yes, but that's not very likely to happen soon.

This will allow us to think about the medium-term fix without the
short-term breakage causing pain.

Ian.

* Re: [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] [and 1 more messages]
  2015-11-16 16:14                   ` Ian Jackson
@ 2015-11-16 16:24                     ` Jan Beulich
  2015-11-16 16:55                       ` Ian Jackson
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2015-11-16 16:24 UTC (permalink / raw)
  To: Ian Jackson; +Cc: xen-devel, Ian Campbell

>>> On 16.11.15 at 17:14, <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - 
> FAIL [and 1 more messages] [and 1 more messages]"):
>> On 16.11.15 at 17:00, <Ian.Jackson@eu.citrix.com> wrote:
>> > Based on this, which is the only regression in 64287, we could force
>> > push this:
> ...
>> Well, yes, but wouldn't it re-occur once it managed to run on an
>> Intel box again?
> 
> Yes, but that's not very likely to happen soon.
> 
> This will allow us to think about the medium-term fix without the
> short-term breakage causing pain.

Okay. Why don't you go ahead then?

Jan

* Re: [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] [and 1 more messages]
  2015-11-16 16:24                     ` Jan Beulich
@ 2015-11-16 16:55                       ` Ian Jackson
  0 siblings, 0 replies; 17+ messages in thread
From: Ian Jackson @ 2015-11-16 16:55 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Ian Campbell

Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] [and 1 more messages]"):
> On 16.11.15 at 17:14, <Ian.Jackson@eu.citrix.com> wrote:
> > This will allow us to think about the medium-term fix without the
> > short-term breakage causing pain.
> 
> Okay. Why don't you go ahead then?

Done.

Maybe we should bring AMD in and ask them if they want to fix this
migration?

Ian.

Thread overview: 17+ messages
2015-11-15 23:19 [xen-4.3-testing test] 64287: regressions - FAIL osstest service owner
2015-11-10 17:59 ` [xen-4.3-testing test] 63948: " osstest service owner
2015-11-11 10:58   ` Jan Beulich
2015-11-11 11:25     ` Ian Campbell
2015-11-11 11:35       ` Jan Beulich
2015-11-11 11:51         ` Ian Campbell
2015-11-11 15:30       ` Ian Jackson
2015-11-11 15:34         ` Ian Campbell
2015-11-11 15:58         ` Ian Jackson
2015-11-11 16:17           ` Jan Beulich
2015-11-11 16:54             ` [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] Ian Jackson
2015-11-16 16:00               ` [xen-4.3-testing test] 63948: regressions - FAIL [and 1 more messages] " Ian Jackson
2015-11-16 16:11                 ` Jan Beulich
2015-11-16 16:14                   ` Ian Jackson
2015-11-16 16:24                     ` Jan Beulich
2015-11-16 16:55                       ` Ian Jackson
2015-11-11 16:21           ` [xen-4.3-testing test] 63948: regressions - FAIL Ian Campbell
