* [xen-unstable test] 65141: regressions - FAIL
@ 2015-11-27 15:45 osstest service owner
  2015-11-27 15:54 ` Ian Jackson
  0 siblings, 1 reply; 23+ messages in thread
From: osstest service owner @ 2015-11-27 15:45 UTC (permalink / raw)
  To: xen-devel, osstest-admin


flight 65141 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/65141/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail REGR. vs. 65114

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     15 guest-start.2             fail REGR. vs. 65114
 test-armhf-armhf-libvirt      7 host-ping-check-xen      fail blocked in 65114
 test-amd64-i386-rumpuserxen-i386 10 guest-start                fail like 65114
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail like 65114
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail like 65114
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail like 65114
 test-amd64-amd64-libvirt-vhd  9 debian-di-install            fail   like 65114
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 13 guest-localmigrate fail like 65114

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install            fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install            fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-armhf-armhf-xl-vhd       9 debian-di-install            fail   never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  b1d398b67781140d1c6efd05778d0ad4103b2a32
baseline version:
 xen                  713b7e4ef2aa4ec3ae697cde9c81d5a57548f9b1

Last test of basis    65114  2015-11-25 19:42:37 Z    1 days
Testing same since    65141  2015-11-26 20:45:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Kevin Tian <kevin.tian@intel.com>
  Paul Durrant <paul.durrant@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wei.liu2@citrix.com>
  Yang Zhang <yang.z.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass
 build-armhf-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-oldkern                                          pass
 build-i386-oldkern                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumpuserxen                                      pass
 build-i386-rumpuserxen                                       pass
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          pass
 test-amd64-i386-xl                                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail
 test-amd64-amd64-libvirt-xsm                                 pass
 test-armhf-armhf-libvirt-xsm                                 fail
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-xl-xsm                                      pass
 test-armhf-armhf-xl-xsm                                      pass
 test-amd64-i386-xl-xsm                                       pass
 test-amd64-amd64-qemuu-nested-amd                            fail
 test-amd64-amd64-xl-pvh-amd                                  fail
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumpuserxen-amd64                           pass
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-armhf-armhf-xl-arndale                                  pass
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  pass
 test-armhf-armhf-xl-cubietruck                               pass
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumpuserxen-i386                             fail
 test-amd64-amd64-qemuu-nested-intel                          fail
 test-amd64-amd64-xl-pvh-intel                                fail
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 pass
 test-amd64-i386-migrupgrade                                  pass
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                pass
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-xl-raw                                       pass
 test-amd64-amd64-xl-rtds                                     pass
 test-armhf-armhf-xl-rtds                                     fail
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 fail
 test-armhf-armhf-xl-vhd                                      fail
 test-amd64-amd64-xl-qemut-winxpsp3                           pass
 test-amd64-i386-xl-qemut-winxpsp3                            pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass
 test-amd64-i386-xl-qemuu-winxpsp3                            pass


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b1d398b67781140d1c6efd05778d0ad4103b2a32
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Nov 26 16:01:27 2015 +0100

    x86: allow disabling the emulated local apic

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Kevin Tian <kevin.tian@intel.com>

commit 302a8519fb8537f619569873d605f65eb18f4bdc
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Nov 26 16:00:56 2015 +0100

    x86/vlapic: fixes for HVM code when running without a vlapic

    The HVM related code (SVM, VMX) generally assumed that a local apic is
    always present. With the introduction of an HVM mode where the local apic
    can be removed, some of these broken code paths surfaced.

    The SVM exit/resume paths unconditionally checked the state of the lapic,
    which is wrong if it's been disabled by hardware; fix this by adding the
    necessary checks. On the VMX side, make sure we don't add mappings for a
    local apic if it's disabled.

    In the generic vlapic code, add checks to prevent setting the TSC deadline
    timer if the lapic is disabled, and also prevent trying to inject
    interrupts from the PIC if the lapic is disabled.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Kevin Tian <kevin.tian@intel.com>

commit 0ce647ad6f70c5ec0aeee66ce74429982b81911a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 26 15:51:49 2015 +0100

    x86: suppress bogus log message

    The way we populate mpc_cpufeature is not compatible with modern CPUs,
    and hence the message printed using that information is useless/bogus.
    It's of interest only anyway when not using ACPI, so move it into MPS
    parsing code. This at once significantly reduces boot time logging on
    huge systems.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4243baf61acf24d972eb456aea8353481af31100
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Nov 26 15:51:00 2015 +0100

    HVM/save: allow the usage of zeroextend and a fixup function

    With the current compat implementation in the save/restore context handling,
    only one compat structure is allowed, and using _zeroextend prevents the
    fixup function from being called.

    In order to allow for the compat handling layer to be able to handle
    different compat versions allow calling the fixup function with
    hvm_load_entry_zeroextend.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit fafa16e3d006d6d5f62b71374991360697db2467
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Nov 26 15:50:36 2015 +0100

    HVM/save: pass a size parameter to the HVM compat functions

    In order to cope with types having multiple compat versions pass a size
    parameter to the fixup function so we can identify which compat version
    Xen is dealing with.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
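
The interplay between the two save/restore commits above — zero-extending a
shorter compat record while still invoking a fixup callback that receives the
original size — can be illustrated with a small sketch. This is Python with
hypothetical names and sizes, not the actual Xen hvm_load_entry API:

```python
# Illustrative sketch (assumed layout sizes, not real Xen structures):
# a saved record may be an older, shorter "compat" version. It is
# zero-extended to the current size, and the fixup callback is passed
# the original size so it can tell which compat version it is handling.

CURRENT_SIZE = 16          # hypothetical size of the current structure
COMPAT_SIZES = {8, 12}     # hypothetical sizes of older compat versions

def fixup(buf: bytearray, size: int) -> None:
    """Convert an older layout in place; `size` identifies the version."""
    if size == 8:
        pass   # migrate fields introduced after the 8-byte layout
    elif size == 12:
        pass   # migrate fields introduced after the 12-byte layout

def load_entry_zeroextend(record: bytes) -> bytearray:
    size = len(record)
    if size != CURRENT_SIZE and size not in COMPAT_SIZES:
        raise ValueError("unknown record size %d" % size)
    # zero-extend the record up to the current structure size
    buf = bytearray(record) + bytearray(CURRENT_SIZE - size)
    if size != CURRENT_SIZE:
        fixup(buf, size)   # fixup now runs even when zero-extending
    return buf
```

The point of the size parameter is the dispatch inside `fixup`: with more
than one compat version, zero-extension alone cannot tell them apart.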

commit 08f3fad0f30a7de35d60c2615c2ee56d3f7b77c8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 26 15:50:07 2015 +0100

    build: fix dependencies for files compiled from their parent directory

    The use of $(basename ...) here was wrong (yet I'm sure I tested it).

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit bdead3f25b75772e954d9c93e3ba36c7a926a030
Author: Yang Zhang <yang.z.zhang@intel.com>
Date:   Thu Nov 26 15:49:29 2015 +0100

    MAINTAINERS: change the vt-d maintainer

    add Feng as the new maintainer of VT-d stuff

    Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
    Acked-by: Kevin Tian <kevin.tian@intel.com>

commit b38d426ad09b3e142f474c1dadfd6d6903664a48
Author: Paul Durrant <paul.durrant@citrix.com>
Date:   Thu Nov 26 15:48:41 2015 +0100

    x86/viridian: flush remote tlbs by hypercall

    The Microsoft Hypervisor Top Level Functional Spec. (section 3.4) defines
    two bits in CPUID leaf 0x40000004:EAX for the hypervisor to recommend
    whether or not to issue a hypercall for local or remote TLB flush.

    Whilst it's doubtful whether using a hypercall for local TLB flush would
    be any more efficient than a specific INVLPG VMEXIT, a remote TLB flush
    may well be more efficiently done. This is because the alternative
    mechanism is to IPI all the vCPUs in question which (in the absence of
    APIC virtualisation) will require emulation and scheduling of the vCPUs
    only to have them immediately VMEXIT for local TLB flush.

    This patch therefore adds a viridian option which, if selected, enables
    the hypercall for remote TLB flush and implements it using ASID
    invalidation for targeted vCPUs, followed by an IPI only to the set of
    CPUs that happened to be running a targeted vCPU (which may be the empty
    set). The flush may be more severe than requested, since the hypercall
    can request a flush only for a specific address space (CR3) but Xen
    neither keeps a mapping of ASID to guest CR3 nor allows invalidation of
    a specific ASID; even so, on a host with contended CPUs performance is
    still likely to be better than a more specific flush using IPIs.

    The implementation introduces per-vCPU viridian_init() and
    viridian_deinit() functions to allow a scratch cpumask to be allocated.
    This avoids needing to put this potentially large data structure on the
    stack during hypercall processing. It also modifies the hypercall input
    and output bit-fields to allow a check for the 'fast' calling convention,
    and makes a white-space fix in the definition of HVMPV_feature_mask (to
    remove hard tabs).

    Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wei.liu2@citrix.com>
(qemu changes not included)
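
The flush strategy described in the commit message above — invalidate the
ASID of every targeted vCPU, then IPI only the physical CPUs currently
running one of them — can be modelled in a few lines. This is an
illustrative Python sketch with made-up names, not Xen code:

```python
# Rough model of the remote TLB flush described above. Descheduled vCPUs
# need no IPI: their stale ASID is dropped on the next VM entry anyway.

def remote_tlb_flush(targets, running_on, invalidate_asid, send_ipi):
    """targets: set of vCPU ids whose TLBs must be flushed.
    running_on: dict mapping vCPU id -> pCPU id, containing only vCPUs
    that are currently executing (descheduled vCPUs are absent)."""
    for vcpu in targets:
        invalidate_asid(vcpu)          # takes effect on next VM entry
    # IPI set may be empty if no targeted vCPU is currently running
    ipi_set = {running_on[v] for v in targets if v in running_on}
    for pcpu in sorted(ipi_set):
        send_ipi(pcpu)                 # force a VMEXIT so the vCPU re-enters
    return ipi_set
```

Note how the IPI fan-out is bounded by the number of pCPUs actually running
targeted vCPUs, rather than one IPI (and emulation round trip) per vCPU.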



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-11-27 15:45 [xen-unstable test] 65141: regressions - FAIL osstest service owner
@ 2015-11-27 15:54 ` Ian Jackson
  2015-11-30  5:35   ` Hu, Robert
  0 siblings, 1 reply; 23+ messages in thread
From: Ian Jackson @ 2015-11-27 15:54 UTC (permalink / raw)
  To: Hu, Robert; +Cc: xen-devel, osstest service owner

osstest service owner writes ("[xen-unstable test] 65141: regressions - FAIL"):
> flight 65141 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/65141/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail REGR. vs. 65114

Hi, Robert.  I hope you don't mind me asking you about these Nested
HVM tests in osstest production; if there's someone else we should
contact please let us know.

Anyway: it seems this test failed just now.

osstest thinks it is a regression, but I think it is more likely that
this test either exhibits a heisenbug, or that there is some problem
which is specific to the particular host.

We'd appreciate it if you and your colleagues could take a look at
this and analyse the failure.

In the meantime the osstest bisector will try to start work on it and
I will report what it discovers.

Ian.


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-11-27 15:54 ` Ian Jackson
@ 2015-11-30  5:35   ` Hu, Robert
  2015-12-02 10:34     ` Ian Campbell
  0 siblings, 1 reply; 23+ messages in thread
From: Hu, Robert @ 2015-11-30  5:35 UTC (permalink / raw)
  To: Ian Jackson; +Cc: xen-devel, osstest service owner

> -----Original Message-----
> From: Ian Jackson [mailto:Ian.Jackson@eu.citrix.com]
> Sent: Friday, November 27, 2015 11:54 PM
> To: Hu, Robert <robert.hu@intel.com>
> Cc: xen-devel@lists.xensource.com; osstest service owner
> <osstest-admin@xenproject.org>
> Subject: Re: [xen-unstable test] 65141: regressions - FAIL
> 
> osstest service owner writes ("[xen-unstable test] 65141: regressions -
> FAIL"):
> > flight 65141 xen-unstable real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/65141/
> >
> > Regressions :-(
> >
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail
> REGR. vs. 65114
> 
> Hi, Robert.  I hope you don't mind me asking you about these Nested
> HVM tests in osstest production; if there's someone else we should
> contact please let us know.
[Hu, Robert] 

I'm trying to look for...
But can I get the bad commit first?

> 
> Anyway: it seems this test failed just now.
> 
> osstest thinks it is a regression, but I think it is more likely that
> this test either exhibits a heisenbug, or that there is some problem
> which is specific to the particular host.
> 
> We'd appreciate it if you and your colleagues could take a look at
> this and analyse the failure.
> 
> In the meantime the osstest bisector will try to start work on it and
> I will report what it discovers.
> 
> Ian.


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-11-30  5:35   ` Hu, Robert
@ 2015-12-02 10:34     ` Ian Campbell
  2015-12-02 10:52       ` Jan Beulich
  2015-12-02 13:51       ` Ian Campbell
  0 siblings, 2 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-02 10:34 UTC (permalink / raw)
  To: Hu, Robert, Ian Jackson, Jun Nakajima, Kevin Tian
  Cc: Andrew Cooper, xen-devel, osstest service owner, Jan Beulich

On Mon, 2015-11-30 at 05:35 +0000, Hu, Robert wrote:

Also adding Vt-x maintainers (Kevin and Jun) for their help/input, I'm not
sure if there is a dedicated nested-vmx maintainer. This failure has been
blocking the xen-unstable push gate for a week now so it really does need
looking into.

Also CC the arch X86 maintainers, for-their-info.

> > -----Original Message-----
> > From: Ian Jackson [mailto:Ian.Jackson@eu.citrix.com]
> > Sent: Friday, November 27, 2015 11:54 PM
> > To: Hu, Robert <robert.hu@intel.com>
> > Cc: xen-devel@lists.xensource.com; osstest service owner
> > <osstest-admin@xenproject.org>
> > Subject: Re: [xen-unstable test] 65141: regressions - FAIL
> > 
> > osstest service owner writes ("[xen-unstable test] 65141: regressions -
> > FAIL"):
> > > flight 65141 xen-unstable real [real]
> > > http://logs.test-lab.xenproject.org/osstest/logs/65141/
> > > 
> > > Regressions :-(
> > > 
> > > Tests which did not succeed and are blocking,
> > > including tests which could not be run:
> > >  test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail
> > REGR. vs. 65114
> > 
> > Hi, Robert.  I hope you don't mind me asking you about these Nested
> > HVM tests in osstest production; if there's someone else we should
> > contact please let us know.
> [Hu, Robert] 
> 
> I'm trying to look for...
> But can I get the bad commit first?

It is in the original mail report, which is in the ML archives:
http://lists.xenproject.org/archives/html/xen-devel/2015-11/msg03222.html

version targeted for testing:
 xen                  b1d398b67781140d1c6efd05778d0ad4103b2a32
baseline version:
 xen                  713b7e4ef2aa4ec3ae697cde9c81d5a57548f9b1

You can also find it via the logs at
         http://logs.test-lab.xenproject.org/osstest/logs/65141
by clicking the header of one of the build-$arch jobs e.g. 
http://logs.test-lab.xenproject.org/osstest/logs/65141/build-amd64/info.html

and looking at the various tree_* and revision_* in the "Test control
variables" table, e.g.:
    revision_xen    	    b1d398b67781140d1c6efd05778d0ad4103b2a32    	    definition

> > Anyway: it seems this test failed just now.
> > 
> > osstest thinks it is a regression, but I think it is more likely that
> > this test either exhibits a heisenbug, or that there is some problem
> > which is specific to the particular host.

FWIW looking at the different branches in
 http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-qemuu-nested-intel/

While xen-unstable has started failing on godello1, xen-4.6-testing does
seem to be passing there.

The history for the previous job name (before splitting into -intel and
-amd) also shows a pass on godello1:

http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-qemuu-nested/xen-unstable.html

Of course this is a new test so there isn't very much historical data to
draw conclusions from.

> > We'd appreciate it if you and your colleagues could take a look at
> > this and analyse the failure.
> > 
> > In the meantime the osstest bisector will try to start work on it and
> > I will report what it discovers.

According to 
http://osstest.test-lab.xenproject.org/~osstest/pub/results/bisect/xen-unstable/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1--l2.html
it was unable to reproduce a baseline, probably because it didn't have
enough historical data. 

Ian.



* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-02 10:34     ` Ian Campbell
@ 2015-12-02 10:52       ` Jan Beulich
  2015-12-02 10:57         ` Andrew Cooper
  2015-12-02 11:07         ` Ian Campbell
  2015-12-02 13:51       ` Ian Campbell
  1 sibling, 2 replies; 23+ messages in thread
From: Jan Beulich @ 2015-12-02 10:52 UTC (permalink / raw)
  To: Ian Campbell, Ian Jackson
  Cc: Kevin Tian, xen-devel, Andrew Cooper, osstest service owner,
	Jun Nakajima, Robert Hu

>>> On 02.12.15 at 11:34, <ian.campbell@citrix.com> wrote:
> On Mon, 2015-11-30 at 05:35 +0000, Hu, Robert wrote:
> 
> Also adding Vt-x maintainers (Kevin and Jun) for their help/input, I'm not
> sure if there is a dedicated nested-vmx maintainer. This failure has been
> blocking the xen-unstable push gate for a week now so it really does need
> looking into.
> 
> Also CC the arch X86 maintainers, for-their-info.

I was actually waiting for the bisector to point at something. Also
looking at the history this might be machine specific (the sole
initial pass was on italia0, all failures are on godello1). Should new
tests perhaps be capable of causing regressions only once they
passed on every host they may (usefully) run on?
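
Jan's suggested criterion — let a new test cause regressions only once it
has passed on every host it may run on — boils down to a set comparison.
A sketch in Python (hypothetical data model, not osstest code):

```python
# A failure counts as a blocking regression only once the test has
# passed at least once on every host it is eligible to run on.

def may_cause_regression(passed_hosts, eligible_hosts):
    """passed_hosts: set of hosts on which the test has ever passed.
    eligible_hosts: set of hosts the test may (usefully) run on."""
    return eligible_hosts <= passed_hosts  # subset check

# With the situation described above (sole pass on italia0, failures
# on godello1), a godello1 failure would not yet count as a regression.
```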

Jan


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-02 10:52       ` Jan Beulich
@ 2015-12-02 10:57         ` Andrew Cooper
  2015-12-02 11:07         ` Ian Campbell
  1 sibling, 0 replies; 23+ messages in thread
From: Andrew Cooper @ 2015-12-02 10:57 UTC (permalink / raw)
  To: Jan Beulich, Ian Campbell, Ian Jackson
  Cc: Robert Hu, Kevin Tian, xen-devel, osstest service owner, Jun Nakajima

On 02/12/15 10:52, Jan Beulich wrote:
>>>> On 02.12.15 at 11:34, <ian.campbell@citrix.com> wrote:
>> On Mon, 2015-11-30 at 05:35 +0000, Hu, Robert wrote:
>>
>> Also adding Vt-x maintainers (Kevin and Jun) for their help/input, I'm not
>> sure if there is a dedicated nested-vmx maintainer. This failure has been
>> blocking the xen-unstable push gate for a week now so it really does need
>> looking into.
>>
>> Also CC the arch X86 maintainers, for-their-info.
> I was actually waiting for the bisector to point at something. Also
> looking at the history this might be machine specific (the sole
> initial pass was on italia0, all failures are on godello1). Should new
> tests perhaps be capable of causing regressions only once they
> passed on every host they may (usefully) run on?

As a general improvement, OSSTest should not make new tests blocking by
default.  They should need to be proved stable (5 consecutive passes?)
before they become blocking, to prevent a new test spuriously passing
and subsequently blocking pushes.

During this time, the author of the new test has the onus to ensure test
stability; either modifications to the test, or bugfixes to master.
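
The "proved stable before blocking" rule suggested here can be expressed as
a tiny predicate. An illustrative Python sketch (the streak length and names
are assumptions from the mail, not osstest's actual policy code):

```python
# A new test's failure blocks a push only once the test has, at some
# point, achieved the required number of consecutive passes.

REQUIRED_STREAK = 5  # the "5 consecutive passes?" suggested above

def proven_stable(history, n=REQUIRED_STREAK):
    """history: chronological list of 'pass'/'fail' results for one test.
    True once the test has ever achieved n consecutive passes."""
    streak = 0
    for result in history:
        streak = streak + 1 if result == "pass" else 0
        if streak >= n:
            return True
    return False

def failure_blocks_push(history):
    """A failure only blocks if the test was previously proven stable."""
    return proven_stable(history)
```

This avoids the situation described above, where a single spurious early
pass is enough to make every subsequent failure gate the push.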

~Andrew


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-02 10:52       ` Jan Beulich
  2015-12-02 10:57         ` Andrew Cooper
@ 2015-12-02 11:07         ` Ian Campbell
  1 sibling, 0 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-02 11:07 UTC (permalink / raw)
  To: Jan Beulich, Ian Jackson
  Cc: Kevin Tian, xen-devel, Andrew Cooper, osstest service owner,
	Jun Nakajima, Robert Hu

On Wed, 2015-12-02 at 03:52 -0700, Jan Beulich wrote:
> > > > On 02.12.15 at 11:34, <ian.campbell@citrix.com> wrote:
> > On Mon, 2015-11-30 at 05:35 +0000, Hu, Robert wrote:
> > 
> > Also adding Vt-x maintainers (Kevin and Jun) for their help/input, I'm
> > not
> > sure if there is a dedicated nested-vmx maintainer. This failure has
> > been
> > blocking the xen-unstable push gate for a week now so it really does
> > need
> > looking into.
> > 
> > Also CC the arch X86 maintainers, for-their-info.
> 
> I was actually waiting for the bisector to point at something. Also
> looking at the history this might be machine specific (the sole
> initial pass was on italia0, all failures are on godello1). Should new
> tests perhaps be capable of causing regressions only once they
> passed on every host they may (usefully) run on?

We renamed the test (to add an -intel/-amd suffix and run on an appropriate
host) so

 http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-qemuu-nested/xen-unstable.html

is also relevant; rimava is an amd machine (which led to the rename/split).

That older named test also shows a pass on godello1. That initial pass on
godello1 might have been a fluke, but the current run of seven failures
seems unlikely to be a fluke in the opposite direction.

Ian.



* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-02 10:34     ` Ian Campbell
  2015-12-02 10:52       ` Jan Beulich
@ 2015-12-02 13:51       ` Ian Campbell
  2015-12-03  5:58         ` Tian, Kevin
  2015-12-05  8:09         ` Ian Campbell
  1 sibling, 2 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-02 13:51 UTC (permalink / raw)
  To: Hu, Robert, Ian Jackson, Jun Nakajima, Kevin Tian
  Cc: Andrew Cooper, xen-devel, osstest service owner, Jan Beulich

On Wed, 2015-12-02 at 10:34 +0000, Ian Campbell wrote:
> 
[...]
> FWIW looking at the different branches in
>  http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-qemuu-nested-intel/
> 
> While xen-unstable has started failing on godello1 xen-4.6-testing does
> seem to be passing there.
> 
> The history for the previous job name (before splitting into -intel and
> -amd) also shows a pass on godello1:
> 
> http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-qemuu-nested/xen-unstable.html
> 
> Of course this is a new test so there isn't very much historical data to
> draw conclusions from.
> 
> > > We'd appreciate it if you and your colleagues could take a look at
> > > this and analyse the failure.
> > > 
> > > In the meantime the osstest bisector will try to start work on it and
> > > I will report what it discovers.
> 
> According to 
> http://osstest.test-lab.xenproject.org/~osstest/pub/results/bisect/xen-un
> stable/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1
> --l2.html
> it was unable to reproduce a baseline, probably because it didn't have
> enough historical data. 

So I have run an ad hoc test of the version of Xen tested by flight 64494,
i.e. the one which passed on godello under the old job name test-amd64-
amd64-qemuu-nested, but using the new name and current test harness, and it
was successful:

http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/

I think that ought to give a baseline for the bisector to work with. I'll
prod it to do so.

Given the number of commits, I expect this is going to take a little while
to produce results. Since the regression is already a week old, it would be
good if the VT-d maintainers could investigate in parallel.

The changes to xen.git in the region are below in case that flags anything
to anyone. Note that other trees have changed as well though.

Ian.

$ git log --oneline d07f63fa6e70..b1d398b67781 
b1d398b x86: allow disabling the emulated local apic
302a851 x86/vlapic: fixes for HVM code when running without a vlapic
0ce647a x86: suppress bogus log message
4243baf HVM/save: allow the usage of zeroextend and a fixup function
fafa16e HVM/save: pass a size parameter to the HVM compat functions
08f3fad build: fix dependencies for files compiled from their parent directory
bdead3f MAINTAINERS: change the vt-d maintainer
b38d426 x86/viridian: flush remote tlbs by hypercall
713b7e4 public/event_channel.h: correct comment
d380b35 x86/boot: check for not allowed sections before linking
c708181 libxc: expose xsaves/xgetbv1/xsavec to hvm guest
460b9a4 x86/xsaves: enable xsaves/xrstors for hvm guest
da62246 x86/xsaves: enable xsaves/xrstors/xsavec in xen
3ebe992 x86/xsaves: using named operand instead numbered operand in xrstor
d9d610d build: remove .d files from xen/ on a clean
1e8193b console: make printk() line continuation tracking per-CPU
2a91f05 xen/arm: vgic-v3: Make clear that GICD_*SPI_* registers are reserved
c38f9e4 xen/arm: vgic-v3: Don't implement write-only register read as zero
cdae22a xen/arm: vgic-v3: Remove spurious return in GICR_INVALLR
833a693 xen/arm: vgic-v3: Emulate read to GICD_ICACTIVER<n>
5482d13 xen/arm: vgic: Re-order the register emulations to match the memory map
49b6d4c xen/arm: vgic-v3: Remove GICR_MOVALLR and GICR_MOVLPIR
a2b83f9 xen/arm: vgic: Properly emulate the full register
84ce5f4 xen/arm: vgic-v3: Only emulate identification registers required by the spec
cf1a142 xen/arm: vgic-v3: Use the correct offset GICR_IGRPMODR0
675b68f xen/arm: vgic-v3: Don't try to emulate IROUTER which do not exist in the spec
afbbf2c xen/arm: vgic-v2: Implement correctly ICFGR{0, 1} read-only
c4d6bbd xen/arm: vgic-v3: Support 32-bit access for 64-bit registers
423e9ec xen/arm: vgic: Introduce helpers to extract/update/clear/set vGIC register ...
5d495f4 xen/arm: vgic: Optimize the way to store the target vCPU in the rank
e99e162 xen/arm: vgic-v2: Don't ignore a write in ITARGETSR if one field is 0
9f5e16e xen/arm: vgic-v2: Handle correctly byte write in ITARGETSR
bc50de8 xen/arm: vgic-v2: Implement correctly ITARGETSR0 - ITARGETSR7 read-only
ca55006 xen/arm: move ticks conversions function declarations to the header file
6c31176 arm: export platform_op XENPF_settime64
f1e5b11 xen: move wallclock functions from x86 to common
7d596f5 x86/VPMU: return correct fixed PMC count
3acb0d6 public/io/netif.h: tidy up and remove duplicate comments
8267253 public/io/netif.h: add definition of gso_prefix flag
19167b1 public/io/netif.h: document the reality of netif_rx_request/reponse
0c3f246 x86/VPMU: Initialize VPMU's lvtpc vector
c03480c x86/vPMU: document as unsupported
9b43668 Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging
9cf5f0a x86/kexec: hide more kexec infrastructure behind CONFIG_KEXEC
0aa684b x86: drop MAX_APICID
c87303c libxl: fix line wrapping issues introduced by automatic replacement
91cee73 libxl: convert libxl__sprintf(gc) to GCSPRINTF
0a62798 tools/hotplug: quote all variables in vif-bridge
26b4f4d docs: Introduce xenstore paths for guest network address information
e8f98cb docs: Introduce xenstore paths for hotplug features
71e64e1 docs: Introduce xenstore paths for PV driver information
85e3b08 docs: Introduce xenstore paths for PV control features
31aa811 get_maintainer: fix perl 5.22/5.24 deprecated/incompatible "\C" use
2dd7eaf tools/libxl: Drop dead code following calls to libxl__exec()
0188415 xen/arm: use masking operation instead of test_bit for MCSF bits
bb4b673 MAINTAINERS: mini-os patches should be copied to minios-devel
969eb34 MINIOS_UPSTREAM_REVISION Update
032dbba Config.mk: Update SEABIOS_UPSTREAM_TAG to rel-1.9.0
cb6be0d sched: get rid of the per domain vCPU list in Credit2
e604225 sched: get rid of the per domain vCPU list in RTDS
6b53bb4 sched: better handle (not) inserting idle vCPUs in runqueues
a8c6c62 sched: clarify use cases of schedule_cpu_switch()
ae2f41e sched: fix locking for insert_vcpu() in credit1 and RTDS
e4fd700 x86/HVM: type adjustments
81a28f1 VMX: fix/adjust trap injection
2bbe500 ACPI 6.0: Add changes for FADT table
975efe1 acpi/NUMA: build NUMA for x86 only
dee1aee VT-d: dump the posted format IRTE
853f8af vt-d: extend struct iremap_entry to support VT-d Posted-Interrupts
0ba0faa VT-d: remove pointless casts
2944689 vmx: initialize VT-d Posted-Interrupts Descriptor
03523ab vmx: add some helper functions for Posted-Interrupts
30e3f32 vmx: extend struct pi_desc to support VT-d Posted-Interrupts
1d028f7 VT-d Posted-Interrupts feature detection
9670e7d iommu: add iommu_intpost to control VT-d Posted-Interrupts feature
d02e84b vVMX: use latched VMCS machine address
3b47431 VMX: allocate VMCS pages from domain heap
827db7b MINIOS_UPSTREAM_REVISION Update
aec3811 tools/libxc: Correct XC_DOM_PAGE_SIZE() to return a long long
225166e libxl: correct bug in domain builder regarding page tables for pvh
c35eefd x86/P2M: consolidate handling of types not requiring a valid MFN
513c203 x86/PoD: tighten conditions for checking super page
65288cf x86/IO-APIC: fix setting of destinations
5ed662e x86: fixes to LAPIC probing
b9730aa ns16550: limit mapped MMIO size
7efcc58 ns16550: reset bar_64 on each iteration
f2d2de5 x86: move some APIC related macros to apicdef.h
e0116ac x86: add cmpxchg16b support
a2b3502 blkif.h: document blkif multi-queue/ring extension

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-02 13:51       ` Ian Campbell
@ 2015-12-03  5:58         ` Tian, Kevin
  2015-12-03  9:25           ` Ian Campbell
  2015-12-05  8:09         ` Ian Campbell
  1 sibling, 1 reply; 23+ messages in thread
From: Tian, Kevin @ 2015-12-03  5:58 UTC (permalink / raw)
  To: Ian Campbell, Hu, Robert, Ian Jackson, Nakajima, Jun
  Cc: Andrew Cooper, xen-devel, osstest service owner, Jan Beulich

> From: Ian Campbell [mailto:ian.campbell@citrix.com]
> Sent: Wednesday, December 02, 2015 9:51 PM
> >
> > According to
> > http://osstest.test-lab.xenproject.org/~osstest/pub/results/bisect/xen-un
> > stable/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1
> > --l2.html
> > it was unable to reproduce a baseline, probably because it didn't have
> > enough historical data.
> 
> So I have run an adhoc test of the version of Xen tested by flight 64494,
> i.e. the one which passed on godello under the old job name test-amd64-
> amd64-qemuu-nested but using the new name and current test harness and it
> was successful:
> 
> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> 
> I think that ought to give a baseline for the bisector to work with. I'll
> prod it to do so.
> 
> Given the number of commits I expect this is going to take a little while
> to produce results, given that the regression is already a week old it
> would be good if the VT-d maintainers could investigate in parallel.

Is the test case VT-d related? From the earlier thread it looks as though
it's related to nested virtualization?

I can't access the above link. If you can help describe the test steps in
plain text, I can check whether we can reproduce it locally.

Thanks
Kevin

* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-03  5:58         ` Tian, Kevin
@ 2015-12-03  9:25           ` Ian Campbell
  0 siblings, 0 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-03  9:25 UTC (permalink / raw)
  To: Tian, Kevin, Hu, Robert, Ian Jackson, Nakajima, Jun
  Cc: Andrew Cooper, xen-devel, osstest service owner, Jan Beulich

On Thu, 2015-12-03 at 05:58 +0000, Tian, Kevin wrote:
> > From: Ian Campbell [mailto:ian.campbell@citrix.com]
> > Sent: Wednesday, December 02, 2015 9:51 PM
> > > 
> > > According to
> > > http://osstest.test-lab.xenproject.org/~osstest/pub/results/bisect/xe
> > > n-un
> > > stable/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1
> > > --l2.html
> > > it was unable to reproduce a baseline, probably because it didn't
> > > have
> > > enough historical data.
> > 
> > So I have run an adhoc test of the version of Xen tested by flight
> > 64494,
> > i.e. the one which passed on godello under the old job name test-amd64-
> > amd64-qemuu-nested but using the new name and current test harness and
> > it
> > was successful:
> > 
> > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> > 
> > I think that ought to give a baseline for the bisector to work with.
> > I'll
> > prod it to do so.
> > 
> > Given the number of commits I expect this is going to take a little
> > while
> > to produce results, given that the regression is already a week old it
> > would be good if the VT-d maintainers could investigate in parallel.
> 
> Is the test case VT-d related? From earlier thread looks it's related to
> nested virtualization?

It's the nested-vmx/VT-x test case which Intel contributed to osstest over
the last few months. AFAIK VT-d is not involved in any way (no passthrough
etc).

> I can't access above link. If you can help describe the test steps in
> plain text, I can check whether we can reproduce locally.

Sorry, I hadn't spotted that the adhoc link was internal to the colo when I
c&p'd it.

This test case has failed in the last half-dozen real xen-unstable flights,
including the one which started this thread and the latest one which is at 
http://logs.test-lab.xenproject.org/osstest/logs/65287/test-amd64-amd64-qemuu-nested-intel/info.html

The test case is installing Xen in an HVM guest to produce an L1 host and
then trying to boot an L2 HVM guest within that L1, which fails.
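
For reference, that setup can be sketched as an xl guest configuration for
the L1; this is an illustrative minimal config (names, sizes and paths are
hypothetical, not the values osstest actually uses), the key line being
nestedhvm:

```
# L1 guest: an HVM domain which will itself run Xen (hypothetical values)
builder   = "hvm"
name      = "l1"
memory    = 4096
vcpus     = 2
nestedhvm = 1    # expose hardware virtualization (VMX) to the guest
disk      = [ "file:/images/l1.img,xvda,w" ]
vif       = [ "bridge=xenbr0" ]
```

Inside that L1, a further plain HVM guest (the L2) is then installed the
same way, minus the nestedhvm line; it is that L2 install which fails.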

Ian.


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-02 13:51       ` Ian Campbell
  2015-12-03  5:58         ` Tian, Kevin
@ 2015-12-05  8:09         ` Ian Campbell
  2015-12-07 16:18           ` Jan Beulich
  2015-12-08  8:06           ` Hu, Robert
  1 sibling, 2 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-05  8:09 UTC (permalink / raw)
  To: Hu, Robert, Ian Jackson, Jun Nakajima, Kevin Tian
  Cc: Andrew Cooper, xen-devel, osstest service owner, Jan Beulich

On Wed, 2015-12-02 at 13:51 +0000, Ian Campbell wrote:

> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> 
> I think that ought to give a baseline for the bisector to work with. I'll
> prod it to do so.

Results are below. TL;DR: d02e84b9d9d "vVMX: use latched VMCS machine
address" is somehow at fault.

It appears to be somewhat machine-specific; the one this has been
failing on is godello*, which reports "CPU0: Intel(R) Xeon(R) CPU E3-1220
v3 @ 3.10GHz stepping 03" in its serial log.

Andy suggested this might be related to cpu_has_vmx_vmcs_shadowing,
i.e. Haswell and newer vs. IvyBridge and older.

Ian.

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-nested-intel
testid debian-hvm-install/l1/l2

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
  Bug not present: 3b47431691409004c7218f6a6ba5c9c0bcf483ea
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/65388/


  commit d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Nov 24 12:07:27 2015 +0100
  
      vVMX: use latched VMCS machine address
      
      Instead of calling domain_page_map_to_mfn() over and over, latch the
      guest VMCS machine address unconditionally (i.e. independent of whether
      VMCS shadowing is supported by the hardware).
      
      Since this requires altering the parameters of __[gs]et_vmcs{,_real}()
      (and hence all their callers) anyway, take the opportunity to also drop
      the bogus double underscores from their names (and from
      __[gs]et_vmcs_virtual() as well).
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Kevin Tian <kevin.tian@intel.com>
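
A minimal sketch of what "latching" means in this commit (the types and
names here are illustrative stand-ins, not Xen's actual nvmx code): instead
of resolving the guest VMCS to a machine address on every access, resolve
it once when the vVMCS becomes current and cache the result:

```c
#include <stdint.h>

typedef uint64_t paddr_t;

struct nestedvmcs {
    void    *virt;          /* mapping of the guest's VMCS page       */
    paddr_t  latched_maddr; /* machine address cached at VMPTRLD time */
};

/* Hypothetical resolver standing in for domain_page_map_to_mfn(). */
static paddr_t resolve_maddr(void *virt)
{
    return (paddr_t)(uintptr_t)virt;
}

/* Latch once, at the point the guest VMCS becomes current... */
static void vvmcs_load(struct nestedvmcs *nv, void *virt)
{
    nv->virt = virt;
    nv->latched_maddr = resolve_maddr(virt);
}

/* ...so every subsequent user reads the cached value instead of
 * re-resolving the address on each access. */
static paddr_t vvmcs_maddr(const struct nestedvmcs *nv)
{
    return nv->latched_maddr;
}
```

The hazard with such caching is staleness: any path that changes the
mapping without refreshing the latched value would leave later users
reading through a wrong machine address.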


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results-adhoc/bisect/xen-unstable/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1--l2.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results-adhoc/bisect/xen-unstable/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1--l2 --summary-out=tmp/65388.bisection-summary --blessings=real,real-bisect,adhoc-bisect --basis-template=65301 --basis-flight=65301 xen-unstable test-amd64-amd64-qemuu-nested-intel debian-hvm-install/l1/l2
Searching for failure / basis pass:
 65314 fail [host=godello1] / template as basis? using template as basis.
Failure / basis pass flights: 65314 / 65301
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 2c4f313a7e62c7e559a469d4af4c3d03c49afa43
Basis pass 1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 816609b2841297925a223ec377c336360e044ee5 d07f63fa6e70350b23e7acbde06129247c4e655d
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9-769b79eb206ad5b0249a08665fefb913c3d1998e git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#bc00cad75d8bcc3ba696992bec219c21db8406aa-bc00cad75d8bcc3ba696992bec219c21db8406aa git://xenbits.xen.org/qemu-xen.git#816609b2841297925a223ec377c336360e044ee5-3fb401edbd8e9741c611bfddf6a2032ca91f55ed git://xenbits.xen.org/xen.git#d07f63fa6e70350b23e7acbde06129247c4e655d-2c4f313a7e62c7e559a469d4af4c3d03c49afa43
Loaded 17133 nodes in revision graph
Searching for test results:
 65114 [host=italia0]
 65141 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed b1d398b67781140d1c6efd05778d0ad4103b2a32
 65162 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 713b7e4ef2aa4ec3ae697cde9c81d5a57548f9b1
 65164 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed b1d398b67781140d1c6efd05778d0ad4103b2a32
 65186 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed b1d398b67781140d1c6efd05778d0ad4103b2a32
 65217 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed b1d398b67781140d1c6efd05778d0ad4103b2a32
 65233 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed b1d398b67781140d1c6efd05778d0ad4103b2a32
 65287 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed c8eb0ec277ae387e78d685523e0fee633e46f046
 65333 pass 1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 816609b2841297925a223ec377c336360e044ee5 d07f63fa6e70350b23e7acbde06129247c4e655d
 65314 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 2c4f313a7e62c7e559a469d4af4c3d03c49afa43
 65354 pass 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 816609b2841297925a223ec377c336360e044ee5 c35eefded2992fc9b979f99190422527650872fd
 65267 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 4c6cd64519f9bc270a7278128c94e4b66e3d2077
 65301 pass 1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 816609b2841297925a223ec377c336360e044ee5 d07f63fa6e70350b23e7acbde06129247c4e655d
 65351 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 0ba0faa7f02fb310a330579adeef3534e6eca5a8
 65343 pass 997badf1e03dfd39854094f7767e1a4cf5ed310b c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 816609b2841297925a223ec377c336360e044ee5 d07f63fa6e70350b23e7acbde06129247c4e655d
 65325 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed c8eb0ec277ae387e78d685523e0fee633e46f046
 65345 pass e7a10d9297c1abfd27138d86a13f7b6435634a46 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 816609b2841297925a223ec377c336360e044ee5 d07f63fa6e70350b23e7acbde06129247c4e655d
 65341 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 2c4f313a7e62c7e559a469d4af4c3d03c49afa43
 65342 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed c87303c04738b0e837da6e891eb561de0bf1b64e
 65362 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
 65367 pass 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 816609b2841297925a223ec377c336360e044ee5 827db7b26384ce083df7154d77f13379b2cf4121
 65370 pass 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 827db7b26384ce083df7154d77f13379b2cf4121
 65374 pass 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 3b47431691409004c7218f6a6ba5c9c0bcf483ea
 65379 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
 65380 pass 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 3b47431691409004c7218f6a6ba5c9c0bcf483ea
 65383 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
 65384 pass 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 3b47431691409004c7218f6a6ba5c9c0bcf483ea
 65388 fail 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
Searching for interesting versions
 Result found: flight 65301 (pass), for basis pass
 Result found: flight 65314 (fail), for basis failure
 Repro found: flight 65333 (pass), for basis pass
 Repro found: flight 65341 (fail), for basis failure
 0 revisions at 769b79eb206ad5b0249a08665fefb913c3d1998e c530a75c1e6a472b0eb9558310b518f0dfcd8860 bc00cad75d8bcc3ba696992bec219c21db8406aa 3fb401edbd8e9741c611bfddf6a2032ca91f55ed 3b47431691409004c7218f6a6ba5c9c0bcf483ea
No revisions left to test, checking graph state.
 Result found: flight 65374 (pass), for last pass
 Result found: flight 65379 (fail), for first failure
 Repro found: flight 65380 (pass), for last pass
 Repro found: flight 65383 (fail), for first failure
 Repro found: flight 65384 (pass), for last pass
 Repro found: flight 65388 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
  Bug not present: 3b47431691409004c7218f6a6ba5c9c0bcf483ea
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/65388/


  commit d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Nov 24 12:07:27 2015 +0100
  
      vVMX: use latched VMCS machine address
      
      Instead of calling domain_page_map_to_mfn() over and over, latch the
      guest VMCS machine address unconditionally (i.e. independent of whether
      VMCS shadowing is supported by the hardware).
      
      Since this requires altering the parameters of __[gs]et_vmcs{,_real}()
      (and hence all their callers) anyway, take the opportunity to also drop
      the bogus double underscores from their names (and from
      __[gs]et_vmcs_virtual() as well).
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Kevin Tian <kevin.tian@intel.com>

Revision graph left in /home/logs/results-adhoc/bisect/xen-unstable/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1--l2.{dot,ps,png,html,svg}.
----------------------------------------
65388: tolerable ALL FAIL

flight 65388 xen-unstable adhoc-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/65388/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail baseline untested


jobs:
 test-amd64-amd64-qemuu-nested-intel                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-05  8:09         ` Ian Campbell
@ 2015-12-07 16:18           ` Jan Beulich
  2015-12-07 16:28             ` Ian Campbell
  2015-12-08  2:46             ` Tian, Kevin
  2015-12-08  8:06           ` Hu, Robert
  1 sibling, 2 replies; 23+ messages in thread
From: Jan Beulich @ 2015-12-07 16:18 UTC (permalink / raw)
  To: Ian Campbell, Ian Jackson, Jun Nakajima, Kevin Tian, Robert Hu
  Cc: Andrew Cooper, xen-devel, osstest service owner

>>> On 05.12.15 at 09:09, <ian.campbell@citrix.com> wrote:
> On Wed, 2015-12-02 at 13:51 +0000, Ian Campbell wrote:
> 
>> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/ 
>> 
>> I think that ought to give a baseline for the bisector to work with. I'll
>> prod it to do so.
> 
> Results are below. TL;DR: d02e84b9d9d "vVMX: use latched VMCS machine
> address" is somehow at fault.
> 
> It appears to be somewhat machine specific, the one this has been
> failing on is godello* which says "CPU0: Intel(R) Xeon(R) CPU E3-1220
> v3 @ 3.10GHz stepping 03" in its serial log.
> 
> Andy suggested this might be related to cpu_has_vmx_vmcs_shadowing
> so Haswell and newer vs IvyBridge and older.

Yeah, but on IRC it was also made clear that the regression is on a
system without that capability.

At this point we certainly need to seriously consider reverting the
whole change. The reason I continue to be hesitant is that I'm
afraid this may result in no-one trying to find out what the problem
here is. While I could certainly try to, I'm sure I won't find time to
do so within the foreseeable future. And since we didn't get any
real feedback from Intel so far, I thought I'd ping them to at least
share some status before we decide. That pinging has happened
a few minutes ago. I'd therefore like to give it, say, another day,
and if by then we don't have an estimate for when a fix might
become available, I'd do the revert. Unless of course somebody
feels strongly about doing the revert immediately.

Jan


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-07 16:18           ` Jan Beulich
@ 2015-12-07 16:28             ` Ian Campbell
  2015-12-07 16:48               ` Jan Beulich
  2015-12-08  2:46             ` Tian, Kevin
  1 sibling, 1 reply; 23+ messages in thread
From: Ian Campbell @ 2015-12-07 16:28 UTC (permalink / raw)
  To: Jan Beulich, Ian Jackson, Jun Nakajima, Kevin Tian, Robert Hu
  Cc: Andrew Cooper, xen-devel, osstest service owner

On Mon, 2015-12-07 at 09:18 -0700, Jan Beulich wrote:
> > > > On 05.12.15 at 09:09, <ian.campbell@citrix.com> wrote:
> > On Wed, 2015-12-02 at 13:51 +0000, Ian Campbell wrote:
> > 
> > > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/ 
> > > 
> > > I think that ought to give a baseline for the bisector to work with.
> > > I'll
> > > prod it to do so.
> > 
> > Results are below. TL;DR: d02e84b9d9d "vVMX: use latched VMCS machine
> > address" is somehow at fault.
> > 
> > It appears to be somewhat machine specific, the one this has been
> > failing on is godello* which says "CPU0: Intel(R) Xeon(R) CPU E3-1220
> > v3 @ 3.10GHz stepping 03" in its serial log.
> > 
> > Andy suggested this might be related to cpu_has_vmx_vmcs_shadowing
> > so Haswell and newer vs IvyBridge and older.
> 
> Yeah, but on irc it was also made clear that the regression is on a
> system without that capability.

What I was trying to convey is that he suggested the difference between
working and broken hosts might fall along the lines of >=Haswell vs.
<=IvyBridge.

How that maps onto E3-1220, which is what is exhibiting the issue, I leave
to you guys.

> At this point we certainly need to seriously consider reverting the
> whole change. The reason I continue to be hesitant is that I'm
> afraid this may result in no-one trying to find out what the problem
> here is. While I could certainly try to, I'm sure I won't find time to
> do so within the foreseeable future. And since we didn't get any
> real feedback from Intel so far, I thought I'd ping them to at least
> share some status before we decide. That pinging has happened
> a few minutes ago. I'd therefore like to give it, say, another day,
> and if by then we don't have an estimate for when a fix might
> become available, I'd do the revert. Unless of course somebody
> feels strongly about doing the revert immediately.

I don't mind waiting.

One approach to fixing might be to disentangle the various things which
this patch did, such that the actual culprit is a smaller thing to analyse.

Ian.


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-07 16:28             ` Ian Campbell
@ 2015-12-07 16:48               ` Jan Beulich
  0 siblings, 0 replies; 23+ messages in thread
From: Jan Beulich @ 2015-12-07 16:48 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Kevin Tian, xen-devel, Andrew Cooper, Ian Jackson,
	osstest service owner, Jun Nakajima, Robert Hu

>>> On 07.12.15 at 17:28, <ian.campbell@citrix.com> wrote:
> One approach to fixing might be to disentangle the various things which
> this patch did, such that the actual culprit is a smaller thing to analyse.

Yes, if we're going to revert, I'll try to do something in that direction.
I don't expect there to be too much room for splitting though.

Jan


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-07 16:18           ` Jan Beulich
  2015-12-07 16:28             ` Ian Campbell
@ 2015-12-08  2:46             ` Tian, Kevin
  2015-12-08  7:28               ` Jan Beulich
  1 sibling, 1 reply; 23+ messages in thread
From: Tian, Kevin @ 2015-12-08  2:46 UTC (permalink / raw)
  To: Jan Beulich, Ian Campbell, Ian Jackson, Nakajima, Jun, Hu, Robert
  Cc: Andrew Cooper, xen-devel, osstest service owner

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, December 08, 2015 12:18 AM
> 
> >>> On 05.12.15 at 09:09, <ian.campbell@citrix.com> wrote:
> > On Wed, 2015-12-02 at 13:51 +0000, Ian Campbell wrote:
> >
> >> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> >>
> >> I think that ought to give a baseline for the bisector to work with. I'll
> >> prod it to do so.
> >
> > Results are below. TL;DR: d02e84b9d9d "vVMX: use latched VMCS machine
> > address" is somehow at fault.
> >
> > It appears to be somewhat machine specific, the one this has been
> > failing on is godello* which says "CPU0: Intel(R) Xeon(R) CPU E3-1220
> > v3 @ 3.10GHz stepping 03" in its serial log.
> >
> > Andy suggested this might be related to cpu_has_vmx_vmcs_shadowing
> > so Haswell and newer vs IvyBridge and older.
> 
> Yeah, but on irc it was also made clear that the regression is on a
> system without that capability.
> 
> At this point we certainly need to seriously consider reverting the
> whole change. The reason I continue to be hesitant is that I'm
> afraid this may result in no-one trying to find out what the problem
> here is. While I could certainly try to, I'm sure I won't find time to
> do so within the foreseeable future. And since we didn't get any
> real feedback from Intel so far, I thought I'd ping them to at least
> share some status before we decide. That pinging has happened
> a few minutes ago. I'd therefore like to give it, say, another day,
> and if by then we don't have an estimate for when a fix might
> become available, I'd do the revert. Unless of course somebody
> feels strongly about doing the revert immediately.
> 

I didn't see an obvious error in the commit, so some debugging will be
required to identify the problematic code. However, this issue was not
immediately reproduced in our internal environment, and the person
familiar with this area (Yang) has just left Intel. It will take some
time to identify a new developer and get him ramped up to fix issues in
this area. Given that, I'd suggest reverting the related code now (as
you discussed, not the whole commit). In parallel we'll find someone to
look at the original commit as a ramp-up task.

Thanks
Kevin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-08  2:46             ` Tian, Kevin
@ 2015-12-08  7:28               ` Jan Beulich
  0 siblings, 0 replies; 23+ messages in thread
From: Jan Beulich @ 2015-12-08  7:28 UTC (permalink / raw)
  To: Kevin Tian
  Cc: xen-devel, Ian Campbell, Andrew Cooper, Ian Jackson,
	osstest service owner, Jun Nakajima, Robert Hu

>>> On 08.12.15 at 03:46, <kevin.tian@intel.com> wrote:
> I didn't see an obvious error in the commit, so some debugging
> would be required to identify the problematic code. However, this
> issue was not reproduced immediately in our internal environment,
> and the person familiar with this area (Yang) has just left Intel.
> It will take some time to identify a new developer and get him
> ramped up to fix issues in this area. Given that, I'd suggest
> reverting the related code now (as you discussed, not the whole
> commit). In parallel we'll find someone to look at the original
> commit as a ramp-up task.

Thanks Kevin!

Jan


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-05  8:09         ` Ian Campbell
  2015-12-07 16:18           ` Jan Beulich
@ 2015-12-08  8:06           ` Hu, Robert
  2015-12-08 10:29             ` Ian Campbell
  1 sibling, 1 reply; 23+ messages in thread
From: Hu, Robert @ 2015-12-08  8:06 UTC (permalink / raw)
  To: Ian Campbell, Ian Jackson, Nakajima, Jun, Tian, Kevin
  Cc: xen-devel, Andrew Cooper, osstest service owner, Wang, Yong Y,
	Jan Beulich, Jin, Gordon


> -----Original Message-----
> From: Ian Campbell [mailto:ian.campbell@citrix.com]
> Sent: Saturday, December 5, 2015 4:10 PM
> To: Hu, Robert <robert.hu@intel.com>; Ian Jackson
> <Ian.Jackson@eu.citrix.com>; Nakajima, Jun <jun.nakajima@intel.com>;
> Tian, Kevin <kevin.tian@intel.com>
> Cc: xen-devel@lists.xensource.com; osstest service owner
> <osstest-admin@xenproject.org>; Jan Beulich <jbeulich@suse.com>;
> Andrew Cooper <andrew.cooper3@citrix.com>
> Subject: Re: [xen-unstable test] 65141: regressions - FAIL
> 
> On Wed, 2015-12-02 at 13:51 +0000, Ian Campbell wrote:
> 
> > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> >
> > I think that ought to give a baseline for the bisector to work with. I'll
> > prod it to do so.
> 
> Results are below. TL;DR: d02e84b9d9d "vVMX: use latched VMCS machine
> address" is somehow at fault.
> 
> It appears to be somewhat machine specific, the one this has been
> failing on is godello* which says "CPU0: Intel(R) Xeon(R) CPU E3-1220
> v3 @ 3.10GHz stepping 03" in its serial log.
> 
> Andy suggested this might be related to cpu_has_vmx_vmcs_shadowing
> so Haswell and newer vs IvyBridge and older.
> 
> Ian.
> 
> branch xen-unstable
> xenbranch xen-unstable
> job test-amd64-amd64-qemuu-nested-intel
> testid debian-hvm-install/l1/l2
> 
> Tree: linux git://xenbits.xen.org/linux-pvops.git
> Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: xen git://xenbits.xen.org/xen.git
> 
> *** Found and reproduced problem changeset ***
> 
>   Bug is in tree:  xen git://xenbits.xen.org/xen.git
>   Bug introduced:  d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
>   Bug not present: 3b47431691409004c7218f6a6ba5c9c0bcf483ea
>   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/65388/
> 
> 
>   commit d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
>   Author: Jan Beulich <jbeulich@suse.com>
>   Date:   Tue Nov 24 12:07:27 2015 +0100
> 
>       vVMX: use latched VMCS machine address
> 
>       Instead of calling domain_page_map_to_mfn() over and over, latch
> the
>       guest VMCS machine address unconditionally (i.e. independent of
> whether
>       VMCS shadowing is supported by the hardware).
> 
>       Since this requires altering the parameters of __[gs]et_vmcs{,_real}()
>       (and hence all their callers) anyway, take the opportunity to also drop
>       the bogus double underscores from their names (and from
>       __[gs]et_vmcs_virtual() as well).
> 
>       Signed-off-by: Jan Beulich <jbeulich@suse.com>
>       Acked-by: Kevin Tian <kevin.tian@intel.com>
> 
> 
> For bisection revision-tuple graph see:
> 
> http://logs.test-lab.xenproject.org/osstest/results-adhoc/bisect/xen-unstabl
> e/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install--l1--l2.html
> Revision IDs in each graph node refer, respectively, to the Trees above.
> 
> ----------------------------------------
> Running cs-bisection-step
> --graph-out=/home/logs/results-adhoc/bisect/xen-unstable/test-amd64-am
> d64-qemuu-nested-intel.debian-hvm-install--l1--l2
> --summary-out=tmp/65388.bisection-summary
> --blessings=real,real-bisect,adhoc-bisect --basis-template=65301
> --basis-flight=65301 xen-unstable test-amd64-amd64-qemuu-nested-intel
> debian-hvm-install/l1/l2
> Searching for failure / basis pass:
>  65314 fail [host=godello1] / template as basis? using template as basis.
> Failure / basis pass flights: 65314 / 65301
> (tree with no url: ovmf)
> (tree with no url: seabios)
> Tree: linux git://xenbits.xen.org/linux-pvops.git
> Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: xen git://xenbits.xen.org/xen.git
> Latest 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 2c4f313a7e62c7e559a469d4af4c3d03c49afa43
> Basis pass 1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 816609b2841297925a223ec377c336360e044ee5
> d07f63fa6e70350b23e7acbde06129247c4e655d
> Generating revisions with ./adhoc-revtuple-generator
> git://xenbits.xen.org/linux-pvops.git#1230ae0e99e05ced8a945a1a2c5762ce
> 5c6c97c9-769b79eb206ad5b0249a08665fefb913c3d1998e
> git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb955
> 8310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
> git://xenbits.xen.org/qemu-xen-traditional.git#bc00cad75d8bcc3ba696992b
> ec219c21db8406aa-bc00cad75d8bcc3ba696992bec219c21db8406aa
> git://xenbits.xen.org/qemu-xen.git#816609b2841297925a223ec377c336360
> e044ee5-3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> git://xenbits.xen.org/xen.git#d07f63fa6e70350b23e7acbde06129247c4e655
> d-2c4f313a7e62c7e559a469d4af4c3d03c49afa43
> Loaded 17133 nodes in revision graph
> Searching for test results:
>  65114 [host=italia0]
>  65141 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> b1d398b67781140d1c6efd05778d0ad4103b2a32
>  65162 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 713b7e4ef2aa4ec3ae697cde9c81d5a57548f9b1
>  65164 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> b1d398b67781140d1c6efd05778d0ad4103b2a32
>  65186 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> b1d398b67781140d1c6efd05778d0ad4103b2a32
>  65217 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> b1d398b67781140d1c6efd05778d0ad4103b2a32
>  65233 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> b1d398b67781140d1c6efd05778d0ad4103b2a32
>  65287 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> c8eb0ec277ae387e78d685523e0fee633e46f046
>  65333 pass 1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 816609b2841297925a223ec377c336360e044ee5
> d07f63fa6e70350b23e7acbde06129247c4e655d
>  65314 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 2c4f313a7e62c7e559a469d4af4c3d03c49afa43
>  65354 pass 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 816609b2841297925a223ec377c336360e044ee5
> c35eefded2992fc9b979f99190422527650872fd
>  65267 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 4c6cd64519f9bc270a7278128c94e4b66e3d2077
>  65301 pass 1230ae0e99e05ced8a945a1a2c5762ce5c6c97c9
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 816609b2841297925a223ec377c336360e044ee5
> d07f63fa6e70350b23e7acbde06129247c4e655d
>  65351 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 0ba0faa7f02fb310a330579adeef3534e6eca5a8
>  65343 pass 997badf1e03dfd39854094f7767e1a4cf5ed310b
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 816609b2841297925a223ec377c336360e044ee5
> d07f63fa6e70350b23e7acbde06129247c4e655d
>  65325 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> c8eb0ec277ae387e78d685523e0fee633e46f046
>  65345 pass e7a10d9297c1abfd27138d86a13f7b6435634a46
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 816609b2841297925a223ec377c336360e044ee5
> d07f63fa6e70350b23e7acbde06129247c4e655d
>  65341 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 2c4f313a7e62c7e559a469d4af4c3d03c49afa43
>  65342 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> c87303c04738b0e837da6e891eb561de0bf1b64e
>  65362 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
>  65367 pass 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 816609b2841297925a223ec377c336360e044ee5
> 827db7b26384ce083df7154d77f13379b2cf4121
>  65370 pass 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 827db7b26384ce083df7154d77f13379b2cf4121
>  65374 pass 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 3b47431691409004c7218f6a6ba5c9c0bcf483ea
>  65379 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
>  65380 pass 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 3b47431691409004c7218f6a6ba5c9c0bcf483ea
>  65383 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
>  65384 pass 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 3b47431691409004c7218f6a6ba5c9c0bcf483ea
>  65388 fail 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
> Searching for interesting versions
>  Result found: flight 65301 (pass), for basis pass
>  Result found: flight 65314 (fail), for basis failure
>  Repro found: flight 65333 (pass), for basis pass
>  Repro found: flight 65341 (fail), for basis failure
>  0 revisions at 769b79eb206ad5b0249a08665fefb913c3d1998e
> c530a75c1e6a472b0eb9558310b518f0dfcd8860
> bc00cad75d8bcc3ba696992bec219c21db8406aa
> 3fb401edbd8e9741c611bfddf6a2032ca91f55ed
> 3b47431691409004c7218f6a6ba5c9c0bcf483ea
> No revisions left to test, checking graph state.
>  Result found: flight 65374 (pass), for last pass
>  Result found: flight 65379 (fail), for first failure
>  Repro found: flight 65380 (pass), for last pass
>  Repro found: flight 65383 (fail), for first failure
>  Repro found: flight 65384 (pass), for last pass
>  Repro found: flight 65388 (fail), for first failure
> 
> *** Found and reproduced problem changeset ***
> 
>   Bug is in tree:  xen git://xenbits.xen.org/xen.git
>   Bug introduced:  d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
>   Bug not present: 3b47431691409004c7218f6a6ba5c9c0bcf483ea
>   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/65388/
> 
> 
>   commit d02e84b9d9d16b6b56186f0dfdcb3c90b83c82a3
>   Author: Jan Beulich <jbeulich@suse.com>
>   Date:   Tue Nov 24 12:07:27 2015 +0100
> 
>       vVMX: use latched VMCS machine address
> 
>       Instead of calling domain_page_map_to_mfn() over and over, latch
> the
>       guest VMCS machine address unconditionally (i.e. independent of
> whether
>       VMCS shadowing is supported by the hardware).
> 
>       Since this requires altering the parameters of __[gs]et_vmcs{,_real}()
>       (and hence all their callers) anyway, take the opportunity to also drop
>       the bogus double underscores from their names (and from
>       __[gs]et_vmcs_virtual() as well).
> 
>       Signed-off-by: Jan Beulich <jbeulich@suse.com>
>       Acked-by: Kevin Tian <kevin.tian@intel.com>
> 
> Revision graph left in
> /home/logs/results-adhoc/bisect/xen-unstable/test-amd64-amd64-qemuu-n
> ested-intel.debian-hvm-install--l1--l2.{dot,ps,png,html,svg}.
> ----------------------------------------
> 65388: tolerable ALL FAIL
> 
> flight 65388 xen-unstable adhoc-bisect [real]
> http://logs.test-lab.xenproject.org/osstest/logs/65388/
> 
> Failures :-/ but no regressions.
> 
> Tests which did not succeed,
> including tests which could not be run:
>  test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail
> baseline untested
> 
> 
> jobs:
>  test-amd64-amd64-qemuu-nested-intel                          fail
[Hu, Robert] 

I tried to reproduce the failure in my environment, with the Qemu/Xen/Dom0
/Seabios/OVMF versions you designated, but failed at an even earlier stage --
normal L1 guest installation.

xen be: vkbd-0: initialise() failed
xen be: vkbd-0: initialise() failed
xen be: vkbd-0: initialise() failed

Not sure if this is related to the linux-pvops tree, as I was previously using
the linux-stable tree.

For your failure, as Kevin mentioned in another mail, we will find someone to
look into it. Could you find the detailed log of the 'debian-hvm-install--l1--l2'
step, so that he can start analyzing? I cannot reproduce it right now.
(I didn't manage to find the log in your web links.)

> 
> 
> ------------------------------------------------------------
> sg-report-flight on osstest.test-lab.xenproject.org
> logs: /home/logs/logs
> images: /home/logs/images
> 
> Logs, config files, etc. are available at
>     http://logs.test-lab.xenproject.org/osstest/logs
> 
> Explanation of these reports, and of osstest in general, is at
> 
> http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=m
> aster
> 
> http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
> 
> Test harness code can be found at
>     http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-08  8:06           ` Hu, Robert
@ 2015-12-08 10:29             ` Ian Campbell
  2015-12-09  6:27               ` Robert Hu
  2015-12-11  4:04               ` Robert Hu
  0 siblings, 2 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-08 10:29 UTC (permalink / raw)
  To: Hu, Robert, Ian Jackson, Nakajima, Jun, Tian, Kevin
  Cc: xen-devel, Andrew Cooper, osstest service owner, Wang, Yong Y,
	Jan Beulich, Jin, Gordon

On Tue, 2015-12-08 at 08:06 +0000, Hu, Robert wrote:
> > 
> [...]

Please trim your quotes.

> For your failure, as Kevin mentioned in other mail, we will find someone
> to look into.
> Would you find out the detailed log of ' debian-hvm-install--l1--l2'
> step? so that he
> can start analyze, as I cannot reproduce it right now.
> (I didn't managed to find out the log in your web links)

From
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/

(in this thread, or any other relevant logs/NNNNN/ link), you can click the
"test -amd64 -amd64 -qemuu -nested -intel" header to go to
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd64-amd64-qemuu-nested-intel/info.html

Then click the result of the "debian-hvm-install/l1/l2" step from that
table to go to the step log:
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd64-amd64-qemuu-nested-intel/16.ts-debian-hvm-install.log

Which I think is what you were after?

All of the other related logs are at the end of the same
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd64-amd64-qemuu-nested-intel/info.html
page.

See also README in osstest.git which explains the structure of the log
pages/grids/etc.

Ian.


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-08 10:29             ` Ian Campbell
@ 2015-12-09  6:27               ` Robert Hu
  2015-12-09  8:35                 ` Jin, Gordon
  2015-12-11  4:04               ` Robert Hu
  1 sibling, 1 reply; 23+ messages in thread
From: Robert Hu @ 2015-12-09  6:27 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tian, Kevin, xen-devel, Jan Beulich, guangrong.xiao,
	Andrew Cooper, Ian Jackson, osstest service owner, Wang, Yong Y,
	Nakajima, Jun, Jin, Gordon, Hu, Robert

On Tue, 2015-12-08 at 10:29 +0000, Ian Campbell wrote:
> On Tue, 2015-12-08 at 08:06 +0000, Hu, Robert wrote:
> > > 
> > [...]
> 
> Please trim your quotes.
> 
> > For your failure, as Kevin mentioned in other mail, we will find someone
> > to look into.
> > Would you find out the detailed log of ' debian-hvm-install--l1--l2'
> > step? so that he
> > can start analyze, as I cannot reproduce it right now.
> > (I didn't managed to find out the log in your web links)
> 
> From
> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> 
> [...]

OK, thanks Ian.

I'd like to confirm whether this can be reproduced on IvyBridge. I'm now
trying to set up an environment for our developer to dig into.

> 
> Ian.
> 
> 


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-09  6:27               ` Robert Hu
@ 2015-12-09  8:35                 ` Jin, Gordon
  2015-12-09 10:21                   ` Ian Campbell
  0 siblings, 1 reply; 23+ messages in thread
From: Jin, Gordon @ 2015-12-09  8:35 UTC (permalink / raw)
  To: Hu, Robert, Ian Campbell
  Cc: Tian, Kevin, xen-devel, Nakajima, Jun, Xiao, Guangrong,
	Andrew Cooper, Ian Jackson, osstest service owner, Wang, Yong Y,
	Jan Beulich

> -----Original Message-----
> From: Robert Hu [mailto:robert.hu@vmm.sh.intel.com]
> Sent: Wednesday, December 09, 2015 2:27 PM
> To: Ian Campbell
> Cc: Hu, Robert; Ian Jackson; Nakajima, Jun; Tian, Kevin;
> xen-devel@lists.xensource.com; osstest service owner; Jan Beulich; Andrew
> Cooper; Wang, Yong Y; Jin, Gordon; Xiao, Guangrong
> Subject: Re: [xen-unstable test] 65141: regressions - FAIL
> 
> On Tue, 2015-12-08 at 10:29 +0000, Ian Campbell wrote:
> > On Tue, 2015-12-08 at 08:06 +0000, Hu, Robert wrote:
> > > >
> > > [...]
> >
> > Please trim your quotes.
> >
> > > For your failure, as Kevin mentioned in other mail, we will find
> > > someone to look into.
> > > Would you find out the detailed log of ' debian-hvm-install--l1--l2'
> > > step? so that he
> > > can start analyze, as I cannot reproduce it right now.
> > > (I didn't managed to find out the log in your web links)
> >
> > From
> > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> >
> > [...]
> 
> OK, thanks Ian.
> 
> I'd like to confirm if this can be reproduced on Ivybridge? I'm now trying to set up
> environment for our developer to dig into.

I checked the naming convention, and found that Xeon E3-1220 v3 is Haswell.

Thanks
Gordon


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-09  8:35                 ` Jin, Gordon
@ 2015-12-09 10:21                   ` Ian Campbell
  0 siblings, 0 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-09 10:21 UTC (permalink / raw)
  To: Jin, Gordon, Hu, Robert
  Cc: Tian, Kevin, xen-devel, Nakajima, Jun, Xiao, Guangrong,
	Andrew Cooper, Ian Jackson, osstest service owner, Wang, Yong Y,
	Jan Beulich

On Wed, 2015-12-09 at 08:35 +0000, Jin, Gordon wrote:
> > -----Original Message-----
> > From: Robert Hu [mailto:robert.hu@vmm.sh.intel.com]
> > Sent: Wednesday, December 09, 2015 2:27 PM
> > To: Ian Campbell
> > Cc: Hu, Robert; Ian Jackson; Nakajima, Jun; Tian, Kevin;
> > xen-devel@lists.xensource.com; osstest service owner; Jan Beulich;
> > Andrew
> > Cooper; Wang, Yong Y; Jin, Gordon; Xiao, Guangrong
> > Subject: Re: [xen-unstable test] 65141: regressions - FAIL
> > 
> > On Tue, 2015-12-08 at 10:29 +0000, Ian Campbell wrote:
> > > On Tue, 2015-12-08 at 08:06 +0000, Hu, Robert wrote:
> > > > > 
> > > > [...]
> > > 
> > > Please trim your quotes.
> > > 
> > > > For your failure, as Kevin mentioned in other mail, we will find
> > > > someone to look into.
> > > > Would you find out the detailed log of ' debian-hvm-install--l1
> > > > --l2'
> > > > step? so that he
> > > > can start analyze, as I cannot reproduce it right now.
> > > > (I didn't managed to find out the log in your web links)
> > > 
> > > From
> > > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> > > 
> > > [...]
> > 
> > OK, thanks Ian.
> > 
> > I'd like to confirm if this can be reproduced on Ivybridge? I'm now trying to set up
> > environment for our developer to dig into.

http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-qemuu-nested-intel/xen-unstable.html

godello1, which is exhibiting the problem, is (from the serial logs) a "CPU0:
Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz stepping 03".

Version 713b7e4ef2aa from xen.git also ran and passed on italia0
and 713b7e4ef2aa was after the bad commit (d02e84b9d9d).

The serial logs say that italia0 is "CPU0: Intel(R) Xeon(R) CPU E3-1220 V2
@ 3.10GHz stepping 09".

> I checked the name convention, and found Xeon E3-1220 v3 is Haswell.

So it _does_ happen on Haswell then.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-08 10:29             ` Ian Campbell
  2015-12-09  6:27               ` Robert Hu
@ 2015-12-11  4:04               ` Robert Hu
  2015-12-11 11:49                 ` Ian Campbell
  1 sibling, 1 reply; 23+ messages in thread
From: Robert Hu @ 2015-12-11  4:04 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Tian, Kevin, xen-devel, Jan Beulich, guangrong.xiao,
	Andrew Cooper, Ian Jackson, osstest service owner, Wang, Yong Y,
	Nakajima, Jun, Jin, Gordon, Hu, Robert

On Tue, 2015-12-08 at 10:29 +0000, Ian Campbell wrote:
> On Tue, 2015-12-08 at 08:06 +0000, Hu, Robert wrote:
> > > 
> > [...]
> 
> Please trim your quotes.
> 
> > For your failure, as Kevin mentioned in other mail, we will find someone
> > to look into.
> > Would you find out the detailed log of ' debian-hvm-install--l1--l2'
> > step? so that he
> > can start analyze, as I cannot reproduce it right now.
> > (I didn't managed to find out the log in your web links)
> 
> From
> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> 
> (in this thread, or any other relevant logs/NNNNN/ link), you can click the
> "test -amd64 -amd64 -qemuu -nested -intel" header to go to
> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd64-amd64-qemuu-nested-intel/info.html
> 
> Then click the result of the "debian-hvm-install/l1/l2" step from that
> table to go to the step log:
> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd64-amd64-qemuu-nested-intel/16.ts-debian-hvm-install.log
> 
> Which I think is what you were after?

Yes, I had tracked it to there. However, from logs like the following (the
one above cannot be opened now, so let me take the latest as an example)
http://logs.test-lab.xenproject.org/osstest/logs/65633/test-amd64-amd64-qemuu-nested-intel/16.ts-debian-hvm-install.log

2015-12-10 20:45:53 Z executing ssh ... root@172.16.146.205 xl -vvv
create /etc/xen/l2.guest.osstest.cfg 
Parsing config from /etc/xen/l2.guest.osstest.cfg
libxl: debug: libxl_create.c:1561:do_domain_create: ao 0xe8abd0: create:
how=(nil) callback=(nil) poller=0xe8b840
libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk
vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:298:libxl__device_disk_set_backend: Disk
vdev=hda, using backend phy
libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk
vdev=hdc spec.backend=unknown
libxl: debug: libxl_device.c:298:libxl__device_disk_set_backend: Disk
vdev=hdc, using backend phy
libxl: debug: libxl_create.c:942:initiate_domain_create: running
bootloader
libxl: debug: libxl_bootloader.c:324:libxl__bootloader_run: not a PV
domain, skipping bootloader
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch
w=0xe8c1f8: deregister unregistered
2015-12-10 20:47:33 Z command timed out [100]: timeout 130 ssh -o
StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=100 -o
ServerAliveInterval=100 -o PasswordAuthentication=no -o
ChallengeResponseAuthentication=no -o
UserKnownHostsFile=tmp/t.known_hosts_65633.test-amd64-amd64-qemuu-nested-intel root@172.16.146.205 xl -vvv create /etc/xen/l2.guest.osstest.cfg 
status (timed out) at Osstest/TestSupport.pm line 414.
+ rc=4
+ date -u +%Y-%m-%d %H:%M:%S Z exit status 4
2015-12-10 20:47:33 Z exit status 4
+ exit 4

I can only see L1's attempt to create L2 timing out. We'd like to see what's
happening inside the L1 hypervisor. Where can I find that, if anywhere?

> 
> All of the other related logs are at the end of the same
> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd64-amd64-qemuu-nested-intel/info.html
> page.
> 
> See also README in osstest.git which explains the structure of the log
> pages/grids/etc.
> 
> Ian.
> 
> 


* Re: [xen-unstable test] 65141: regressions - FAIL
  2015-12-11  4:04               ` Robert Hu
@ 2015-12-11 11:49                 ` Ian Campbell
  0 siblings, 0 replies; 23+ messages in thread
From: Ian Campbell @ 2015-12-11 11:49 UTC (permalink / raw)
  To: robert.hu
  Cc: Tian, Kevin, xen-devel, Nakajima, Jun, guangrong.xiao,
	Andrew Cooper, Ian Jackson, osstest service owner, Wang, Yong Y,
	Jan Beulich, Jin, Gordon

On Fri, 2015-12-11 at 12:04 +0800, Robert Hu wrote:
> On Tue, 2015-12-08 at 10:29 +0000, Ian Campbell wrote:
> > On Tue, 2015-12-08 at 08:06 +0000, Hu, Robert wrote:
> > > > 
> > > [...]
> > 
> > Please trim your quotes.
> > 
> > > For your failure, as Kevin mentioned in other mail, we will find
> > > someone
> > > to look into.
> > > Would you find out the detailed log of ' debian-hvm-install--l1--l2'
> > > step? so that he
> > > can start analyze, as I cannot reproduce it right now.
> > > (I didn't managed to find out the log in your web links)
> > 
> > From
> > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/
> > 
> > (in this thread, or any other relevant logs/NNNNN/ link), you can click
> > the
> > "test -amd64 -amd64 -qemuu -nested -intel" header to go to
> > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd
> > 64-amd64-qemuu-nested-intel/info.html
> > 
> > Then click the result of the "debian-hvm-install/l1/l2" step from that
> > table to go to the step log:
> > http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65301/test-amd
> > 64-amd64-qemuu-nested-intel/16.ts-debian-hvm-install.log
> > 
> > Which I think is what you were after?
> 
> Yes, I had tracked to here. However, from logs like (the above cannot
> open now, let me take the latest for example)
> http://logs.test-lab.xenproject.org/osstest/logs/65633/test-amd64-amd64-q
> emuu-nested-intel/16.ts-debian-hvm-install.log
> [...]
> I can only see L1 try to create L2 timeout. We'd like to what's
> happening inside L1 hypervisor. Where can I find it? or cannot?

Please take a look through the big list of logfiles in the test case
report:
http://logs.test-lab.xenproject.org/osstest/logs/65633/test-amd64-amd64-qemuu-nested-intel/info.html

There is a lot there, including plenty from the L1 host (whose files
generally have names beginning with <host>_l1.guest.osstest...,
e.g. godello0_l1.guest.osstest).

This should include everything which is normally collected from a host, it
doesn't matter if it is L0 or L1. If something is missing then we should
add it to ts-logs-capture.

Ian.



end of thread, other threads:[~2015-12-11 11:49 UTC | newest]

Thread overview: 23+ messages
2015-11-27 15:45 [xen-unstable test] 65141: regressions - FAIL osstest service owner
2015-11-27 15:54 ` Ian Jackson
2015-11-30  5:35   ` Hu, Robert
2015-12-02 10:34     ` Ian Campbell
2015-12-02 10:52       ` Jan Beulich
2015-12-02 10:57         ` Andrew Cooper
2015-12-02 11:07         ` Ian Campbell
2015-12-02 13:51       ` Ian Campbell
2015-12-03  5:58         ` Tian, Kevin
2015-12-03  9:25           ` Ian Campbell
2015-12-05  8:09         ` Ian Campbell
2015-12-07 16:18           ` Jan Beulich
2015-12-07 16:28             ` Ian Campbell
2015-12-07 16:48               ` Jan Beulich
2015-12-08  2:46             ` Tian, Kevin
2015-12-08  7:28               ` Jan Beulich
2015-12-08  8:06           ` Hu, Robert
2015-12-08 10:29             ` Ian Campbell
2015-12-09  6:27               ` Robert Hu
2015-12-09  8:35                 ` Jin, Gordon
2015-12-09 10:21                   ` Ian Campbell
2015-12-11  4:04               ` Robert Hu
2015-12-11 11:49                 ` Ian Campbell
