* [Xen-devel] [xen-unstable test] 145393: regressions - FAIL
@ 2019-12-30 20:19 osstest service owner
  2019-12-31 15:30 ` Roger Pau Monné
From: osstest service owner @ 2019-12-30 20:19 UTC
  To: xen-devel, osstest-admin

flight 145393 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145393/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail REGR. vs. 145025

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-amd64-pvgrub  7 xen-boot                  fail pass in 145377

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 145025
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 145025
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 145025
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 145025
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 145025
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 145025
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 145025
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 145025
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 145025
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 145025
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  3a13ac3ad4d3ef399fe2c85fb09fcb7ab1cdd140
baseline version:
 xen                  0cd791c499bdc698d14a24050ec56d60b45732e0

Last test of basis   145025  2019-12-20 13:58:10 Z   10 days
Failing since        145058  2019-12-21 07:15:37 Z    9 days   23 attempts
Testing same since   145321  2019-12-28 07:51:14 Z    2 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@arm.com>
  Julien Grall <julien@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Pawel Wieczorkiewicz <wipawel@amazon.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Sergey Kovalev <valor@list.ru>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1392 lines long.)


* Re: [Xen-devel] [xen-unstable test] 145393: regressions - FAIL
  2019-12-30 20:19 [Xen-devel] [xen-unstable test] 145393: regressions - FAIL osstest service owner
@ 2019-12-31 15:30 ` Roger Pau Monné
  2020-01-19  2:36   ` Tian, Kevin
From: Roger Pau Monné @ 2019-12-31 15:30 UTC
  To: osstest service owner; +Cc: xen-devel, Kevin Tian, Jun Nakajima

On Mon, Dec 30, 2019 at 08:19:23PM +0000, osstest service owner wrote:
> flight 145393 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/145393/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail REGR. vs. 145025

While da9290639eb5d6ac did fix the vmlaunch error, the L1 guest now
seems to lose interrupts:

[  412.127078] NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out
[  412.151837] ------------[ cut here ]------------
[  412.164281] WARNING: CPU: 0 PID: 0 at net/sched/sch_generic.c:320 dev_watchdog+0x252/0x260
[  412.185821] Modules linked in: xen_gntalloc ext4 mbcache jbd2 e1000 sym53c8xx
[  412.204399] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.14.150+ #1
[  412.223988] Hardware name: Xen HVM domU, BIOS 4.14-unstable 12/30/2019
[  412.241657] task: ffffffff82213480 task.stack: ffffffff82200000
[  412.256979] RIP: e030:dev_watchdog+0x252/0x260
[  412.268444] RSP: e02b:ffff88801fc03e90 EFLAGS: 00010286
[  412.281727] RAX: 0000000000000039 RBX: 0000000000000000 RCX: 0000000000000000
[  412.300097] RDX: ffff88801fc1de70 RSI: ffff88801fc16298 RDI: ffff88801fc16298
[  412.318283] RBP: ffff888006c6e41c R08: 000000000001f066 R09: 000000000000023b
[  412.336540] R10: ffff88801fc1a3f0 R11: ffffffff8287d96d R12: ffff888006c6e000
[  412.354643] R13: 0000000000000000 R14: ffff888006e3ac80 R15: 0000000000000001
[  412.373034] FS:  00007fa05293ecc0(0000) GS:ffff88801fc00000(0000) knlGS:0000000000000000
[  412.393367] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
[  412.408112] CR2: 00007fd80ff16000 CR3: 000000000ce78000 CR4: 0000000000040660
[  412.426338] Call Trace:
[  412.432747]  <IRQ>
[  412.438102]  ? dev_deactivate_queue.constprop.33+0x50/0x50
[  412.451896]  call_timer_fn+0x2b/0x130
[  412.464208]  run_timer_softirq+0x3d8/0x4b0
[  412.474598]  ? handle_irq_event_percpu+0x3c/0x50
[  412.486426]  __do_softirq+0x116/0x2ce
[  412.495883]  irq_exit+0xcd/0xe0
[  412.503999]  xen_evtchn_do_upcall+0x27/0x40
[  412.514626]  xen_do_hypervisor_callback+0x29/0x40
[  412.526684]  </IRQ>
[  412.532252] RIP: e030:xen_hypercall_sched_op+0xa/0x20
[  412.545034] RSP: e02b:ffffffff82203ea0 EFLAGS: 00000246
[  412.558347] RAX: 0000000000000000 RBX: ffffffff82213480 RCX: ffffffff810013aa
[  412.576390] RDX: ffffffff822483e8 RSI: deadbeefdeadf00d RDI: deadbeefdeadf00d
[  412.594580] RBP: 0000000000000000 R08: ffffffffffffffff R09: 0000000000000000
[  412.612831] R10: ffffffff82203e30 R11: 0000000000000246 R12: ffffffff82213480
[  412.630980] R13: 0000000000000000 R14: ffffffff82213480 R15: ffffffff82238e80
[  412.649138]  ? xen_hypercall_sched_op+0xa/0x20
[  412.660671]  ? xen_safe_halt+0xc/0x20
[  412.670177]  ? default_idle+0x23/0x110
[  412.679862]  ? do_idle+0x168/0x1f0
[  412.688666]  ? cpu_startup_entry+0x14/0x20
[  412.699059]  ? start_kernel+0x4c3/0x4cb
[  412.708807]  ? xen_start_kernel+0x527/0x530
[  412.720776] Code: cb e9 a0 fe ff ff 0f 0b 4c 89 e7 c6 05 00 d6 c6 00 01 e8 82 89 fd ff 89 d9 48 89 c2 4c 89 e6 48 c7 c7 30 fb 01 82 e8 44 e9 a6 ff <0f> 0b e9 58 fe ff ff 0f 1f 80 00 00 00 00 41 57 41 56 41 55 41 
[  412.767900] ---[ end trace d9e35c3f725f4b57 ]---
[  412.780193] e1000 0000:00:05.0 eth0: Reset adapter

This only happens when L1 is using x2APIC and a guest has been
launched (by L1). Prior to launching any guest L1 seems to be fully
functional. I'm currently trying to figure out how/when that interrupt
is lost, which I bet is related to the merging of the L1 and L2 vmcs
done in L0.

As a workaround I could disable exposing x2APIC in CPUID when nested
virtualization is enabled on Intel.
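
Something like the following per-guest tweak would be the toolstack-side
equivalent of that workaround; a minimal xl config sketch (name, disk and
the rest omitted, and assuming the libxl cpuid parser recognises the
"x2apic" feature name):

type = "hvm"
nestedhvm = 1                # expose VMX to the L1 guest
# Hide x2APIC from CPUID leaf 1 so the L1 kernel falls back to xAPIC.
cpuid = "host,x2apic=0"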

Roger.


* Re: [Xen-devel] [xen-unstable test] 145393: regressions - FAIL
  2019-12-31 15:30 ` Roger Pau Monné
@ 2020-01-19  2:36   ` Tian, Kevin
  2020-01-20 10:10     ` Roger Pau Monné
From: Tian, Kevin @ 2020-01-19  2:36 UTC
  To: Roger Pau Monné, osstest service owner; +Cc: xen-devel, Nakajima, Jun

> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: Tuesday, December 31, 2019 11:30 PM
> 
> On Mon, Dec 30, 2019 at 08:19:23PM +0000, osstest service owner wrote:
> > flight 145393 xen-unstable real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/145393/
> >
> > Regressions :-(
> >
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail REGR. vs. 145025
> 
> While da9290639eb5d6ac did fix the vmlaunch error, the L1 guest now
> seems to lose interrupts:
> 
> [kernel trace snipped]
> 
> This only happens when L1 is using x2APIC and a guest has been
> launched (by L1). Prior to launching any guest L1 seems to be fully
> functional. I'm currently trying to figure out how/when that interrupt
> is lost, which I bet is related to the merging of the L1 and L2 vmcs
> done in L0.
> 
> As a workaround I could disable exposing x2APIC in CPUID when nested
> virtualization is enabled on Intel.
> 

Any progress on this problem? Please let me know if I overlooked a more
recent mail. It may be useful to fully compare the APICv-related settings
in vmcs02 and vmcs12. Alternatively, you could disable all APICv features
to see whether APICv is the main reason.
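
To make that comparison concrete, below is a minimal standalone sketch of
the fields worth diffing (the struct, names and capture step are made up
for illustration; only the control-bit positions are the architectural ones
from the Intel SDM):

/* Compare the APICv-relevant state captured from two VMCSs, e.g. what L1
 * wrote into vmcs12 vs. what L0 derived into vmcs02. */
#include <stdint.h>
#include <stdio.h>

#define PIN_POSTED_INTERRUPTS   (1u << 7)   /* pin-based controls */
#define SEC_VIRT_APIC_ACCESSES  (1u << 0)   /* secondary controls */
#define SEC_VIRT_X2APIC_MODE    (1u << 4)
#define SEC_APIC_REGISTER_VIRT  (1u << 8)
#define SEC_VIRT_INTR_DELIVERY  (1u << 9)

struct apicv_state {                 /* values read out of one VMCS */
    uint32_t pin_based_ctls;
    uint32_t secondary_ctls;
    uint32_t tpr_threshold;
    uint64_t apic_access_addr;
    uint64_t virtual_apic_page_addr;
    uint64_t eoi_exit_bitmap[4];
    uint16_t guest_intr_status;      /* RVI/SVI */
};

static void diff_field(const char *name, uint64_t a, uint64_t b)
{
    if (a != b)
        printf("%-22s vmcs12=%#llx vmcs02=%#llx\n", name,
               (unsigned long long)a, (unsigned long long)b);
}

void apicv_diff(const struct apicv_state *v12, const struct apicv_state *v02)
{
    uint32_t apicv_bits = SEC_VIRT_APIC_ACCESSES | SEC_VIRT_X2APIC_MODE |
                          SEC_APIC_REGISTER_VIRT | SEC_VIRT_INTR_DELIVERY;

    diff_field("pin: posted intr", v12->pin_based_ctls & PIN_POSTED_INTERRUPTS,
               v02->pin_based_ctls & PIN_POSTED_INTERRUPTS);
    diff_field("sec: apicv bits", v12->secondary_ctls & apicv_bits,
               v02->secondary_ctls & apicv_bits);
    diff_field("tpr_threshold", v12->tpr_threshold, v02->tpr_threshold);
    diff_field("apic_access_addr", v12->apic_access_addr, v02->apic_access_addr);
    diff_field("virtual_apic_page", v12->virtual_apic_page_addr,
               v02->virtual_apic_page_addr);
    for (int i = 0; i < 4; i++)
        diff_field("eoi_exit_bitmap", v12->eoi_exit_bitmap[i],
                   v02->eoi_exit_bitmap[i]);
    diff_field("guest_intr_status", v12->guest_intr_status,
               v02->guest_intr_status);
}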

Thanks
Kevin


* Re: [Xen-devel] [xen-unstable test] 145393: regressions - FAIL
  2020-01-19  2:36   ` Tian, Kevin
@ 2020-01-20 10:10     ` Roger Pau Monné
From: Roger Pau Monné @ 2020-01-20 10:10 UTC
  To: Tian, Kevin; +Cc: xen-devel, osstest service owner, Nakajima,  Jun

On Sun, Jan 19, 2020 at 02:36:32AM +0000, Tian, Kevin wrote:
> > [earlier report and kernel trace snipped]
> 
> Any progress on this problem? Please let me know if I overlooked a more
> recent mail. It may be useful to fully compare the APICv-related settings
> in vmcs02 and vmcs12. Alternatively, you could disable all APICv features
> to see whether APICv is the main reason.

Hello,

Yes, I found out what was causing the issue; patches are at:

https://lists.xenproject.org/archives/html/xen-devel/2020-01/msg00437.html

Thanks, Roger.

