* stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
       [not found] <E1a77eF-00049f-Er@osstest.test-lab.xenproject.org>
@ 2015-12-11 12:12 ` Ian Campbell
  2015-12-11 14:05   ` Ian Campbell
  0 siblings, 1 reply; 8+ messages in thread
From: Ian Campbell @ 2015-12-11 12:12 UTC (permalink / raw)
  To: Ian Jackson, Wei Liu, Daniel De Graaf; +Cc: xen-devel

Wei did some manual investigation and found he could not reproduce this
issue on merlot1, later realising that he had XSM disabled in his build.

On that hunch I ran an adhoc flight with XSM both on and off, results
below. Public copy of the logs is at
http://logs.test-lab.xenproject.org/osstest/logs/65682/

So it appears that the issue is somehow with the XSM enabled build.

Wei observed that it looked like it was taking longer for the guest vif to
enter forwarding mode, which is rather perplexing since I don't see any
reason XSM would be involved in that, nor why it should also apparently be
machine specific (maybe it's slower everywhere but only too slow on this
host?). I've not looked into this aspect myself.
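One hedged guess about the vif forwarding delay: a Linux bridge with STP enabled holds a newly-attached port in listening/learning for the forward delay (15s by default) before it enters forwarding, which is of the same order as the ~15s seen here. A diagnostic sketch (the bridge name xenbr0 is an assumption; the inspect command is harmless):

```shell
# Show the forwarding delay on any bridges present (bridge name is an assumption).
# With STP on, a freshly-plugged vif waits out the forward delay (default 15s)
# before it starts passing traffic.
ip -d link show type bridge 2>/dev/null | grep -o 'forward_delay [0-9]*' || true
# To rule this out on a test bridge, the delay can be zeroed:
#   brctl setfd xenbr0 0
echo "checked"
```

This does not explain why only the XSM build would be affected, so it is at most a contributing factor to investigate.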

As with previous failures the issue was with "timeout 10 ssh <guest> date"
which timed out after 10s but the ssh process remained and then appears to
succeed around 5s after the timeout (so 15s in all). This is in:
http://logs.test-lab.xenproject.org/osstest/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/13.ts-guest-localmigrate.log
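The failing probe is just "timeout 10 ssh <guest> date". A minimal sketch of the timeout(1) semantics involved, using sleep as a stand-in for the slow ssh: on expiry, timeout kills the child with SIGTERM and exits with status 124, which is how the harness distinguishes a timed-out probe from an ssh failure.

```shell
# Stand-in for the slow ssh probe: a command that outlives the 2s limit.
# timeout(1) sends SIGTERM on expiry and exits with status 124.
timeout 2 sleep 5
status=$?
echo "status=$status"
```

Note that SIGTERM only reaches the direct child; an ssh that has already forked helpers, or a probe wrapped so the child ignores SIGTERM, can linger past the timeout and still complete, much as observed here.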

In the success case
http://logs.test-lab.xenproject.org/osstest/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64/13.ts-guest-localmigrate.log
the ssh was basically instantaneous, as it was in all 10 cases in the later
repeated migration:
http://logs.test-lab.xenproject.org/osstest/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64/15.ts-guest-localmigrate.log
(Actually, it looks like there is some clock skew in the guest, since the
date printed is occasionally 1s earlier than the time at which the controller
logged the ssh attempt; perhaps NTP is missing in the guest.)

I think 10 repetitions indicate it probably wasn't a fluke; however, I've
kicked off a repeat (flight 65738) of the test to increase confidence.

In the test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm case I've set
debianhvm_general_timeoutfactor to 2, which should allow the ssh to
complete inside the now 20s timeout, which will then run the 10x case so we
can see how repeated migrations occur. This is not a fix, just a
workaround/diagnostic aid.
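A trivial sketch of what the timeoutfactor does to the probe (the values are from this mail; the variable names are illustrative, not osstest internals):

```shell
# The base ssh probe timeout is 10s; debianhvm_general_timeoutfactor=2
# scales it to 20s, comfortably above the ~15s the ssh actually takes.
base=10
factor=2
echo "scaled timeout: $((base * factor))s"
```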

I've left test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 alone in the
repeat since I want to check the pass wasn't a fluke.

Ian.

On Thu, 2015-12-10 at 20:17 +0000, ianc@xenbits.xen.org wrote:
> [adhoc adhoc] <testing.git production /dev/pts/5>
> harness 030b474: cr-try-bisect-adhoc: Set OSSTEST_PRIORITY=-30
> 65682: tolerable FAIL
> 
> flight 65682 xen-unstable adhoc [adhoc]
> http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/
> 
> Failures :-/ but no regressions.
> 
> Tests which did not succeed,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 13 guest-
> localmigrate fail baseline untested
> 
> 
> jobs:
>  build-amd64-xsm                                              pass    
>  build-amd64                                                  pass    
>  build-amd64-pvops                                            pass    
>  test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
>  test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64            pass    
> 
> 
> ------------------------------------------------------------
> sg-report-flight on osstest.test-lab.xenproject.org
> logs: /home/logs/logs
> images: /home/logs/images
> 
> Logs, config files, etc. are available at
>     http://logs.test-lab.xenproject.org/osstest/logs
> 
> Explanation of these reports, and of osstest in general, is at
>     http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb
> =master
>     http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=maste
> r
> 
> Test harness code can be found at
>     http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
> 
> ============================================================
> 2015-12-10 15:12:12 Z flight 65682 nqueued=1 nrunning=0
> 2015-12-10 15:12:12 Z flight 65682 spawning build-amd64
> 2015-12-10 15:12:12 Z flight 65682 spawned  build-amd64 [2917]
> 2015-12-10 15:12:12 Z flight 65682 spawning build-amd64-pvops
> 2015-12-10 15:12:12 Z flight 65682 spawned  build-amd64-pvops [2918]
> 2015-12-10 15:12:12 Z flight 65682 spawning build-amd64-xsm
> 2015-12-10 15:12:12 Z flight 65682 spawned  build-amd64-xsm [2919]
> 2015-12-10 15:12:15 Z [build-amd64-pvops] starting 65682.build-amd64-
> pvops ts-build-check  build-check(1)
> 2015-12-10 15:12:15 Z [build-amd64] starting 65682.build-amd64 ts-build-
> check  build-check(1)
> 2015-12-10 15:12:15 Z [build-amd64-pvops] awaiting 65682.build-amd64-
> pvops ts-build-check 
> 2015-12-10 15:12:15 Z [build-amd64] awaiting 65682.build-amd64 ts-build-
> check 
> 2015-12-10 15:12:15 Z [build-amd64-xsm] starting 65682.build-amd64-xsm
> ts-build-check  build-check(1)
> 2015-12-10 15:12:18 Z [build-amd64-xsm] awaiting 65682.build-amd64-xsm
> ts-build-check 
> 2015-12-10 15:12:18 Z [build-amd64-pvops] finished 65682.build-amd64-
> pvops ts-build-check  pass 
> 2015-12-10 15:12:18 Z [build-amd64] finished 65682.build-amd64 ts-build-
> check  pass 
> 2015-12-10 15:12:19 Z [build-amd64-pvops] starting 65682.build-amd64-
> pvops ts-hosts-allocate host hosts-allocate
> 2015-12-10 15:12:19 Z [build-amd64-xsm] finished 65682.build-amd64-xsm
> ts-build-check  pass 
> 2015-12-10 15:12:19 Z [build-amd64] starting 65682.build-amd64 ts-hosts-
> allocate host hosts-allocate
> 2015-12-10 15:12:19 Z [build-amd64-xsm] starting 65682.build-amd64-xsm
> ts-hosts-allocate host hosts-allocate
> 2015-12-10 15:12:19 Z [build-amd64] awaiting 65682.build-amd64 ts-hosts-
> allocate host
> 2015-12-10 15:12:19 Z [build-amd64-pvops] awaiting 65682.build-amd64-
> pvops ts-hosts-allocate host
> 2015-12-10 15:12:19 Z [build-amd64-xsm] awaiting 65682.build-amd64-xsm
> ts-hosts-allocate host
> 2015-12-10 15:19:21 Z [build-amd64-pvops] finished 65682.build-amd64-
> pvops ts-hosts-allocate host pass 
> 2015-12-10 15:19:22 Z [build-amd64-pvops] starting 65682.build-amd64-
> pvops ts-host-install-twice  host-install(3)
> 2015-12-10 15:19:22 Z [build-amd64-pvops] awaiting 65682.build-amd64-
> pvops ts-host-install-twice 
> 2015-12-10 15:30:14 Z [build-amd64-pvops] finished 65682.build-amd64-
> pvops ts-host-install-twice  pass 
> 2015-12-10 15:30:14 Z [build-amd64-pvops] starting 65682.build-amd64-
> pvops ts-xen-build-prep  host-build-prep
> 2015-12-10 15:30:14 Z [build-amd64-pvops] awaiting 65682.build-amd64-
> pvops ts-xen-build-prep 
> 2015-12-10 15:37:34 Z [build-amd64-pvops] finished 65682.build-amd64-
> pvops ts-xen-build-prep  pass 
> 2015-12-10 15:38:04 Z [build-amd64-pvops] starting 65682.build-amd64-
> pvops ts-kernel-build  kernel-build
> 2015-12-10 15:38:12 Z [build-amd64-pvops] awaiting 65682.build-amd64-
> pvops ts-kernel-build 
> 2015-12-10 15:38:51 Z [build-amd64-xsm] finished 65682.build-amd64-xsm
> ts-hosts-allocate host pass 
> 2015-12-10 15:38:51 Z [build-amd64-xsm] starting 65682.build-amd64-xsm
> ts-host-install-twice  host-install(3)
> 2015-12-10 15:38:52 Z [build-amd64-xsm] awaiting 65682.build-amd64-xsm
> ts-host-install-twice 
> 2015-12-10 15:38:52 Z [build-amd64] finished 65682.build-amd64 ts-hosts-
> allocate host pass 
> 2015-12-10 15:38:54 Z [build-amd64] starting 65682.build-amd64 ts-host-
> install-twice  host-install(3)
> 2015-12-10 15:38:55 Z [build-amd64] awaiting 65682.build-amd64 ts-host-
> install-twice 
> 2015-12-10 15:38:55 Z [build-amd64-xsm] finished 65682.build-amd64-xsm
> ts-host-install-twice  pass 
> 2015-12-10 15:38:56 Z [build-amd64-xsm] starting 65682.build-amd64-xsm
> ts-xen-build-prep  host-build-prep
> 2015-12-10 15:38:57 Z [build-amd64-xsm] awaiting 65682.build-amd64-xsm
> ts-xen-build-prep 
> 2015-12-10 15:38:57 Z [build-amd64] finished 65682.build-amd64 ts-host-
> install-twice  pass 
> 2015-12-10 15:38:59 Z [build-amd64] starting 65682.build-amd64 ts-xen-
> build-prep  host-build-prep
> 2015-12-10 15:39:00 Z [build-amd64] awaiting 65682.build-amd64 ts-xen-
> build-prep 
> 2015-12-10 15:39:00 Z [build-amd64-xsm] finished 65682.build-amd64-xsm
> ts-xen-build-prep  pass 
> 2015-12-10 15:39:03 Z [build-amd64-xsm] starting 65682.build-amd64-xsm
> ts-xen-build  xen-build
> 2015-12-10 15:39:04 Z [build-amd64] finished 65682.build-amd64 ts-xen-
> build-prep  pass 
> 2015-12-10 15:39:06 Z [build-amd64-xsm] awaiting 65682.build-amd64-xsm
> ts-xen-build 
> 2015-12-10 15:39:06 Z [build-amd64] starting 65682.build-amd64 ts-xen-
> build  xen-build
> 2015-12-10 15:39:07 Z [build-amd64] awaiting 65682.build-amd64 ts-xen-
> build 
> 2015-12-10 15:57:23 Z [build-amd64] finished 65682.build-amd64 ts-xen-
> build  pass 
> 2015-12-10 15:57:24 Z flight 65682 reaped  build-amd64 [2917] EXIT 0
> 2015-12-10 15:57:25 Z flight 65682 nqueued=1 nrunning=2
> 2015-12-10 15:57:29 Z [build-amd64-xsm] finished 65682.build-amd64-xsm
> ts-xen-build  pass 
> 2015-12-10 15:57:29 Z flight 65682 reaped  build-amd64-xsm [2919] EXIT 0
> 2015-12-10 15:57:30 Z flight 65682 nqueued=1 nrunning=1
> 2015-12-10 15:57:39 Z [build-amd64-pvops] finished 65682.build-amd64-
> pvops ts-kernel-build  pass 
> 2015-12-10 15:57:40 Z flight 65682 reaped  build-amd64-pvops [2918] EXIT
> 0
> 2015-12-10 15:57:40 Z flight 65682 nqueued=1 nrunning=0
> 2015-12-10 15:57:40 Z flight 65682 spawning test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64
> 2015-12-10 15:57:40 Z flight 65682 spawned  test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64 [27165]
> 2015-12-10 15:57:40 Z flight 65682 spawning test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64-xsm
> 2015-12-10 15:57:40 Z flight 65682 spawned  test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64-xsm [27166]
> 2015-12-10 15:57:44 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> build-check  build-check(1)
> 2015-12-10 15:57:44 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> build-check 
> 2015-12-10 15:57:44 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-build-check  build-check(1)
> 2015-12-10 15:57:45 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-build-check 
> 2015-12-10 15:57:45 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> build-check  pass 
> 2015-12-10 15:57:46 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-build-check  pass 
> 2015-12-10 15:57:46 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> hosts-allocate host hosts-allocate
> 2015-12-10 15:57:47 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-hosts-allocate host hosts-allocate
> 2015-12-10 15:57:47 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> hosts-allocate host
> 2015-12-10 15:57:47 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-hosts-allocate host
> 2015-12-10 15:58:09 Z flight 65682 killing build-amd64 [2917]
> 2015-12-10 15:58:15 Z flight 65682 killing build-amd64-xsm [2919]
> 2015-12-10 15:58:25 Z flight 65682 killing build-amd64-pvops [2918]
> 2015-12-10 17:48:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> hosts-allocate host pass 
> 2015-12-10 17:48:10 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> install-twice host host-install(3)
> 2015-12-10 17:48:10 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> install-twice host
> 2015-12-10 18:04:56 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> install-twice host pass 
> 2015-12-10 18:04:56 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> ping-check host host-ping-check-native
> 2015-12-10 18:04:57 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> ping-check host
> 2015-12-10 18:05:34 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> ping-check host pass 
> 2015-12-10 18:05:36 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-xen-
> install host xen-install
> 2015-12-10 18:05:39 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-xen-
> install host
> 2015-12-10 18:08:28 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-xen-
> install host pass 
> 2015-12-10 18:08:32 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> reboot host xen-boot
> 2015-12-10 18:08:35 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> reboot host
> 2015-12-10 18:12:49 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> reboot host pass 
> 2015-12-10 18:12:50 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> ping-check host host-ping-check-xen
> 2015-12-10 18:12:50 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> ping-check host
> 2015-12-10 18:13:12 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-host-
> ping-check host pass 
> 2015-12-10 18:13:12 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-leak-
> check basis host leak-check/basis(8)
> 2015-12-10 18:13:13 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-leak-
> check basis host
> 2015-12-10 18:13:16 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-leak-
> check basis host pass 
> 2015-12-10 18:13:16 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> debian-hvm-install  debian-hvm-install
> 2015-12-10 18:13:17 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> debian-hvm-install 
> 2015-12-10 18:23:04 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> debian-hvm-install  pass 
> 2015-12-10 18:23:04 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> migrate-support-check host debianhvm 1 migrate-support-check
> 2015-12-10 18:23:05 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> migrate-support-check host debianhvm 1
> 2015-12-10 18:23:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> migrate-support-check host debianhvm 1 pass 
> 2015-12-10 18:23:08 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> saverestore-support-check host debianhvm saverestore-support-check
> 2015-12-10 18:23:09 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> saverestore-support-check host debianhvm
> 2015-12-10 18:23:10 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> saverestore-support-check host debianhvm pass 
> 2015-12-10 18:23:10 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-saverestore host debianhvm guest-saverestore
> 2015-12-10 18:23:11 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-saverestore host debianhvm
> 2015-12-10 18:26:15 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-saverestore host debianhvm pass 
> 2015-12-10 18:26:18 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-localmigrate host debianhvm guest-localmigrate
> 2015-12-10 18:26:20 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-localmigrate host debianhvm
> 2015-12-10 18:29:44 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-localmigrate host debianhvm pass 
> 2015-12-10 18:29:45 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-saverestore host debianhvm guest-saverestore.2
> 2015-12-10 18:29:45 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-saverestore host debianhvm
> 2015-12-10 18:32:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-saverestore host debianhvm pass 
> 2015-12-10 18:32:08 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-localmigrate x10 host debianhvm guest-localmigrate/x10
> 2015-12-10 18:32:08 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-localmigrate x10 host debianhvm
> 2015-12-10 19:04:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-localmigrate x10 host debianhvm pass 
> 2015-12-10 19:04:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-stop host debianhvm guest-stop
> 2015-12-10 19:04:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-stop host debianhvm
> 2015-12-10 19:06:35 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-stop host debianhvm pass 
> 2015-12-10 19:06:44 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> repeat-test 10 ts-guest-start host debianhvm {;} ts-guest-stop host
> debianhvm guest-start/debianhvm.repeat
> 2015-12-10 19:06:47 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> repeat-test 10 ts-guest-start host debianhvm {;} ts-guest-stop host
> debianhvm
> 2015-12-10 19:25:50 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> repeat-test 10 ts-guest-start host debianhvm {;} ts-guest-stop host
> debianhvm pass 
> 2015-12-10 19:25:51 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-start host debianhvm guest-start.2
> 2015-12-10 19:25:51 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-start host debianhvm
> 2015-12-10 19:27:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-start host debianhvm pass 
> 2015-12-10 19:27:07 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-destroy host debianhvm guest-destroy
> 2015-12-10 19:27:08 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-destroy host debianhvm
> 2015-12-10 19:27:31 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-
> guest-destroy host debianhvm pass 
> 2015-12-10 19:27:32 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-leak-
> check check host leak-check/check
> 2015-12-10 19:27:32 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-leak-
> check check host
> 2015-12-10 19:27:35 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-leak-
> check check host pass 
> 2015-12-10 19:27:35 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-logs-
> capture host capture-logs(21)
> 2015-12-10 19:27:35 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-logs-
> capture host
> 2015-12-10 19:29:12 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64] 
> finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 ts-logs-
> capture host pass 
> 2015-12-10 19:29:13 Z flight 65682 reaped  test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64 [27165] EXIT 0
> 2015-12-10 19:29:14 Z flight 65682 nqueued=0 nrunning=1
> 2015-12-10 19:29:23 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-hosts-allocate host pass 
> 2015-12-10 19:29:24 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-install-twice host host-install(3)
> 2015-12-10 19:29:24 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-install-twice host
> 2015-12-10 19:29:58 Z flight 65682 killing test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64 [27165]
> 2015-12-10 19:45:27 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-install-twice host pass 
> 2015-12-10 19:45:28 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-ping-check host host-ping-check-native
> 2015-12-10 19:45:29 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-ping-check host
> 2015-12-10 19:45:50 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-ping-check host pass 
> 2015-12-10 19:45:51 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-xen-install host xen-install
> 2015-12-10 19:45:51 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-xen-install host
> 2015-12-10 19:47:11 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-xen-install host pass 
> 2015-12-10 19:47:11 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-reboot host xen-boot
> 2015-12-10 19:47:12 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-reboot host
> 2015-12-10 19:51:25 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-reboot host pass 
> 2015-12-10 19:51:25 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-ping-check host host-ping-check-xen
> 2015-12-10 19:51:25 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-ping-check host
> 2015-12-10 19:51:46 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-host-ping-check host pass 
> 2015-12-10 19:51:46 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-leak-check basis host leak-check/basis(8)
> 2015-12-10 19:51:47 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-leak-check basis host
> 2015-12-10 19:51:49 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-leak-check basis host pass 
> 2015-12-10 19:51:49 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-debian-hvm-install  debian-hvm-install
> 2015-12-10 19:51:50 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-debian-hvm-install 
> 2015-12-10 20:06:54 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-debian-hvm-install  pass 
> 2015-12-10 20:07:00 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-migrate-support-check host debianhvm 1 migrate-support-check
> 2015-12-10 20:07:11 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-migrate-support-check host debianhvm 1
> 2015-12-10 20:07:19 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-migrate-support-check host debianhvm 1 pass 
> 2015-12-10 20:07:41 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-saverestore-support-check host debianhvm saverestore-support-check
> 2015-12-10 20:07:48 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-saverestore-support-check host debianhvm
> 2015-12-10 20:07:59 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-saverestore-support-check host debianhvm pass 
> 2015-12-10 20:07:59 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-guest-saverestore host debianhvm guest-saverestore
> 2015-12-10 20:08:00 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-guest-saverestore host debianhvm
> 2015-12-10 20:10:16 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-guest-saverestore host debianhvm pass 
> 2015-12-10 20:10:16 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-guest-localmigrate host debianhvm guest-localmigrate
> 2015-12-10 20:10:16 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-guest-localmigrate host debianhvm
> 2015-12-10 20:14:01 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-guest-localmigrate host debianhvm fail child process exited abnormally
> 2015-12-10 20:14:01 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm run-
> job/test-debianhvm failed: test script failed
> 2015-12-10 20:14:08 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] starting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-logs-capture host capture-logs(14)
> 2015-12-10 20:14:09 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] awaiting 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-logs-capture host
> 2015-12-10 20:16:20 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] finished 65682.test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 
> ts-logs-capture host pass 
> 2015-12-10 20:16:20 Z [test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-
> xsm] at least one test failed
> 2015-12-10 20:16:21 Z flight 65682 reaped  test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64-xsm [27166] EXIT 0
> 2015-12-10 20:16:23 Z flight 65682 nqueued=0 nrunning=0
> 2015-12-10 20:16:23 Z flight 65682 ending finished.
> 2015-12-10 20:17:06 Z flight 65682 killing test-amd64-amd64-xl-qemut-
> stubdom-debianhvm-amd64-xsm [27166]
> 2015-12-10 20:17:06 Z flight 65682 ended finished.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
  2015-12-11 12:12 ` stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL]) Ian Campbell
@ 2015-12-11 14:05   ` Ian Campbell
  2015-12-11 15:12     ` Ian Campbell
  0 siblings, 1 reply; 8+ messages in thread
From: Ian Campbell @ 2015-12-11 14:05 UTC (permalink / raw)
  To: Ian Jackson, Wei Liu, Daniel De Graaf; +Cc: xen-devel

On Fri, 2015-12-11 at 12:12 +0000, Ian Campbell wrote:
> I think 10 repetitions indicate it probably wasn't a fluke; however, I've
> kicked off a repeat (flight 65738) of the test to increase confidence.
> 
> In the test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm case I've set
> debianhvm_general_timeoutfactor to 2, which should allow the ssh to
> complete inside the now 20s timeout, which will then run the 10x case so we
> can see how repeated migrations occur. This is not a fix, just a
> workaround/diagnostic aid.
> 
> I've left test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64 alone in the
> repeat since I want to check the pass wasn't a fluke.

That repeat has completed, results are at
http://logs.test-lab.xenproject.org/osstest/logs/65738/
(build was reused from the previous adhoc flight 65682)

Both test cases passed; remember that the -xsm one had a doubled timeout.

http://logs.test-lab.xenproject.org/osstest/logs/65738/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/13.ts-guest-localmigrate.log

has (ssh debug* elided):

2015-12-11 10:58:49 Z executing ssh ... -v root@172.16.145.159 date 
OpenSSH_6.0p1 Debian-4+deb7u2, OpenSSL 1.0.1e 11 Feb 2013
[...]
Authenticated to 172.16.145.159 ([172.16.145.159]:22).
[...]
Fri Dec 11 10:59:04 UTC 2015
[...]
Transferred: sent 3064, received 2032 bytes, in 0.0 seconds
Bytes per second: sent 301038.8, received 199644.5
[...]
2015-12-11 10:59:05 Z execution took 15 seconds [**>20/2**]: timeout 50 ssh -[...] root@172.16.145.159 date 

In the repeated case http://logs.test-lab.xenproject.org/osstest/logs/65738/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/15.ts-guest-localmigrate.log
it looks like 4 out of the 10 reps took 15-16s, the rest were below the threshold for logging.
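The pass/fail line here is just GNU timeout's exit status. A minimal, self-contained sketch of the semantics, with `sleep` standing in for the `ssh root@<guest> date` probe since no guest is involved:

```shell
# Stand-in for osstest's "timeout N ssh root@<guest> date" probe:
# GNU timeout exits 0 if the command finishes in time, 124 if it had
# to be killed.  A probe that needs ~15s therefore fails a 10s limit
# but passes a 20s (timeoutfactor 2) one.
probe='sleep 1; echo guest-answered'   # pretend the guest takes ~1s

timeout 2 sh -c "$probe"
echo "within limit: exit $?"           # exit 0

st=0
timeout 0.5 sh -c "$probe" || st=$?
echo "over limit: exit $st"            # exit 124
```

Note that on timeout the ssh child is killed but a prompt reply just past the limit still looks, from inside the guest, like a completed login — matching the "succeeds ~5s after the timeout" behaviour seen in the failing logs.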

The non-XSM case passes completely again.

Ian.


* Re: stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
  2015-12-11 14:05   ` Ian Campbell
@ 2015-12-11 15:12     ` Ian Campbell
  2015-12-11 15:16       ` Ian Campbell
  0 siblings, 1 reply; 8+ messages in thread
From: Ian Campbell @ 2015-12-11 15:12 UTC (permalink / raw)
  To: Ian Jackson, Wei Liu, Daniel De Graaf; +Cc: xen-devel

On Fri, 2015-12-11 at 14:05 +0000, Ian Campbell wrote:

So, returning to the logs for 65682 (the first repro, which exhibited the
failure with the usual timeout).

http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/info.html

The first thing I noticed was the NUMA config:
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1-output-xl_vcpu-list
dom0 has 32 vcpus, allowed hard/soft on all processors.

the domU has 2 vcpus, with a soft preference for pcpus 16-23, while the stub
dom has a soft preference for 0-7. I think that's normal NUMA placement
doing what we want, and while I've not thought about the wisdom (or
otherwise) of having the stubdom on another node, I don't see why that
should cause a 10s+ delay to the network device forwarding.

I've confirmed in 
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1--etc-network-interfaces.new
that STP is disabled.
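That config-file check can also be confirmed at runtime. A hedged sketch reading the kernel's bridge settings from sysfs (xenbr0 is the bridge name in these logs; the loop prints nothing on a machine with no bridges):

```shell
# Print the STP state and forward delay for every bridge on the host.
# stp_state 0 means STP is off; forward_delay (in centiseconds) can
# still matter for port state transitions on some kernels, so it is
# worth ruling out when ports are slow to enter forwarding.
for br in /sys/class/net/*/bridge; do
  [ -d "$br" ] || continue    # glob didn't match: no bridges present
  printf '%s: stp_state=%s forward_delay=%s\n' \
    "$(basename "$(dirname "$br")")" \
    "$(cat "$br/stp_state")" "$(cat "$br/forward_delay")"
done
```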

The hotplug log is empty.

The migration was of dom5 + stubdom6 into dom7 + stubdom8.

Time line (from 13.ts-guest-localmigrate.log):

2015-12-10 20:10:30 Z migration begins
2015-12-10 20:13:43 Z migration completes ("execution took 193 seconds [<=2x400]")

2015-12-10 20:13:43 Z ssh to guest `date` begins
2015-12-10 20:13:53 Z ssh to guest times out
2015-12-10 20:13:58 Z ssh produces: "Thu Dec 10 20:13:58 UTC 2015."

http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/serial-merlot1.log
(very abridged)

Dec 10 20:13:36.441014 (XEN) HVM5 save: CPU
[...]
Dec 10 20:13:36.641048 (XEN) HVM7 restore: CPU 0
[...]
Dec 10 20:13:36.705029 [ 1391.363224] device vif8.0 entered promiscuous mode
Dec 10 20:13:38.753001 [ 1391.369957] IPv6: ADDRCONF(NETDEV_UP): vif8.0: link is not ready
Dec 10 20:13:38.760973 (d8) Bootstrapping...
Dec 10 20:13:38.985061 (d8) backend at /local/domain/0/backend/vif/8/0
[...]
Dec 10 20:13:38.993010 (d8) mac is 5a:36:0e:92:00:02
Dec 10 20:13:38.993038 [ 1391.467483] IPv6: ADDRCONF(NETDEV_CHANGE): vif8.0: link becomes ready
Dec 10 20:13:39.001015 [ 1391.467574] xenbr0: port 4(vif8.0) entered forwarding state
Dec 10 20:13:39.009015 [ 1391.467608] xenbr0: port 4(vif8.0) entered forwarding state
[...]
Dec 10 20:13:39.273017 (d8) xs_read_watch() -> /local/domain/0/backend/vbd/8/5632/params hdc
[...]
Dec 10 20:13:39.281011 [ 1391.850153] device vif7.0 entered promiscuous mode
Dec 10 20:13:39.281047 [ 1391.856999] IPv6: ADDRCONF(NETDEV_UP): vif7.0: link is not ready
Dec 10 20:13:39.289000 (XEN) irq.c:385: Dom7 callback via changed to Direct Vector 0xf3
[...]
Dec 10 20:13:39.369050 (d8) close(3)
Dec 10 20:13:39.377010 (d8) close network: backend at /local/domain/0/backend/vif/8/0
Dec 10 20:13:39.377045 [ 1391.996438] xen_netback:set_backend_state: backend/vif/8/0: prepare for reconnect
Dec 10 20:13:39.385011 [ 1392.034769] xen-blkback:ring-ref 9, event-channel 24, protocol 1 (x86_64-abi) 
Dec 10 20:13:39.424993 [ 1392.125011] xenbr0: port 2(vif6.0) entered disabled state
Dec 10 20:13:39.513026 [ 1392.125835] device vif6.0 left promiscuous mode
Dec 10 20:13:39.513062 [ 1392.125868] xenbr0: port 2(vif6.0) entered disabled state
Dec 10 20:13:39.520999 [ 1392.185454] xenbr0: port 3(vif5.0) entered disabled state
Dec 10 20:13:39.569013 [ 1392.186308] device vif5.0 left promiscuous mode
Dec 10 20:13:39.577026 [ 1392.186393] xenbr0: port 3(vif5.0) entered disabled state
Dec 10 20:13:39.584976 (XEN) avc:  denied  { pcilevel } for domid=8 target=7 scontext=system_u:system_r:dm_dom_t tcontext=system_u:system_r:domU_t_target tclass=hvm
Dec 10 20:13:39.793039 [ 1392.468784] xenbr0: port 4(vif8.0) entered disabled state
Dec 10 20:13:39.856990 [ 1409.562306] IPv6: ADDRCONF(NETDEV_CHANGE): vif7.0: link becomes ready

Dec 10 20:13:56.953048 [ 1409.562434] xenbr0: port 5(vif7.0) entered forwarding state
Dec 10 20:13:56.953097 [ 1409.562450] xenbr0: port 5(vif7.0) entered forwarding state

(the end, until ts-logs-capture sends some keys)

I think what we can see at Dec 10 20:13:39.289000 is the PVHVM code
initialising direct vector injection of event channels. The stubdom8
network close at Dec 10 20:13:39.377010 is the guest unplugging the
emulated devices; then vif8.0 (the corresponding stubdom vif) goes away
and vif7.0 (the guest's vif) becomes ready.

All of that is well within the window of the "xl migrate".

However, as Wei noticed, there is then a 17-second delay before vif7.0
enters the forwarding state.

The source stubdom log (which includes the kernel serial log) is
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1---var-log-xen-qemu-dm-debianhvm.guest.osstest.log
and the destination is
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1---var-log-xen-qemu-dm-debianhvm.guest.osstest--incoming.log

Both lack timestamps, so it's not easy to discern anything.
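A hedged workaround for next time is to timestamp such a capture as it happens. A minimal sketch (the `ts(1)` utility from moreutils does the same job):

```shell
# Prefix each incoming log line with a UTC wall-clock timestamp, so
# un-timestamped qemu-dm output can be correlated with the serial log.
stamp() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S')" "$line"
  done
}

# Example: feed two stand-in log lines through it.
printf 'qemu: first line\nqemu: second line\n' | stamp
```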

In the non-XSM case 
2015-12-11 12:58:54 Z migration finishes
2015-12-11 12:58:54 Z ssh guest `date` starts
2015-12-11 12:58:55 Z date output from guest: Fri Dec 11 12:58:55 UTC 2015

and in the host serial logs:
Dec 11 12:58:50.573034 [ 1007.966381] IPv6: ADDRCONF(NETDEV_CHANGE): vif7.0: link becomes ready

Dec 11 12:58:54.133079 [ 1007.966487] xenbr0: port 5(vif7.0) entered forwarding state
Dec 11 12:58:54.133149 [ 1007.966502] xenbr0: port 5(vif7.0) entered forwarding state

Note that there is a 4s (not 17s as with XSM) delay there.

Ian.




* Re: stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
  2015-12-11 15:12     ` Ian Campbell
@ 2015-12-11 15:16       ` Ian Campbell
  2015-12-14 10:14         ` Ian Campbell
  0 siblings, 1 reply; 8+ messages in thread
From: Ian Campbell @ 2015-12-11 15:16 UTC (permalink / raw)
  To: Ian Jackson, Wei Liu, Daniel De Graaf; +Cc: xen-devel

On Fri, 2015-12-11 at 15:12 +0000, Ian Campbell wrote:
> Dec 10 20:13:39.584976 (XEN) avc:  denied  { pcilevel } for domid=8
> target=7 scontext=system_u:system_r:dm_dom_t
> tcontext=system_u:system_r:domU_t_target tclass=hvm

I wonder if this is an INTx associated with the USB?

In the XSM case the guest serial log on restore has:
http://osstest.test-lab.xenproject.org/~osstest/pub/logs/65682/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1---var-log-xen-qemu-dm-debianhvm.guest.osstest--incoming.log

    [  373.747112] ata2.00: configured for MWDMA2
+00 [  373.920163] usb 1-2: reset full-speed USB device number 2 using uhci_hcd
+04 [  377.360563] PM: restore of devices complete after 3779.268 msecs
+04 [  377.365351] usb 1-2: USB disconnect, device number 2
+04 [  377.720174] usb 1-2: new full-speed USB device number 3 using uhci_hcd
+11 [  384.829312] usb 1-2: New USB device found, idVendor=0627, idProduct=0001
+11 [  384.834014] usb 1-2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
+11 [  384.839012] usb 1-2: Product: QEMU USB Tablet
+11 [  384.842105] usb 1-2: Manufacturer: QEMU 0.10.2
+11 [  384.845230] usb 1-2: SerialNumber: 1
+18 [  391.078740] input: QEMU 0.10.2 QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-2/1-2:1.0/input/input8
+18 [  391.086474] generic-usb 0003:0627:0001.0002: input,hidraw0: USB HID v0.01 Pointer [QEMU 0.10.2 QEMU USB Tablet] on usb-0000:00:01.2-2/input0
+18 [  391.095614] Restarting tasks ... done.
+18 [  391.107562] Setting capacity to 20480000
+18 [  391.124154] Setting capacity to 20480000
+18 [  391.140135] uhci_hcd 0000:00:01.2: Unlink after no-IRQ?  Controller is probably using the wrong IRQ.

(I've annotated with the time since the first line).
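The "+NN" offsets above were added by hand; a sketch of deriving them mechanically from the dmesg-style "[ seconds ]" stamps (the sample input lines are taken from the log above):

```shell
# Annotate each "[ seconds.micros ]" kernel-log line with the whole-second
# offset from the first such line; non-matching lines pass through.
annotate() {
  awk 'match($0, /\[ *[0-9]+\.[0-9]+\]/) {
         t = substr($0, RSTART + 1, RLENGTH - 2) + 0   # seconds since boot
         if (first == "") first = t
         printf "+%02d %s\n", t - first, $0
         next
       }
       { print }'
}

printf '[  373.747112] ata2.00: configured for MWDMA2\n[  377.365351] usb 1-2: USB disconnect, device number 2\n' | annotate
```

This truncates the fractional part rather than rounding, so it can differ by one second from hand-rounded figures like the "+04" above.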

The corresponding log in the non-XSM case is
http://logs.test-lab.xenproject.org/osstest/logs/65738/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64/merlot1---var-log-xen-qemu-dm-debianhvm.guest.osstest--incoming.log.10
    [  338.127757] ata2.00: configured for MWDMA2
+00 [  338.296159] usb 1-2: reset full-speed USB device number 2 using uhci_hcd
+02 [  340.301541] usb 1-2: USB disconnect, device number 2
+02 [  340.301655] PM: restore of devices complete after 2342.528 msecs
+02 [  340.520096] usb 1-2: new full-speed USB device number 3 using uhci_hcd
+03 [  341.373756] usb 1-2: New USB device found, idVendor=0627, idProduct=0001
+03 [  341.378788] usb 1-2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
+03 [  341.384183] usb 1-2: Product: QEMU USB Tablet
+03 [  341.387406] usb 1-2: Manufacturer: QEMU 0.10.2
+03 [  341.390738] usb 1-2: SerialNumber: 1
+03 [  341.809652] input: QEMU 0.10.2 QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-2/1-2:1.0/input/input8
+03 [  341.817873] generic-usb 0003:0627:0001.0002: input,hidraw0: USB HID v0.01 Pointer [QEMU 0.10.2 QEMU USB Tablet] on usb-0000:00:01.2-2/input0
+03 [  341.827641] Restarting tasks ... done.
+03 [  341.839784] Setting capacity to 20480000
+03 [  341.846421] Setting capacity to 20480000

That is over much quicker, and uhci_hcd is not complaining about a
missing IRQ...

I have a new flight going on (65755) with flask=permissive instead of
flask=enforcing (assuming I didn't botch the osstest modifications to
support that setting via a runvar).

If that test passes, prints the AVC message but not the missing-IRQ
message, then I think that would be our smoking gun.

The new flight has dropped debianhvm_general_timeoutfactor, which has
proven its point.

Ian.



* Re: stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
  2015-12-11 15:16       ` Ian Campbell
@ 2015-12-14 10:14         ` Ian Campbell
  2015-12-14 11:11           ` George Dunlap
  2015-12-14 11:55           ` Ian Campbell
  0 siblings, 2 replies; 8+ messages in thread
From: Ian Campbell @ 2015-12-14 10:14 UTC (permalink / raw)
  To: Ian Jackson, Wei Liu, Daniel De Graaf; +Cc: xen-devel

On Fri, 2015-12-11 at 15:16 +0000, Ian Campbell wrote:
> 
> I have a new flight going on (65755) with flask=permissive instead of
> flask=enforcing (assuming I didn't botch the osstest modifications to
> support that setting via a runvar).

I did botch the mods, but luckily permissive is the default, so I got what
I wanted ;-)

> If that test passes, prints the AVC message but not the missing IRQ message
> then I think that would be our smoking gun.

http://logs.test-lab.xenproject.org/osstest/logs/65758/

From serial-merlot1.log:

Dec 11 18:01:57.001037 (XEN) Flask: 64 avtab hash slots, 236 rules.
Dec 11 18:01:57.009023 (XEN) Flask: 64 avtab hash slots, 236 rules.
Dec 11 18:01:57.017004 (XEN) Flask:  3 users, 3 roles, 36 types, 2 bools
Dec 11 18:01:57.017038 (XEN) Flask:  12 classes, 236 rules
Dec 11 18:01:57.025015 (XEN) Flask:  Starting in permissive mode.
[...]
Dec 11 18:06:01.229194 (XEN) avc:  denied  { pcilevel } for domid=2 target=1 scontext=system_u:system_r:dm_dom_t tcontext=system_u:system_r:domU_t_target tclass=hvm

http://logs.test-lab.xenproject.org/osstest/logs/65758/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1---var-log-xen-qemu-dm-debianhvm.guest.osstest--incoming.log.10

i.e. no "uhci_hcd 0000:00:01.2: Unlink after no-IRQ?  Controller is
probably using the wrong IRQ." and the test case has passed.

I'm running a test with the following patch. I'm reasonably hopeful.

Ian.

From 3f14c5afedc0df360952364b93c2f04de00f00c4 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Mon, 14 Dec 2015 08:22:41 +0000
Subject: [PATCH] flask: Allow device model to raise PCI interrupts (pcilevel
 capability)

Allows:

(XEN) avc:  denied  { pcilevel } for domid=2 target=1 scontext=system_u:system_r:dm_dom_t tcontext=system_u:system_r:domU_t_target tclass=hvm

Which otherwise leads to the following on resume after migrate (comparing
non-XSM to XSM):

 ata2.00: configured for MWDMA2
 usb 1-2: reset full-speed USB device number 2 using uhci_hcd
+PM: restore of devices complete after 3779.268 msecs
 usb 1-2: USB disconnect, device number 2
-PM: restore of devices complete after 2342.528 msecs
 usb 1-2: new full-speed USB device number 3 using uhci_hcd
 usb 1-2: New USB device found, idVendor=0627, idProduct=0001
 usb 1-2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
 usb 1-2: Product: QEMU USB Tablet
 usb 1-2: Manufacturer: QEMU 0.10.2
 usb 1-2: SerialNumber: 1
 input: QEMU 0.10.2 QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-2/1-2:1.0/input/input8
 generic-usb 0003:0627:0001.0002: input,hidraw0: USB HID v0.01 Pointer [QEMU 0.10.2 QEMU USB Tablet] on usb-0000:00:01.2-2/input0
 Restarting tasks ... done.
 Setting capacity to 20480000
 Setting capacity to 20480000
+uhci_hcd 0000:00:01.2: Unlink after no-IRQ?  Controller is probably using the wrong IRQ.

And a glitch in the domU which is sufficient to disrupt the post migration
checks done by osstest.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/modules/xen/xen.if | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 32dd7b3..00d1bbb 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -150,7 +150,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain shutdown;
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute cacheattr send_irq };
+	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
-- 
2.6.1



* Re: stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
  2015-12-14 10:14         ` Ian Campbell
@ 2015-12-14 11:11           ` George Dunlap
  2015-12-14 11:30             ` Ian Campbell
  2015-12-14 11:55           ` Ian Campbell
  1 sibling, 1 reply; 8+ messages in thread
From: George Dunlap @ 2015-12-14 11:11 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Wei Liu, Daniel De Graaf, Ian Jackson, xen-devel

On Mon, Dec 14, 2015 at 10:14 AM, Ian Campbell <ian.campbell@citrix.com> wrote:
> On Fri, 2015-12-11 at 15:16 +0000, Ian Campbell wrote:
>>
>> I have a new flight going on (65755) with flask=permissive instead of
>> flask=enforcing (assuming I didn't botch the osstest modifications to
>> support that setting via a runvar).
>
> I did botch the mods, but luckily permissive is the default, so I got what
> I wanted ;-)
>
>> If that test passes, prints the AVC message but not the missing IRQ message
>> then I think that would be our smoking gun.
>
> http://logs.test-lab.xenproject.org/osstest/logs/65758/
>
> From serial-merlot1.log:
>
> Dec 11 18:01:57.001037 (XEN) Flask: 64 avtab hash slots, 236 rules.
> Dec 11 18:01:57.009023 (XEN) Flask: 64 avtab hash slots, 236 rules.
> Dec 11 18:01:57.017004 (XEN) Flask:  3 users, 3 roles, 36 types, 2 bools
> Dec 11 18:01:57.017038 (XEN) Flask:  12 classes, 236 rules
> Dec 11 18:01:57.025015 (XEN) Flask:  Starting in permissive mode.
> [...]
> Dec 11 18:06:01.229194 (XEN) avc:  denied  { pcilevel } for domid=2 target=1 scontext=system_u:system_r:dm_dom_t tcontext=system_u:system_r:domU_t_target tclass=hvm
>
> http://logs.test-lab.xenproject.org/osstest/logs/65758/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1---var-log-xen-qemu-dm-debianhvm.guest.osstest--incoming.log.10

So wait -- does flask not report denials when in enforcing mode?

I can see the point of not letting a rogue / misconfigured guest DoS
your logs, but it seems like some sort of rate-limiting would be a
better solution than just not printing anything.

 -George


* Re: stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
  2015-12-14 11:11           ` George Dunlap
@ 2015-12-14 11:30             ` Ian Campbell
  0 siblings, 0 replies; 8+ messages in thread
From: Ian Campbell @ 2015-12-14 11:30 UTC (permalink / raw)
  To: George Dunlap; +Cc: Ian Jackson, Daniel De Graaf, Wei Liu, xen-devel

On Mon, 2015-12-14 at 11:11 +0000, George Dunlap wrote:
> On Mon, Dec 14, 2015 at 10:14 AM, Ian Campbell <ian.campbell@citrix.com>
> wrote:
> > On Fri, 2015-12-11 at 15:16 +0000, Ian Campbell wrote:
> > > 
> > > I have a new flight going on (65755) with flask=permissive instead of
> > > flask=enforcing (assuming I didn't botch the osstest modifications to
> > > support that setting via a runvar).
> > 
> > I did botch the mods, but luckily permissive is the default, so I got
> > what
> > I wanted ;-)
> > 
> > > If that test passes, prints the AVC message but not the missing IRQ
> > > message
> > > then I think that would be our smoking gun.
> > 
> > http://logs.test-lab.xenproject.org/osstest/logs/65758/
> > 
> > From serial-merlot1.log:
> > 
> > Dec 11 18:01:57.001037 (XEN) Flask: 64 avtab hash slots, 236 rules.
> > Dec 11 18:01:57.009023 (XEN) Flask: 64 avtab hash slots, 236 rules.
> > Dec 11 18:01:57.017004 (XEN) Flask:  3 users, 3 roles, 36 types, 2
> > bools
> > Dec 11 18:01:57.017038 (XEN) Flask:  12 classes, 236 rules
> > Dec 11 18:01:57.025015 (XEN) Flask:  Starting in permissive mode.
> > [...]
> > Dec 11 18:06:01.229194 (XEN) avc:  denied  { pcilevel } for domid=2
> > target=1 scontext=system_u:system_r:dm_dom_t
> > tcontext=system_u:system_r:domU_t_target tclass=hvm
> > 
> > http://logs.test-lab.xenproject.org/osstest/logs/65758/test-amd64-amd64
> > -xl-qemut-stubdom-debianhvm-amd64-xsm/merlot1---var-log-xen-qemu-dm-
> > debianhvm.guest.osstest--incoming.log.10
> 
> So wait -- does flask not report denials when in enforcing mode?

It does; I'm not sure what made you think otherwise. Earlier in the thread
I quoted such a denial, and it was that which led me down this path.

Ian.


* Re: stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL])
  2015-12-14 10:14         ` Ian Campbell
  2015-12-14 11:11           ` George Dunlap
@ 2015-12-14 11:55           ` Ian Campbell
  1 sibling, 0 replies; 8+ messages in thread
From: Ian Campbell @ 2015-12-14 11:55 UTC (permalink / raw)
  To: Ian Jackson, Wei Liu, Daniel De Graaf; +Cc: xen-devel

On Mon, 2015-12-14 at 10:14 +0000, Ian Campbell wrote:
> 
> I'm running a test with the following patch. I'm reasonably hopeful.

and it did indeed pass:
http://logs.test-lab.xenproject.org/osstest/logs/66273/

I'll resubmit as a proper patch.

Ian.

> 
> Ian.
> 
> From 3f14c5afedc0df360952364b93c2f04de00f00c4 Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Mon, 14 Dec 2015 08:22:41 +0000
> Subject: [PATCH] flask: Allow device model to raise PCI interrupts
> (pcilevel
>  capability)
> 
> Allows:
> 
> (XEN) avc:  denied  { pcilevel } for domid=2 target=1
> scontext=system_u:system_r:dm_dom_t
> tcontext=system_u:system_r:domU_t_target tclass=hvm
> 
> Which otherwise leads to the following on resume after migrate (comparing
> non-XSM to XSM):
> 
>  ata2.00: configured for MWDMA2
>  usb 1-2: reset full-speed USB device number 2 using uhci_hcd
> +PM: restore of devices complete after 3779.268 msecs
>  usb 1-2: USB disconnect, device number 2
> -PM: restore of devices complete after 2342.528 msecs
>  usb 1-2: new full-speed USB device number 3 using uhci_hcd
>  usb 1-2: New USB device found, idVendor=0627, idProduct=0001
>  usb 1-2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
>  usb 1-2: Product: QEMU USB Tablet
>  usb 1-2: Manufacturer: QEMU 0.10.2
>  usb 1-2: SerialNumber: 1
>  input: QEMU 0.10.2 QEMU USB Tablet as
> /devices/pci0000:00/0000:00:01.2/usb1/1-2/1-2:1.0/input/input8
>  generic-usb 0003:0627:0001.0002: input,hidraw0: USB HID v0.01 Pointer
> [QEMU 0.10.2 QEMU USB Tablet] on usb-0000:00:01.2-2/input0
>  Restarting tasks ... done.
>  Setting capacity to 20480000
>  Setting capacity to 20480000
> +uhci_hcd 0000:00:01.2: Unlink after no-IRQ?  Controller is probably
> using the wrong IRQ.
> 
> And a glitch in the domU which is sufficient to disrupt the post
> migration
> checks done by osstest.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  tools/flask/policy/policy/modules/xen/xen.if | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/flask/policy/policy/modules/xen/xen.if
> b/tools/flask/policy/policy/modules/xen/xen.if
> index 32dd7b3..00d1bbb 100644
> --- a/tools/flask/policy/policy/modules/xen/xen.if
> +++ b/tools/flask/policy/policy/modules/xen/xen.if
> @@ -150,7 +150,7 @@ define(`device_model', `
>  
>  	allow $1 $2_target:domain shutdown;
>  	allow $1 $2_target:mmu { map_read map_write adjust physmap
> target_hack };
> -	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl
> irqlevel pciroute cacheattr send_irq };
> +	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl
> irqlevel pciroute pcilevel cacheattr send_irq };
>  ')
>  
>  # make_device_model(priv, dm_dom, hvm_dom)


end of thread, other threads:[~2015-12-14 11:55 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <E1a77eF-00049f-Er@osstest.test-lab.xenproject.org>
2015-12-11 12:12 ` stubdom migration failure on merlot* XSM related (Was: [adhoc test] 65682: tolerable FAIL]) Ian Campbell
2015-12-11 14:05   ` Ian Campbell
2015-12-11 15:12     ` Ian Campbell
2015-12-11 15:16       ` Ian Campbell
2015-12-14 10:14         ` Ian Campbell
2015-12-14 11:11           ` George Dunlap
2015-12-14 11:30             ` Ian Campbell
2015-12-14 11:55           ` Ian Campbell
