* [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
@ 2011-09-01 15:54 xen.org
2011-09-01 16:26 ` Ian Jackson
0 siblings, 1 reply; 21+ messages in thread
From: xen.org @ 2011-09-01 15:54 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, keir, stefano.stabellini
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test xen-install
Tree: linux git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
Tree: xen http://hg.uk.xensource.com/xen-unstable.hg
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://hg.uk.xensource.com/xen-unstable.hg
Bug introduced: bb9b81008733
Bug not present: d54cfae72cd1
changeset: 23802:bb9b81008733
user: Laszlo Ersek <lersek@redhat.com>
date: Wed Aug 31 15:16:14 2011 +0100
x86: Increase the default NR_CPUS to 256
Changeset 21012:ef845a385014 bumped the default to 128 about one and a
half years ago. Increase it now to 256, as systems with eg. 160
logical CPUs are becoming (have become) common.
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
For bisection revision-tuple graph see:
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.xen-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure / basis pass:
8791 fail [host=earwig] / 8769 [host=itch-mite] 8760 [host=bush-cricket] 8739 [host=itch-mite] 8735 [host=bush-cricket] 8731 [host=itch-mite] 8729 [host=itch-mite] 8727 [host=gall-mite] 8726 [host=field-cricket] 8725 [host=gall-mite] 8724 [host=bush-cricket] 8723 [host=itch-mite] 8722 [host=bush-cricket] 8721 [host=field-cricket] 8718 [host=gall-mite] 8717 [host=bush-cricket] 8715 [host=itch-mite] 8713 [host=gall-mite] 8712 [host=gall-mite] 8711 [host=gall-mite] 8710 [host=field-cricket] 8707 [host=gall-mite] 8696 [host=gall-mite] 8687 [host=gall-mite] 8674 ok.
Failure / basis pass flights: 8791 / 8674
Tree: linux git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
Tree: xen http://hg.uk.xensource.com/xen-unstable.hg
Latest 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 4a4882df5649
Basis pass ada3f6a1ba43e163aab95c7808f11b88fc7c79e6 cd776ee9408ff127f934a707c1a339ee600bc127 fc2be6cb89ad
Generating revisions with ./adhoc-revtuple-generator git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git#ada3f6a1ba43e163aab95c7808f11b88fc7c79e6-1c3f03ccc5258887f5f2cafc0836a865834f9205 git://hg.uk.xensource.com/HG/qemu-xen-unstable.git#cd776ee9408ff127f934a707c1a339ee600bc127-cd776ee9408ff127f934a707c1a339ee600bc127 http://hg.uk.xensource.com/xen-unstable.hg#fc2be6cb89ad-4a4882df5649
using cache /export/home/osstest/repos/git-cache...
using cache /export/home/osstest/repos/git-cache...
locked cache /export/home/osstest/repos/git-cache...
processing ./cacheing-git clone --bare git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git /export/home/osstest/repos/xen...
Initialized empty Git repository in /export/home/osstest/repos/xen/
Initialized empty Git repository in /export/home/osstest/repos/xen/
updating cache /export/home/osstest/repos/git-cache xen...
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
using cache /export/home/osstest/repos/git-cache...
using cache /export/home/osstest/repos/git-cache...
locked cache /export/home/osstest/repos/git-cache...
processing ./cacheing-git clone --bare git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git /export/home/osstest/repos/xen...
Initialized empty Git repository in /export/home/osstest/repos/xen/
Initialized empty Git repository in /export/home/osstest/repos/xen/
updating cache /export/home/osstest/repos/git-cache xen...
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
Loaded 2260 nodes in revision graph
Searching for test results:
8674 pass ada3f6a1ba43e163aab95c7808f11b88fc7c79e6 cd776ee9408ff127f934a707c1a339ee600bc127 fc2be6cb89ad
8664 pass ada3f6a1ba43e163aab95c7808f11b88fc7c79e6 cd776ee9408ff127f934a707c1a339ee600bc127 fc2be6cb89ad
8712 [host=gall-mite]
8722 [host=bush-cricket]
8713 [host=gall-mite]
8687 [host=gall-mite]
8735 [host=bush-cricket]
8723 [host=itch-mite]
8696 [host=gall-mite]
8715 [host=itch-mite]
8707 [host=gall-mite]
8724 [host=bush-cricket]
8710 [host=field-cricket]
8717 [host=bush-cricket]
8711 [host=gall-mite]
8718 [host=gall-mite]
8725 [host=gall-mite]
8721 [host=field-cricket]
8729 [host=itch-mite]
8726 [host=field-cricket]
8727 [host=gall-mite]
8731 [host=itch-mite]
8739 [host=itch-mite]
8786 pass 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 2c687e70a343
8806 fail 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 bb9b81008733
8781 [host=field-cricket]
8760 [host=bush-cricket]
8790 pass 20a27c1e25b8550066902c9d6ca91631e656dfa3 cd776ee9408ff127f934a707c1a339ee600bc127 41f00cf6b822
8792 [host=field-cricket]
8793 [host=field-cricket]
8769 [host=itch-mite]
8794 [host=field-cricket]
8791 fail 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 4a4882df5649
8795 [host=field-cricket]
8796 pass 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 ac9aa65050e9
8797 fail 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 51983821efa4
8776 fail 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 4a4882df5649
8798 pass 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 d54cfae72cd1
8782 pass ada3f6a1ba43e163aab95c7808f11b88fc7c79e6 cd776ee9408ff127f934a707c1a339ee600bc127 fc2be6cb89ad
8799 fail 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 bb9b81008733
8784 fail 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 4a4882df5649
8800 pass 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 d54cfae72cd1
8804 fail 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 bb9b81008733
8805 pass 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 d54cfae72cd1
Searching for interesting versions
Result found: flight 8664 (pass), for basis pass
Result found: flight 8776 (fail), for basis failure
Repro found: flight 8782 (pass), for basis pass
Repro found: flight 8784 (fail), for basis failure
0 revisions at 1c3f03ccc5258887f5f2cafc0836a865834f9205 cd776ee9408ff127f934a707c1a339ee600bc127 d54cfae72cd1
No revisions left to test, checking graph state.
Result found: flight 8798 (pass), for last pass
Result found: flight 8799 (fail), for first failure
Repro found: flight 8800 (pass), for last pass
Repro found: flight 8804 (fail), for first failure
Repro found: flight 8805 (pass), for last pass
Repro found: flight 8806 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://hg.uk.xensource.com/xen-unstable.hg
Bug introduced: bb9b81008733
Bug not present: d54cfae72cd1
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
changeset: 23802:bb9b81008733
user: Laszlo Ersek <lersek@redhat.com>
date: Wed Aug 31 15:16:14 2011 +0100
x86: Increase the default NR_CPUS to 256
Changeset 21012:ef845a385014 bumped the default to 128 about one and a
half years ago. Increase it now to 256, as systems with eg. 160
logical CPUs are becoming (have become) common.
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.xen-install.{dot,ps,png,html}.
----------------------------------------
8806: ALL FAIL
flight 8806 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/8806/
jobs:
build-i386 fail
test-amd64-i386-rhel6hvm-intel fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-09-01 15:54 [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel xen.org
@ 2011-09-01 16:26 ` Ian Jackson
2011-09-01 17:22 ` Laszlo Ersek
` (2 more replies)
0 siblings, 3 replies; 21+ messages in thread
From: Ian Jackson @ 2011-09-01 16:26 UTC (permalink / raw)
To: xen-devel, Laszlo Ersek; +Cc: keir, stefano.stabellini
xen.org writes ("[Xen-devel] [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel"):
> branch xen-unstable
> xen branch xen-unstable
> job test-amd64-i386-rhel6hvm-intel
> test xen-install
>
> Tree: linux git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
> Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
> Tree: xen http://hg.uk.xensource.com/xen-unstable.hg
>
> *** Found and reproduced problem changeset ***
>
> Bug is in tree: xen http://hg.uk.xensource.com/xen-unstable.hg
> Bug introduced: bb9b81008733
> Bug not present: d54cfae72cd1
>
>
> changeset: 23802:bb9b81008733
> user: Laszlo Ersek <lersek@redhat.com>
> date: Wed Aug 31 15:16:14 2011 +0100
>
> x86: Increase the default NR_CPUS to 256
>
> Changeset 21012:ef845a385014 bumped the default to 128 about one and a
> half years ago. Increase it now to 256, as systems with eg. 160
> logical CPUs are becoming (have become) common.
>
> Signed-off-by: Laszlo Ersek <lersek@redhat.com>
My bisector is pretty reliable nowadays. Looking at the revision
graph it tested before/after/before/after/before/after, i.e. three times
each on the same host.
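The before/after/before/after pattern described here can be sketched as a small confirmation loop. This is a hypothetical simplification in Python, not osstest's actual harness; the function name and the toy results table are invented for illustration:

```python
# Hypothetical sketch of confirming a bisection culprit: re-test the
# suspected last-good and first-bad revisions several times on one host,
# and only report a changeset if the pass/fail split stays consistent.
def confirm_culprit(run_test, good_rev, bad_rev, attempts=3):
    for _ in range(attempts):
        if not run_test(good_rev):      # "before" must keep passing
            return None                 # flaky: don't blame the changeset
        if run_test(bad_rev):           # "after" must keep failing
            return None
    return bad_rev

# Toy results table: d54cfae72cd1 always passes, bb9b81008733 always fails.
results = {"d54cfae72cd1": True, "bb9b81008733": False}
culprit = confirm_culprit(lambda rev: results[rev],
                          "d54cfae72cd1", "bb9b81008733")
print(culprit)  # -> bb9b81008733
```

A flaky test (one that also passes on the "bad" revision) would make the loop bail out instead of blaming the changeset, which is why three consistent repetitions each make the report credible.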
This change looks innocuous enough, TBH. Is there any way this change
could have broken a PV-on-HVM guest? Note that RHEL6, which is what
this is testing, seems to generally be full of bugs.
If the problem is indeed a bug in the current RHEL6 then I will add
this test to the "do not care" list.
Ian.
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-09-01 16:26 ` Ian Jackson
@ 2011-09-01 17:22 ` Laszlo Ersek
2011-09-02 7:11 ` Ian Campbell
2011-09-01 17:48 ` Laszlo Ersek
2011-09-01 19:28 ` Andrew Jones
2 siblings, 1 reply; 21+ messages in thread
From: Laszlo Ersek @ 2011-09-01 17:22 UTC (permalink / raw)
To: Ian Jackson
Cc: Drew Jones, xen-devel, keir, stefano.stabellini, Igor Mammedov,
Paolo Bonzini
On 09/01/11 18:26, Ian Jackson wrote:
>> job test-amd64-i386-rhel6hvm-intel
>> changeset: 23802:bb9b81008733
>> user: Laszlo Ersek<lersek@redhat.com>
>> date: Wed Aug 31 15:16:14 2011 +0100
>>
>> x86: Increase the default NR_CPUS to 256
>>
>> Changeset 21012:ef845a385014 bumped the default to 128 about one and a
>> half years ago. Increase it now to 256, as systems with eg. 160
>> logical CPUs are becoming (have become) common.
>>
>> Signed-off-by: Laszlo Ersek<lersek@redhat.com>
>
> My bisector is pretty reliable nowadays. Looking at the revision
> graph it tested before/after/before/after/before/after, i.e. three times
> each on the same host.
>
> This change looks innocuous enough, TBH. Is there any way this change
> could have broken a PV-on-HVM guest? Note that RHEL6, which is what
> this is testing, seems to generally be full of bugs.
>
> If the problem is indeed a bug in the current RHEL6 then I will add
> this test to the "do not care" list.
In what way was the guest broken? How many physical cores/threads was
the hypervisor running on?
Thanks,
lacos
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-09-01 16:26 ` Ian Jackson
2011-09-01 17:22 ` Laszlo Ersek
@ 2011-09-01 17:48 ` Laszlo Ersek
2011-09-01 19:28 ` Andrew Jones
2 siblings, 0 replies; 21+ messages in thread
From: Laszlo Ersek @ 2011-09-01 17:48 UTC (permalink / raw)
To: Ian Jackson; +Cc: xen-devel, keir, stefano.stabellini
On 09/01/11 18:26, Ian Jackson wrote:
>> changeset: 23802:bb9b81008733
>> user: Laszlo Ersek<lersek@redhat.com>
>> date: Wed Aug 31 15:16:14 2011 +0100
>>
>> x86: Increase the default NR_CPUS to 256
>>
>> Changeset 21012:ef845a385014 bumped the default to 128 about one and a
>> half years ago. Increase it now to 256, as systems with eg. 160
>> logical CPUs are becoming (have become) common.
>>
>> Signed-off-by: Laszlo Ersek<lersek@redhat.com>
FWIW, the hypervisor shipped in RHEL-5 has been built for 256 CPUs since
April 2009, using the max_phys_cpus make macro. I posted the patch because
now we have changed the in-source macro definition too.
lacos
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-09-01 16:26 ` Ian Jackson
2011-09-01 17:22 ` Laszlo Ersek
2011-09-01 17:48 ` Laszlo Ersek
@ 2011-09-01 19:28 ` Andrew Jones
2011-09-02 11:08 ` Ian Jackson
2 siblings, 1 reply; 21+ messages in thread
From: Andrew Jones @ 2011-09-01 19:28 UTC (permalink / raw)
To: Ian Jackson; +Cc: Laszlo Ersek, xen-devel, keir, stefano.stabellini
----- Original Message -----
> xen.org writes ("[Xen-devel] [xen-unstable bisection] complete
> test-amd64-i386-rhel6hvm-intel"):
> > branch xen-unstable
> > xen branch xen-unstable
> > job test-amd64-i386-rhel6hvm-intel
> > test xen-install
> >
> > Tree: linux
> > git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
> > Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
> > Tree: xen http://hg.uk.xensource.com/xen-unstable.hg
> >
> > *** Found and reproduced problem changeset ***
> >
> > Bug is in tree: xen http://hg.uk.xensource.com/xen-unstable.hg
> > Bug introduced: bb9b81008733
> > Bug not present: d54cfae72cd1
> >
> >
> > changeset: 23802:bb9b81008733
> > user: Laszlo Ersek <lersek@redhat.com>
> > date: Wed Aug 31 15:16:14 2011 +0100
> >
> > x86: Increase the default NR_CPUS to 256
> >
> > Changeset 21012:ef845a385014 bumped the default to 128 about
> > one and a
> > half years ago. Increase it now to 256, as systems with eg.
> > 160
> > logical CPUs are becoming (have become) common.
> >
> > Signed-off-by: Laszlo Ersek <lersek@redhat.com>
>
> My bisector is pretty reliable nowadays. Looking at the revision
> graph it tested before/after/before/after/before/after, i.e. three times
> each on the same host.
>
> This change looks innocuous enough, TBH. Is there any way this change
> could have broken a PV-on-HVM guest? Note that RHEL6, which is what
> this is testing, seems to generally be full of bugs.
It seems unlikely this change could break a guest, but without any
output from your tests it's impossible to tell. The fact that it failed on
the same host each of the three times is probably a clue worth looking
into further. I take it that it succeeded on other hosts?
Which RHEL6 kernel release do you test with? When you say "full of bugs",
where have the bugs been filed? Are those bugs only present with the
pv-on-hvm drivers? IMO, the HV should support the guest (especially an
HVM guest), even if it was based on something as "old" as 2.6.32. So the
bugs you're finding should likely be looked at from both the host and
the guest sides, certainly not ignored.
>
> If the problem is indeed a bug in the current RHEL6 then I will add
> this test to the "do not care" list.
>
This attitude won't get anybody anywhere.
> Ian.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-09-01 17:22 ` Laszlo Ersek
@ 2011-09-02 7:11 ` Ian Campbell
0 siblings, 0 replies; 21+ messages in thread
From: Ian Campbell @ 2011-09-02 7:11 UTC (permalink / raw)
To: Laszlo Ersek
Cc: Drew Jones, xen-devel, keir, Stefano Stabellini, Ian Jackson,
Paolo Bonzini, Igor Mammedov
On Thu, 2011-09-01 at 18:22 +0100, Laszlo Ersek wrote:
> On 09/01/11 18:26, Ian Jackson wrote:
>
> >> job test-amd64-i386-rhel6hvm-intel
>
> >> changeset: 23802:bb9b81008733
> >> user: Laszlo Ersek<lersek@redhat.com>
> >> date: Wed Aug 31 15:16:14 2011 +0100
> >>
> >> x86: Increase the default NR_CPUS to 256
> >>
> >> Changeset 21012:ef845a385014 bumped the default to 128 about one and a
> >> half years ago. Increase it now to 256, as systems with eg. 160
> >> logical CPUs are becoming (have become) common.
> >>
> >> Signed-off-by: Laszlo Ersek<lersek@redhat.com>
> >
> > My bisector is pretty reliable nowadays. Looking at the revision
> > graph it tested before/after/before/after/before/after, i.e. three times
> > each on the same host.
> >
> > This change looks innocuous enough, TBH. Is there any way this change
> > could have broken a PV-on-HVM guest? Note that RHEL6, which is what
> > this is testing, seems to generally be full of bugs.
> >
> > If the problem is indeed a bug in the current RHEL6 then I will add
> > this test to the "do not care" list.
>
> In what way was the guest broken? How many physical cores/threads was
> the hypervisor running on?
This is just confusion over the way the failure is reported. The
bisector was running the test-amd64-i386-rhel6hvm-intel job but it was
actually failing at the build/install Xen stage and not getting anywhere
near actually testing rhel6hvm. This confused me (and apparently IanJ)
too. For future reference the thing to look at is the report's header
which in this case said:
job test-amd64-i386-rhel6hvm-intel
test xen-install
i.e. the xen-install stage failed while running the
test-amd64-i386-rhel6hvm-intel sequence.
The selection of the test-amd64-i386-rhel6hvm-intel sequence for
bisecting is apparently just an arbitrary choice out of all the
sequences which suffered this failure.
The actual fix for this issue was identified and posted in the "8803:
regressions - FAIL" thread.
Ian.
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-09-01 19:28 ` Andrew Jones
@ 2011-09-02 11:08 ` Ian Jackson
0 siblings, 0 replies; 21+ messages in thread
From: Ian Jackson @ 2011-09-02 11:08 UTC (permalink / raw)
To: Andrew Jones; +Cc: Laszlo Ersek, xen-devel, keir, Stefano Stabellini
Andrew Jones writes ("Re: [Xen-devel] [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel"):
> It seems unlikely this change could break a guest, but without any
> output from your tests it's impossible to tell. The fact that it failed on
> the same host each of the three times is probably a clue worth looking
> into further. I take it that it succeeded on other hosts?
Sorry, this particular problem was a Xen build failure and nothing to
do with RHEL6.
Ian.
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2013-07-21 5:30 ` Ian Campbell
@ 2013-07-21 15:15 ` Ian Campbell
0 siblings, 0 replies; 21+ messages in thread
From: Ian Campbell @ 2013-07-21 15:15 UTC (permalink / raw)
To: xen.org; +Cc: xen-devel, keir, Jan Beulich, stefano.stabellini
On Sun, 2013-07-21 at 06:30 +0100, Ian Campbell wrote:
> 8<------------------------------
>
> From 5f7b0c68d3721fd2eef80f7e23466425b55d21af Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Sun, 21 Jul 2013 06:24:30 +0100
> Subject: [PATCH] xen: x86: put back .gz suffix on installed hypervisor binary.
>
> This reverts the effect of 524b93def23b "xen: x86: drop the ".gz" suffix when
> installing" which broke things in osstest (Debian Squeeze update-grub apparently
> can't cope). It is not a direct revert because of other changes made since.
Since this is (functionally if not literally) a revert which fixes the
tests, I've gone ahead and pushed it without any acks.
>
> We continue to omit the suffix on ARM.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: jbeulich@suse.com
> ---
> xen/Makefile | 18 ++++++++++--------
> 1 files changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/xen/Makefile b/xen/Makefile
> index 2abfa58..597972d 100644
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -34,12 +34,13 @@ _build: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
> .PHONY: _install
> _install: D=$(DESTDIR)
> _install: T=$(notdir $(TARGET))
> +_install: Z=$(CONFIG_XEN_INSTALL_SUFFIX)
> _install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
> [ -d $(D)/boot ] || $(INSTALL_DIR) $(D)/boot
> - $(INSTALL_DATA) $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX) $(D)/boot/$(T)-$(XEN_FULLVERSION)
> - ln -f -s $(T)-$(XEN_FULLVERSION) $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)
> - ln -f -s $(T)-$(XEN_FULLVERSION) $(D)/boot/$(T)-$(XEN_VERSION)
> - ln -f -s $(T)-$(XEN_FULLVERSION) $(D)/boot/$(T)
> + $(INSTALL_DATA) $(TARGET)$(Z) $(D)/boot/$(T)-$(XEN_FULLVERSION)$(Z)
> + ln -f -s $(T)-$(XEN_FULLVERSION)$(Z) $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)$(Z)
> + ln -f -s $(T)-$(XEN_FULLVERSION)$(Z) $(D)/boot/$(T)-$(XEN_VERSION)$(Z)
> + ln -f -s $(T)-$(XEN_FULLVERSION)$(Z) $(D)/boot/$(T)$(Z)
> $(INSTALL_DATA) $(TARGET)-syms $(D)/boot/$(T)-syms-$(XEN_FULLVERSION)
> if [ -r $(TARGET).efi -a -n '$(EFI_DIR)' ]; then \
> [ -d $(D)$(EFI_DIR) ] || $(INSTALL_DIR) $(D)$(EFI_DIR); \
> @@ -57,11 +58,12 @@ _install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
> .PHONY: _uninstall
> _uninstall: D=$(DESTDIR)
> _uninstall: T=$(notdir $(TARGET))
> +_uninstall: Z=$(CONFIG_XEN_INSTALL_SUFFIX)
> _uninstall:
> - rm -f $(D)/boot/$(T)-$(XEN_FULLVERSION)
> - rm -f $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)
> - rm -f $(D)/boot/$(T)-$(XEN_VERSION)
> - rm -f $(D)/boot/$(T)
> + rm -f $(D)/boot/$(T)-$(XEN_FULLVERSION)$(Z)
> + rm -f $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)$(Z)
> + rm -f $(D)/boot/$(T)-$(XEN_VERSION)$(Z)
> + rm -f $(D)/boot/$(T)$(Z)
> rm -f $(D)/boot/$(T)-syms-$(XEN_FULLVERSION)
> rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi
> rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2013-07-21 3:26 xen.org
@ 2013-07-21 5:30 ` Ian Campbell
2013-07-21 15:15 ` Ian Campbell
0 siblings, 1 reply; 21+ messages in thread
From: Ian Campbell @ 2013-07-21 5:30 UTC (permalink / raw)
To: xen.org; +Cc: xen-devel, keir, Jan Beulich, stefano.stabellini
On Sun, 2013-07-21 at 04:26 +0100, xen.org wrote:
> commit 524b93def23b9f75fd7851063f5291886e63d1ed
> Author: Ian Campbell <ian.campbell@citrix.com>
> Date: Thu Jul 18 09:41:41 2013 +0100
>
> xen: x86: drop the ".gz" suffix when installing
>
> As Jan says it is pretty meaningless under /boot anyway. However I am slightly
> concerned about breaking bootloaders (or more specifically their help scripts
> which automatically generate config files). By inspection at least grub 2's
> update-grub script (as present in Debian Wheezy) seems to cope (it matches on
> xen* not xen*.gz)
Looks like update-grub in Squeeze doesn't handle this case well.
http://www.chiark.greenend.org.uk/~xensrcts/logs/18537/test-amd64-amd64-xl/4.ts-xen-install.log
shows osstest failing to find any hypervisor stanzas in grub.cfg, and
http://www.chiark.greenend.org.uk/~xensrcts/logs/18537/test-amd64-amd64-xl/field-cricket--grub.cfg.1
shows that there are indeed none present...
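The failure mode can be illustrated with a small sketch. This is hypothetical, not the real Squeeze or Wheezy update-grub helper (those are shell scripts under /etc/grub.d); the file names and glob patterns here are invented for illustration:

```python
# Hypothetical illustration: a grub helper that globs for "xen*.gz" under
# /boot finds nothing once the hypervisor is installed without the suffix,
# while a suffix-agnostic "xen*" glob (what Wheezy's script uses, per the
# commit message) still matches.
import fnmatch

boot = ["vmlinuz-2.6.32-5-xen-amd64", "xen-4.4-unstable"]  # no ".gz" suffix

suffix_glob   = [f for f in boot if fnmatch.fnmatch(f, "xen*.gz")]
agnostic_glob = [f for f in boot if fnmatch.fnmatch(f, "xen*")]

print(suffix_glob)    # -> []  : no hypervisor stanza gets generated
print(agnostic_glob)  # -> ['xen-4.4-unstable']
```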
Let's just undo things for now:
8<------------------------------
From 5f7b0c68d3721fd2eef80f7e23466425b55d21af Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Sun, 21 Jul 2013 06:24:30 +0100
Subject: [PATCH] xen: x86: put back .gz suffix on installed hypervisor binary.
This reverts the effect of 524b93def23b "xen: x86: drop the ".gz" suffix when
installing" which broke things in osstest (Debian Squeeze update-grub apparently
can't cope). It is not a direct revert because of other changes made since.
We continue to omit the suffix on ARM.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: jbeulich@suse.com
---
xen/Makefile | 18 ++++++++++--------
1 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/xen/Makefile b/xen/Makefile
index 2abfa58..597972d 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -34,12 +34,13 @@ _build: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
.PHONY: _install
_install: D=$(DESTDIR)
_install: T=$(notdir $(TARGET))
+_install: Z=$(CONFIG_XEN_INSTALL_SUFFIX)
_install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
[ -d $(D)/boot ] || $(INSTALL_DIR) $(D)/boot
- $(INSTALL_DATA) $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX) $(D)/boot/$(T)-$(XEN_FULLVERSION)
- ln -f -s $(T)-$(XEN_FULLVERSION) $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)
- ln -f -s $(T)-$(XEN_FULLVERSION) $(D)/boot/$(T)-$(XEN_VERSION)
- ln -f -s $(T)-$(XEN_FULLVERSION) $(D)/boot/$(T)
+ $(INSTALL_DATA) $(TARGET)$(Z) $(D)/boot/$(T)-$(XEN_FULLVERSION)$(Z)
+ ln -f -s $(T)-$(XEN_FULLVERSION)$(Z) $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)$(Z)
+ ln -f -s $(T)-$(XEN_FULLVERSION)$(Z) $(D)/boot/$(T)-$(XEN_VERSION)$(Z)
+ ln -f -s $(T)-$(XEN_FULLVERSION)$(Z) $(D)/boot/$(T)$(Z)
$(INSTALL_DATA) $(TARGET)-syms $(D)/boot/$(T)-syms-$(XEN_FULLVERSION)
if [ -r $(TARGET).efi -a -n '$(EFI_DIR)' ]; then \
[ -d $(D)$(EFI_DIR) ] || $(INSTALL_DIR) $(D)$(EFI_DIR); \
@@ -57,11 +58,12 @@ _install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
.PHONY: _uninstall
_uninstall: D=$(DESTDIR)
_uninstall: T=$(notdir $(TARGET))
+_uninstall: Z=$(CONFIG_XEN_INSTALL_SUFFIX)
_uninstall:
- rm -f $(D)/boot/$(T)-$(XEN_FULLVERSION)
- rm -f $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)
- rm -f $(D)/boot/$(T)-$(XEN_VERSION)
- rm -f $(D)/boot/$(T)
+ rm -f $(D)/boot/$(T)-$(XEN_FULLVERSION)$(Z)
+ rm -f $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION)$(Z)
+ rm -f $(D)/boot/$(T)-$(XEN_VERSION)$(Z)
+ rm -f $(D)/boot/$(T)$(Z)
rm -f $(D)/boot/$(T)-syms-$(XEN_FULLVERSION)
rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi
rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi
--
1.7.2.5
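The effect of the patch's $(Z) variable can be modelled in a few lines: every name installed under /boot carries the suffix again, so suffix-matching grub helpers keep working. This is an illustrative Python sketch of the Makefile's naming scheme, not the Makefile itself; the version strings are invented:

```python
# Toy model of the _install rule after the patch: the installed binary and
# all three compatibility symlinks get the same suffix Z (".gz" on x86,
# "" on ARM, where the suffix stays omitted).
def boot_names(target="xen", fullversion="4.4-unstable",
               version="4", subversion="4", z=".gz"):
    installed = f"{target}-{fullversion}{z}"
    symlinks = [
        f"{target}-{version}.{subversion}{z}",  # xen-4.4.gz
        f"{target}-{version}{z}",               # xen-4.gz
        f"{target}{z}",                         # xen.gz
    ]
    return installed, symlinks

installed, links = boot_names()
print(installed)  # -> xen-4.4-unstable.gz
print(links)      # -> ['xen-4.4.gz', 'xen-4.gz', 'xen.gz']
```

Passing z="" reproduces the ARM behaviour, where the suffix is deliberately left off.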
* [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
@ 2013-07-21 3:26 xen.org
2013-07-21 5:30 ` Ian Campbell
0 siblings, 1 reply; 21+ messages in thread
From: xen.org @ 2013-07-21 3:26 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, keir, stefano.stabellini
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test xen-install
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen git://xenbits.xen.org/xen.git
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: 524b93def23b9f75fd7851063f5291886e63d1ed
Bug not present: 09a08ef52a21d171cc48b54a975f13e7704c912f
commit 524b93def23b9f75fd7851063f5291886e63d1ed
Author: Ian Campbell <ian.campbell@citrix.com>
Date: Thu Jul 18 09:41:41 2013 +0100
xen: x86: drop the ".gz" suffix when installing
As Jan says it is pretty meaningless under /boot anyway. However I am slightly
concerned about breaking bootloaders (or more specifically their help scripts
which automatically generate config files). By inspection at least grub 2's
update-grub script (as present in Debian Wheezy) seems to cope (it matches on
xen* not xen*.gz)
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
For bisection revision-tuple graph see:
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.xen-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure / basis pass:
18537 fail [host=bush-cricket] / 18485 [host=gall-mite] 18484 [host=field-cricket] 18466 [host=itch-mite] 18454 [host=earwig] 18431 ok.
Failure / basis pass flights: 18537 / 18431
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen git://xenbits.xen.org/xen.git
Latest a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 2a6327bf2bfaf5de5e07aed583d2c337c9d368c0
Basis pass a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 5d0ca62156d734a757656b9bcb6bf17ee76d37b4
Generating revisions with ./adhoc-revtuple-generator git://xenbits.xen.org/linux-pvops.git#a938a246d34912423c560f475ccf1ce0c71d9d00-a938a246d34912423c560f475ccf1ce0c71d9d00 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/staging/qemu-xen-unstable.git#13c144d96e825f145e5b37f97e5f6210c2c645e9-13c144d96e825f145e5b37f97e5f6210c2c645e9 git://xenbits.xen.org/staging/qemu-upstream-unstable.git#7483e7f15139603380c45ebcd8cc2a57dda5583c-7483e7f15139603380c45ebcd8cc2a57dda5583c git://xenbits.xen.org/xen.git#5d0ca62156d734a757656b9bcb6bf17ee76d37b4-2a6327bf2bfaf5de5e07aed583d2c337c9d368c0
using cache /export/home/osstest/repos/git-cache...
using cache /export/home/osstest/repos/git-cache...
locked cache /export/home/osstest/repos/git-cache...
processing ./cacheing-git clone --bare git://xenbits.xen.org/xen.git /export/home/osstest/repos/xen...
Initialized empty Git repository in /export/home/osstest/repos/xen/
updating cache /export/home/osstest/repos/git-cache xen...
using cache /export/home/osstest/repos/git-cache...
using cache /export/home/osstest/repos/git-cache...
locked cache /export/home/osstest/repos/git-cache...
processing ./cacheing-git clone --bare git://xenbits.xen.org/xen.git /export/home/osstest/repos/xen...
Initialized empty Git repository in /export/home/osstest/repos/xen/
updating cache /export/home/osstest/repos/git-cache xen...
Loaded 1001 nodes in revision graph
Searching for test results:
18431 pass a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 5d0ca62156d734a757656b9bcb6bf17ee76d37b4
18433 []
18430 pass a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 5d0ca62156d734a757656b9bcb6bf17ee76d37b4
18466 [host=itch-mite]
18438 []
18454 [host=earwig]
18546 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 524b93def23b9f75fd7851063f5291886e63d1ed
18528 []
18479 []
18480 []
18483 []
18496 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 2a6327bf2bfaf5de5e07aed583d2c337c9d368c0
18484 [host=field-cricket]
18485 [host=gall-mite]
18536 pass a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 09a08ef52a21d171cc48b54a975f13e7704c912f
18491 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 2a6327bf2bfaf5de5e07aed583d2c337c9d368c0
18519 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 2a6327bf2bfaf5de5e07aed583d2c337c9d368c0
18529 pass a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 5d0ca62156d734a757656b9bcb6bf17ee76d37b4
18530 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 2a6327bf2bfaf5de5e07aed583d2c337c9d368c0
18531 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 4816f9a7d47f985dfa796dc632771201b10858e8
18532 pass a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 09a08ef52a21d171cc48b54a975f13e7704c912f
18533 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 2f044a6a6e4cb0ea24c856c1615e3fb878af2cfb
18537 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 2a6327bf2bfaf5de5e07aed583d2c337c9d368c0
18534 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c c57c50c1de759583d5de629fec205254280da4f0
18535 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 524b93def23b9f75fd7851063f5291886e63d1ed
18544 fail a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 524b93def23b9f75fd7851063f5291886e63d1ed
18545 pass a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 09a08ef52a21d171cc48b54a975f13e7704c912f
Searching for interesting versions
Result found: flight 18430 (pass), for basis pass
Result found: flight 18491 (fail), for basis failure
Repro found: flight 18529 (pass), for basis pass
Repro found: flight 18530 (fail), for basis failure
0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 c530a75c1e6a472b0eb9558310b518f0dfcd8860 13c144d96e825f145e5b37f97e5f6210c2c645e9 7483e7f15139603380c45ebcd8cc2a57dda5583c 09a08ef52a21d171cc48b54a975f13e7704c912f
No revisions left to test, checking graph state.
Result found: flight 18532 (pass), for last pass
Result found: flight 18535 (fail), for first failure
Repro found: flight 18536 (pass), for last pass
Repro found: flight 18544 (fail), for first failure
Repro found: flight 18545 (pass), for last pass
Repro found: flight 18546 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: 524b93def23b9f75fd7851063f5291886e63d1ed
Bug not present: 09a08ef52a21d171cc48b54a975f13e7704c912f
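The "last pass / first failure" bookkeeping the bisector reports above can be sketched in Python (illustrative only — the real osstest harness is Perl and walks a revision-tuple graph; the flight numbers in the test are hypothetical):

```python
# Illustrative sketch of the bisector's endpoint search: given test
# results in revision order, find the last passing flight before the
# first failing one. Not the actual osstest implementation.

def find_transition(results):
    """results: list of (flight, outcome) pairs in revision order,
    where outcome is 'pass' or 'fail'.
    Returns (last_pass_flight, first_fail_flight)."""
    last_pass = first_fail = None
    for flight, outcome in results:
        if outcome == "pass" and first_fail is None:
            last_pass = flight
        elif outcome == "fail" and first_fail is None:
            first_fail = flight
    return last_pass, first_fail
```

Once the transition is pinned down, the harness re-runs both endpoints ("Repro found" above) to confirm the result is stable before blaming a changeset.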
using cache /export/home/osstest/repos/git-cache...
using cache /export/home/osstest/repos/git-cache...
locked cache /export/home/osstest/repos/git-cache...
processing ./cacheing-git clone --bare git://xenbits.xen.org/xen.git /export/home/osstest/repos/xen...
Initialized empty Git repository in /export/home/osstest/repos/xen/
updating cache /export/home/osstest/repos/git-cache xen...
commit 524b93def23b9f75fd7851063f5291886e63d1ed
Author: Ian Campbell <ian.campbell@citrix.com>
Date: Thu Jul 18 09:41:41 2013 +0100
xen: x86: drop the ".gz" suffix when installing
As Jan says it is pretty meaningless under /boot anyway. However I am slightly
concerned about breaking bootloaders (or more specifically their help scripts
which automatically generate config files). By inspection at least grub 2's
update-grub script (as present in Debian Wheezy) seems to cope (it matches on
xen*, not xen*.gz).
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
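The update-grub matching behaviour the commit message relies on can be illustrated with shell-style globs (a sketch only — the real script uses shell case patterns, and the file names below are made-up examples):

```python
# Mirror of the commit message's point: a glob of xen* finds the
# hypervisor image whether or not it keeps its ".gz" suffix, while
# xen*.gz only matches the compressed name. Example file names are
# hypothetical.
from fnmatch import fnmatch

def grub_would_find(filename, pattern):
    return fnmatch(filename, pattern)
```

Because update-grub matches on the broader pattern, dropping the ".gz" suffix at install time does not hide the image from its config generation.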
Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.xen-install.{dot,ps,png,html}.
----------------------------------------
18546: tolerable ALL FAIL
flight 18546 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/18546/
Failures :-/ but no regressions.
Tests which did not succeed,
including tests which could not be run:
test-amd64-i386-rhel6hvm-intel 4 xen-install fail baseline untested
jobs:
test-amd64-i386-rhel6hvm-intel fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
^ permalink raw reply [flat|nested] 21+ messages in thread
* [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
@ 2013-02-06 7:04 xen.org
0 siblings, 0 replies; 21+ messages in thread
From: xen.org @ 2013-02-06 7:04 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, keir, stefano.stabellini
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test redhat-install
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-unstable.hg
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://xenbits.xen.org/hg/staging/xen-unstable.hg
Bug introduced: 69398345c10e
Bug not present: d1bf3b21f783
changeset: 26503:69398345c10e
user: Jan Beulich <jbeulich@suse.com>
date: Mon Feb 04 12:03:38 2013 +0100
x86/nestedhvm: properly clean up after failure to set up all vCPU-s
This implies that the individual destroy functions will have to remain
capable of being called for a vCPU that the corresponding init function
was never run on.
While at it, also clean up some inefficiencies in the corresponding
parameter validation code.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
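The requirement the commit message states — destroy must remain safe for a vCPU whose init never ran — is the usual idempotent-teardown pattern. A minimal sketch (hypothetical structure and names, not the Xen nestedhvm code):

```python
# Idempotent teardown: destroy() must tolerate a vCPU that was never
# initialised, because a failure part-way through setting up all
# vCPUs triggers cleanup of every vCPU. Hypothetical structure.

class VcpuNestedState:
    def __init__(self):
        self.buf = None              # allocated by init(), maybe never

    def init(self):
        self.buf = bytearray(64)

    def destroy(self):
        # Safe even if init() was never run on this vCPU.
        if self.buf is not None:
            self.buf = None

def setup_all(vcpus):
    """Init every vCPU; on failure, clean up all of them."""
    try:
        for v in vcpus:
            v.init()
    except MemoryError:
        for v in vcpus:
            v.destroy()              # fine for uninitialised ones too
        raise
```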
For bisection revision-tuple graph see:
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure / basis pass:
15421 fail [host=earwig] / 15433 ok.
Failure / basis pass flights: 15421 / 15433
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-unstable.hg
Latest a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 ff77e84ddfdc
Basis pass a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 d1bf3b21f783
Generating revisions with ./adhoc-revtuple-generator git://xenbits.xen.org/linux-pvops.git#a938a246d34912423c560f475ccf1ce0c71d9d00-a938a246d34912423c560f475ccf1ce0c71d9d00 git://xenbits.xen.org/staging/qemu-xen-unstable.git#2a1354d655d816feaad7dbdb8364f40a208439c1-2a1354d655d816feaad7dbdb8364f40a208439c1 git://xenbits.xen.org/staging/qemu-upstream-unstable.git#e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01-e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 http://xenbits.xen.org/hg/staging/xen-unstable.hg#d1bf3b21f783-ff77e84ddfdc
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
Loaded 97 nodes in revision graph
Searching for test results:
15394 [host=bush-cricket]
15395 [host=bush-cricket]
15396 [host=bush-cricket]
15431 fail a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 69398345c10e
15401 [host=gall-mite]
15404 [host=bush-cricket]
15405 [host=itch-mite]
15433 pass a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 d1bf3b21f783
15406 pass a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 d1bf3b21f783
15411 [host=gall-mite]
15435 fail a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 69398345c10e
15412 [host=bush-cricket]
15413 [host=field-cricket]
15414 [host=bush-cricket]
15415 [host=bush-cricket]
15416 [host=gall-mite]
15417 [host=field-cricket]
15419 fail a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 ff77e84ddfdc
15422 pass a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 d1bf3b21f783
15424 fail a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 ff77e84ddfdc
15425 fail a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 90525fcb0982
15428 fail a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 69398345c10e
15421 fail a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 ff77e84ddfdc
15429 pass a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 d1bf3b21f783
Searching for interesting versions
Result found: flight 15406 (pass), for basis pass
Result found: flight 15419 (fail), for basis failure
Repro found: flight 15422 (pass), for basis pass
Repro found: flight 15424 (fail), for basis failure
0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 2a1354d655d816feaad7dbdb8364f40a208439c1 e6e112f5f1b8a9dde8dd037d6a48f621d8a6ca01 d1bf3b21f783
No revisions left to test, checking graph state.
Result found: flight 15406 (pass), for last pass
Result found: flight 15428 (fail), for first failure
Repro found: flight 15429 (pass), for last pass
Repro found: flight 15431 (fail), for first failure
Repro found: flight 15433 (pass), for last pass
Repro found: flight 15435 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://xenbits.xen.org/hg/staging/xen-unstable.hg
Bug introduced: 69398345c10e
Bug not present: d1bf3b21f783
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
changeset: 26503:69398345c10e
user: Jan Beulich <jbeulich@suse.com>
date: Mon Feb 04 12:03:38 2013 +0100
x86/nestedhvm: properly clean up after failure to set up all vCPU-s
This implies that the individual destroy functions will have to remain
capable of being called for a vCPU that the corresponding init function
was never run on.
While at it, also clean up some inefficiencies in the corresponding
parameter validation code.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.{dot,ps,png,html}.
----------------------------------------
15435: tolerable ALL FAIL
flight 15435 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/15435/
Failures :-/ but no regressions.
Tests which did not succeed,
including tests which could not be run:
test-amd64-i386-rhel6hvm-intel 7 redhat-install fail baseline untested
jobs:
test-amd64-i386-rhel6hvm-intel fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

* [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
@ 2012-02-25 16:48 xen.org
0 siblings, 0 replies; 21+ messages in thread
From: xen.org @ 2012-02-25 16:48 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, keir, stefano.stabellini
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test redhat-install
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
Bug introduced: a59c1dcfe968
Bug not present: f9789db96c39
changeset: 24875:a59c1dcfe968
user: Justin T. Gibbs <justing@spectralogic.com>
date: Thu Feb 23 10:03:07 2012 +0000
blkif.h: Define and document the request number/size/segments extension
Note: As of __XEN_INTERFACE_VERSION__ 0x00040201 the definition of
BLKIF_MAX_SEGMENTS_PER_REQUEST has changed. Drivers must be
updated to, at minimum, use BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK,
before being recompiled with a __XEN_INTERFACE_VERSION greater
than or equal to this value.
This extension first appeared in the FreeBSD Operating System.
Signed-off-by: Justin T. Gibbs <justing@spectralogic.com>
Committed-by: Keir Fraser <keir@xen.org>
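The versioning rule in the commit message — at __XEN_INTERFACE_VERSION__ 0x00040201 the meaning of BLKIF_MAX_SEGMENTS_PER_REQUEST changes, and drivers must switch to BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK — amounts to a version gate. A sketch (only the macro names and the 0x00040201 threshold come from the commit message; the selection logic is illustrative):

```python
# Sketch of an interface-version gate: which segment-count macro a
# driver should use depends on the __XEN_INTERFACE_VERSION__ it is
# built against. Names and threshold from the commit message;
# everything else is illustrative.

NEW_ABI_VERSION = 0x00040201

def segments_macro(interface_version):
    if interface_version >= NEW_ABI_VERSION:
        # Post-extension builds must use the per-header-block limit.
        return "BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK"
    return "BLKIF_MAX_SEGMENTS_PER_REQUEST"
```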
For bisection revision-tuple graph see:
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure / basis pass:
12062 fail [host=bush-cricket] / 12031 ok.
Failure / basis pass flights: 12062 / 12031
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
Latest 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc 71159fb049f2
Basis pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc a4d93d0e0df2
Generating revisions with ./adhoc-revtuple-generator git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git#1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad-1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad git://xenbits.xen.org/staging/qemu-xen-unstable.git#128de2549c5f24e4a437b86bd2e46f023976d50a-128de2549c5f24e4a437b86bd2e46f023976d50a git://xenbits.xen.org/staging/qemu-upstream-unstable.git#86a8d63bc11431509506b95c1481e1a023302cbc-86a8d63bc11431509506b95c1481e1a023302cbc http://xenbits.xen.org/staging/xen-unstable.hg#a4d93d0e0df2-71159fb049f2
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
Loaded 62 nodes in revision graph
Searching for test results:
12031 pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc a4d93d0e0df2
12024 pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc a4d93d0e0df2
12043 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc 0c3d19f40ab1
12035 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc adcd6ab160fa
12053 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc 71159fb049f2
12061 pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc a4d93d0e0df2
12063 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc 71159fb049f2
12064 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc 7cf234b198a3
12066 pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc 4e1460cd2227
12067 pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc f9789db96c39
12068 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc a59c1dcfe968
12069 pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc f9789db96c39
12070 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc a59c1dcfe968
12071 pass 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc f9789db96c39
12062 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc 71159fb049f2
12072 fail 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc a59c1dcfe968
Searching for interesting versions
Result found: flight 12024 (pass), for basis pass
Result found: flight 12053 (fail), for basis failure
Repro found: flight 12061 (pass), for basis pass
Repro found: flight 12062 (fail), for basis failure
0 revisions at 1aaf53ee291d9e71d6ec05c0ebdb2854fea175ad 128de2549c5f24e4a437b86bd2e46f023976d50a 86a8d63bc11431509506b95c1481e1a023302cbc f9789db96c39
No revisions left to test, checking graph state.
Result found: flight 12067 (pass), for last pass
Result found: flight 12068 (fail), for first failure
Repro found: flight 12069 (pass), for last pass
Repro found: flight 12070 (fail), for first failure
Repro found: flight 12071 (pass), for last pass
Repro found: flight 12072 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
Bug introduced: a59c1dcfe968
Bug not present: f9789db96c39
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
changeset: 24875:a59c1dcfe968
user: Justin T. Gibbs <justing@spectralogic.com>
date: Thu Feb 23 10:03:07 2012 +0000
blkif.h: Define and document the request number/size/segments extension
Note: As of __XEN_INTERFACE_VERSION__ 0x00040201 the definition of
BLKIF_MAX_SEGMENTS_PER_REQUEST has changed. Drivers must be
updated to, at minimum, use BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK,
before being recompiled with a __XEN_INTERFACE_VERSION greater
than or equal to this value.
This extension first appeared in the FreeBSD Operating System.
Signed-off-by: Justin T. Gibbs <justing@spectralogic.com>
Committed-by: Keir Fraser <keir@xen.org>
Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.{dot,ps,png,html}.
----------------------------------------
12072: ALL FAIL
flight 12072 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/12072/
jobs:
test-amd64-i386-rhel6hvm-intel fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-11-21 21:32 ` Keir Fraser
@ 2011-11-21 21:51 ` Jean Guyader
0 siblings, 0 replies; 21+ messages in thread
From: Jean Guyader @ 2011-11-21 21:51 UTC (permalink / raw)
To: Keir Fraser; +Cc: xen-devel, Ian Jackson, Jean Guyader, Jean Guyader
On 21/11 09:32, Keir Fraser wrote:
> On 21/11/2011 19:43, "Jean Guyader" <jean.guyader@gmail.com> wrote:
>
> > On 21 November 2011 18:47, Keir Fraser <keir.xen@gmail.com> wrote:
> >> On 21/11/2011 11:55, "Keir Fraser" <keir.xen@gmail.com> wrote:
> >>
> >>> On 21/11/2011 11:37, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
> >>>
> >>>> xen.org writes ("[xen-unstable bisection] complete
> >>>> test-amd64-i386-rhel6hvm-intel"):
> >>>>> branch xen-unstable
> >>>>> xen branch xen-unstable
> >>>>> job test-amd64-i386-rhel6hvm-intel
> >>>>> test redhat-install
> >>>>>
> >>>>> Tree: linux git://github.com/jsgf/linux-xen.git
> >>>>> Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
> >>>>> Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
> >>>>>
> >>>>> *** Found and reproduced problem changeset ***
> >>>>>
> >>>>> ? Bug is in tree: ?xen http://xenbits.xen.org/staging/xen-unstable.hg
> >>>>> ? Bug introduced: ?7a9a1261a6b0
> >>>>> ? Bug not present: 9a1a71f7bef2
> >>>>
> >>>> This seems to have completely broken HVM ...
> >>>
> >>> I'll revert if there's no fix forthcoming.
> >>
> >> I hear silence so I will revert the series tomorrow morning.
> >>
> >
> > Ok. I didn't manage to replicate the issue yet.
>
> Actually, it wasn't too hard to work out. This bisection is misleading
> though, as it's zeroed in on the RCU locking bug, which is already fixed.
> The bug is actually in a later changeset which modifies hvmloader.
>
> Looking at the hvmloader/pci.c changes, the unconditional assignment to
> low_mem_pgend after the loop is obviously wrong. As is removing the handling
> for high_mem_pgend==0. I checked in a reworked version that is closer to the
> original code.
>
> Hopefully our tests will work again now.
>
Thanks for looking into it. I would like this series to get applied in 4.1
but I think the code has changed a bit. I'll send a version for 4.1
similar to the one we have in xen-unstable.
Jean
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-11-21 19:43 ` Jean Guyader
@ 2011-11-21 21:32 ` Keir Fraser
2011-11-21 21:51 ` Jean Guyader
0 siblings, 1 reply; 21+ messages in thread
From: Keir Fraser @ 2011-11-21 21:32 UTC (permalink / raw)
To: Jean Guyader; +Cc: xen-devel, Ian Jackson, Jean Guyader
On 21/11/2011 19:43, "Jean Guyader" <jean.guyader@gmail.com> wrote:
> On 21 November 2011 18:47, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 21/11/2011 11:55, "Keir Fraser" <keir.xen@gmail.com> wrote:
>>
>>> On 21/11/2011 11:37, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
>>>
>>>> xen.org writes ("[xen-unstable bisection] complete
>>>> test-amd64-i386-rhel6hvm-intel"):
>>>>> branch xen-unstable
>>>>> xen branch xen-unstable
>>>>> job test-amd64-i386-rhel6hvm-intel
>>>>> test redhat-install
>>>>>
>>>>> Tree: linux git://github.com/jsgf/linux-xen.git
>>>>> Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
>>>>> Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>>>>>
>>>>> *** Found and reproduced problem changeset ***
>>>>>
>>>>> Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>>>>> Bug introduced: 7a9a1261a6b0
>>>>> Bug not present: 9a1a71f7bef2
>>>>
>>>> This seems to have completely broken HVM ...
>>>
>>> I'll revert if there's no fix forthcoming.
>>
>> I hear silence so I will revert the series tomorrow morning.
>>
>
> Ok. I didn't manage to replicate the issue yet.
Actually, it wasn't too hard to work out. This bisection is misleading
though, as it's zeroed in on the RCU locking bug, which is already fixed.
The bug is actually in a later changeset which modifies hvmloader.
Looking at the hvmloader/pci.c changes, the unconditional assignment to
low_mem_pgend after the loop is obviously wrong. As is removing the handling
for high_mem_pgend==0. I checked in a reworked version that is closer to the
original code.
Hopefully our tests will work again now.
-- Keir
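Keir's description — the assignment to low_mem_pgend after the loop must not be unconditional, and the high_mem_pgend==0 case needs its own handling — can be sketched roughly as follows (a hypothetical reconstruction in Python, not the actual hvmloader/pci.c code; the base value is a placeholder):

```python
# Rough sketch of the fix described above for hvmloader's PCI BAR
# allocation: only move low_mem_pgend when the loop actually
# relocated RAM, and keep the special case that starts the
# high-memory region when high_mem_pgend == 0.
# Hypothetical logic, NOT the real hvmloader/pci.c.

HIGH_MEM_BASE_PAGES = 1 << 20    # placeholder: 4GB boundary in pages

def relocate_ram(low_mem_pgend, high_mem_pgend, pages_to_move):
    if pages_to_move > 0:
        if high_mem_pgend == 0:
            # First relocation: the high-memory region starts here.
            high_mem_pgend = HIGH_MEM_BASE_PAGES
        low_mem_pgend -= pages_to_move    # conditional, not blanket
        high_mem_pgend += pages_to_move
    return low_mem_pgend, high_mem_pgend
```

The point of the sketch is the structure: with no relocation, both end markers are left untouched, which the unconditional assignment in the broken version violated.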
> Jean
>
>>> -- Keir
>>>
>>>>> changeset: 24163:7a9a1261a6b0
>>>>> user: Jean Guyader <jean.guyader@eu.citrix.com>
>>>>> date: Fri Nov 18 13:41:33 2011 +0000
>>>>>
>>>>> add_to_physmap: Move the code for XENMEM_add_to_physmap
>>>>>
>>>>> Move the code for the XENMEM_add_to_physmap case into its own
>>>>> function (xenmem_add_to_physmap).
>>>>>
>>>>> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
>>>>> Committed-by: Keir Fraser <keir@xen.org>
>>>>
>>>> Ian.
>>>
>>>
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xensource.com
>> http://lists.xensource.com/xen-devel
>>
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-11-21 18:47 ` Keir Fraser
@ 2011-11-21 19:43 ` Jean Guyader
2011-11-21 21:32 ` Keir Fraser
0 siblings, 1 reply; 21+ messages in thread
From: Jean Guyader @ 2011-11-21 19:43 UTC (permalink / raw)
To: Keir Fraser; +Cc: xen-devel, Ian Jackson, Jean Guyader
On 21 November 2011 18:47, Keir Fraser <keir.xen@gmail.com> wrote:
> On 21/11/2011 11:55, "Keir Fraser" <keir.xen@gmail.com> wrote:
>
>> On 21/11/2011 11:37, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
>>
>>> xen.org writes ("[xen-unstable bisection] complete
>>> test-amd64-i386-rhel6hvm-intel"):
>>>> branch xen-unstable
>>>> xen branch xen-unstable
>>>> job test-amd64-i386-rhel6hvm-intel
>>>> test redhat-install
>>>>
>>>> Tree: linux git://github.com/jsgf/linux-xen.git
>>>> Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
>>>> Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>>>>
>>>> *** Found and reproduced problem changeset ***
>>>>
>>>> Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>>>> Bug introduced: 7a9a1261a6b0
>>>> Bug not present: 9a1a71f7bef2
>>>
>>> This seems to have completely broken HVM ...
>>
>> I'll revert if there's no fix forthcoming.
>
> I hear silence so I will revert the series tomorrow morning.
>
Ok. I didn't manage to replicate the issue yet.
Jean
>> -- Keir
>>
>>>> changeset: 24163:7a9a1261a6b0
>>>> user: Jean Guyader <jean.guyader@eu.citrix.com>
>>>> date: Fri Nov 18 13:41:33 2011 +0000
>>>>
>>>> add_to_physmap: Move the code for XENMEM_add_to_physmap
>>>>
>>>> Move the code for the XENMEM_add_to_physmap case into its own
>>>> function (xenmem_add_to_physmap).
>>>>
>>>> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
>>>> Committed-by: Keir Fraser <keir@xen.org>
>>>
>>> Ian.
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-11-21 11:55 ` Keir Fraser
@ 2011-11-21 18:47 ` Keir Fraser
2011-11-21 19:43 ` Jean Guyader
0 siblings, 1 reply; 21+ messages in thread
From: Keir Fraser @ 2011-11-21 18:47 UTC (permalink / raw)
To: Ian Jackson, xen-devel, Jean Guyader
On 21/11/2011 11:55, "Keir Fraser" <keir.xen@gmail.com> wrote:
> On 21/11/2011 11:37, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
>
>> xen.org writes ("[xen-unstable bisection] complete
>> test-amd64-i386-rhel6hvm-intel"):
>>> branch xen-unstable
>>> xen branch xen-unstable
>>> job test-amd64-i386-rhel6hvm-intel
>>> test redhat-install
>>>
>>> Tree: linux git://github.com/jsgf/linux-xen.git
>>> Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
>>> Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>>>
>>> *** Found and reproduced problem changeset ***
>>>
>>> Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>>> Bug introduced: 7a9a1261a6b0
>>> Bug not present: 9a1a71f7bef2
>>
>> This seems to have completely broken HVM ...
>
> I'll revert if there's no fix forthcoming.
I hear silence so I will revert the series tomorrow morning.
> -- Keir
>
>>> changeset: 24163:7a9a1261a6b0
>>> user: Jean Guyader <jean.guyader@eu.citrix.com>
>>> date: Fri Nov 18 13:41:33 2011 +0000
>>>
>>> add_to_physmap: Move the code for XENMEM_add_to_physmap
>>>
>>> Move the code for the XENMEM_add_to_physmap case into its own
>>> function (xenmem_add_to_physmap).
>>>
>>> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
>>> Committed-by: Keir Fraser <keir@xen.org>
>>
>> Ian.
>
>
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-11-21 11:37 ` Ian Jackson
@ 2011-11-21 11:55 ` Keir Fraser
2011-11-21 18:47 ` Keir Fraser
0 siblings, 1 reply; 21+ messages in thread
From: Keir Fraser @ 2011-11-21 11:55 UTC (permalink / raw)
To: Ian Jackson, xen-devel, Jean Guyader
On 21/11/2011 11:37, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
> xen.org writes ("[xen-unstable bisection] complete
> test-amd64-i386-rhel6hvm-intel"):
>> branch xen-unstable
>> xen branch xen-unstable
>> job test-amd64-i386-rhel6hvm-intel
>> test redhat-install
>>
>> Tree: linux git://github.com/jsgf/linux-xen.git
>> Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
>> Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>>
>> *** Found and reproduced problem changeset ***
>>
>> Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>> Bug introduced: 7a9a1261a6b0
>> Bug not present: 9a1a71f7bef2
>
> This seems to have completely broken HVM ...
I'll revert if there's no fix forthcoming.
-- Keir
>> changeset: 24163:7a9a1261a6b0
>> user: Jean Guyader <jean.guyader@eu.citrix.com>
>> date: Fri Nov 18 13:41:33 2011 +0000
>>
>> add_to_physmap: Move the code for XENMEM_add_to_physmap
>>
>> Move the code for the XENMEM_add_to_physmap case into its own
>> function (xenmem_add_to_physmap).
>>
>> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
>> Committed-by: Keir Fraser <keir@xen.org>
>
> Ian.
* Re: [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
2011-11-21 3:40 xen.org
@ 2011-11-21 11:37 ` Ian Jackson
2011-11-21 11:55 ` Keir Fraser
0 siblings, 1 reply; 21+ messages in thread
From: Ian Jackson @ 2011-11-21 11:37 UTC (permalink / raw)
To: xen-devel, Keir (Xen.org), Jean Guyader
xen.org writes ("[xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel"):
> branch xen-unstable
> xen branch xen-unstable
> job test-amd64-i386-rhel6hvm-intel
> test redhat-install
>
> Tree: linux git://github.com/jsgf/linux-xen.git
> Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
> Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
>
> *** Found and reproduced problem changeset ***
>
> Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
> Bug introduced: 7a9a1261a6b0
> Bug not present: 9a1a71f7bef2
This seems to have completely broken HVM ...
> changeset: 24163:7a9a1261a6b0
> user: Jean Guyader <jean.guyader@eu.citrix.com>
> date: Fri Nov 18 13:41:33 2011 +0000
>
> add_to_physmap: Move the code for XENMEM_add_to_physmap
>
> Move the code for the XENMEM_add_to_physmap case into its own
> function (xenmem_add_to_physmap).
>
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> Committed-by: Keir Fraser <keir@xen.org>
Ian.
* [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
@ 2011-11-21 3:40 xen.org
2011-11-21 11:37 ` Ian Jackson
0 siblings, 1 reply; 21+ messages in thread
From: xen.org @ 2011-11-21 3:40 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, keir, stefano.stabellini
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test redhat-install
Tree: linux git://github.com/jsgf/linux-xen.git
Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
Bug introduced: 7a9a1261a6b0
Bug not present: 9a1a71f7bef2
changeset: 24163:7a9a1261a6b0
user: Jean Guyader <jean.guyader@eu.citrix.com>
date: Fri Nov 18 13:41:33 2011 +0000
add_to_physmap: Move the code for XENMEM_add_to_physmap
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Committed-by: Keir Fraser <keir@xen.org>
For bisection revision-tuple graph see:
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure / basis pass:
9915 fail [host=field-cricket] / 9855 [host=itch-mite] 9832 [host=gall-mite] 9817 ok.
Failure / basis pass flights: 9915 / 9817
Tree: linux git://github.com/jsgf/linux-xen.git
Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
Latest 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 335e8273a3f3
Basis pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 dbdc840f8f62
Generating revisions with ./adhoc-revtuple-generator git://github.com/jsgf/linux-xen.git#6bec8b4a4c14095d0b7ce424db9d583c3decae6c-6bec8b4a4c14095d0b7ce424db9d583c3decae6c git://hg.uk.xensource.com/HG/qemu-xen-unstable.git#52834188eedfbbca5636fd869d4c86b3b3044439-52834188eedfbbca5636fd869d4c86b3b3044439 http://xenbits.xen.org/staging/xen-unstable.hg#dbdc840f8f62-335e8273a3f3
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
Loaded 107 nodes in revision graph
Searching for test results:
9817 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 dbdc840f8f62
9866 [host=gall-mite]
9878 [host=bush-cricket]
9864 []
9867 [host=gall-mite]
9832 [host=gall-mite]
9883 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 7a9a1261a6b0
9919 [host=gall-mite]
9868 [host=itch-mite]
9895 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 335e8273a3f3
9855 [host=itch-mite]
9885 [host=bush-cricket]
9873 [host=itch-mite]
9857 [host=bush-cricket]
9874 [host=itch-mite]
9859 [host=bush-cricket]
9911 [host=itch-mite]
9887 [host=bush-cricket]
9861 [host=bush-cricket]
9860 [host=gall-mite]
9897 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 aeb628c5af3f
9888 [host=bush-cricket]
9862 [host=bush-cricket]
9875 [host=itch-mite]
9865 [host=gall-mite]
9901 [host=itch-mite]
9872 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 fe3e9d0c123c
9889 [host=bush-cricket]
9877 [host=itch-mite]
9898 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 9a1a71f7bef2
9879 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 dbdc840f8f62
9890 [host=bush-cricket]
9881 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 fe3e9d0c123c
9899 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 7a9a1261a6b0
9882 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 d7e6bfa114d0
9891 [host=bush-cricket]
9904 [host=bush-cricket]
9884 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 335e8273a3f3
9893 [host=bush-cricket]
9892 [host=bush-cricket]
9912 [host=itch-mite]
9894 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 dbdc840f8f62
9907 [host=itch-mite]
9900 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 9a1a71f7bef2
9902 [host=bush-cricket]
9927 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 7a9a1261a6b0
9903 [host=bush-cricket]
9913 [host=itch-mite]
9922 [host=gall-mite]
9908 [host=itch-mite]
9916 [host=gall-mite]
9910 [host=itch-mite]
9906 [host=gall-mite]
9920 [host=gall-mite]
9914 [host=itch-mite]
9918 [host=gall-mite]
9923 [host=gall-mite]
9925 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 7a9a1261a6b0
9921 [host=gall-mite]
9915 fail 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 335e8273a3f3
9926 pass 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 9a1a71f7bef2
Searching for interesting versions
Result found: flight 9817 (pass), for basis pass
Result found: flight 9884 (fail), for basis failure
Repro found: flight 9894 (pass), for basis pass
Repro found: flight 9895 (fail), for basis failure
0 revisions at 6bec8b4a4c14095d0b7ce424db9d583c3decae6c 52834188eedfbbca5636fd869d4c86b3b3044439 9a1a71f7bef2
No revisions left to test, checking graph state.
Result found: flight 9898 (pass), for last pass
Result found: flight 9899 (fail), for first failure
Repro found: flight 9900 (pass), for last pass
Repro found: flight 9925 (fail), for first failure
Repro found: flight 9926 (pass), for last pass
Repro found: flight 9927 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
Bug introduced: 7a9a1261a6b0
Bug not present: 9a1a71f7bef2
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
changeset: 24163:7a9a1261a6b0
user: Jean Guyader <jean.guyader@eu.citrix.com>
date: Fri Nov 18 13:41:33 2011 +0000
add_to_physmap: Move the code for XENMEM_add_to_physmap
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Committed-by: Keir Fraser <keir@xen.org>
Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.{dot,ps,png,html}.
----------------------------------------
9927: ALL FAIL
flight 9927 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/9927/
jobs:
test-amd64-i386-rhel6hvm-intel fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
* [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
@ 2011-07-07 14:41 xen.org
0 siblings, 0 replies; 21+ messages in thread
From: xen.org @ 2011-07-07 14:41 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, keir, stefano.stabellini
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test redhat-install
Tree: linux git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
Tree: xen http://hg.uk.xensource.com/xen-unstable.hg
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://hg.uk.xensource.com/xen-unstable.hg
Bug introduced: 0a70aeba14e2
Bug not present: d5dfaa568441
changeset: 23657:0a70aeba14e2
user: Ian Jackson <ian.jackson@eu.citrix.com>
date: Wed Jul 06 18:26:49 2011 +0100
libxl: sane disk backend selection and validation
Introduce a new function libxl__device_disk_set_backend which
does some sanity checks and determines which backend ought to be used.
If the caller specifies LIBXL_DISK_BACKEND_UNKNOWN (which has the
value 0), it tries PHY, TAP and QDISK in that order. Otherwise it
tries only the specified value.
libxl__device_disk_set_backend (and its helper function
disk_try_backend) inherit the role (and small amounts of the code)
from validate_virtual_disk. This is called during do_domain_create
and also from libxl_disk_device_add (for the benefit of hotplug
devices).
It also now takes over the role of the scattered fragments of backend
selection found in libxl_device_disk_add,
libxl_device_disk_local_attach and libxl__need_xenpv_qemu. These
latter functions now simply do the job for the backend they find has
already been specified and checked.
The restrictions on the capabilities of each backend, as expressed in
disk_try_backend (and to an extent in libxl_device_disk_local_attach)
are intended to be identical to the previous arrangements.
In 23618:3173b68c8a94 combined with 23622:160f7f39841b,
23623:c7180c353eb2, "xl" effectively became much more likely to select
TAP as the backend. With this change to libxl the default backend
selected by libxl__device_disk_set_backend is intended once again
to be PHY where possible.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
For bisection revision-tuple graph see:
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure / basis pass:
7980 fail [host=earwig] / 7922 ok.
Failure / basis pass flights: 7980 / 7922
Tree: linux git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: qemu git://hg.uk.xensource.com/HG/qemu-xen-unstable.git
Tree: xen http://hg.uk.xensource.com/xen-unstable.hg
Latest 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 0a70aeba14e2
Basis pass 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 7e4404a8f5f9
Generating revisions with ./adhoc-revtuple-generator git://git.eu.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git#6d94b752e1363757f8eb4558e6f721a3e703cfe2-6d94b752e1363757f8eb4558e6f721a3e703cfe2 git://hg.uk.xensource.com/HG/qemu-xen-unstable.git#cd776ee9408ff127f934a707c1a339ee600bc127-cd776ee9408ff127f934a707c1a339ee600bc127 http://hg.uk.xensource.com/xen-unstable.hg#7e4404a8f5f9-0a70aeba14e2
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
Loaded 28 nodes in revision graph
Searching for test results:
7995 fail 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 0a70aeba14e2
7889 [host=gall-mite]
7896 pass 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 7e4404a8f5f9
7904 [host=gall-mite]
7978 [host=itch-mite]
7914 [host=gall-mite]
7980 fail 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 0a70aeba14e2
7985 [host=itch-mite]
7922 pass 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 7e4404a8f5f9
7986 pass 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 7e4404a8f5f9
7987 fail 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 0a70aeba14e2
7989 pass 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 d5dfaa568441
7990 fail 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 0a70aeba14e2
7991 pass 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 d5dfaa568441
7992 fail 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 0a70aeba14e2
7993 pass 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 d5dfaa568441
Searching for interesting versions
Result found: flight 7896 (pass), for basis pass
Result found: flight 7980 (fail), for basis failure
Repro found: flight 7986 (pass), for basis pass
Repro found: flight 7987 (fail), for basis failure
0 revisions at 6d94b752e1363757f8eb4558e6f721a3e703cfe2 cd776ee9408ff127f934a707c1a339ee600bc127 d5dfaa568441
No revisions left to test, checking graph state.
Result found: flight 7989 (pass), for last pass
Result found: flight 7990 (fail), for first failure
Repro found: flight 7991 (pass), for last pass
Repro found: flight 7992 (fail), for first failure
Repro found: flight 7993 (pass), for last pass
Repro found: flight 7995 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen http://hg.uk.xensource.com/xen-unstable.hg
Bug introduced: 0a70aeba14e2
Bug not present: d5dfaa568441
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
changeset: 23657:0a70aeba14e2
user: Ian Jackson <ian.jackson@eu.citrix.com>
date: Wed Jul 06 18:26:49 2011 +0100
libxl: sane disk backend selection and validation
Introduce a new function libxl__device_disk_set_backend which
does some sanity checks and determines which backend ought to be used.
If the caller specifies LIBXL_DISK_BACKEND_UNKNOWN (which has the
value 0), it tries PHY, TAP and QDISK in that order. Otherwise it
tries only the specified value.
libxl__device_disk_set_backend (and its helper function
disk_try_backend) inherit the role (and small amounts of the code)
from validate_virtual_disk. This is called during do_domain_create
and also from libxl_disk_device_add (for the benefit of hotplug
devices).
It also now takes over the role of the scattered fragments of backend
selection found in libxl_device_disk_add,
libxl_device_disk_local_attach and libxl__need_xenpv_qemu. These
latter functions now simply do the job for the backend they find has
already been specified and checked.
The restrictions on the capabilities of each backend, as expressed in
disk_try_backend (and to an extent in libxl_device_disk_local_attach)
are intended to be identical to the previous arrangements.
In 23618:3173b68c8a94 combined with 23622:160f7f39841b,
23623:c7180c353eb2, "xl" effectively became much more likely to select
TAP as the backend. With this change to libxl the default backend
selected by libxl__device_disk_set_backend is intended once again
to be PHY where possible.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.redhat-install.{dot,ps,png,html}.
----------------------------------------
7995: ALL FAIL
flight 7995 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/7995/
jobs:
test-amd64-i386-rhel6hvm-intel fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
* [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel
@ 2010-12-22 2:12 xen.org
0 siblings, 0 replies; 21+ messages in thread
From: xen.org @ 2010-12-22 2:12 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, keir.fraser, stefano.stabellini
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-intel
test xen-boot
Tree: git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: git://mariner.uk.xensource.com/qemu-xen-unstable.git
Tree: http://hg.uk.xensource.com/xen-unstable.hg
*** Found and reproduced problem changeset ***
Bug is in tree: http://hg.uk.xensource.com/xen-unstable.hg
Bug introduced: 764e95f64b28
Bug not present: f0d26fdebf40
changeset: 22545:764e95f64b28
user: Keir Fraser <keir@xen.org>
date: Wed Dec 15 14:16:03 2010 +0000
EPT/VT-d page table sharing
Basic idea is to leverage 2MB and 1GB page size support in EPT by having
VT-d use the same page tables as EPT. When an EPT page table changes, flush
the VT-d IOTLB cache.
Signed-off-by: Weidong Han <weidong.han@intel.com>
Signed-off-by: Allen Kay <allen.m.kay@intel.com>
For bisection revision-tuple graph see:
http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure: 4222(fail) 4216(fail) 4202(fail) 4041(fail) 3956(fail) 3902(fail) 3863(fail) 3649(pass)
Using this failure: 3863 [host=earwig]
Searching for basis pass: 3649 [host=gall-mite] 3470.
Basis pass flight 3470.
Tree: git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: git://mariner.uk.xensource.com/qemu-xen-unstable.git
Tree: http://hg.uk.xensource.com/xen-unstable.hg
Latest 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 764e95f64b28
Basis pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 dd9d12dc85dfc5f873c8d57bd42f09b81219c250 e5c48e0cd03d
Generating revisions with ./adhoc-revtuple-generator git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git#862ef97190f6b54d35c76c93fb2b8fadd7ab7d68-862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 git://mariner.uk.xensource.com/qemu-xen-unstable.git#dd9d12dc85dfc5f873c8d57bd42f09b81219c250-bb9c9a127a676b53210f71082330c5e94f7b8171 http://hg.uk.xensource.com/xen-unstable.hg#e5c48e0cd03d-764e95f64b28
using cache /export/home/osstest/repos/git-cache...
using cache /export/home/osstest/repos/git-cache...
locked cache /export/home/osstest/repos/git-cache...
processing ./cacheing-git clone --bare git://mariner.uk.xensource.com/qemu-xen-unstable.git /export/home/osstest/repos/qemu-xen-unstable...
Initialized empty Git repository in /export/home/osstest/repos/qemu-xen-unstable/
updating cache /export/home/osstest/repos/git-cache qemu-xen-unstable...
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
using cache /export/home/osstest/repos/git-cache...
using cache /export/home/osstest/repos/git-cache...
locked cache /export/home/osstest/repos/git-cache...
processing ./cacheing-git clone --bare git://mariner.uk.xensource.com/qemu-xen-unstable.git /export/home/osstest/repos/qemu-xen-unstable...
Initialized empty Git repository in /export/home/osstest/repos/qemu-xen-unstable/
updating cache /export/home/osstest/repos/git-cache qemu-xen-unstable...
pulling from http://hg.uk.xensource.com/xen-unstable.hg
searching for changes
no changes found
Loaded 1129 nodes in revision graph
Searching for test results:
3863 fail 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 764e95f64b28
3956 fail irrelevant
2976 [host=gall-mite]
4264 fail 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 764e95f64b28
4267 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 f0d26fdebf40
4270 fail 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 764e95f64b28
4228 fail irrelevant
4230 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 6df91a11dcb0
4202 [host=itch-mite]
4231 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 c0662cb08260
3902 fail irrelevant
4233 [host=gall-mite]
2968 pass irrelevant
3579 [host=gall-mite]
4216 fail irrelevant
4217 [host=itch-mite]
2962 []
2988 [host=gall-mite]
4238 [host=gall-mite]
4239 fail 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 764e95f64b28
2994 pass irrelevant
4241 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 a3a29e67aa7e
2998 pass irrelevant
3000 pass irrelevant
3001 pass irrelevant
4220 [host=itch-mite]
3003 fail irrelevant
4221 [host=itch-mite]
3010 [host=itch-mite]
3649 [host=gall-mite]
3023 pass irrelevant
3035 pass irrelevant
4222 [host=gall-mite]
3056 [host=itch-mite]
4041 [host=itch-mite]
3251 pass irrelevant
4223 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 dd9d12dc85dfc5f873c8d57bd42f09b81219c250 e5c48e0cd03d
3470 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 dd9d12dc85dfc5f873c8d57bd42f09b81219c250 e5c48e0cd03d
4246 blocked 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 6ed80a93a5e0
4249 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 1b1174b7181f
4252 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 f0d26fdebf40
4256 fail 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 764e95f64b28
4259 pass 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 f0d26fdebf40
3554 [host=gall-mite]
Searching for interesting versions
Result found: flight 3470 (pass), for basis pass
Result found: flight 3863 (fail), for basis failure
Repro found: flight 4223 (pass), for basis pass
Repro found: flight 4239 (fail), for basis failure
0 revisions at 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68 bb9c9a127a676b53210f71082330c5e94f7b8171 f0d26fdebf40
No revisions left to test, checking graph state.
Result found: flight 4252 (pass), for last pass
Result found: flight 4256 (fail), for first failure
Repro found: flight 4259 (pass), for last pass
Repro found: flight 4264 (fail), for first failure
Repro found: flight 4267 (pass), for last pass
Repro found: flight 4270 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: http://hg.uk.xensource.com/xen-unstable.hg
Bug introduced: 764e95f64b28
Bug not present: f0d26fdebf40
changeset: 22545:764e95f64b28
user: Keir Fraser <keir@xen.org>
date: Wed Dec 15 14:16:03 2010 +0000
EPT/VT-d page table sharing
Basic idea is to leverage 2MB and 1GB page size support in EPT by having
VT-d use the same page tables as EPT. When an EPT page table changes, flush
the VT-d IOTLB cache.
Signed-off-by: Weidong Han <weidong.han@intel.com>
Signed-off-by: Allen Kay <allen.m.kay@intel.com>
Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.test-amd64-i386-rhel6hvm-intel.xen-boot.{dot,ps,png,html}.
----------------------------------------
4270: ALL FAIL
flight 4270 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/4270/
Tests which did not succeed:
test-amd64-i386-rhel6hvm-intel 5 xen-boot fail
version targeted for testing:
baseline version:
jobs:
test-amd64-i386-rhel6hvm-intel fail
-------------------------------------------------------------------------------
test-amd64-i386-rhel6hvm-intel:
1 xen-build-check(1) pass
2 hosts-allocate pass
3 host-install(3) pass
4 xen-install pass
5 xen-boot fail
6 capture-logs(6) pass
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
end of thread, other threads:[~2013-07-21 15:15 UTC | newest]
Thread overview: 21+ messages
-- links below jump to the message on this page --
2011-09-01 15:54 [xen-unstable bisection] complete test-amd64-i386-rhel6hvm-intel xen.org
2011-09-01 16:26 ` Ian Jackson
2011-09-01 17:22 ` Laszlo Ersek
2011-09-02 7:11 ` Ian Campbell
2011-09-01 17:48 ` Laszlo Ersek
2011-09-01 19:28 ` Andrew Jones
2011-09-02 11:08 ` Ian Jackson
-- strict thread matches above, loose matches on Subject: below --
2013-07-21 3:26 xen.org
2013-07-21 5:30 ` Ian Campbell
2013-07-21 15:15 ` Ian Campbell
2013-02-06 7:04 xen.org
2012-02-25 16:48 xen.org
2011-11-21 3:40 xen.org
2011-11-21 11:37 ` Ian Jackson
2011-11-21 11:55 ` Keir Fraser
2011-11-21 18:47 ` Keir Fraser
2011-11-21 19:43 ` Jean Guyader
2011-11-21 21:32 ` Keir Fraser
2011-11-21 21:51 ` Jean Guyader
2011-07-07 14:41 xen.org
2010-12-22 2:12 xen.org