* [xen-unstable test] 162845: regressions - FAIL
@ 2021-06-16 6:54 osstest service owner
From: osstest service owner @ 2021-06-16 6:54 UTC (permalink / raw)
To: xen-devel, osstest-admin
flight 162845 xen-unstable real [real]
flight 162853 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162845/
http://logs.test-lab.xenproject.org/osstest/logs/162853/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
Tests which are failing intermittently (not blocking):
test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162853-retest
Tests which did not succeed, but are not blocking:
test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail like 162422
test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 162533
test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162533
test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 162533
test-armhf-armhf-libvirt 16 saverestore-support-check fail like 162533
test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 162533
test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 162533
test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 162533
test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 162533
test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 162533
test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 162533
test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 162533
test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
test-amd64-i386-libvirt 15 migrate-support-check fail never pass
test-amd64-i386-libvirt-xsm 15 migrate-support-check fail never pass
test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
test-amd64-i386-xl-pvshim 14 guest-start fail never pass
test-arm64-arm64-xl 15 migrate-support-check fail never pass
test-arm64-arm64-xl 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-xsm 15 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 16 saverestore-support-check fail never pass
test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
test-arm64-arm64-xl-credit2 15 migrate-support-check fail never pass
test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-credit2 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail never pass
test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
test-armhf-armhf-xl 15 migrate-support-check fail never pass
test-armhf-armhf-xl 16 saverestore-support-check fail never pass
test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass
test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
test-arm64-arm64-xl-credit1 15 migrate-support-check fail never pass
test-arm64-arm64-xl-credit1 16 saverestore-support-check fail never pass
test-arm64-arm64-xl-seattle 15 migrate-support-check fail never pass
test-arm64-arm64-xl-seattle 16 saverestore-support-check fail never pass
test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass
test-armhf-armhf-xl-vhd 14 migrate-support-check fail never pass
test-armhf-armhf-xl-vhd 15 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
version targeted for testing:
xen 93c5f98296fc78de79d621418a1e62fd413e73d1
baseline version:
xen 5268b2dcf7e5342c8a51ceb4bed3e7740c69f5c1
Last test of basis 162533 2021-06-08 01:53:53 Z 8 days
Failing since 162556 2021-06-08 22:39:08 Z 7 days 11 attempts
Testing same since 162845 2021-06-15 19:37:46 Z 0 days 1 attempts
------------------------------------------------------------
People who touched revisions under test:
Andrew Cooper <andrew.cooper3@citrix.com>
Anthony PERARD <anthony.perard@citrix.com>
Bobby Eshleman <bobbyeshleman@gmail.com>
Christian Lindig <christian.lindig@citrix.com>
Connor Davis <connojdavis@gmail.com>
Dario Faggioli <dfaggioli@suse.com>
Edgar E. Iglesias <edgar.iglesias@xilinx.com>
George Dunlap <george.dunlap@citrix.com>
Ian Jackson <iwj@xenproject.org>
Jan Beulich <jbeulich@suse.com>
Juergen Gross <jgross@suse.com>
Julien Grall <jgrall@amazon.com>
Roger Pau Monné <roger.pau@citrix.com>
Stefano Stabellini <sstabellini@kernel.org>
Stefano Stabellini <stefano.stabellini@xilinx.com>
Tim Deegan <tim@xen.org>
Wei Liu <wl@xen.org>
jobs:
build-amd64-xsm pass
build-arm64-xsm pass
build-i386-xsm pass
build-amd64-xtf pass
build-amd64 pass
build-arm64 pass
build-armhf pass
build-i386 pass
build-amd64-libvirt pass
build-arm64-libvirt pass
build-armhf-libvirt pass
build-i386-libvirt pass
build-amd64-prev pass
build-i386-prev pass
build-amd64-pvops pass
build-arm64-pvops pass
build-armhf-pvops pass
build-i386-pvops pass
test-xtf-amd64-amd64-1 pass
test-xtf-amd64-amd64-2 pass
test-xtf-amd64-amd64-3 pass
test-xtf-amd64-amd64-4 pass
test-xtf-amd64-amd64-5 pass
test-amd64-amd64-xl pass
test-amd64-coresched-amd64-xl pass
test-arm64-arm64-xl pass
test-armhf-armhf-xl pass
test-amd64-i386-xl pass
test-amd64-coresched-i386-xl pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-debianhvm-i386-xsm pass
test-amd64-i386-xl-qemut-debianhvm-i386-xsm pass
test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm pass
test-amd64-i386-xl-qemuu-debianhvm-i386-xsm fail
test-amd64-amd64-libvirt-xsm pass
test-arm64-arm64-libvirt-xsm pass
test-amd64-i386-libvirt-xsm pass
test-amd64-amd64-xl-xsm pass
test-arm64-arm64-xl-xsm pass
test-amd64-i386-xl-xsm pass
test-amd64-amd64-qemuu-nested-amd fail
test-amd64-amd64-xl-pvhv2-amd pass
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-dom0pvh-xl-amd pass
test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
test-amd64-i386-xl-qemut-debianhvm-amd64 pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-freebsd10-amd64 pass
test-amd64-amd64-qemuu-freebsd11-amd64 pass
test-amd64-amd64-qemuu-freebsd12-amd64 pass
test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
test-amd64-i386-xl-qemuu-ovmf-amd64 fail
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 fail
test-amd64-i386-xl-qemuu-win7-amd64 fail
test-amd64-amd64-xl-qemut-ws16-amd64 fail
test-amd64-i386-xl-qemut-ws16-amd64 fail
test-amd64-amd64-xl-qemuu-ws16-amd64 fail
test-amd64-i386-xl-qemuu-ws16-amd64 fail
test-armhf-armhf-xl-arndale pass
test-amd64-amd64-xl-credit1 pass
test-arm64-arm64-xl-credit1 pass
test-armhf-armhf-xl-credit1 pass
test-amd64-amd64-xl-credit2 pass
test-arm64-arm64-xl-credit2 pass
test-armhf-armhf-xl-credit2 pass
test-armhf-armhf-xl-cubietruck pass
test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict pass
test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict pass
test-amd64-amd64-examine pass
test-arm64-arm64-examine pass
test-armhf-armhf-examine pass
test-amd64-i386-examine pass
test-amd64-i386-freebsd10-i386 pass
test-amd64-amd64-qemuu-nested-intel pass
test-amd64-amd64-xl-pvhv2-intel pass
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel pass
test-amd64-amd64-dom0pvh-xl-intel pass
test-amd64-amd64-libvirt pass
test-armhf-armhf-libvirt pass
test-amd64-i386-libvirt pass
test-amd64-amd64-livepatch pass
test-amd64-i386-livepatch pass
test-amd64-amd64-migrupgrade pass
test-amd64-i386-migrupgrade pass
test-amd64-amd64-xl-multivcpu pass
test-armhf-armhf-xl-multivcpu pass
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-libvirt-pair pass
test-amd64-i386-libvirt-pair pass
test-amd64-amd64-amd64-pvgrub pass
test-amd64-amd64-i386-pvgrub pass
test-amd64-amd64-xl-pvshim pass
test-amd64-i386-xl-pvshim fail
test-amd64-amd64-pygrub pass
test-amd64-amd64-xl-qcow2 pass
test-armhf-armhf-libvirt-raw pass
test-amd64-i386-xl-raw pass
test-amd64-amd64-xl-rtds pass
test-armhf-armhf-xl-rtds fail
test-arm64-arm64-xl-seattle pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-amd64-xl-shadow pass
test-amd64-i386-xl-shadow pass
test-arm64-arm64-xl-thunderx pass
test-amd64-amd64-libvirt-vhd pass
test-armhf-armhf-xl-vhd pass
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
Not pushing.
(No revision log; it would be 1003 lines long.)
* Re: [xen-unstable test] 162845: regressions - FAIL
@ 2021-06-16 7:12 Jan Beulich
From: Jan Beulich @ 2021-06-16 7:12 UTC (permalink / raw)
To: Ian Jackson, Anthony Perard; +Cc: xen-devel, osstest service owner
On 16.06.2021 08:54, osstest service owner wrote:
> flight 162845 xen-unstable real [real]
> flight 162853 xen-unstable real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/162845/
> http://logs.test-lab.xenproject.org/osstest/logs/162853/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
> test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
> test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
There still looks to be an issue with the ovmf version used. I'm
puzzled to find this flight reporting

built_revision_ovmf e1999b264f1f9d7230edf2448f757c73da567832

which isn't what the tree was recently rewound to, but about two
dozen commits older. I hope one of you has a clue about what is going
on here.
Jan
* Re: [xen-unstable test] 162845: regressions - FAIL
@ 2021-06-16 14:21 Anthony PERARD
From: Anthony PERARD @ 2021-06-16 14:21 UTC (permalink / raw)
To: Jan Beulich; +Cc: Ian Jackson, xen-devel, osstest service owner
On Wed, Jun 16, 2021 at 09:12:52AM +0200, Jan Beulich wrote:
> On 16.06.2021 08:54, osstest service owner wrote:
> > flight 162845 xen-unstable real [real]
> > flight 162853 xen-unstable real-retest [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/162845/
> > http://logs.test-lab.xenproject.org/osstest/logs/162853/
> >
> > Regressions :-(
> >
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> > test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
> > test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>
> There still looks to be an issue with the ovmf version used. I'm
> puzzled to find this flight reporting
>
> built_revision_ovmf e1999b264f1f9d7230edf2448f757c73da567832
>
> which isn't what the tree was recently rewound to, but about two
> dozen commits older. I hope one of you has a clue about what is going
> on here.
So this commit is "master" from https://xenbits.xen.org/git-http/ovmf.git
rather than "xen-tested-master" from https://xenbits.xen.org/git-http/osstest/ovmf.git
master is what xen.git would have cloned. And "xen-tested-master" is the
commit that I was expecting osstest to pick up, but maybe that has been
set up only for stable trees?

Anyway, after aad7b5c11d51 ("tools/firmware/ovmf: Use OvmfXen platform
file is exist"), it isn't the same OVMF that is being used. We used to
use OvmfX64, but now we are going to use OvmfXen. (Xen support in
OvmfX64 has been removed, so it can't be used anymore.)

So there may be an issue with OvmfXen, which doesn't need to block
xen-unstable flights.

As for the failure, I can think of one thing that is different:
OvmfXen maps the XENMAPSPACE_shared_info page as high as possible in
guest physical memory, in order to avoid creating a hole in the RAM,
but a call to XENMEM_remove_from_physmap is done as well. Could that
actually cause issues with saverestore?
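
For illustration, the two guest-side operations involved look roughly
like this (a sketch built from the public-header structures, not
OvmfXen's actual code; HYPERVISOR_memory_op() stands in for the
firmware's hypercall wrapper):

/* Map the shared info page at a chosen (high) frame number. */
static int map_shared_info(xen_pfn_t gpfn)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info,
        .idx   = 0,
        .gpfn  = gpfn,      /* e.g. just below the physical address limit */
    };

    return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
}

/* Take it out of the physmap again afterwards. */
static int unmap_shared_info(xen_pfn_t gpfn)
{
    struct xen_remove_from_physmap xrfp = {
        .domid = DOMID_SELF,
        .gpfn  = gpfn,
    };

    return HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
}
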
So maybe we can force-push in the meantime if the OVMF tests are the
only failure.
--
Anthony PERARD
* Re: [xen-unstable test] 162845: regressions - FAIL
@ 2021-06-16 14:49 Jan Beulich
From: Jan Beulich @ 2021-06-16 14:49 UTC (permalink / raw)
To: Anthony PERARD; +Cc: Ian Jackson, xen-devel, osstest service owner
On 16.06.2021 16:21, Anthony PERARD wrote:
> On Wed, Jun 16, 2021 at 09:12:52AM +0200, Jan Beulich wrote:
>> On 16.06.2021 08:54, osstest service owner wrote:
>>> flight 162845 xen-unstable real [real]
>>> flight 162853 xen-unstable real-retest [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/162845/
>>> http://logs.test-lab.xenproject.org/osstest/logs/162853/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>> test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>> test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>
>> There still looks to be an issue with the ovmf version used. I'm
>> puzzled to find this flight reporting
>>
>> built_revision_ovmf e1999b264f1f9d7230edf2448f757c73da567832
>>
>> which isn't what the tree was recently rewound to, but about two
>> dozen commits older. I hope one of you has a clue about what is going
>> on here.
>
> So this commit is "master" from https://xenbits.xen.org/git-http/ovmf.git
> rather than "xen-tested-master" from https://xenbits.xen.org/git-http/osstest/ovmf.git
>
> master is what xen.git would have cloned. And "xen-tested-master" is the
> commit that I was expecting osstest to pick up, but maybe that has been
> set up only for stable trees?
>
> Anyway, after aad7b5c11d51 ("tools/firmware/ovmf: Use OvmfXen platform
> file is exist"), it isn't the same OVMF that is being used. We used to
> use OvmfX64, but now we are going to use OvmfXen. (Xen support in
> OvmfX64 has been removed, so it can't be used anymore.)
>
> So there may be an issue with OvmfXen, which doesn't need to block
> xen-unstable flights.
>
> As for the failure, I can think of one thing that is different:
> OvmfXen maps the XENMAPSPACE_shared_info page as high as possible in
> guest physical memory, in order to avoid creating a hole in the RAM,
> but a call to XENMEM_remove_from_physmap is done as well. Could that
> actually cause issues with saverestore?
I don't think it should. But I now notice I should have looked at the
logs of these tests:

xc: info: Saving domain 2, type x86 HVM
xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
xc: error: Save failed (1 = Operation not permitted): Internal error

which looks suspiciously similar to the issue Jürgen's d21121685fac
("tools/libs/guest: fix save and restore of pv domains after 32-bit
de-support") took care of, just that here we're dealing with an HVM
guest. I'll have to go inspect what exactly the library is doing there,
and hence where in Xen the -EPERM may be coming from all of a
sudden (and only for OVMF).

Of course the behavior you describe above may play into this, since
aiui this might lead to an excessively large p2m (depending on what
exactly you mean by "as high as possible").

> So maybe we can force-push in the meantime if the OVMF tests are the
> only failure.

I don't think a force push is justified just yet.
Jan
* Re: [xen-unstable test] 162845: regressions - FAIL
@ 2021-06-16 15:01 Jan Beulich
From: Jan Beulich @ 2021-06-16 15:01 UTC (permalink / raw)
To: Anthony PERARD; +Cc: Ian Jackson, xen-devel, osstest service owner
On 16.06.2021 16:49, Jan Beulich wrote:
> On 16.06.2021 16:21, Anthony PERARD wrote:
>> On Wed, Jun 16, 2021 at 09:12:52AM +0200, Jan Beulich wrote:
>>> On 16.06.2021 08:54, osstest service owner wrote:
>>>> flight 162845 xen-unstable real [real]
>>>> flight 162853 xen-unstable real-retest [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162845/
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162853/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>> test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>> test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>
>>> There still looks to be an issue with the ovmf version used. I'm
>>> puzzled to find this flight reporting
>>>
>>> built_revision_ovmf e1999b264f1f9d7230edf2448f757c73da567832
>>>
>>> which isn't what the tree was recently rewound to, but about two
>>> dozen commits older. I hope one of you has a clue about what is going
>>> on here.
>>
>> So this commit is "master" from https://xenbits.xen.org/git-http/ovmf.git
>> rather than "xen-tested-master" from https://xenbits.xen.org/git-http/osstest/ovmf.git
>>
>> master is what xen.git would have cloned. And "xen-tested-master" is the
>> commit that I was expecting osstest to pick up, but maybe that has been
>> set up only for stable trees?
>>
>> Anyway, after aad7b5c11d51 ("tools/firmware/ovmf: Use OvmfXen platform
>> file is exist"), it isn't the same OVMF that is being used. We used to
>> use OvmfX64, but now we are going to use OvmfXen. (Xen support in
>> OvmfX64 has been removed, so it can't be used anymore.)
>>
>> So there may be an issue with OvmfXen, which doesn't need to block
>> xen-unstable flights.
>>
>> As for the failure, I can think of one thing that is different:
>> OvmfXen maps the XENMAPSPACE_shared_info page as high as possible in
>> guest physical memory, in order to avoid creating a hole in the RAM,
>> but a call to XENMEM_remove_from_physmap is done as well. Could that
>> actually cause issues with saverestore?
>
> I don't think it should. But I now notice I should have looked at the
> logs of these tests:
>
> xc: info: Saving domain 2, type x86 HVM
> xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
> xc: error: Save failed (1 = Operation not permitted): Internal error
>
> which looks suspiciously similar to the issue Jürgen's d21121685fac
> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
> de-support") took care of, just that here we're dealing with an HVM
> guest. I'll have to go inspect what exactly the library is doing there,
> and hence where in Xen the -EPERM may be coming from all of a
> sudden (and only for OVMF).
The *-amd64-i386-* variant has

xc: info: Saving domain 2, type x86 HVM
xc: error: Cannot save this big a guest (7 = Argument list too long): Internal error

which to me hints at ...

> Of course the behavior you describe above may play into this, since
> aiui this might lead to an excessively large p2m (depending on what
> exactly you mean by "as high as possible").

... a connection, but I'm not sure at all. XENMEM_maximum_gpfn returns
its result as the hypercall return value, so huge values could be a
problem at least for 32-bit tool stacks.
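
The libxc-side consumer is xc_domain_maximum_gpfn(); a simplified
sketch (close to the real tools/libs code, but treat it as an
illustration) shows how the gpfn travels back as the raw return
value:

int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid, xen_pfn_t *gpfns)
{
    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));

    if ( rc >= 0 )
    {
        *gpfns = rc;    /* the gpfn is the hypercall return value itself */
        rc = 0;
    }

    return rc;
}

So any narrowing on the way up turns a huge gpfn into garbage or an
apparent error.
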
What page number are you mapping the shared info page at in OVMF?
Jan
* Re: [xen-unstable test] 162845: regressions - FAIL
@ 2021-06-16 15:12 Anthony PERARD
From: Anthony PERARD @ 2021-06-16 15:12 UTC (permalink / raw)
To: Jan Beulich; +Cc: Ian Jackson, xen-devel, osstest service owner
On Wed, Jun 16, 2021 at 04:49:33PM +0200, Jan Beulich wrote:
> I don't think it should. But I now notice I should have looked at the
> logs of these tests:
>
> xc: info: Saving domain 2, type x86 HVM
> xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
> xc: error: Save failed (1 = Operation not permitted): Internal error
>
> which looks suspiciously similar to the issue Jürgen's d21121685fac
> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
> de-support") took care of, just that here we're dealing with an HVM
> guest. I'll have to go inspect what exactly the library is doing there,
> and hence where in Xen the -EPERM may be coming from all of a
> sudden (and only for OVMF).
>
> Of course the behavior you describe above may play into this, since
> aiui this might lead to an excessively large p2m (depending on what
> exactly you mean by "as high as possible").
The maximum physical address size as reported by cpuid 0x80000008
(or 1<<48 if above that), minus 1 page; or 1<<36 minus 1 page.
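
In pseudo-C, that placement computation is roughly (a sketch of the
logic just described, not OvmfXen's actual source; __get_cpuid() is
GCC's <cpuid.h> helper):

#include <cpuid.h>
#include <stdint.h>

static uint64_t shared_info_gfn(void)
{
    unsigned int eax, ebx, ecx, edx;
    unsigned int phys_bits = 36;    /* fallback when the leaf is unavailable */

    if ( __get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx) )
        phys_bits = eax & 0xff;     /* EAX[7:0]: physical address width */
    if ( phys_bits > 48 )
        phys_bits = 48;             /* capped at 1<<48 */

    /* Highest frame below the limit: 1<<phys_bits bytes, minus one 4k page. */
    return (1ULL << (phys_bits - 12)) - 1;
}

On a host reporting 48 physical address bits this yields gfn
0xfffffffff, i.e. a p2m spanning the full 256TiB range.
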
--
Anthony PERARD
* Re: [xen-unstable test] 162845: regressions - FAIL
@ 2021-06-16 15:34 Jan Beulich
From: Jan Beulich @ 2021-06-16 15:34 UTC (permalink / raw)
To: Anthony PERARD; +Cc: Ian Jackson, xen-devel, osstest service owner
On 16.06.2021 17:12, Anthony PERARD wrote:
> On Wed, Jun 16, 2021 at 04:49:33PM +0200, Jan Beulich wrote:
>> I don't think it should. But I now notice I should have looked at the
>> logs of these tests:
>>
>> xc: info: Saving domain 2, type x86 HVM
>> xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
>> xc: error: Save failed (1 = Operation not permitted): Internal error
>>
>> which looks suspiciously similar to the issue Jürgen's d21121685fac
>> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
>> de-support") took care of, just that here we're dealing with an HVM
>> guest. I'll have to go inspect what exactly the library is doing there,
>> and hence where in Xen the -EPERM may be coming from all of a
>> sudden (and only for OVMF).
>>
>> Of course the behavior you describe above may play into this, since
>> aiui this might lead to an excessively large p2m (depending on what
>> exactly you mean by "as high as possible").
>
> The maximum physical address size as reported by cpuid 0x80000008
> (or 1<<48 if above that), minus 1 page; or 1<<36 minus 1 page.
So this is very likely the problem, and not just for a 32-bit tool
stack right now. With ...
long do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len)
{
    DECLARE_HYPERCALL_BOUNCE(arg, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
    long ret = -1;
    ...
    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));
... I'm disappointed to find:
int xencall0(xencall_handle *xcall, unsigned int op);
int xencall1(xencall_handle *xcall, unsigned int op,
             uint64_t arg1);
int xencall2(xencall_handle *xcall, unsigned int op,
             uint64_t arg1, uint64_t arg2);
...
I'm sure we had the problem of a truncated memory-op hypercall
result already in the past, so there definitely was a known problem
that got re-introduced. Or wait, no - I've found that commit
(a27f1fb69d13), and it didn't really have any effect afaict:
adjusting do_memory_op()'s return type wasn't sufficient while
do_xen_hypercall() was returning only int. Now on to figuring out a
not overly intrusive way of addressing this.
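
A tiny standalone demonstration of the truncation (hypothetical
values, not libxc code; behavior as on the usual two's-complement
ABIs):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for xencall2()'s int return narrowing a memory-op result. */
static int narrow(int64_t v)
{
    return (int)v;
}

int main(void)
{
    int64_t max_gpfn = (INT64_C(1) << 36) - 1;  /* shared_info just below 1<<48 bytes */
    int64_t seen = narrow(max_gpfn);            /* what the caller gets back */

    printf("gpfn %#jx comes back as %jd\n",
           (uintmax_t)max_gpfn, (intmax_t)seen);
    return 0;
}

The low 32 bits of the gpfn are all ones, so the caller sees -1,
i.e. a seemingly failed hypercall.
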
Jan