* [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
@ 2020-07-22 0:37 osstest service owner
2020-07-22 8:34 ` Jan Beulich
2020-07-22 8:38 ` Roger Pau Monné
From: osstest service owner @ 2020-07-22 0:37 UTC (permalink / raw)
To: xen-devel, osstest-admin
flight 152067 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152067/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
test-amd64-amd64-examine 4 memdisk-try-append fail REGR. vs. 152045
Tests which did not succeed, but are not blocking:
test-armhf-armhf-libvirt 14 saverestore-support-check fail like 152045
test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 152045
test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 152045
test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 152045
test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 152045
test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 152045
test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 152045
test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 152045
test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 152045
test-amd64-i386-xl-pvshim 12 guest-start fail never pass
test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
test-amd64-i386-libvirt 13 migrate-support-check fail never pass
test-arm64-arm64-xl-seattle 13 migrate-support-check fail never pass
test-arm64-arm64-xl-seattle 14 saverestore-support-check fail never pass
test-arm64-arm64-xl 13 migrate-support-check fail never pass
test-arm64-arm64-xl 14 saverestore-support-check fail never pass
test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
test-arm64-arm64-xl-thunderx 13 migrate-support-check fail never pass
test-arm64-arm64-xl-thunderx 14 saverestore-support-check fail never pass
test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
test-arm64-arm64-xl-credit1 13 migrate-support-check fail never pass
test-arm64-arm64-xl-credit1 14 saverestore-support-check fail never pass
test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit1 13 migrate-support-check fail never pass
test-armhf-armhf-xl-credit1 14 saverestore-support-check fail never pass
test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
test-armhf-armhf-xl 13 migrate-support-check fail never pass
test-armhf-armhf-xl 14 saverestore-support-check fail never pass
test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
test-amd64-amd64-qemuu-freebsd11-amd64 2 hosts-allocate starved n/a
test-amd64-amd64-qemuu-freebsd12-amd64 2 hosts-allocate starved n/a
version targeted for testing:
xen 9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
baseline version:
xen 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
Last test of basis 152045 2020-07-20 13:36:39 Z 1 days
Testing same since 152067 2020-07-21 06:59:07 Z 0 days 1 attempts
------------------------------------------------------------
People who touched revisions under test:
Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich <jbeulich@suse.com>
Juergen Gross <jgross@suse.com>
Stefano Stabellini <sstabellini@kernel.org>
jobs:
build-amd64-xsm pass
build-arm64-xsm pass
build-i386-xsm pass
build-amd64-xtf pass
build-amd64 pass
build-arm64 pass
build-armhf pass
build-i386 pass
build-amd64-libvirt pass
build-arm64-libvirt pass
build-armhf-libvirt pass
build-i386-libvirt pass
build-amd64-prev pass
build-i386-prev pass
build-amd64-pvops pass
build-arm64-pvops pass
build-armhf-pvops pass
build-i386-pvops pass
test-xtf-amd64-amd64-1 pass
test-xtf-amd64-amd64-2 pass
test-xtf-amd64-amd64-3 pass
test-xtf-amd64-amd64-4 pass
test-xtf-amd64-amd64-5 pass
test-amd64-amd64-xl pass
test-amd64-coresched-amd64-xl pass
test-arm64-arm64-xl pass
test-armhf-armhf-xl pass
test-amd64-i386-xl pass
test-amd64-coresched-i386-xl pass
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
test-amd64-amd64-xl-qemut-debianhvm-i386-xsm pass
test-amd64-i386-xl-qemut-debianhvm-i386-xsm pass
test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm pass
test-amd64-i386-xl-qemuu-debianhvm-i386-xsm pass
test-amd64-amd64-libvirt-xsm pass
test-arm64-arm64-libvirt-xsm pass
test-amd64-i386-libvirt-xsm pass
test-amd64-amd64-xl-xsm pass
test-arm64-arm64-xl-xsm pass
test-amd64-i386-xl-xsm pass
test-amd64-amd64-qemuu-nested-amd fail
test-amd64-amd64-xl-pvhv2-amd pass
test-amd64-i386-qemut-rhel6hvm-amd pass
test-amd64-i386-qemuu-rhel6hvm-amd pass
test-amd64-amd64-dom0pvh-xl-amd pass
test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
test-amd64-i386-xl-qemut-debianhvm-amd64 pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
test-amd64-i386-freebsd10-amd64 pass
test-amd64-amd64-qemuu-freebsd11-amd64 starved
test-amd64-amd64-qemuu-freebsd12-amd64 starved
test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
test-amd64-i386-xl-qemuu-ovmf-amd64 pass
test-amd64-amd64-xl-qemut-win7-amd64 fail
test-amd64-i386-xl-qemut-win7-amd64 fail
test-amd64-amd64-xl-qemuu-win7-amd64 fail
test-amd64-i386-xl-qemuu-win7-amd64 fail
test-amd64-amd64-xl-qemut-ws16-amd64 fail
test-amd64-i386-xl-qemut-ws16-amd64 fail
test-amd64-amd64-xl-qemuu-ws16-amd64 fail
test-amd64-i386-xl-qemuu-ws16-amd64 fail
test-armhf-armhf-xl-arndale pass
test-amd64-amd64-xl-credit1 pass
test-arm64-arm64-xl-credit1 pass
test-armhf-armhf-xl-credit1 pass
test-amd64-amd64-xl-credit2 pass
test-arm64-arm64-xl-credit2 pass
test-armhf-armhf-xl-credit2 pass
test-armhf-armhf-xl-cubietruck pass
test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict pass
test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict pass
test-amd64-amd64-examine pass
test-arm64-arm64-examine pass
test-armhf-armhf-examine pass
test-amd64-i386-examine pass
test-amd64-i386-freebsd10-i386 pass
test-amd64-amd64-qemuu-nested-intel pass
test-amd64-amd64-xl-pvhv2-intel pass
test-amd64-i386-qemut-rhel6hvm-intel pass
test-amd64-i386-qemuu-rhel6hvm-intel pass
test-amd64-amd64-dom0pvh-xl-intel fail
test-amd64-amd64-libvirt pass
test-armhf-armhf-libvirt pass
test-amd64-i386-libvirt pass
test-amd64-amd64-livepatch pass
test-amd64-i386-livepatch pass
test-amd64-amd64-migrupgrade pass
test-amd64-i386-migrupgrade pass
test-amd64-amd64-xl-multivcpu pass
test-armhf-armhf-xl-multivcpu pass
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-amd64-amd64-libvirt-pair pass
test-amd64-i386-libvirt-pair pass
test-amd64-amd64-amd64-pvgrub pass
test-amd64-amd64-i386-pvgrub pass
test-amd64-amd64-xl-pvshim pass
test-amd64-i386-xl-pvshim fail
test-amd64-amd64-pygrub pass
test-amd64-amd64-xl-qcow2 pass
test-armhf-armhf-libvirt-raw pass
test-amd64-i386-xl-raw pass
test-amd64-amd64-xl-rtds pass
test-armhf-armhf-xl-rtds pass
test-arm64-arm64-xl-seattle pass
test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow pass
test-amd64-amd64-xl-shadow pass
test-amd64-i386-xl-shadow pass
test-arm64-arm64-xl-thunderx pass
test-amd64-amd64-libvirt-vhd pass
test-armhf-armhf-xl-vhd pass
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
Not pushing.
------------------------------------------------------------
commit 9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon Jul 20 17:54:52 2020 +0100
docs: Replace non-UTF-8 character in hypfs-paths.pandoc
From the docs cronjob on xenbits:
/usr/bin/pandoc --number-sections --toc --standalone misc/hypfs-paths.pandoc --output html/misc/hypfs-paths.html
pandoc: Cannot decode byte '\x92': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream
make: *** [Makefile:236: html/misc/hypfs-paths.html] Error 1
Fixes: 5a4a411bde4 ("docs: specify stability of hypfs path documentation")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-acked-by: Paul Durrant <paul@xen.org>
commit 6720345aaf82fc76dca084f3f7a577062f5ff0f3
Author: Jan Beulich <jbeulich@suse.com>
Date: Wed Jul 15 12:39:06 2020 +0200
Arm: prune #include-s needed by domain.h
asm/domain.h is a dependency of xen/sched.h, and hence should not itself
include xen/sched.h. Nor should any of the other #include-s used by it.
While at it, also drop two other #include-s that aren't needed by this
particular header.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
commit 5a4a411bde4f73ff8ce43d6e52b77302973e8f68
Author: Juergen Gross <jgross@suse.com>
Date: Mon Jul 20 13:38:00 2020 +0200
docs: specify stability of hypfs path documentation
In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
file system are specified. Make it more clear that path availability
might change, e.g. due to scope widening or narrowing (e.g. being
limited to a specific architecture).
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Release-acked-by: Paul Durrant <paul@xen.org>
(qemu changes not included)
* Re: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
From: Jan Beulich @ 2020-07-22 8:34 UTC (permalink / raw)
To: osstest service owner, xen-devel
On 22.07.2020 02:37, osstest service owner wrote:
> flight 152067 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
> test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
Jul 21 16:20:58.985209 [ 530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
My first reaction to this would be to ask if Dom0 was given too little
memory here. Or of course there could be a memory leak somewhere. But
the system isn't entirely out of memory (about 7MB left), so perhaps
the "order:4" aspect here also plays a meaningful role. Hence ...
Jul 21 16:21:00.390810 [ 530.412448] Call Trace:
Jul 21 16:21:00.402721 [ 530.412499] dump_stack+0x72/0x8c
Jul 21 16:21:00.402801 [ 530.412541] warn_alloc.cold.140+0x68/0xe8
Jul 21 16:21:00.402841 [ 530.412585] __alloc_pages_slowpath+0xc73/0xcb0
Jul 21 16:21:00.414737 [ 530.412640] ? __do_page_fault+0x249/0x4d0
Jul 21 16:21:00.414786 [ 530.412681] __alloc_pages_nodemask+0x235/0x250
Jul 21 16:21:00.426555 [ 530.412734] kmalloc_order+0x13/0x60
Jul 21 16:21:00.426619 [ 530.412774] kmalloc_order_trace+0x18/0xa0
Jul 21 16:21:00.426671 [ 530.412816] alloc_empty_pages.isra.15+0x24/0x60
Jul 21 16:21:00.438447 [ 530.412867] privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
Jul 21 16:21:00.438507 [ 530.412918] ? vmacache_find+0xb0/0xb0
Jul 21 16:21:00.450475 [ 530.412957] privcmd_ioctl+0x253/0xa9b
... perhaps we ought to consider re-working this code path to avoid
order > 0 allocations (may be as simple as switching to vmalloc(),
but I say this without having looked at the code).
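To put numbers on the "order:4" aspect, here is an illustrative back-of-envelope check (assumes 4 KiB pages as on x86; the free-list figures are copied from the DMA32 line of the full log elsewhere in the thread; the helper name is ours, not kernel code):

```python
# An order-n buddy allocation needs 2**n physically contiguous pages
# (illustrative helper only; assumes 4 KiB pages).
PAGE_KB = 4

def order_size_kb(order):
    """Size in KiB of a buddy-allocator block of the given order."""
    return (1 << order) * PAGE_KB

# order:4 means 16 contiguous pages, i.e. one 64 KiB physically
# contiguous chunk -- much harder to satisfy than 64 KiB of
# scattered order-0 pages (which is what vmalloc() would use).
print(order_size_kb(4))  # 64

# DMA32 free-list dump from the failure:
# "4*4kB 459*8kB 2*16kB 6*32kB 5*64kB 4*128kB 3*256kB ... = 5512kB"
dma32 = {4: 4, 8: 459, 16: 2, 32: 6, 64: 5, 128: 4, 256: 3}
total_kb = sum(size * count for size, count in dma32.items())
print(total_kb)  # 5512 -- matches the dump; most free memory sits in 8 KiB blocks
```

Note that the few 256 KiB blocks in the dump are flagged (H), i.e. highatomic reserves, so a plain GFP_KERNEL order-4 request cannot take them — which is consistent with the allocation failing despite several MB being nominally free.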
Jan
* Re: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
From: Roger Pau Monné @ 2020-07-22 8:38 UTC (permalink / raw)
To: osstest service owner, jgross, boris.ostrovsky; +Cc: xen-devel
On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
> flight 152067 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
> test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
Failure was caused by:
Jul 21 16:20:58.985209 [ 530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
Jul 21 16:21:00.378548 [ 530.412261] libxl-save-help cpuset=/ mems_allowed=0
Jul 21 16:21:00.378622 [ 530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
Jul 21 16:21:00.390740 [ 530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
Jul 21 16:21:00.390810 [ 530.412448] Call Trace:
Jul 21 16:21:00.402721 [ 530.412499] dump_stack+0x72/0x8c
Jul 21 16:21:00.402801 [ 530.412541] warn_alloc.cold.140+0x68/0xe8
Jul 21 16:21:00.402841 [ 530.412585] __alloc_pages_slowpath+0xc73/0xcb0
Jul 21 16:21:00.414737 [ 530.412640] ? __do_page_fault+0x249/0x4d0
Jul 21 16:21:00.414786 [ 530.412681] __alloc_pages_nodemask+0x235/0x250
Jul 21 16:21:00.426555 [ 530.412734] kmalloc_order+0x13/0x60
Jul 21 16:21:00.426619 [ 530.412774] kmalloc_order_trace+0x18/0xa0
Jul 21 16:21:00.426671 [ 530.412816] alloc_empty_pages.isra.15+0x24/0x60
Jul 21 16:21:00.438447 [ 530.412867] privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
Jul 21 16:21:00.438507 [ 530.412918] ? vmacache_find+0xb0/0xb0
Jul 21 16:21:00.450475 [ 530.412957] privcmd_ioctl+0x253/0xa9b
Jul 21 16:21:00.450540 [ 530.412996] ? mmap_region+0x226/0x630
Jul 21 16:21:00.450592 [ 530.413043] ? selinux_mmap_file+0xb0/0xb0
Jul 21 16:21:00.462757 [ 530.413084] ? selinux_file_ioctl+0x15c/0x200
Jul 21 16:21:00.462823 [ 530.413136] do_vfs_ioctl+0x9f/0x630
Jul 21 16:21:00.474698 [ 530.413177] ksys_ioctl+0x5b/0x90
Jul 21 16:21:00.474762 [ 530.413224] __x64_sys_ioctl+0x11/0x20
Jul 21 16:21:00.474813 [ 530.413264] do_syscall_64+0x57/0x130
Jul 21 16:21:00.486480 [ 530.413305] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 21 16:21:00.486548 [ 530.413357] RIP: 0033:0x7f4f7ecde427
Jul 21 16:21:00.486600 [ 530.413395] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
Jul 21 16:21:00.510766 [ 530.413556] RSP: 002b:00007ffc1ef6eb38 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
Jul 21 16:21:00.522758 [ 530.413629] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4f7ecde427
Jul 21 16:21:00.534632 [ 530.413699] RDX: 00007ffc1ef6eb90 RSI: 0000000000205004 RDI: 0000000000000007
Jul 21 16:21:00.534702 [ 530.413810] RBP: 00007ffc1ef6ebe0 R08: 0000000000000007 R09: 0000000000000000
Jul 21 16:21:00.547013 [ 530.413881] R10: 0000000000000001 R11: 0000000000000213 R12: 000055d754136200
Jul 21 16:21:00.558751 [ 530.413951] R13: 00007ffc1ef6f340 R14: 0000000000000000 R15: 0000000000000000
Jul 21 16:21:00.558846 [ 530.414079] Mem-Info:
Jul 21 16:21:00.558928 [ 530.414123] active_anon:1724 inactive_anon:3931 isolated_anon:0
Jul 21 16:21:00.570481 [ 530.414123] active_file:7862 inactive_file:86530 isolated_file:0
Jul 21 16:21:00.582599 [ 530.414123] unevictable:0 dirty:18 writeback:0 unstable:0
Jul 21 16:21:00.582668 [ 530.414123] slab_reclaimable:4704 slab_unreclaimable:4036
Jul 21 16:21:00.594782 [ 530.414123] mapped:3461 shmem:124 pagetables:372 bounce:0
Jul 21 16:21:00.594849 [ 530.414123] free:1863 free_pcp:16 free_cma:0
Jul 21 16:21:00.606733 [ 530.414579] Node 0 active_anon:6896kB inactive_anon:15724kB active_file:31448kB inactive_file:346120kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:13844kB dirty:72kB writeback:0kB shmem:496kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
Jul 21 16:21:00.630626 [ 530.414870] DMA free:1816kB min:92kB low:112kB high:132kB active_anon:0kB inactive_anon:0kB active_file:76kB inactive_file:9988kB unevictable:0kB writepending:0kB present:15980kB managed:14328kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Jul 21 16:21:00.658448 [ 530.415329] lowmem_reserve[]: 0 431 431 431
Jul 21 16:21:00.658513 [ 530.415404] DMA32 free:5512kB min:2608kB low:3260kB high:3912kB active_anon:6896kB inactive_anon:15724kB active_file:31372kB inactive_file:336132kB unevictable:0kB writepending:72kB present:508300kB managed:451760kB mlocked:0kB kernel_stack:2848kB pagetables:1488kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
Jul 21 16:21:00.694702 [ 530.415742] lowmem_reserve[]: 0 0 0 0
Jul 21 16:21:00.694778 [ 530.415806] DMA: 8*4kB (UM) 3*8kB (UM) 4*16kB (UM) 3*32kB (M) 5*64kB (UM) 2*128kB (UM) 4*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1816kB
Jul 21 16:21:00.706798 [ 530.416015] DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) 5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5512kB
Jul 21 16:21:00.718789 [ 530.416287] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jul 21 16:21:00.730785 [ 530.416413] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jul 21 16:21:00.742847 [ 530.416538] 94608 total pagecache pages
Jul 21 16:21:00.742881 [ 530.416598] 79 pages in swap cache
Jul 21 16:21:00.754859 [ 530.416670] Swap cache stats: add 702, delete 623, find 948/1025
Jul 21 16:21:00.754924 [ 530.416759] Free swap = 1947124kB
Jul 21 16:21:00.766880 [ 530.416822] Total swap = 1949692kB
Jul 21 16:21:00.766960 [ 530.416924] 131070 pages RAM
Jul 21 16:21:00.767021 [ 530.416988] 0 pages HighMem/MovableOnly
Jul 21 16:21:00.778697 [ 530.417051] 14548 pages reserved
AFAICT from the kernel config used for the test [0]
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
the memory exhaustion is coming from. Maybe 512M is too low for a PVH
dom0, even when using hotplug balloon memory?
Roger.
[0] http://logs.test-lab.xenproject.org/osstest/logs/152067/build-amd64-pvops/godello0--kconfig
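As a quick sanity check (illustrative only, assuming 4 KiB pages), the Mem-Info dump above is indeed consistent with a 512M dom0:

```python
# Numbers copied verbatim from the Mem-Info dump in the log above
# (illustrative arithmetic; assumes 4 KiB pages).
PAGE_KB = 4

pages_ram = 131070          # "131070 pages RAM"
print(pages_ram * PAGE_KB)  # 524280 KiB, i.e. just under 512 MiB

# What the kernel can actually allocate from, per-zone "managed" fields
# (DMA managed:14328kB + DMA32 managed:451760kB):
managed_kb = 14328 + 451760
print(managed_kb)           # 466088 KiB, roughly 455 MiB usable
```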
* Re: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
From: Jürgen Groß @ 2020-07-22 8:59 UTC (permalink / raw)
To: Roger Pau Monné, osstest service owner, boris.ostrovsky; +Cc: xen-devel
On 22.07.20 10:38, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
>> flight 152067 xen-unstable real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>> test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
>
> Failure was caused by:
>
> Jul 21 16:20:58.985209 [ 530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
> [...]
>
> AFAICT from the kernel config used for the test [0]
> CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
> the memory exhaustion is coming from. Maybe 512M is too low for a PVH
> dom0, even when using hotplug balloon memory?
I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
will be used for real memory hotplug only. Well, you _can_ use it for
mapping of foreign pages, but you'd have to:
echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
Juergen
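For reference, a hedged sketch of checking that knob (requires a Xen dom0 kernel built with CONFIG_XEN_BALLOON_MEMORY_HOTPLUG; on anything else the path simply won't exist):

```shell
# Illustrative only: inspect the balloon knob Juergen mentions.
# Writing 1 to it (as root) makes unpopulated hotplug ranges usable
# for foreign-page mappings instead of ballooning out dom0 memory.
KNOB=/proc/sys/xen/balloon/hotplug_unpopulated
if [ -e "$KNOB" ]; then
    cat "$KNOB"
else
    echo "knob absent: not a Xen dom0, or CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=n"
fi
```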
* Re: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
From: Roger Pau Monné @ 2020-07-22 9:02 UTC (permalink / raw)
To: Jürgen Groß; +Cc: xen-devel, boris.ostrovsky, osstest service owner
On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
> On 22.07.20 10:38, Roger Pau Monné wrote:
> > On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
> > > flight 152067 xen-unstable real [real]
> > > http://logs.test-lab.xenproject.org/osstest/logs/152067/
> > >
> > > Regressions :-(
> > >
> > > Tests which did not succeed and are blocking,
> > > including tests which could not be run:
> > > test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
> >
> > Failure was caused by:
> >
> > Jul 21 16:20:58.985209 [ 530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
> > [...]
> >
> > AFAICT from the kernel config used for the test [0]
> > CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
> > the memory exhaustion is coming from. Maybe 512M is too low for a PVH
> > dom0, even when using hotplug balloon memory?
>
> I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
> will be used for real memory hotplug only. Well, you _can_ use it for
> mapping of foreign pages, but you'd have to:
>
> echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
Uh, I've completely missed the point, then. I assume there's some
reason for not doing it by default? (using empty hotplug ranges
to map foreign memory)
Roger.
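As an aside on the numbers in the log above: the DMA32 buddy line shows almost all free memory fragmented into order-0/1 blocks, and every free block of order >= 4 carries the (H) high-atomic annotation, which a plain GFP_KERNEL allocation will not dip into. A short Python sketch (a parser written here for this dump format, not an existing tool) makes that concrete:

```python
# Parse a buddy-allocator dump line from the oops above and count, per
# order, the free blocks. An order-4 (64 kB) contiguous allocation
# needs a free block of order >= 4. Illustrative only: real allocator
# behaviour also depends on watermarks and migratetypes (the (H)
# blocks are high-atomic reserves, off-limits to GFP_KERNEL).
import re

def free_blocks(line):
    """Return {order: count} from a 'DMA32: 4*4kB (UH) ...' dump line."""
    counts = {}
    for count, kb in re.findall(r"(\d+)\*(\d+)kB", line):
        order = (int(kb) // 4).bit_length() - 1  # 4 kB page => order 0
        counts[order] = counts.get(order, 0) + int(count)
    return counts

dma32 = ("DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) "
         "5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB "
         "0*2048kB 0*4096kB = 5512kB")
counts = free_blocks(dma32)
print(counts)
print("blocks of order >= 4:",
      sum(n for o, n in counts.items() if o >= 4))  # -> 12
```

The 459 order-1 blocks hold most of the ~5.5 MB free, and the 12 blocks of order >= 4 all sit in the reserved (H) pool, so the order-4 kmalloc behind alloc_empty_pages() has nothing it can take.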
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
2020-07-22 9:02 ` Roger Pau Monné
@ 2020-07-22 9:23 ` Jürgen Groß
2020-07-22 9:30 ` Roger Pau Monné
0 siblings, 1 reply; 8+ messages in thread
From: Jürgen Groß @ 2020-07-22 9:23 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: xen-devel, boris.ostrovsky, osstest service owner
On 22.07.20 11:02, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
>> On 22.07.20 10:38, Roger Pau Monné wrote:
>>> On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
>>>> flight 152067 xen-unstable real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>> test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
>>>
>>> Failure was caused by:
>>>
>>> Jul 21 16:20:58.985209 [ 530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
>>> Jul 21 16:21:00.378548 [ 530.412261] libxl-save-help cpuset=/ mems_allowed=0
>>> Jul 21 16:21:00.378622 [ 530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
>>> Jul 21 16:21:00.390740 [ 530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
>>> Jul 21 16:21:00.390810 [ 530.412448] Call Trace:
>>> Jul 21 16:21:00.402721 [ 530.412499] dump_stack+0x72/0x8c
>>> Jul 21 16:21:00.402801 [ 530.412541] warn_alloc.cold.140+0x68/0xe8
>>> Jul 21 16:21:00.402841 [ 530.412585] __alloc_pages_slowpath+0xc73/0xcb0
>>> Jul 21 16:21:00.414737 [ 530.412640] ? __do_page_fault+0x249/0x4d0
>>> Jul 21 16:21:00.414786 [ 530.412681] __alloc_pages_nodemask+0x235/0x250
>>> Jul 21 16:21:00.426555 [ 530.412734] kmalloc_order+0x13/0x60
>>> Jul 21 16:21:00.426619 [ 530.412774] kmalloc_order_trace+0x18/0xa0
>>> Jul 21 16:21:00.426671 [ 530.412816] alloc_empty_pages.isra.15+0x24/0x60
>>> Jul 21 16:21:00.438447 [ 530.412867] privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
>>> Jul 21 16:21:00.438507 [ 530.412918] ? vmacache_find+0xb0/0xb0
>>> Jul 21 16:21:00.450475 [ 530.412957] privcmd_ioctl+0x253/0xa9b
>>> Jul 21 16:21:00.450540 [ 530.412996] ? mmap_region+0x226/0x630
>>> Jul 21 16:21:00.450592 [ 530.413043] ? selinux_mmap_file+0xb0/0xb0
>>> Jul 21 16:21:00.462757 [ 530.413084] ? selinux_file_ioctl+0x15c/0x200
>>> Jul 21 16:21:00.462823 [ 530.413136] do_vfs_ioctl+0x9f/0x630
>>> Jul 21 16:21:00.474698 [ 530.413177] ksys_ioctl+0x5b/0x90
>>> Jul 21 16:21:00.474762 [ 530.413224] __x64_sys_ioctl+0x11/0x20
>>> Jul 21 16:21:00.474813 [ 530.413264] do_syscall_64+0x57/0x130
>>> Jul 21 16:21:00.486480 [ 530.413305] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>> Jul 21 16:21:00.486548 [ 530.413357] RIP: 0033:0x7f4f7ecde427
>>> Jul 21 16:21:00.486600 [ 530.413395] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
>>> Jul 21 16:21:00.510766 [ 530.413556] RSP: 002b:00007ffc1ef6eb38 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
>>> Jul 21 16:21:00.522758 [ 530.413629] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4f7ecde427
>>> Jul 21 16:21:00.534632 [ 530.413699] RDX: 00007ffc1ef6eb90 RSI: 0000000000205004 RDI: 0000000000000007
>>> Jul 21 16:21:00.534702 [ 530.413810] RBP: 00007ffc1ef6ebe0 R08: 0000000000000007 R09: 0000000000000000
>>> Jul 21 16:21:00.547013 [ 530.413881] R10: 0000000000000001 R11: 0000000000000213 R12: 000055d754136200
>>> Jul 21 16:21:00.558751 [ 530.413951] R13: 00007ffc1ef6f340 R14: 0000000000000000 R15: 0000000000000000
>>> Jul 21 16:21:00.558846 [ 530.414079] Mem-Info:
>>> Jul 21 16:21:00.558928 [ 530.414123] active_anon:1724 inactive_anon:3931 isolated_anon:0
>>> Jul 21 16:21:00.570481 [ 530.414123] active_file:7862 inactive_file:86530 isolated_file:0
>>> Jul 21 16:21:00.582599 [ 530.414123] unevictable:0 dirty:18 writeback:0 unstable:0
>>> Jul 21 16:21:00.582668 [ 530.414123] slab_reclaimable:4704 slab_unreclaimable:4036
>>> Jul 21 16:21:00.594782 [ 530.414123] mapped:3461 shmem:124 pagetables:372 bounce:0
>>> Jul 21 16:21:00.594849 [ 530.414123] free:1863 free_pcp:16 free_cma:0
>>> Jul 21 16:21:00.606733 [ 530.414579] Node 0 active_anon:6896kB inactive_anon:15724kB active_file:31448kB inactive_file:346120kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:13844kB dirty:72kB writeback:0kB shmem:496kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
>>> Jul 21 16:21:00.630626 [ 530.414870] DMA free:1816kB min:92kB low:112kB high:132kB active_anon:0kB inactive_anon:0kB active_file:76kB inactive_file:9988kB unevictable:0kB writepending:0kB present:15980kB managed:14328kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
>>> Jul 21 16:21:00.658448 [ 530.415329] lowmem_reserve[]: 0 431 431 431
>>> Jul 21 16:21:00.658513 [ 530.415404] DMA32 free:5512kB min:2608kB low:3260kB high:3912kB active_anon:6896kB inactive_anon:15724kB active_file:31372kB inactive_file:336132kB unevictable:0kB writepending:72kB present:508300kB managed:451760kB mlocked:0kB kernel_stack:2848kB pagetables:1488kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
>>> Jul 21 16:21:00.694702 [ 530.415742] lowmem_reserve[]: 0 0 0 0
>>> Jul 21 16:21:00.694778 [ 530.415806] DMA: 8*4kB (UM) 3*8kB (UM) 4*16kB (UM) 3*32kB (M) 5*64kB (UM) 2*128kB (UM) 4*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1816kB
>>> Jul 21 16:21:00.706798 [ 530.416015] DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) 5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5512kB
>>> Jul 21 16:21:00.718789 [ 530.416287] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
>>> Jul 21 16:21:00.730785 [ 530.416413] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
>>> Jul 21 16:21:00.742847 [ 530.416538] 94608 total pagecache pages
>>> Jul 21 16:21:00.742881 [ 530.416598] 79 pages in swap cache
>>> Jul 21 16:21:00.754859 [ 530.416670] Swap cache stats: add 702, delete 623, find 948/1025
>>> Jul 21 16:21:00.754924 [ 530.416759] Free swap = 1947124kB
>>> Jul 21 16:21:00.766880 [ 530.416822] Total swap = 1949692kB
>>> Jul 21 16:21:00.766960 [ 530.416924] 131070 pages RAM
>>> Jul 21 16:21:00.767021 [ 530.416988] 0 pages HighMem/MovableOnly
>>> Jul 21 16:21:00.778697 [ 530.417051] 14548 pages reserved
>>>
>>> AFAICT from the kernel config used for the test [0]
>>> CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
>>> the memory exhaustion is coming from. Maybe 512M is too low for a PVH
>>> dom0, even when using hotplug balloon memory?
>>
>> I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
>> will be used for real memory hotplug only. Well, you _can_ use it for
>> mapping of foreign pages, but you'd have to:
>>
>> echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
>
> Uh, I've completely missed the point, then. I assume there's some
> reason for not doing it by default? (using empty hotplug ranges
> to map foreign memory)
This dates back to 2015. See commit 1cf6a6c82918c9aad.
I guess we could initialize xen_hotplug_unpopulated with 1 for PVH
dom0.
Juergen
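To act on the `echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated` opt-in quoted above from a tool, a guarded sketch (the knob only exists with CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y, and writing it needs root in dom0):

```python
# Sketch: opt in to backing foreign mappings with unpopulated hotplug
# ranges, guarding for kernels/domains where the knob is absent.
import os

KNOB = "/proc/sys/xen/balloon/hotplug_unpopulated"

def enable_hotplug_unpopulated(path=KNOB):
    """Write 1 to the balloon knob; return the resulting value, or
    None if the knob is absent or unwritable (not a Xen dom0 with
    balloon hotplug support, or not running as root)."""
    if not os.access(path, os.W_OK):
        return None
    with open(path, "w") as f:
        f.write("1")
    with open(path) as f:
        return f.read().strip()

print(enable_hotplug_unpopulated())  # None outside a Xen dom0
```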
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
2020-07-22 9:23 ` Jürgen Groß
@ 2020-07-22 9:30 ` Roger Pau Monné
2020-07-22 9:40 ` Jürgen Groß
0 siblings, 1 reply; 8+ messages in thread
From: Roger Pau Monné @ 2020-07-22 9:30 UTC (permalink / raw)
To: Jürgen Groß; +Cc: xen-devel, boris.ostrovsky, osstest service owner
On Wed, Jul 22, 2020 at 11:23:20AM +0200, Jürgen Groß wrote:
> On 22.07.20 11:02, Roger Pau Monné wrote:
> > On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
> > > On 22.07.20 10:38, Roger Pau Monné wrote:
> > > > On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
> > > > > flight 152067 xen-unstable real [real]
> > > > > http://logs.test-lab.xenproject.org/osstest/logs/152067/
> > > > >
> > > > > Regressions :-(
> > > > >
> > > > > Tests which did not succeed and are blocking,
> > > > > including tests which could not be run:
> > > > > test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
> > > >
> > > > Failure was caused by:
> > > >
> > > > [...]
> > > >
> > > > AFAICT from the kernel config used for the test [0]
> > > > CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
> > > > the memory exhaustion is coming from. Maybe 512M is too low for a PVH
> > > > dom0, even when using hotplug balloon memory?
> > >
> > > I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
> > > will be used for real memory hotplug only. Well, you _can_ use it for
> > > mapping of foreign pages, but you'd have to:
> > >
> > > echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
> >
> > Uh, I've completely missed the point, then. I assume there's some
> > reason for not doing it by default? (using empty hotplug ranges
> > to map foreign memory)
>
> This dates back to 2015. See commit 1cf6a6c82918c9aad.
>
> I guess we could initialize xen_hotplug_unpopulated with 1 for PVH
> dom0.
Would you like me to enable it in osstest first, and then we can see
about enabling it by default?
Roger.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
2020-07-22 9:30 ` Roger Pau Monné
@ 2020-07-22 9:40 ` Jürgen Groß
0 siblings, 0 replies; 8+ messages in thread
From: Jürgen Groß @ 2020-07-22 9:40 UTC (permalink / raw)
To: Roger Pau Monné; +Cc: xen-devel, boris.ostrovsky, osstest service owner
On 22.07.20 11:30, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 11:23:20AM +0200, Jürgen Groß wrote:
>> On 22.07.20 11:02, Roger Pau Monné wrote:
>>> On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
>>>> On 22.07.20 10:38, Roger Pau Monné wrote:
>>>>> On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
>>>>>> flight 152067 xen-unstable real [real]
>>>>>> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>>>>>>
>>>>>> Regressions :-(
>>>>>>
>>>>>> Tests which did not succeed and are blocking,
>>>>>> including tests which could not be run:
>>>>>> test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
>>>>>
>>>>> Failure was caused by:
>>>>>
>>>>> [...]
>>>>>
>>>>> AFAICT from the kernel config used for the test [0]
>>>>> CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
>>>>> the memory exhaustion is coming from. Maybe 512M is too low for a PVH
>>>>> dom0, even when using hotplug balloon memory?
>>>>
>>>> I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
>>>> will be used for real memory hotplug only. Well, you _can_ use it for
>>>> mapping of foreign pages, but you'd have to:
>>>>
>>>> echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
>>>
>>> Uh, I've completely missed the point, then. I assume there's some
>>> reason for not doing it by default? (using empty hotplug ranges
>>> to map foreign memory)
>>
>> This dates back to 2015. See commit 1cf6a6c82918c9aad.
>>
>> I guess we could initialize xen_hotplug_unpopulated with 1 for PVH
>> dom0.
>
> Would you like me to enable it in osstest first, and then we can see
> about enabling it by default?
Yes, good idea.
Juergen
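A toy model of the default being agreed on here, initializing xen_hotplug_unpopulated to 1 for PVH dom0 only, with the domain-type predicates reduced to plain booleans (the real check would live in the kernel's balloon driver; nothing below is actual kernel code):

```python
# Mock of the proposed boot-time default: use unpopulated hotplug
# ranges for foreign mappings on PVH dom0, where the memory tricks
# available to a PV dom0 do not apply. The two flags stand in for the
# kernel's xen_pvh_domain()/xen_initial_domain() helpers.
def init_hotplug_default(pvh_domain: bool, initial_domain: bool) -> int:
    """Return the proposed default for xen_hotplug_unpopulated."""
    return 1 if (pvh_domain and initial_domain) else 0

assert init_hotplug_default(True, True) == 1    # PVH dom0: opt in
assert init_hotplug_default(False, True) == 0   # PV dom0: keep 2015 default
assert init_hotplug_default(True, False) == 0   # PVH domU: keep default
print("PVH dom0 default:", init_hotplug_default(True, True))
```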
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2020-07-22 9:41 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-22 0:37 [xen-unstable test] 152067: regressions - trouble: fail/pass/starved osstest service owner
2020-07-22 8:34 ` Jan Beulich
2020-07-22 8:38 ` Roger Pau Monné
2020-07-22 8:59 ` Jürgen Groß
2020-07-22 9:02 ` Roger Pau Monné
2020-07-22 9:23 ` Jürgen Groß
2020-07-22 9:30 ` Roger Pau Monné
2020-07-22 9:40 ` Jürgen Groß
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).