From: <bercarug@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
David Woodhouse <dwmw2@infradead.org>,
abelgun@amazon.com
Subject: Re: PVH dom0 creation fails - the system freezes
Date: Wed, 25 Jul 2018 13:06:43 +0300 [thread overview]
Message-ID: <88eaaa06-24c9-d474-c40a-f37bafe1ad67@amazon.com> (raw)
In-Reply-To: <5B56F77202000078001D717D@prv1-mh.provo.novell.com>
[-- Attachment #1: Type: text/plain, Size: 3510 bytes --]
On 07/24/2018 12:54 PM, Jan Beulich wrote:
>>>> On 23.07.18 at 13:50, <bercarug@amazon.com> wrote:
>> For the last few days, I have been trying to get a PVH dom0 running,
>> however I encountered the following problem: the system seems to
>> freeze after the hypervisor boots, the screen goes black. I have tried to
>> debug it via a serial console (using Minicom) and managed to get some
>> more Xen output, after the screen turns black.
>>
>> I mention that I have tried to boot the PVH dom0 using different kernel
>> images (from 4.9.0 to 4.18-rc3), different Xen versions (4.10, 4.11, 4.12).
>>
>> Below I attached my system / hypervisor configuration, as well as the
>> output captured through the serial console, corresponding to the latest
>> versions for Xen and the Linux Kernel (Xen staging and Kernel from the
>> xen/tip tree).
>> [...]
>> (XEN) [VT-D]iommu.c:919: iommu_fault_status: Fault Overflow
>> (XEN) [VT-D]iommu.c:921: iommu_fault_status: Primary Pending Fault
>> (XEN) [VT-D]DMAR:[DMA Write] Request device [0000:00:14.0] fault addr 8deb3000, iommu reg = ffff82c00021b000
>> (XEN) [VT-D]DMAR: reason 05 - PTE Write access is not set
>> (XEN) print_vtd_entries: iommu #0 dev 0000:00:14.0 gmfn 8deb3
>> (XEN) root_entry[00] = 1021c60001
>> (XEN) context[a0] = 2_1021d6d001
>> (XEN) l4[000] = 9c00001021d6c107
>> (XEN) l3[002] = 9c00001021d3e107
>> (XEN) l2[06f] = 9c000010218c0107
>> (XEN) l1[0b3] = 8000000000000000
>> (XEN) l1[0b3] not present
>> (XEN) Dom0 callback via changed to Direct Vector 0xf3
> This might be a hint at a missing RMRR entry in the ACPI tables, as
> we've seen to be the case for a number of systems (I dare to guess
> that 0000:00:14.0 is a USB controller, perhaps one with a keyboard
> and/or mouse connected). You may want to play with the respective
> command line option ("rmrr="). Note that "iommu_inclusive_mapping"
> as you're using it does not have any meaning for PVH (see
> intel_iommu_hwdom_init()).
>
> Jan
>
>
>
Hello,
Following Roger's advice, I rebuilt Xen (4.12) from the staging branch
and managed to get a PVH dom0 to start. However, some other problems
appeared:
1) The USB devices (keyboard and mouse) are no longer usable, so the
system is accessible only through the serial port.
2) I can run ordinary commands in dom0, but any command involving xl
(except "xl info") makes the system run out of memory very quickly.
Eventually, when no free memory is left, the OOM killer starts killing
processes until the system reboots.
I attached a file containing the output of lsusb, as well as the
output of "xl info" and "xl list -l".
After running "xl list -l", the subsequent "free -m" commands show the
available memory steadily decreasing.
Each command has a timestamp appended, so the rate at which the
available memory drops can be seen.
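The sampling above was done by hand; it can also be scripted. A minimal sketch (my assumption: reading MemAvailable from /proc/meminfo rather than parsing free's output, which is the same number free -m reports in its "available" column):

```shell
#!/bin/sh
# Sample MemAvailable (in MiB) a few times, one second apart, to see
# how fast dom0's free memory drains after running an xl command.
samples=""
for i in 1 2 3; do
    mib=$(awk '/^MemAvailable:/ {printf "%d", $2/1024}' /proc/meminfo)
    echo "$(date +%T) available: ${mib} MiB"
    samples="$samples $mib"
    sleep 1
done
```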
I removed most of the process-killing logs and kept only the last one,
since they all follow the same pattern.
Dom0 still appears to be of type PV in the output of "xl list -l";
however, during boot the following messages were displayed: "Building
a PVH Dom0" and "Booting paravirtualized kernel on Xen PVH".
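For what it's worth, the type reported for dom0 can be pulled programmatically out of the "xl list -l" JSON; a minimal sketch, with the JSON literal abbreviated from the attached output:

```python
import json

# Abbreviated from the attached "xl list -l" output.
xl_list_output = """
[
  {
    "domid": 0,
    "config": {
      "c_info": { "type": "pv", "name": "Domain-0" }
    }
  }
]
"""

domains = json.loads(xl_list_output)
dom0 = domains[0]
print(dom0["config"]["c_info"]["type"])  # prints "pv"
```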
Note that I had to add "workaround_bios_bug" to the "iommu=" option in
GRUB_CMDLINE_XEN to get dom0 running.
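For reference, the relevant GRUB line ends up as follows (the option set is taken from the xen_commandline shown in the attached "xl info" output; the /etc/default/grub path is the usual Debian location and is an assumption about this setup):

```shell
# /etc/default/grub (usual location on Debian; adjust for your distro).
# "workaround_bios_bug" makes the VT-d code tolerate missing or broken
# ACPI RMRR entries instead of failing device setup.
GRUB_CMDLINE_XEN="dom0=pvh dom0_mem=8192M loglvl=all console=com1,vga com1=115200,8n1 iommu=debug,verbose,workaround_bios_bug"
```

followed by running update-grub to regenerate the boot entries.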
What could be causing this loss of available memory?
Thank you,
Gabriel
Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
[-- Attachment #2: lsusb.txt --]
[-- Type: text/plain, Size: 17884 bytes --]
root@debian:/home/test# lsusb && date +%c
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Wed 25 Jul 2018 04:59:25 AM EDT
root@debian:/home/test# xl info && date +%c
host : debian
release : 4.17.0-rc5
version : #4 SMP Tue Jul 24 06:12:21 EDT 2018
machine : x86_64
nr_cpus : 8
max_cpu_id : 7
nr_nodes : 1
cores_per_socket : 4
threads_per_core : 2
cpu_mhz : 3792.227
hw_caps : bfebfbff:77faf3ff:2c100800:00000121:0000000f:009c6fbf:00000000:00000100
virt_caps : hvm hvm_directio
total_memory : 65217
free_memory : 56242
sharing_freed_memory : 0
sharing_used_memory : 0
outstanding_claims : 0
free_cpus : 0
xen_major : 4
xen_minor : 12
xen_extra : -unstable
xen_version : 4.12-unstable
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : Thu Jun 28 10:54:01 2018 +0300 git:61bdddb821
xen_commandline : placeholder dom0=pvh dom0_mem=8192M loglvl=all sync_console console_to_ring=true console=com1,vga com1=115200,8n1 iommu=debug,verbose,workaround_bios_bug
cc_compiler : gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
cc_compile_by : root
cc_compile_domain :
cc_compile_date : Tue Jul 24 04:03:02 EDT 2018
build_id : e6d3e802a6420aae9e2e25dd5941c5d24adad026
xend_config_format : 4
Wed 25 Jul 2018 04:59:38 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 7977 179 7549 8 248 7560
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:44 AM EDT
root@debian:/home/test# xl list -l && date +%c
[
{
"domid": 0,
"config": {
"c_info": {
"type": "pv",
"name": "Domain-0"
},
"b_info": {
"max_memkb": 17179869180,
"target_memkb": 8387743,
"sched_params": {
"sched": "credit",
"weight": 256,
"cap": 0
},
"type.pv": {
},
"arch_arm": {
}
}
}
}
]
Wed 25 Jul 2018 04:59:52 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 7129 180 6701 8 248 6711
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:53 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 6441 180 6012 8 248 6023
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:54 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 5789 180 5360 8 248 5371
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:55 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 5007 181 4578 8 248 4589
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:56 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 4317 180 3888 8 248 3899
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:57 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 3603 181 3174 8 248 3184
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:57 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 2863 181 2434 8 248 2444
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:58 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 2169 181 1739 8 248 1750
Swap: 65120 0 65120
Wed 25 Jul 2018 04:59:59 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 1495 182 1064 8 248 1075
Swap: 65120 0 65120
Wed 25 Jul 2018 05:00:00 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 775 182 343 8 248 354
Swap: 65120 0 65120
Wed 25 Jul 2018 05:00:00 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 293 151 117 1 24 46
Swap: 65120 57 65063
Wed 25 Jul 2018 05:00:01 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 293 139 129 1 24 4
Swap: 65120 55 65065
Wed 25 Jul 2018 05:00:02 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 293 139 126 1 26 2
Swap: 65120 55 65065
Wed 25 Jul 2018 05:00:03 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 246 110 116 0 18 43
Swap: 65120 89 65031
Wed 25 Jul 2018 05:00:04 AM EDT
root@debian:/home/test# free -m && date +%c
total used free shared buff/cache available
Mem: 246 111 116 0 18 42
Swap: 65120 90 65030
Wed 25 Jul 2018 05:00:05 AM EDT
root@debian:/home/test# free -m && date +%c
[...]
[ 255.133877] Out of memory: Kill process 971 (systemd-cgroups) score 0 or sacrifice child
[ 255.142990] Killed process 971 (systemd-cgroups) total-vm:4804kB, anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[ 255.184192] systemd invoked oom-killer: gfp_mask=0x14200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
[ 255.196535] systemd cpuset=/ mems_allowed=0
[ 255.201282] CPU: 7 PID: 1 Comm: systemd Not tainted 4.17.0-rc5 #4
[ 255.208161] Hardware name: , BIOS
[ 255.212232] Call Trace:
[ 255.215048] dump_stack+0x5c/0x7b
[ 255.218829] dump_header+0x6b/0x28c
[ 255.222801] ? find_lock_task_mm+0x52/0x80
[ 255.227457] ? oom_unkillable_task+0x9b/0xc0
[ 255.232304] out_of_memory+0x328/0x480
[ 255.236569] __alloc_pages_slowpath+0xd25/0xe00
[ 255.241707] ? __do_page_cache_readahead+0x129/0x2e0
[ 255.247329] __alloc_pages_nodemask+0x212/0x250
[ 255.252469] filemap_fault+0x3a0/0x650
[ 255.256737] ? alloc_set_pte+0x39c/0x520
[ 255.261195] ? filemap_map_pages+0x182/0x330
[ 255.266049] ext4_filemap_fault+0x2c/0x40 [ext4]
[ 255.271279] __do_fault+0x1f/0xb3
[ 255.275059] __handle_mm_fault+0xbdf/0x1110
[ 255.279809] handle_mm_fault+0xfc/0x1f0
[ 255.284171] __do_page_fault+0x255/0x4f0
[ 255.288632] ? exit_to_usermode_loop+0xa3/0xc0
[ 255.293672] ? page_fault+0x8/0x30
[ 255.297551] page_fault+0x1e/0x30
[ 255.301332] RIP: 0033:0x7f537ebdad50
[ 255.305408] RSP: 002b:00007ffec5e9bdb8 EFLAGS: 00010202
[ 255.311318] RAX: 0000000000000000 RBX: 00005601f8de19e0 RCX: 00007f537d85db00
[ 255.319365] RDX: 00005601f8de19e0 RSI: 00007f537ecdd76c RDI: 00005601f8de19e0
[ 255.327412] RBP: 00007f537ecdd76c R08: 00007f537d85dbb8 R09: 0000000000000060
[ 255.335460] R10: 00007f537efb0940 R11: 0000000000000206 R12: 00007ffec5e9bde0
[ 255.343508] R13: 00007ffec5e9bef0 R14: 0000000000000000 R15: 0000000000000009
[ 255.351563] Mem-Info:
[ 255.354178] active_anon:10 inactive_anon:4 isolated_anon:0
[ 255.354178] active_file:129 inactive_file:6 isolated_file:0
[ 255.354178] unevictable:0 dirty:0 writeback:0 unstable:0
[ 255.354178] slab_reclaimable:2925 slab_unreclaimable:4868
[ 255.354178] mapped:0 shmem:0 pagetables:65 bounce:0
[ 255.354178] free:26541 free_pcp:0 free_cma:0
[ 255.389666] Node 0 active_anon:40kB inactive_anon:16kB active_file:516kB inactive_file:24kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? yes
[ 255.418748] Node 0 DMA free:15880kB min:132kB low:164kB high:196kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15964kB managed:15880kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 255.447834] lowmem_reserve[]: 0 1889 7707 7707 7707
[ 255.453359] Node 0 DMA32 free:39428kB min:16532kB low:20664kB high:24796kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:2279640kB managed:44552kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 255.483415] lowmem_reserve[]: 0 0 5818 5818 5818
[ 255.488651] Node 0 Normal free:50856kB min:50912kB low:63640kB high:76368kB active_anon:40kB inactive_anon:16kB active_file:708kB inactive_file:104kB unevictable:0kB writepending:0kB present:6092996kB managed:139648kB mlocked:0kB kernel_stack:2240kB pagetables:260kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 255.519966] lowmem_reserve[]: 0 0 0 0 0
[ 255.524326] Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15880kB
[ 255.538966] Node 0 DMA32: 9*4kB (UM) 6*8kB (UM) 3*16kB (UM) 4*32kB (M) 6*64kB (M) 7*128kB (UM) 4*256kB (M) 6*512kB (M) 3*1024kB (M) 1*2048kB (U) 7*4096kB (UM) = 39428kB
[ 255.555839] Node 0 Normal: 551*4kB (UME) 356*8kB (UME) 185*16kB (UME) 90*32kB (ME) 50*64kB (UME) 36*128kB (UME) 12*256kB (UM) 6*512kB (M) 8*1024kB (UM) 9*2048kB (M) 0*4096kB = 51468kB
[ 255.574160] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 255.583952] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 255.593453] 206 total pagecache pages
[ 255.597628] 6 pages in swap cache
[ 255.601404] Swap cache stats: add 127798, delete 127855, find 93647/162354
[ 255.609164] Free swap = 66670076kB
[ 255.613138] Total swap = 66683900kB
[ 255.617115] 2097150 pages RAM
[ 255.620500] 0 pages HighMem/MovableOnly
[ 255.624867] 2047130 pages reserved
[ 255.628746] 0 pages hwpoisoned
[ 255.632230] Unreclaimable slab info:
[ 255.636308] Name Used Total
[ 255.642418] scsi_sense_cache 41KB 56KB
[ 255.648329] ip6_dst_cache 3KB 15KB
[ 255.654245] RAWv6 27KB 31KB
[ 255.660161] sgpool-128 8KB 8KB
[ 255.666072] cfq_io_cq 7KB 19KB
[ 255.671988] cfq_queue 11KB 23KB
[ 255.677901] mqueue_inode_cache 0KB 3KB
[ 255.683910] dnotify_struct 0KB 3KB
[ 255.689828] secpath_cache 0KB 8KB
[ 255.695738] RAW 30KB 30KB
[ 255.701654] hugetlbfs_inode_cache 1KB 7KB
[ 255.707955] eventpoll_pwq 4KB 23KB
[ 255.713872] eventpoll_epi 7KB 32KB
[ 255.719785] request_queue 4KB 12KB
[ 255.725698] blkdev_ioc 6KB 15KB
[ 255.731613] biovec-max 96KB 96KB
[ 255.737526] biovec-128 4KB 4KB
[ 255.743442] biovec-64 293KB 328KB
[ 255.749357] dmaengine-unmap-256 2KB 6KB
[ 255.755466] dmaengine-unmap-128 3KB 22KB
[ 255.761570] dmaengine-unmap-16 6KB 7KB
[ 255.767581] dmaengine-unmap-2 0KB 3KB
[ 255.773497] skbuff_fclone_cache 51KB 84KB
[ 255.779604] skbuff_head_cache 74KB 148KB
[ 255.785518] net_namespace 6KB 6KB
[ 255.791431] shmem_inode_cache 609KB 665KB
[ 255.797348] taskstats 3KB 3KB
[ 255.803259] proc_dir_entry 176KB 192KB
[ 255.809176] pde_opener 0KB 3KB
[ 255.815088] seq_file 2KB 8KB
[ 255.821002] sigqueue 7KB 11KB
[ 255.826915] kernfs_node_cache 2792KB 2812KB
[ 255.832832] mnt_cache 29KB 48KB
[ 255.838745] filp 83KB 352KB
[ 255.844661] names_cache 56KB 56KB
[ 255.850572] vm_area_struct 108KB 566KB
[ 255.856486] mm_struct 76KB 96KB
[ 255.862404] files_cache 29KB 45KB
[ 255.868314] signal_cache 203KB 232KB
[ 255.874227] sighand_cache 406KB 420KB
[ 255.880146] task_struct 655KB 655KB
[ 255.886057] cred_jar 57KB 165KB
[ 255.891972] anon_vma 17KB 105KB
[ 255.897885] pid 49KB 276KB
[ 255.903798] Acpi-Operand 590KB 606KB
[ 255.909715] Acpi-Parse 4KB 15KB
[ 255.915630] Acpi-State 5KB 19KB
[ 255.921545] Acpi-Namespace 221KB 228KB
[ 255.927455] numa_policy 0KB 3KB
[ 255.933370] trace_event_file 114KB 126KB
[ 255.939284] ftrace_event_field 148KB 159KB
[ 255.945297] pool_workqueue 115KB 328KB
[ 255.951211] task_group 12KB 27KB
[ 255.957124] kmalloc-2097152 2048KB 2048KB
[ 255.963038] kmalloc-262144 768KB 768KB
[ 255.968953] kmalloc-131072 128KB 128KB
[ 255.974864] kmalloc-32768 288KB 288KB
[ 255.980778] kmalloc-16384 384KB 384KB
[ 255.986697] kmalloc-8192 712KB 712KB
[ 255.992607] kmalloc-4096 596KB 600KB
[ 255.998521] kmalloc-2048 1546KB 1592KB
[ 256.004436] kmalloc-1024 1158KB 1216KB
[ 256.010352] kmalloc-512 522KB 604KB
[ 256.016264] kmalloc-256 125KB 132KB
[ 256.022179] kmalloc-192 236KB 267KB
[ 256.028092] kmalloc-96 178KB 252KB
[ 256.034009] kmalloc-64 298KB 360KB
[ 256.039924] kmalloc-32 411KB 430KB
[ 256.045839] kmalloc-128 104KB 132KB
[ 256.051749] kmem_cache 33KB 40KB
[ 256.057665] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[ 256.067170] [ 273] 0 273 11680 1 122880 363 -1000 systemd-udevd
[ 256.076962] Kernel panic - not syncing: Out of memory and no killable processes...
[ 256.076962]
[ 256.087232] CPU: 7 PID: 1 Comm: systemd Not tainted 4.17.0-rc5 #4
[ 256.094113] Hardware name: , BIOS
[ 256.098185] Call Trace:
[ 256.100999] dump_stack+0x5c/0x7b
[ 256.104780] panic+0xe4/0x252
[ 256.108173] ? dump_header+0x189/0x28c
[ 256.112437] out_of_memory+0x334/0x480
[ 256.116705] __alloc_pages_slowpath+0xd25/0xe00
[ 256.121843] ? __do_page_cache_readahead+0x129/0x2e0
[ 256.127465] __alloc_pages_nodemask+0x212/0x250
[ 256.132603] filemap_fault+0x3a0/0x650
[ 256.136871] ? alloc_set_pte+0x39c/0x520
[ 256.141330] ? filemap_map_pages+0x182/0x330
[ 256.146184] ext4_filemap_fault+0x2c/0x40 [ext4]
[ 256.151412] __do_fault+0x1f/0xb3
[ 256.155197] __handle_mm_fault+0xbdf/0x1110
[ 256.159948] handle_mm_fault+0xfc/0x1f0
[ 256.164306] __do_page_fault+0x255/0x4f0
[ 256.168769] ? exit_to_usermode_loop+0xa3/0xc0
[ 256.173807] ? page_fault+0x8/0x30
[ 256.177687] page_fault+0x1e/0x30
[ 256.181467] RIP: 0033:0x7f537ebdad50
[ 256.185538] RSP: 002b:00007ffec5e9bdb8 EFLAGS: 00010202
[ 256.191455] RAX: 0000000000000000 RBX: 00005601f8de19e0 RCX: 00007f537d85db00
[ 256.199500] RDX: 00005601f8de19e0 RSI: 00007f537ecdd76c RDI: 00005601f8de19e0
[ 256.207546] RBP: 00007f537ecdd76c R08: 00007f537d85dbb8 R09: 0000000000000060
[ 256.215593] R10: 00007f537efb0940 R11: 0000000000000206 R12: 00007ffec5e9bde0
[ 256.223642] R13: 00007ffec5e9bef0 R14: 0000000000000000 R15: 0000000000000009
[ 256.231752] Kernel Offset: disabled
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
[-- Attachment #3: Type: text/plain, Size: 157 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel