* repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: PGNet Dev @ 2016-06-29 0:06 UTC
To: xen-devel

(relo'd from user list)

I've launched an Ubuntu PVHVM guest on a Xen 4.7 host:

    cat guest1.cfg
      name = 'guest1'
      builder = 'hvm'
      xen_platform_pci = 1
      device_model_version = 'qemu-xen'
      bios = 'ovmf'
      bios_override = '/usr/share/qemu/ovmf-x86_64.bin'
      ...

On guest launch, `xl dmesg` in the host reports

    ...
    (XEN) [2016-06-27 21:54:09] HVM1 restore: CPU 0
    (d1) [2016-06-27 21:54:10] HVM Loader
    (d1) [2016-06-27 21:54:10] Detected Xen v4.7.0_08-452
    (d1) [2016-06-27 21:54:10] Xenbus rings @0xfeffc000, event channel 1
    (d1) [2016-06-27 21:54:10] System requested OVMF
    (d1) [2016-06-27 21:54:10] CPU speed is 3093 MHz
    (d1) [2016-06-27 21:54:10] Relocating guest memory for lowmem MMIO space disabled
    (d1) [2016-06-27 21:54:10] PCI-ISA link 0 routed to IRQ5
    (d1) [2016-06-27 21:54:10] PCI-ISA link 1 routed to IRQ10
    (d1) [2016-06-27 21:54:10] PCI-ISA link 2 routed to IRQ11
    (d1) [2016-06-27 21:54:10] PCI-ISA link 3 routed to IRQ5
    (d1) [2016-06-27 21:54:10] pci dev 01:3 INTA->IRQ10
    (d1) [2016-06-27 21:54:10] pci dev 02:0 INTA->IRQ11
    (d1) [2016-06-27 21:54:10] pci dev 04:0 INTA->IRQ5
    (d1) [2016-06-27 21:54:10] No RAM in high memory; setting high_mem resource base to 100000000
    (d1) [2016-06-27 21:54:10] pci dev 02:0 bar 14 size 001000000: 0f0000008
    (d1) [2016-06-27 21:54:10] pci dev 03:0 bar 10 size 001000000: 0f1000008
    (d1) [2016-06-27 21:54:10] pci dev 04:0 bar 30 size 000040000: 0f2000000
    (d1) [2016-06-27 21:54:10] pci dev 04:0 bar 10 size 000020000: 0f2040000
    (d1) [2016-06-27 21:54:10] pci dev 03:0 bar 30 size 000010000: 0f2060000
    (d1) [2016-06-27 21:54:10] pci dev 03:0 bar 18 size 000001000: 0f2070000
    (d1) [2016-06-27 21:54:10] pci dev 02:0 bar 10 size 000000100: 00000c001
    (d1) [2016-06-27 21:54:10] pci dev 04:0 bar 14 size 000000040: 00000c101
    (d1) [2016-06-27 21:54:10] pci dev 01:1 bar 20 size 000000010: 00000c141
    (d1) [2016-06-27 21:54:10] Multiprocessor initialisation:
    (d1) [2016-06-27 21:54:10]  - CPU0 ... 39-bit phys ... fixed MTRRs ... var MTRRs [1/8] ... done.
    (d1) [2016-06-27 21:54:10] Writing SMBIOS tables ...
    (d1) [2016-06-27 21:54:10] Loading OVMF ...
    (XEN) [2016-06-27 21:54:10] d1v0 Over-allocation for domain 1: 524545 > 524544
    (d1) [2016-06-27 21:54:10] Loading ACPI ...
    (d1) [2016-06-27 21:54:10] vm86 TSS at fc009f00
    (d1) [2016-06-27 21:54:10] BIOS map:
    (d1) [2016-06-27 21:54:10]  ffe00000-ffffffff: Main BIOS
    (d1) [2016-06-27 21:54:10] E820 table:
    (d1) [2016-06-27 21:54:10]  [00]: 00000000:00000000 - 00000000:000a0000: RAM
    (d1) [2016-06-27 21:54:10]  HOLE: 00000000:000a0000 - 00000000:000f0000
    (d1) [2016-06-27 21:54:10]  [01]: 00000000:000f0000 - 00000000:00100000: RESERVED
    (d1) [2016-06-27 21:54:10]  [02]: 00000000:00100000 - 00000000:7eeb6000: RAM
    (d1) [2016-06-27 21:54:10]  HOLE: 00000000:7eeb6000 - 00000000:fc000000
    (d1) [2016-06-27 21:54:10]  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (d1) [2016-06-27 21:54:10] Invoking OVMF ...

At this point, the guest's up,

    xl list
      Name         ID   Mem VCPUs      State   Time(s)
      Domain-0      0  4096     1     r-----      31.1
      guest1        1  2049     1     -b----      17.4

accessible and functional:

    uname -a
      Linux guest1 4.4.0-24-generic #43-Ubuntu SMP Wed Jun 8 19:27:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

As long as the guest is up, the host logs continuously fill with over-allocation errors:

    xl dmesg
      (XEN) [2016-06-27 21:54:32] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:54:32] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:54:34] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:54:38] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:54:46] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:55:02] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:55:35] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:56:07] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:56:39] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:57:11] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:57:43] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:58:15] d1v0 Over-allocation for domain 1: 524545 > 524544
      (XEN) [2016-06-27 21:58:47] d1v0 Over-allocation for domain 1: 524545 > 524544
      ...

With multiple guests, it quickly gets ridiculous:

    xl dmesg | grep -i Over-allocation | wc -l
      22787

What are these 'Over-allocation' messages?

What needs to be fixed, or, if of no concern, can these messages be silenced?

Happy to provide any additional logs identified as useful.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: Jan Beulich @ 2016-06-29 10:07 UTC
To: pgnet.dev; +Cc: xen-devel

>>> On 29.06.16 at 02:06, <pgnet.dev@gmail.com> wrote:
> What are these 'Over-allocation' messages?

An indication of the guest trying to allocate more memory than the host
admin has allowed.

> What needs to be fixed, or if of no concern, can these messages be silenced?

Perhaps something wrong in the guest's balloon driver. As to silencing -
the message already is a guest one at info level, i.e. not going to get
issued at all with default settings.

Jan
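For context on Jan's "default settings" remark: the verbosity of guest messages in `xl dmesg` is governed by the `guest_loglvl=` option on the Xen boot line. The sketch below is illustrative only; the option names and level keywords are from memory of the Xen command-line documentation (not from this thread) and should be verified for your Xen version:

```shell
# Illustrative Xen host boot options (assumed syntax, verify against the
# xen-command-line documentation for your version). The default of
# guest_loglvl=none/warning suppresses info-level guest messages such as
# the 'Over-allocation' line; it appears here because verbosity was
# raised for debugging, e.g.:
XEN_CMDLINE="loglvl=all guest_loglvl=all"
echo "example Xen boot options: ${XEN_CMDLINE}"
```

Turning `guest_loglvl` back down would be the "silencing" route Jan alludes to, at the cost of losing other info-level guest output.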
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: PGNet Dev @ 2016-06-29 12:58 UTC
To: Jan Beulich; +Cc: xen-devel

On 06/29/2016 03:07 AM, Jan Beulich wrote:
>> What are these 'Over-allocation' messages?
> An indication of the guest trying to allocate more memory than the
> host admin has allowed.

Currently, each guest has allocated

    maxmem = 2048
    memory = 2048

>> What needs to be fixed, or if of no concern, can these messages be silenced?
>
> Perhaps something wrong in the guest's balloon driver.

I'm seeing these @host log-entries for Ubuntu, Arch & Opensuse guests.

Since a guest issue is suspected, would a post of debug-level dmesg
output from the guest be useful?
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: PGNet Dev @ 2016-06-29 14:10 UTC
To: xen-devel

fyi, per

  Verify Xen Project PVHVM drivers are working in the Linux HVM guest kernel
  http://wiki.xen.org/wiki/Xen_Linux_PV_on_HVM_drivers

  "Run "dmesg | egrep -i 'xen|front'" in the HVM guest VM."

with Guest cmd line

    ... systemd.log_level=debug systemd.log_target=kmsg earlyprintk=vga,keep loglevel=9

after guest reboot, the output of

    dmesg | egrep -i 'xen|front'

in the booted VM is

    http://pastebin.com/raw/N0FtTtfC

with no mention of the 'balloon' drivers as doc'd on that page.
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: Jan Beulich @ 2016-06-29 14:17 UTC
To: pgnet.dev; +Cc: xen-devel

>>> On 29.06.16 at 14:58, <pgnet.dev@gmail.com> wrote:
> On 06/29/2016 03:07 AM, Jan Beulich wrote:
>>> What needs to be fixed, or if of no concern, can these messages be silenced?
>>
>> Perhaps something wrong in the guest's balloon driver.
>
> I'm seeing these @host log-entries for Ubuntu, Arch & Opensuse guests.
>
> Since a guest issue is suspected, would a post of debug-level dmesg
> output from the guest be useful?

I don't think a guest would itself issue any relevant messages.

Jan
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: PGNet Dev @ 2016-06-29 15:38 UTC
Cc: xen-devel

On 06/29/2016 07:17 AM, Jan Beulich wrote:
> I don't think a guest would itself issue any relevant messages.

You mentioned ballooning in the guest. The doc I found addressed
ballooning in the guest.

If not that, then what output, specifically, would be helpful in
troubleshooting this? I'd prefer to avoid the guessing game, and provide
what's needed.
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: Jan Beulich @ 2016-06-29 15:59 UTC
To: pgnet.dev; +Cc: xen-devel

>>> On 29.06.16 at 17:38, <pgnet.dev@gmail.com> wrote:
> On 06/29/2016 07:17 AM, Jan Beulich wrote:
>> I don't think a guest would itself issue any relevant messages.
>
> You mentioned ballooning in the guest. The doc I found addressed
> ballooning in the guest.
>
> If not that, then what output, specifically, would be helpful in
> troubleshooting this?

I'm simply not aware of existing output which would help; I can't see
any way around instrumenting involved code.

Jan
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: PGNet Dev @ 2016-06-29 16:27 UTC
To: xen-devel

In summary, there's a problem

    An indication of the guest trying to allocate more memory than the
    host admin has allowed.

that's filling logs with tens of thousands of redundant log entries,
with a suspicion that it's a 'ballooning' issue in the guest

    Perhaps something wrong in the guest's balloon driver.

and no currently known way to identify or troubleshoot the problem, or
to provide info here that could be helpful

    I'm simply not aware of existing output which would help; I can't
    see any way around instrumenting involved code.

Not particularly ideal.

Since this is the recommended bug-report channel, any next suggestions?

Is there a particular dev involved in the ballooning that can be cc'd,
perhaps to add some insight?
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: George Dunlap @ 2016-07-04 11:22 UTC
To: pgnet.dev; +Cc: xen-devel

On Wed, Jun 29, 2016 at 5:27 PM, PGNet Dev <pgnet.dev@gmail.com> wrote:
> In summary, there's a problem
>
>     An indication of the guest trying to allocate more memory than the
>     host admin has allowed.
>
> that's filling logs with tens of thousands of redundant log entries,
> with a suspicion that it's a 'ballooning' issue in the guest
>
>     Perhaps something wrong in the guest's balloon driver.
>
> With no currently known way to identify or troubleshoot the problem,
> and provide info here that could be helpful
>
>     I'm simply not aware of existing output which would help; I can't
>     see any way around instrumenting involved code.
>
> Not particularly ideal.
>
> Since this is the recommended bug-report channel, any next suggestions?
>
> Is there a particular dev involved in the ballooning that can be cc'd,
> perhaps to add some insight?

Thanks for your persistence. :-)

It's likely that this is related to a known problem with the interface
between the balloon driver and the toolstack. The warning itself is
benign: it simply means that the balloon driver asked Xen for another
page (thinking incorrectly it was a few pages short), and was told "No"
by Xen.

Fixing it properly requires a re-architecting of the interface between
all the different components that use memory (Xen, qemu, the toolstack,
the guest balloon driver, &c). This is on the to-do list, but since it's
quite a complicated problem, and the main side-effect is mostly just
warnings like this, it hasn't been a high priority.

If the log space is an issue for you, your best bet for now is to turn
down the loglevel so that this warning doesn't show up.

-George
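George's description can be made concrete with a little arithmetic. The model below is illustrative, not the hypervisor's actual code; the only figures taken from the thread are the page counts in the log line and the 2049 MiB shown by `xl list` (the toolstack evidently allocates slightly more than the configured `memory = 2048`, and the "slack" interpretation is an assumption):

```python
# Illustrative model of Xen's allocation-cap check that produces the
# repeated log line. 2049 MiB works out to exactly 524544 4-KiB pages,
# the right-hand number in the message; the balloon driver, believing
# itself a page short, asks for one more and is refused.
PAGE_SIZE = 4096
MIB = 1024 * 1024

def pages(mib):
    """Convert a size in MiB to a count of 4-KiB pages."""
    return mib * MIB // PAGE_SIZE

# 'xl list' showed the guest holding 2049 MiB despite memory = 2048:
max_pages = pages(2049)           # the cap Xen enforces for the domain
assert max_pages == 524544        # right-hand side of the log message

# The balloon driver's target calculation comes out one page higher:
requested = max_pages + 1
if requested > max_pages:         # Xen refuses and logs the warning
    print(f"Over-allocation for domain 1: {requested} > {max_pages}")
```

Because the refused request is simply retried later, the same line repeats indefinitely; the roughly 2 s / 4 s / 8 s / 16 s / 32 s spacing of the timestamps in the original report is consistent with the balloon driver's retry back-off.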
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: PGNet Dev @ 2016-07-04 14:58 UTC
To: George Dunlap; +Cc: xen-devel

On 07/04/2016 04:22 AM, George Dunlap wrote:
> Thanks for your persistence. :-)

I appreciate the reply :-)

> It's likely that this is related to a known problem with the interface
> between the balloon driver and the toolstack. The warning itself is
> benign: it simply means that the balloon driver asked Xen for another
> page (thinking incorrectly it was a few pages short), and was told
> "No" by Xen.

Reading

  https://blog.xenproject.org/2014/02/14/ballooning-rebooting-and-the-feature-youve-never-heard-of/

  "... Populate-on-demand comes into play in Xen whenever you start an
  HVM guest with maxmem and memory set to different values. ..."

which sounds like you can turn ballooning in the DomU off.

But, currently, my DomUs are all PVHVM, and all have

    maxmem = 2048
    memory = 2048

It appears that having 'maxmem' == 'memory' results in the '"No" by Xen'
answer rather than the ballooning driver not being used.

Which is the intended case?

> Fixing it properly requires a re-architecting of the interface between
> all the different components that use memory (Xen, qemu, the
> toolstack, the guest balloon driver, &c). This is on the to-do list,
> but since it's quite a complicated problem,

Sounds like the 'fix is in'. Eventually.

> If the log space is an issue for you your best bet for now is to turn
> down the loglevel so that this warning doesn't show up.

It's less an issue of space, and more that the incessant noise makes
picking out actually important/useful debugging info more of a
challenge. These guests are PVHVM-on-EFI, and the host is Xen 4.7+ on
EFI. The combo enjoys a fair share of issues; hence the debugging
loglevels are higher.

> and the main side-effect
> is mostly just warnings like this it hasn't been a high priority.

That there's no functional ill-effect is the valuable info here.
Warning or not, having tens of thousands of them does not signal 'all
is well' ...

btw, is there a relevant tracking bug for this?

Thanks for the comments!
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: George Dunlap @ 2016-07-05 13:35 UTC
To: pgnet.dev; +Cc: xen-devel

On 04/07/16 15:58, PGNet Dev wrote:
> Reading
>
>   https://blog.xenproject.org/2014/02/14/ballooning-rebooting-and-the-feature-youve-never-heard-of/
>
>   "... Populate-on-demand comes into play in Xen whenever you start an
>   HVM guest with maxmem and memory set to different values. ..."
>
> which sounds like you can turn ballooning in the DomU off.
>
> But, currently, my DomUs are all PVHVM, and all have
>
>     maxmem = 2048
>     memory = 2048
>
> It appears that having 'maxmem' == 'memory' results in the '"No" by
> Xen' answer rather than the ballooning driver not being used.
>
> Which is the intended case?

It's more complicated than that, unfortunately. :-)

A guest has lots of different bits of memory used for different things.
There's guest RAM, but an HVM / PVHVM guest also has ROMs that the BIOS
needs to have access to -- but of course since there isn't really any
ROM, it needs to be allocated as RAM. Then there's extra memory for
video cards &c, all of which from Xen's perspective looks like RAM
allocated to the VM. And just to make things more fun, there are
traditional "holes" in memory where there ends up being nothing anyway.

The toolstack takes the number above and ends up allocating not exactly
2048 MiB to the guest, but a slightly larger number, such that the guest
looks like it has about 2048 MiB of RAM while still taking into account
all of the other random things that need a page here and a page there.
Then it tells Xen, "The maximum amount of memory the guest is allowed to
have is X", and writes a target value in xenstore for the guest to read.

Then inside the guest, there's another process -- the balloon driver --
whose job it is to monitor the 'target value' in xenstore and try to
make the guest's actual memory usage match that. It does this by
releasing pages back to Xen if it thinks the target value is lower than
what it currently has, and by asking Xen for more pages if the target
value is higher than what it currently has. And because sometimes it
takes a while for pages to become free, if it asks for more pages and is
told 'no', it just waits for a bit and asks for pages again.

Unfortunately, the interface was designed for PV guests back in the days
before things were so complicated. The problem is that now the
calculation for "how much memory I need" doesn't match the toolstack's
idea. So when the balloon driver comes up, it looks at the value from
the toolstack, and thinks, "Oh, looks like I'm a page or two short.
Better ask for more."

So the only way to fix this is:

1. Fix the interface so that the balloon driver actually knows that it
   doesn't need to do anything
2. Manually write a lower value into xenstore (different to what the
   toolstack gives)
3. Disable the balloon driver entirely.

Obviously #1 isn't an option for you; if you don't need ballooning, then
#3 is probably the best option -- I think you should be able to
blacklist the balloon driver so that it doesn't actually load. That
should eliminate the warning messages you get.

> btw, is there a relevant tracking bug for this?

Not really. We've tried some bug tracking systems but none have really
"stuck"; instead we end up just keeping track of things individually.

-George
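Options #2 and #3 above can be sketched as follows. This is an illustrative sketch only: the xenstore path and the KiB unit for the balloon target match common Xen usage, but should be verified for your toolstack version, and the module name is taken from the list posting quoted later in this thread:

```shell
# Option 2 (illustrative): write a slightly lower balloon target
# directly into xenstore. The memory/target node is expressed in KiB,
# so a 2048 MiB target is 2048 * 1024 KiB.
DOMID=1
TARGET_KIB=$((2048 * 1024))
echo "would write target ${TARGET_KIB} KiB for domain ${DOMID}"
# On the host (uncomment to actually apply; verify the path first):
# xenstore-write /local/domain/${DOMID}/memory/target ${TARGET_KIB}

# Option 3 (illustrative): if the guest builds the balloon driver as a
# module (xen-balloon.ko, per the list posting quoted below), prevent
# it from loading. Note this does not work when CONFIG_XEN_BALLOON=y
# is built into the kernel, as in the Ubuntu guest config shown in the
# follow-up message.
# echo "blacklist xen_balloon" > /etc/modprobe.d/xen-balloon.conf
```

The `xl mem-set` command rewrites the same xenstore target, so option #2 is fragile: any toolstack operation on the domain may restore the higher value.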
* Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

From: PGNet Dev @ 2016-07-05 14:13 UTC
To: George Dunlap; +Cc: xen-devel

Reading @

  How to know if the balloon driver is running
  http://www.gossamer-threads.com/lists/xen/users/315064#315064

  "... IIRC the core balloon driver is always present when Xen is
  enabled and so the kernel will respond to requests from the
  host/toolstack to change the amount of RAM (e.g "xm/xl mem-set foo"
  in dom0, which would result in changes to /proc/meminfo). In order to
  get in-guest access to control ballooning you need to
  CONFIG_XEN_BALLOON enable and load the xen-balloon.ko module. I'm not
  sure but I think with modern kernels this will appear in /sys and not
  /proc. ... AFAIK the balloon driver is started if you just have
  CONFIG_XEN. Like I said before, CONFIG_XEN_BALLOON enables additional
  support for controlling the balloon driver from within the guest (as
  opposed to from the host toolstack). ..."

in the Guest,

    grep -i config_xen= /boot/*config*
      /boot/config-4.4.0-28-generic:CONFIG_XEN=y

    grep -i xen /boot/*config* | grep -i balloon
      /boot/config-4.4.0-28-generic:CONFIG_XEN_BALLOON=y
      /boot/config-4.4.0-28-generic:CONFIG_XEN_SELFBALLOONING=y
      /boot/config-4.4.0-28-generic:CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
      /boot/config-4.4.0-28-generic:CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT=512

    find /sys /proc -type d | grep -i balloon | grep xen
      /sys/devices/system/xen_memory/xen_memory0/selfballoon
      /proc/sys/xen/balloon

    ls -al /proc/sys/xen/balloon
      hotplug_unpopulated

How do you blacklist the Guest ballooning?

Perhaps related? Not clear to me that it's the same issue:

  xen/balloon: cancel ballooning if adding new memory failed
  https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=3dcf63677d4eb7fdfc13290c8558c301d2588fe8

There are also balloon-related comments at

  http://wiki.xenproject.org/wiki/XenParavirtOps

although they now look somewhat out of date w.r.t. the current 4.x kernel.