From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dan Magenheimer
Subject: RE: [RFC][PATCH] 0/9 Populate-on-demand memory
Date: Wed, 24 Dec 2008 07:54:24 -0800 (PST)
Message-ID: <8b1390eb-d4f4-448f-8f46-76ce7b042692@default>
To: George Dunlap
Cc: xen-devel@lists.xensource.com
List-Id: xen-devel@lists.xenproject.org

> On Wed, Dec 24, 2008 at 2:32 PM, Dan Magenheimer wrote:
> > Yes, it's just that with your fix, Windows VM users are much more
> > likely to use memory overcommit and will need to be "trained" to
> > always configure a swap disk to ensure bad things don't happen.
> > And this swap disk had better be on a network-based medium or
> > live migration won't work.
>
> You mean they may be much more likely to under-provision memory to
> their VMs, booting with (say) 64M on the assumption that they can
> balloon it up to 512M if they want to?  That seems rather unlikely to
> me... if they're not likely to start a Windows VM with 64M normally,
> why would they be more likely to start with 64M now?  I'd have thought
> it would be likely to go the other way: if they normally boot a guest
> with 256M, they can now start with maxmem=1G and memory=256M, and
> balloon it up if they want.

What I mean is that now that they CAN start with memory=256M and
maxmem=1G, it is now much more likely that ballooning and memory
overcommit will be used, possibly hidden by vendors' tools.  Once
ballooning is used at all, memory can not only go above the starting
memory= threshold but can also go below it.
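For concreteness, the memory=256M / maxmem=1G case being discussed would look something like the following in a guest config file.  This is only a sketch: the field names follow the classic xm/xend config style, and the VM name is made up for illustration.

```
# Sketch of the kind of guest config under discussion: boot with
# 256M but allow the balloon driver to grow the guest up to 1G.
# Illustrative only; exact syntax depends on the toolstack version.
name   = "windows-vm1"
memory = 256      # starting allocation, in MiB
maxmem = 1024     # ceiling the guest may balloon up to, in MiB
```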
Thus, your patch will make it more likely that "memory pressure" will
be dynamically applied to Windows VMs, which means swapping is more
likely to occur, which means there had better be a properly-sized swap
disk.

For example, on a 2GB system, a reasonable configuration might be:

Windows VM1: memory=256M maxmem=1GB
Windows VM2: memory=256M maxmem=1GB
Windows VM3: memory=256M maxmem=1GB
Windows VM4: memory=256M maxmem=1GB
(dom0_mem=256M, Xen+heap=256M for the sake of argument)

Assume that VM1 and VM2 are heavily loaded and VM3 and VM4 are idle
(or nearly so).  So VM1 and VM2 are ballooned up towards 1G by taking
memory away from VM3 and VM4.  Say VM3 and VM4 are ballooned down to
about 128M each.  Now VM3 and VM4 suddenly get loaded and need more
memory.  But VM1 and VM2 are hesitant to surrender memory because it
is fully utilized.  SOME VM is going to have to start swapping!

So, I'm just saying that your patch makes this kind of scenario more
likely, so listing the need for a swap disk in your README would be a
good idea.
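The arithmetic behind that scenario can be sketched as a quick back-of-envelope calculation.  This is not toolstack code, just the example's numbers worked through: all figures are MiB, the VM names follow the config above, and the policy of splitting free memory evenly between the two loaded VMs is an assumption for illustration.

```python
# Back-of-envelope sketch of the overcommit scenario in this mail.
# All figures in MiB; VM names and the even-split policy are
# illustrative assumptions, not a real toolstack API.

TOTAL = 2048                  # 2GB host
XEN_AND_DOM0 = 256 + 256      # dom0_mem=256M, Xen+heap=256M

# VM3 and VM4 have already been ballooned down to ~128M each.
vms = {"VM1": 256, "VM2": 256, "VM3": 128, "VM4": 128}

# Memory not currently assigned to Xen, dom0, or any guest.
free = TOTAL - XEN_AND_DOM0 - sum(vms.values())

# Balloon VM1 and VM2 up towards their maxmem=1024M ceiling by
# splitting the free memory evenly between them.
per_vm = free // 2
vms["VM1"] = min(1024, vms["VM1"] + per_vm)
vms["VM2"] = min(1024, vms["VM2"] + per_vm)
free = TOTAL - XEN_AND_DOM0 - sum(vms.values())

print(vms, "free:", free)
# At this point every page is spoken for (free == 0): if VM3 and
# VM4 now need their original 256M back, some VM has to balloon
# down and start swapping.
```

The point of the sketch is that the host reaches full commitment well before any guest hits its maxmem ceiling, which is exactly when a missing or undersized swap disk hurts.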