* GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
@ 2010-07-08 19:45 Daniel Kiper
  2010-07-08 22:32 ` Andi Kleen
  2010-07-08 23:16   ` Jeremy Fitzhardinge
  0 siblings, 2 replies; 19+ messages in thread
From: Daniel Kiper @ 2010-07-08 19:45 UTC (permalink / raw)
  To: xen-devel, linux-kernel, jeremy

Hello,

My name is Daniel Kiper and I am a PhD student
at Warsaw University of Technology, Faculty of Electronics
and Information Technology (I am working on business continuity
and disaster recovery services with emphasis on Air Traffic Management).

This year I submitted a proposal regarding migration from memory ballooning
to memory hotplug in Xen to Google Summer of Code 2010 (it was one of
my two proposals). It was accepted and now I am a happy GSoC 2010 student.
My mentor is Jeremy Fitzhardinge. I would like to thank him
for his patience and supporting hand.

OK, let's get to the details. When I was playing with Xen I saw that
ballooning does not make it possible to extend memory beyond the
boundary declared at system startup. Yes, I know that this is by
design, however I thought that it is a limitation which could be very
annoying in some environments (I am thinking especially about
servers). That is why I decided to develop some code which removes
it. At the beginning I thought that ballooning should be replaced by
memory hotplug, however after some tests and discussion with Jeremy
we decided to link ballooning (for memory removal) with memory
hotplug (for extending memory above the boundary declared at system
startup). Additionally, we decided to implement this solution for
Linux Xen guests in all forms (PV/i386,x86_64 and HVM/i386,x86_64).

Now I have done most of the planned tests and written a PoC.

Short description of the current algorithm (it was prepared
for the PoC and will be changed to provide a convenient
mechanism for the user; a rough C sketch follows the list):
  - find free (not claimed by another memory region or device)
    memory region of PAGES_PER_SECTION << PAGE_SHIFT
    size in iomem_resource,
  - find all PFNs for the chosen memory region
    (addr >> PAGE_SHIFT),
  - allocate memory from hypervisor by
    HYPERVISOR_memory_op(XENMEM_populate_physmap, &memory_region),
  - inform system about new memory region and reserve it by
    mm/memory_hotplug.c:add_memory(memory_add_physaddr_to_nid(start_addr),
                                   start_addr, PAGES_PER_SECTION << PAGE_SHIFT),
  - online memory region by
    mm/memory_hotplug.c:online_pages(start_addr >> PAGE_SHIFT,
                                     PAGES_PER_SECTION << PAGE_SHIFT).
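
A rough C sketch of those five steps (illustrative only, assuming the
2.6.32-era x86 hotplug API; xen_hotadd_section() and frame_list[] are
made-up names, and error unwinding is omitted for brevity):

/* Sketch headers: linux/ioport.h, linux/slab.h, linux/memory_hotplug.h,
 * xen/interface/memory.h, asm/xen/hypercall.h */
static unsigned long frame_list[PAGES_PER_SECTION];

static int xen_hotadd_section(void)
{
	struct xen_memory_reservation reservation = {
		.nr_extents   = PAGES_PER_SECTION,
		.extent_order = 0,
		.domid        = DOMID_SELF,
	};
	struct resource *res;
	unsigned long i, start_pfn;
	int rc;

	/* Step 1: claim a free PAGES_PER_SECTION-sized region in iomem_resource. */
	res = kzalloc(sizeof(*res), GFP_KERNEL);
	if (!res)
		return -ENOMEM;
	res->name  = "System RAM";
	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
	rc = allocate_resource(&iomem_resource, res,
			       PAGES_PER_SECTION << PAGE_SHIFT, 0, -1,
			       PAGES_PER_SECTION << PAGE_SHIFT, NULL, NULL);
	if (rc)
		return rc;

	/* Step 2: the PFNs covering the chosen region (addr >> PAGE_SHIFT). */
	start_pfn = res->start >> PAGE_SHIFT;
	for (i = 0; i < PAGES_PER_SECTION; i++)
		frame_list[i] = start_pfn + i;

	/* Step 3: ask the hypervisor to back those PFNs with real pages. */
	set_xen_guest_handle(reservation.extent_start, frame_list);
	rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
	if (rc != PAGES_PER_SECTION)
		return -ENOMEM;

	/* Steps 4 and 5: register the new region, then bring it online. */
	rc = add_memory(memory_add_physaddr_to_nid(res->start), res->start,
			PAGES_PER_SECTION << PAGE_SHIFT);
	if (rc)
		return rc;
	return online_pages(start_pfn, PAGES_PER_SECTION);
}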

Currently, memory is added and onlined in 128MiB blocks (the section
size for x86); however, I am going to do that in smaller chunks.
Additionally, some things are done manually; this
will be changed in the final implementation.
I would like to mention that this solution
does not require any change in the Xen hypervisor.

I am going to send you the first version of the patch
(fully working) next week.

If you have any questions please drop me a line.

Daniel

* Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 19:45 GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen Daniel Kiper
@ 2010-07-08 22:32 ` Andi Kleen
  2010-07-08 22:58     ` James Harper
                     ` (2 more replies)
  2010-07-08 23:16   ` Jeremy Fitzhardinge
  1 sibling, 3 replies; 19+ messages in thread
From: Andi Kleen @ 2010-07-08 22:32 UTC (permalink / raw)
  To: Daniel Kiper; +Cc: xen-devel, linux-kernel, jeremy

Daniel Kiper <dkiper@net-space.pl> writes:
>
> OK, let's get to the details. When I was playing with Xen I saw that
> ballooning does not make it possible to extend memory beyond the
> boundary declared at system startup. Yes, I know that this is by
> design, however I thought that it is a limitation which could be very
> annoying in some environments (I am thinking especially about
> servers). That is why I decided to develop some code which removes
> it. At the beginning I thought that ballooning should be replaced by
> memory hotplug, however after some tests and discussion with Jeremy
> we decided to link ballooning (for memory removal) with memory
> hotplug (for extending memory above the boundary declared at system
> startup). Additionally, we decided to implement this solution for
> Linux Xen guests in all forms (PV/i386,x86_64 and HVM/i386,x86_64).

While you can do that, the value is not very large, because you
could just start the guests with more memory, but ballooned in
the first place (so that they don't actually use it).

The only advantage of using memory hotadd is that the mem_map doesn't
need to be pre-allocated, but that's only a few percent of the memory.

So it would only help if you want to add gigantic amounts of memory
to a VM (like >20-30x of what it already has).

One trap is also that memory hotadd is a frequent source of regressions,
so you'll likely run into existing bugs.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

* RE: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 22:32 ` Andi Kleen
@ 2010-07-08 22:58     ` James Harper
  2010-07-08 23:12     ` Dan Magenheimer
  2010-07-08 23:51   ` [Xen-devel] " Jeremy Fitzhardinge
  2 siblings, 0 replies; 19+ messages in thread
From: James Harper @ 2010-07-08 22:58 UTC (permalink / raw)
  To: Andi Kleen, Daniel Kiper; +Cc: jeremy, xen-devel, linux-kernel

> 
> Daniel Kiper <dkiper@net-space.pl> writes:
> >
> > OK, let's get to the details. When I was playing with Xen I saw that
> > ballooning does not make it possible to extend memory beyond the
> > boundary declared at system startup. Yes, I know that this is by
> > design, however I thought that it is a limitation which could be very
> > annoying in some environments (I am thinking especially about
> > servers). That is why I decided to develop some code which removes
> > it. At the beginning I thought that ballooning should be replaced by
> > memory hotplug, however after some tests and discussion with Jeremy
> > we decided to link ballooning (for memory removal) with memory
> > hotplug (for extending memory above the boundary declared at system
> > startup). Additionally, we decided to implement this solution for
> > Linux Xen guests in all forms (PV/i386,x86_64 and HVM/i386,x86_64).
> 
> While you can do that, the value is not very large, because you
> could just start the guests with more memory, but ballooned in
> the first place (so that they don't actually use it).
> 

I think hotplug is a better method for adding memory for Windows.

James

* RE: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 22:32 ` Andi Kleen
@ 2010-07-08 23:12     ` Dan Magenheimer
  2010-07-08 23:51   ` [Xen-devel] " Jeremy Fitzhardinge
  2 siblings, 0 replies; 19+ messages in thread
From: Dan Magenheimer @ 2010-07-08 23:12 UTC (permalink / raw)
  To: Andi Kleen, Daniel Kiper; +Cc: jeremy, xen-devel, linux-kernel

> From: Andi Kleen [mailto:andi@firstfloor.org]
> 
> Daniel Kiper <dkiper@net-space.pl> writes:
> >
> > OK, let's get to the details. When I was playing with Xen I saw that
> > ballooning does not make it possible to extend memory beyond the
> > boundary declared at system startup. Yes, I know that this is by
> > design, however I thought that it is a limitation which could be very
> > annoying in some environments (I am thinking especially about
> > servers). That is why I decided to develop some code which removes
> > it. At the beginning I thought that ballooning should be replaced by
> > memory hotplug, however after some tests and discussion with Jeremy
> > we decided to link ballooning (for memory removal) with memory
> > hotplug (for extending memory above the boundary declared at system
> > startup). Additionally, we decided to implement this solution for
> > Linux Xen guests in all forms (PV/i386,x86_64 and HVM/i386,x86_64).
> 
> While you can do that, the value is not very large, because you
> could just start the guests with more memory, but ballooned in
> the first place (so that they don't actually use it).
> 
> The only advantage of using memory hotadd is that the mem_map doesn't
> need to be pre-allocated, but that's only a few percent of the memory.
> 
> So it would only help if you want to add gigantic amounts of memory
> to a VM (like >20-30x of what it already has).

One can envision a scenario where a cloud customer launches a
business-critical VM with some reasonably large "maxmem" set,
balloons up to the max, then finds out it isn't enough after
all and would like to avoid rebooting.  Or a cloud provider
might charge for a specific maxmem, but allow the customer
to increase maxmem if they pay more money.

Dan

* Re: [Xen-devel] GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 19:45 GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen Daniel Kiper
@ 2010-07-08 23:16   ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 19+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-08 23:16 UTC (permalink / raw)
  To: Daniel Kiper; +Cc: xen-devel, linux-kernel

On 07/08/2010 12:45 PM, Daniel Kiper wrote:
> Hello,
>
> My name is Daniel Kiper and I am a PhD student
> at Warsaw University of Technology, Faculty of Electronics
> and Information Technology (I am working on business continuity
> and disaster recovery services with emphasis on Air Traffic Management).
>
> This year I submitted a proposal regarding migration from memory ballooning
> to memory hotplug in Xen to Google Summer of Code 2010 (it was one of
> my two proposals). It was accepted and now I am a happy GSoC 2010 student.
> My mentor is Jeremy Fitzhardinge. I would like to thank him
> for his patience and supporting hand.
>
> OK, let's get to the details. When I was playing with Xen I saw that
> ballooning does not make it possible to extend memory beyond the
> boundary declared at system startup. Yes, I know that this is by
> design, however I thought that it is a limitation which could be very
> annoying in some environments (I am thinking especially about
> servers). That is why I decided to develop some code which removes
> it. At the beginning I thought that ballooning should be replaced by
> memory hotplug, however after some tests and discussion with Jeremy
> we decided to link ballooning (for memory removal) with memory
> hotplug (for extending memory above the boundary declared at system
> startup). Additionally, we decided to implement this solution for
> Linux Xen guests in all forms (PV/i386,x86_64 and HVM/i386,x86_64).
>
> Now I have done most of the planned tests and written a PoC.
>
> Short description of the current algorithm (it was prepared
> for the PoC and will be changed to provide a convenient
> mechanism for the user):
>   - find free (not claimed by another memory region or device)
>     memory region of PAGES_PER_SECTION << PAGE_SHIFT
>     size in iomem_resource,
>   

Presumably in the common case this will be at the end of the memory
map?  Since a typical PV domain has all its initial memory allocated low
and doesn't have any holes.

>   - find all PFNs for the chosen memory region
>     (addr >> PAGE_SHIFT),
>   - allocate memory from hypervisor by
>     HYPERVISOR_memory_op(XENMEM_populate_physmap, &memory_region),
>   

Is it actually necessary to allocate the memory at this point?

>   - inform system about new memory region and reserve it by
>     mm/memory_hotplug.c:add_memory(memory_add_physaddr_to_nid(start_addr),
>                                    start_addr, PAGES_PER_SECTION << PAGE_SHIFT),
>   - online memory region by
>     mm/memory_hotplug.c:online_pages(start_addr >> PAGE_SHIFT,
>                                      PAGES_PER_SECTION << PAGE_SHIFT).
>   

It seems to me you could add the memory (to get the new struct pages)
and "online" it, but immediately take a reference to the page and give
it over to the balloon driver to manage as a ballooned-out page.  Then,
when you actually need the memory, the balloon driver can provide it in
the normal way.

(I'm not sure where it allocates the new page structures from, but if
it's out of the newly added memory you'll need to allocate that up-front,
at least.)
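
A hedged sketch of that hand-over (illustrative; balloon_append() is the
page-queueing helper inside drivers/xen/balloon.c, which is static there,
so a real patch would have to expose it or open-code the equivalent):

/* After add_memory()/online_pages(), route each new page straight to
 * the balloon instead of the page allocator; the balloon driver then
 * populates and releases pages on demand in the normal way. */
for (pfn = start_pfn; pfn < start_pfn + PAGES_PER_SECTION; pfn++)
	balloon_append(pfn_to_page(pfn));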

> Currently, memory is added and onlined in 128MiB blocks (the section
> size for x86); however, I am going to do that in smaller chunks.
>   

If you can avoid actually allocating the pages, then 128MiB isn't too
bad.  I think that's only ~2MiB of page structures.
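(Quick check, assuming a 64-byte struct page: 128MiB / 4KiB = 32768 pages,
at 64 bytes each = 2MiB exactly, i.e. roughly 1.6% overhead.)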

> Additionally, some things are done manually; this
> will be changed in the final implementation.
> I would like to mention that this solution
> does not require any change in the Xen hypervisor.
>
> I am going to send you the first version of the patch
> (fully working) next week.
>   

Looking forward to it.  What kernel is it based on?

Thanks,
    J

* Re: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 22:32 ` Andi Kleen
  2010-07-08 22:58     ` James Harper
  2010-07-08 23:12     ` Dan Magenheimer
@ 2010-07-08 23:51   ` Jeremy Fitzhardinge
  2010-07-09  0:34       ` Andi Kleen
  2 siblings, 1 reply; 19+ messages in thread
From: Jeremy Fitzhardinge @ 2010-07-08 23:51 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Daniel Kiper, xen-devel, linux-kernel

On 07/08/2010 03:32 PM, Andi Kleen wrote:
> Daniel Kiper <dkiper@net-space.pl> writes:
>   
>> OK, let's get to the details. When I was playing with Xen I saw that
>> ballooning does not make it possible to extend memory beyond the
>> boundary declared at system startup. Yes, I know that this is by
>> design, however I thought that it is a limitation which could be very
>> annoying in some environments (I am thinking especially about
>> servers). That is why I decided to develop some code which removes
>> it. At the beginning I thought that ballooning should be replaced by
>> memory hotplug, however after some tests and discussion with Jeremy
>> we decided to link ballooning (for memory removal) with memory
>> hotplug (for extending memory above the boundary declared at system
>> startup). Additionally, we decided to implement this solution for
>> Linux Xen guests in all forms (PV/i386,x86_64 and HVM/i386,x86_64).
>>     
> While you can do that, the value is not very large, because you
> could just start the guests with more memory, but ballooned in
> the first place (so that they don't actually use it).
>   

Yes.  Another approach would be to fiddle with the E820 maps early at
boot to add more RAM, but then early_reserve it and hand it over to the
control of the balloon driver.  But it does mean you need to statically
come up with the max ever at boot time.
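
A minimal sketch of that approach, assuming the 2.6.32-era x86 boot code
(extra_start and extra_size are made-up placeholders for the boot-time
maximum, not real variables):

/* At early boot: declare the extra RAM in the E820 map, then reserve it
 * so nothing touches it until the balloon driver takes ownership. */
e820_add_region(extra_start, extra_size, E820_RAM);
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
reserve_early(extra_start, extra_start + extra_size, "balloon");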

> The only advantage of using memory hotadd is that the mem_map doesn't
> need to be pre-allocated, but that's only a few percent of the memory.
>
> So it would only help if you want to add gigantic amounts of memory
> to a VM (like >20-30x of what it already has).
>   

That's not wildly unreasonable on the face of it; consider a domain
which starts at 1GB but could go up to 32GB as demand requires.  But
that also depends on what other things in the kernel are statically
scaled at boot time according to memory size (hash tables?).

> One trap is also that memory hotadd is a frequent source of regressions,
> so you'll likely run into existing bugs.

That could be painful, but I expect the main reason for regressions is
that the code is fairly underused.  Adding new users should help.

    J

* Re: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 23:51   ` [Xen-devel] " Jeremy Fitzhardinge
@ 2010-07-09  0:34       ` Andi Kleen
  0 siblings, 0 replies; 19+ messages in thread
From: Andi Kleen @ 2010-07-09  0:34 UTC (permalink / raw)
  To: Jeremy Fitzhardinge; +Cc: Andi Kleen, Daniel Kiper, xen-devel, linux-kernel

> Yes.  Another approach would be to fiddle with the E820 maps early at
> boot to add more RAM, but then early_reserve it and hand it over to the
> control of the balloon driver.  But it does mean you need to statically
> come up with the max ever at boot time.

You need to do that too for memory hotadd -- you need predeclared
hotadd regions. Linux mainly needs it to know in which node
to put the memory. Other OSes use it for other things too.

> > The only advantage of using memory hotadd is that the mem_map doesn't
> > need to be pre-allocated, but that's only a few percent of the memory.
> >
> > So it would only help if you want to add gigantic amounts of memory
> > to a VM (like >20-30x of what it already has).
> >   
> 
> That's not wildly unreasonable on the face of it; consider a domain
> which starts at 1GB but could go up to 32GB as demand requires.  But

The programs which need 32GB will probably not even start in 1GB :)

> > One trap is also that memory hotadd is a frequent source of regressions,
> > so you'll likely run into existing bugs.
> 
> That could be painful, but I expect the main reason for regressions is
> that the code is fairly underused.  Adding new users should help.

Yes, and we fixed a lot of the bugs, but still a lot of them
were tricky, and frankly new ones might be too difficult for a SoC project.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

* Re: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 23:12     ` Dan Magenheimer
@ 2010-07-09 15:53       ` Daniel Kiper
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel Kiper @ 2010-07-09 15:53 UTC (permalink / raw)
  To: Dan Magenheimer; +Cc: Andi Kleen, Daniel Kiper, jeremy, xen-devel, linux-kernel

On Thu, Jul 08, 2010 at 04:12:01PM -0700, Dan Magenheimer wrote:
> > From: Andi Kleen [mailto:andi@firstfloor.org]
> >
> > Daniel Kiper <dkiper@net-space.pl> writes:
> > >
> > > OK, let's get to the details. When I was playing with Xen I saw that
> > > ballooning does not make it possible to extend memory beyond the
> > > boundary declared at system startup. Yes, I know that this is by
> > > design, however I thought that it is a limitation which could be very
> > > annoying in some environments (I am thinking especially about
> > > servers). That is why I decided to develop some code which removes
> > > it. At the beginning I thought that ballooning should be replaced by
> > > memory hotplug, however after some tests and discussion with Jeremy
> > > we decided to link ballooning (for memory removal) with memory
> > > hotplug (for extending memory above the boundary declared at system
> > > startup). Additionally, we decided to implement this solution for
> > > Linux Xen guests in all forms (PV/i386,x86_64 and HVM/i386,x86_64).
> >
> > While you can do that, the value is not very large, because you
> > could just start the guests with more memory, but ballooned in
> > the first place (so that they don't actually use it).
> >
> > The only advantage of using memory hotadd is that the mem_map doesn't
> > need to be pre-allocated, but that's only a few percent of the memory.
> >
> > So it would only help if you want to add gigantic amounts of memory
> > to a VM (like >20-30x of what it already has).
>
> One can envision a scenario where a cloud customer launches a
> business-critical VM with some reasonably large "maxmem" set,
> balloons up to the max, then finds out it isn't enough after
> all and would like to avoid rebooting.  Or a cloud provider
> might charge for a specific maxmem, but allow the customer
> to increase maxmem if they pay more money.

Dan's scenario description is very good (thanks). The idea behind this
project was to serve exactly those cases. Maybe some misunderstanding came
from the short description of my proposal.

Daniel

* Re: [Xen-devel] GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 23:16   ` Jeremy Fitzhardinge
@ 2010-07-09 17:11   ` Daniel Kiper
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel Kiper @ 2010-07-09 17:11 UTC (permalink / raw)
  To: Jeremy Fitzhardinge; +Cc: Daniel Kiper, xen-devel, linux-kernel

On Thu, Jul 08, 2010 at 04:16:00PM -0700, Jeremy Fitzhardinge wrote:
> On 07/08/2010 12:45 PM, Daniel Kiper wrote:
> >   - find free (not claimed by another memory region or device)
> >     memory region of PAGES_PER_SECTION << PAGE_SHIFT
> >     size in iomem_resource,
>
> Presumably in the common case this will be at the end of the memory
> map?  Since a typical PV domain has all its initial memory allocated low
> and doesn't have any holes.

Yes, I know about that; however, I think it is much better
to write a more generic algorithm which also looks for
holes (unclaimed regions) in memory (maybe something changes
in the future). Additionally, this list is mostly very short
and the cost of the scan is considerably low.
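
Something like this minimal scan (illustrative only; find_hole() is a
made-up name, locking of the resource tree is ignored, and the result
should still be bounded by the platform's maximum physical address):

static resource_size_t find_hole(resource_size_t size)
{
	struct resource *r;
	resource_size_t prev_end = 0;

	/* Top-level children of iomem_resource are sorted by address. */
	for (r = iomem_resource.child; r; r = r->sibling) {
		if (r->start >= prev_end + size)
			return prev_end;	/* hole before this region */
		prev_end = r->end + 1;
	}
	return prev_end;			/* space above the last region */
}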

> >   - find all PFNs for the chosen memory region
> >     (addr >> PAGE_SHIFT),
> >   - allocate memory from hypervisor by
> >     HYPERVISOR_memory_op(XENMEM_populate_physmap, &memory_region),
>
> Is it actually necessary to allocate the memory at this point?

Yes, it is, because mm/memory_hotplug.c:add_memory
(not exactly this one) updates the memory map.

> >   - inform system about new memory region and reserve it by
> >     mm/memory_hotplug.c:add_memory(memory_add_physaddr_to_nid(start_addr),
> >                                    start_addr, PAGES_PER_SECTION << PAGE_SHIFT),
> >   - online memory region by
> >     mm/memory_hotplug.c:online_pages(start_addr >> PAGE_SHIFT,
> >                                      PAGES_PER_SECTION << PAGE_SHIFT).
>
> It seems to me you could add the memory (to get the new struct pages)
> and "online" it, but immediately take a reference to the page and give
> it over to the balloon driver to manage as a ballooned-out page.  Then,
> when you actually need the memory, the balloon driver can provide it in
> the normal way.

I am going to do that in a similar way.

> > I am going to send you the first version of the patch
> > (fully working) next week.
>
> Looking forward to it.  What kernel is it based on?

Ver. 2.6.32.10; however, I suppose it will be no problem
to move it to the current version.

Daniel

* Re: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-09  0:34       ` Andi Kleen
@ 2010-07-09 17:32       ` Daniel Kiper
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel Kiper @ 2010-07-09 17:32 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Jeremy Fitzhardinge, Daniel Kiper, xen-devel, linux-kernel

On Fri, Jul 09, 2010 at 02:34:09AM +0200, Andi Kleen wrote:
> > > The only advantage of using memory hotadd is that the mem_map doesn't
> > > need to be pre-allocated, but that's only a few percent of the memory.
> > >
> > > So it would only help if you want to add gigantic amounts of memory
> > > to a VM (like >20-30x of what it already has).
> >
> > That's not wildly unreasonable on the face of it; consider a domain
> > which starts at 1GB but could go up to 32GB as demand requires.  But
>
> The programs which need 32GB will probably not even start in 1GB :)

I am able to run a program which allocates 32GB on a 1GB machine... :-)))
Do not underestimate memory overcommit...
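
A toy userspace demonstration of the point (not from the thread; it
assumes vm.overcommit_memory=1 so the kernel always grants the address
space):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t sz = 32ULL << 30;	/* 32 GiB of address space */
	char *p = malloc(sz);		/* no physical pages are touched yet */

	printf("malloc(32 GiB) %s\n", p ? "succeeded" : "failed");
	free(p);
	return 0;
}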

Daniel

* Re: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-08 22:58     ` James Harper
@ 2010-07-09 17:34     ` Daniel Kiper
  2010-07-10  5:17         ` James Harper
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel Kiper @ 2010-07-09 17:34 UTC (permalink / raw)
  To: James Harper; +Cc: Andi Kleen, Daniel Kiper, jeremy, xen-devel, linux-kernel

On Fri, Jul 09, 2010 at 08:58:01AM +1000, James Harper wrote:
> > While you can do that, the value is not very large, because you
> > could just start the guests with more memory, but ballooned in
> > the first place (so that they don't actually use it).
>
> I think hotplug is a better method for adding memory for Windows.

Maybe in the future I will write something for Windows...

Daniel

* RE: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-09 17:34     ` [Xen-devel] " Daniel Kiper
@ 2010-07-10  5:17         ` James Harper
  0 siblings, 0 replies; 19+ messages in thread
From: James Harper @ 2010-07-10  5:17 UTC (permalink / raw)
  To: Daniel Kiper; +Cc: Andi Kleen, jeremy, xen-devel, linux-kernel

> 
> On Fri, Jul 09, 2010 at 08:58:01AM +1000, James Harper wrote:
> > > While you can do that, the value is not very large, because you
> > > could just start the guests with more memory, but ballooned in
> > > the first place (so that they don't actually use it).
> >
> > I think hotplug is a better method for adding memory for Windows.
> 
> Maybe in the future I will write something for Windows...
> 

For Windows, I believe you would need to emulate actual hotplug of
memory like a physical machine, using ACPI. It's only supported on
Enterprise versions of Windows too.

James

* Re: [Xen-devel] Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
  2010-07-10  5:17         ` James Harper
@ 2010-07-10 12:36         ` Daniel Kiper
  -1 siblings, 0 replies; 19+ messages in thread
From: Daniel Kiper @ 2010-07-10 12:36 UTC (permalink / raw)
  To: James Harper; +Cc: Daniel Kiper, Andi Kleen, jeremy, xen-devel, linux-kernel

On Sat, Jul 10, 2010 at 03:17:57PM +1000, James Harper wrote:
> > On Fri, Jul 09, 2010 at 08:58:01AM +1000, James Harper wrote:
> > > > While you can do that, the value is not very large, because you
> > > > could just start the guests with more memory, but ballooned in
> > > > the first place (so that they don't actually use it).
> > >
> > > I think hotplug is a better method for adding memory for Windows.
> >
> > Maybe in the future I will write something for Windows...
>
> For Windows, I believe you would need to emulate actual hotplug of
> memory like a physical machine, using ACPI. It's only supported on
> Enterprise versions of Windows too.

In 99.9% of cases yes, because that is the normal way of configuring
devices in Windows. However, to take a final decision I must read some
docs and do some tests.

Daniel
