* xen_phys_start for 32b
@ 2009-01-06  6:36 Cihula, Joseph
  2009-01-06  8:42 ` Keir Fraser
  0 siblings, 1 reply; 26+ messages in thread
From: Cihula, Joseph @ 2009-01-06  6:36 UTC (permalink / raw)
  To: xen-devel

On 64b builds of Xen, xen_phys_start holds the starting (physical) address of the hypervisor.  However, on 32b systems it is 0.  While I realize that 32b Xen does not relocate the hypervisor, why not set this variable to the start of the code (__pa(&_start)) so that it will represent the same thing on all builds?

Joe


* Re: xen_phys_start for 32b
  2009-01-06  6:36 xen_phys_start for 32b Cihula, Joseph
@ 2009-01-06  8:42 ` Keir Fraser
  2009-01-06 18:29   ` Cihula, Joseph
  0 siblings, 1 reply; 26+ messages in thread
From: Keir Fraser @ 2009-01-06  8:42 UTC (permalink / raw)
  To: Cihula, Joseph, xen-devel

On 06/01/2009 06:36, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:

> On 64b builds of Xen, xen_phys_start holds the starting (physical) address of
> the hypervisor.  However, on 32b systems it is 0.  While I realize that 32b
> Xen does not relocate the hypervisor, why not set this variable to the start
> of the code (__pa(&_start)) so that it will represent the same thing on all
> builds?

For both i386 and x86/64:
 __pa(&_start) == xen_phys_start + (1ul << 20)

xen_phys_start marks the start of the Xen heap, within which the Xen
text and data are embedded.
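
A standalone C illustration of that relation (not Xen source; the
xen_phys_start value and a 64-bit unsigned long are assumed). The
parentheses matter because '+' binds tighter than '<<' in C:

    #include <assert.h>

    int main(void)
    {
        unsigned long xen_phys_start = 0x10000000ul; /* assumed example */

        /* Intended meaning: the Xen image starts 1MB into the heap. */
        assert(xen_phys_start + (1ul << 20) == 0x10100000ul);

        /* Without the parentheses the sum itself would be shifted. */
        assert((xen_phys_start + 1ul << 20) == 0x1000000100000ul);

        return 0;
    }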

 -- Keir


* RE: xen_phys_start for 32b
  2009-01-06  8:42 ` Keir Fraser
@ 2009-01-06 18:29   ` Cihula, Joseph
  2009-01-06 22:12     ` Keir Fraser
  0 siblings, 1 reply; 26+ messages in thread
From: Cihula, Joseph @ 2009-01-06 18:29 UTC (permalink / raw)
  To: Keir Fraser, xen-devel

-----Original Message-----
> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
> Sent: Tuesday, January 06, 2009 12:43 AM
>
> On 06/01/2009 06:36, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:
>
> > On 64b builds of Xen, xen_phys_start holds the starting (physical) address of
> > the hypervisor.  However, on 32b systems it is 0.  While I realize that 32b
> > Xen does not relocate the hypervisor, why not set this variable to the start
> > of the code (__pa(&_start)) so that it will represent the same thing on all
> > builds?
>
> For both i386 and x86/64:
>  __pa(&_start) == xen_phys_start + (1ul << 20)
>
> xen_phys_start marks the start of the Xen heap, within which the Xen
> text and data are embedded.
>
>  -- Keir

I'm confused by how the code in setup.c sets xenheap_phys_start.  It is initially set by:
    xenheap_phys_start = init_boot_allocator(__pa(&_end));
This value gets used by:
    /* Initialise the Xen heap, skipping RAM holes. */
    init_xenheap_pages(xenheap_phys_start, xenheap_phys_end);
    nr_pages = (xenheap_phys_end - xenheap_phys_start) >> PAGE_SHIFT;
But then a few lines later, it is re-set:
    xenheap_phys_start = xen_phys_start;

I don't understand the reason for this last assignment on 32b systems, since xen isn't really using this low memory for its heap.
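
Condensing the flow (the annotations are interpretive, not comments
from the actual setup.c):

    xenheap_phys_start = init_boot_allocator(__pa(&_end));
        /* heap nominally starts just past the Xen image and boot bitmap */
    init_xenheap_pages(xenheap_phys_start, xenheap_phys_end);
        /* those pages are handed to the heap allocator */
    xenheap_phys_start = xen_phys_start;
        /* on 32b, xen_phys_start == 0, so the nominal range is widened
           down to include low memory the heap allocator never got */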

Joe


* Re: xen_phys_start for 32b
  2009-01-06 18:29   ` Cihula, Joseph
@ 2009-01-06 22:12     ` Keir Fraser
  2009-01-06 22:19       ` Cihula, Joseph
  0 siblings, 1 reply; 26+ messages in thread
From: Keir Fraser @ 2009-01-06 22:12 UTC (permalink / raw)
  To: Cihula, Joseph, xen-devel

On 06/01/2009 18:29, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:

> I don't understand the reason for this last assignment on 32b systems, since
> xen isn't really using this low memory for its heap.

It's not used for domheap either. In fact it's not really used at all. Hence
encompassing it within xenheap_phys_start to xenheap_phys_end works okay.

 -- Keir


* RE: xen_phys_start for 32b
  2009-01-06 22:12     ` Keir Fraser
@ 2009-01-06 22:19       ` Cihula, Joseph
  2009-01-07  8:54         ` Keir Fraser
  0 siblings, 1 reply; 26+ messages in thread
From: Cihula, Joseph @ 2009-01-06 22:19 UTC (permalink / raw)
  To: Keir Fraser, xen-devel

> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
> Sent: Tuesday, January 06, 2009 2:12 PM
>
> On 06/01/2009 18:29, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:
>
> > I don't understand the reason for this last assignment on 32b systems, since
> > xen isn't really using this low memory for its heap.
>
> It's not used for domheap either. In fact it's not really used at all. Hence
> encompassing it within xenheap_phys_start to xenheap_phys_end works okay.
>

But shouldn't [xenheap_phys_start, xenheap_phys_end] represent all of the memory that the hypervisor "owns" and which must be protected from even privileged domain writes (modulo the real mode/trampoline code, which has its own variables that represent its range)?  While it may be "OK" on 32b systems, it is not "logically correct" and does not match 64b systems (where this low memory is not so protected).  Would it break anything to set xenheap_phys_start to __pa(&_start) for 32b builds?

Joe


* Re: xen_phys_start for 32b
  2009-01-06 22:19       ` Cihula, Joseph
@ 2009-01-07  8:54         ` Keir Fraser
  2009-01-07 15:13           ` Cihula, Joseph
  0 siblings, 1 reply; 26+ messages in thread
From: Keir Fraser @ 2009-01-07  8:54 UTC (permalink / raw)
  To: Cihula, Joseph, xen-devel

On 06/01/2009 22:19, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:

>> It's not used for domheap either. In fact it's not really used at all. Hence
>> encompassing it within xenheap_phys_start to xenheap_phys_end works okay.
>> 
> 
> But shouldn't [xenheap_phys_start, xenheap_phys_end] represent all of the
> memory that the hypervisor "owns" and which must be protected from even
> privileged domain writes (modulo the real mode/trampoline code, which has its
> own variables that represent its range)?  While it may be "OK" on 32b systems,
> it is not "logically correct" and does not match 64b systems (where this low
> memory is not so protected).  Would it break anything to set
> xenheap_phys_start to __pa(&_start) for 32b builds?

So what issue does this fix for you?

 -- Keir


* RE: xen_phys_start for 32b
  2009-01-07  8:54         ` Keir Fraser
@ 2009-01-07 15:13           ` Cihula, Joseph
  2009-01-07 15:40             ` Keir Fraser
  0 siblings, 1 reply; 26+ messages in thread
From: Cihula, Joseph @ 2009-01-07 15:13 UTC (permalink / raw)
  To: Keir Fraser, xen-devel

> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
> Sent: Wednesday, January 07, 2009 12:54 AM
>
> On 06/01/2009 22:19, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:
>
> >> It's not used for domheap either. In fact it's not really used at all. Hence
> >> encompassing it within xenheap_phys_start to xenheap_phys_end works okay.
> >>
> >
> > But shouldn't [xenheap_phys_start, xenheap_phys_end] represent all of the
> > memory that the hypervisor "owns" and which must be protected from even
> > privileged domain writes (modulo the real mode/trampoline code, which has its
> > own variables that represent its range)?  While it may be "OK" on 32b systems,
> > it is not "logically correct" and does not match 64b systems (where this low
> > memory is not so protected).  Would it break anything to set
> > xenheap_phys_start to __pa(&_start) for 32b builds?
>
> So what issue does this fix for you?

It moves the '#ifdef __x86_64__' in a couple of places in an upcoming patch into just setup.c ;-)  So practically speaking, it is not very important.  But it seems like it would just be cleaner, today, to have this variable (and xen_phys_start?) be consistent across builds; and thus, usable with the intended meaning in the future.

Joe


* Re: xen_phys_start for 32b
  2009-01-07 15:13           ` Cihula, Joseph
@ 2009-01-07 15:40             ` Keir Fraser
  2009-01-07 21:37               ` Cihula, Joseph
  2009-01-08  0:17               ` Xenheap disappearance: (was: xen_phys_start for 32b) Dan Magenheimer
  0 siblings, 2 replies; 26+ messages in thread
From: Keir Fraser @ 2009-01-07 15:40 UTC (permalink / raw)
  To: Cihula, Joseph, xen-devel

On 07/01/2009 15:13, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:

>>> But shouldn't [xenheap_phys_start, xenheap_phys_end] represent all of the
>>> memory that the hypervisor "owns" and which must be protected from even
>>> privileged domain writes (modulo the real mode/trampoline code, which has
>>> its
>>> own variables that represent its range)?  While it may be "OK" on 32b
>>> systems,
>>> it is not "logically correct" and does not match 64b systems (where this low
>>> memory is not so protected).  Would it break anything to set
>>> xenheap_phys_start to __pa(&_start) for 32b builds?
>> 
>> So what issue does this fix for you?
> 
> It moves the '#ifdef __x86_64__' in a couple of places in an upcoming patch
> into just setup.c ;-)  So practically speaking, it is not very important.  But
> it seems like it would just be cleaner, today, to have this variable (and
> xen_phys_start?) be consistent across builds; and thus, usable with the
> intended meaning in the future.

Xenheap will disappear entirely on x86/64 in future. So in the long
term, i386 and x86/64 will actually diverge significantly in this area.

Of course I'll consider any patch on its own merits of usefulness and
cleanliness.

 -- Keir


* RE: xen_phys_start for 32b
  2009-01-07 15:40             ` Keir Fraser
@ 2009-01-07 21:37               ` Cihula, Joseph
  2009-01-07 23:27                 ` Keir Fraser
  2009-01-08  0:17               ` Xenheap disappearance: (was: xen_phys_start for 32b) Dan Magenheimer
  1 sibling, 1 reply; 26+ messages in thread
From: Cihula, Joseph @ 2009-01-07 21:37 UTC (permalink / raw)
  To: Keir Fraser, xen-devel

> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
> Sent: Wednesday, January 07, 2009 7:41 AM
>
> On 07/01/2009 15:13, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:
>
> >>> But shouldn't [xenheap_phys_start, xenheap_phys_end] represent all of the
> >>> memory that the hypervisor "owns" and which must be protected from even
> >>> privileged domain writes (modulo the real mode/trampoline code, which has
> >>> its
> >>> own variables that represent its range)?  While it may be "OK" on 32b
> >>> systems,
> >>> it is not "logically correct" and does not match 64b systems (where this low
> >>> memory is not so protected).  Would it break anything to set
> >>> xenheap_phys_start to __pa(&_start) for 32b builds?
> >>
> >> So what issue does this fix for you?
> >
> > It moves the '#ifdef __x86_64__' in a couple of places in an upcoming patch
> > into just setup.c ;-)  So practically speaking, it is not very important.  But
> > it seems like it would just be cleaner, today, to have this variable (and
> > xen_phys_start?) be consistent across builds; and thus, usable with the
> > intended meaning in the future.
>
> Xenheap will disappear entirely on x86/64 in future. So in the long
> term, i386 and x86/64 will actually diverge significantly in this area.
>
> Of course I'll consider any patch on its own merits of usefulness and
> cleanliness.

OK, here is a very small and simple patch to "fix" this.  Note that I used a new '#ifdef' instead of adding an '#else' to the previous one, because this statement is logically distinct from the previous block and there is no difference in the generated code.

----------------

For IA32 builds, set xen_phys_start (and by extension, xenheap_phys_start) to be the physical address of the start of xen (instead of the previous value, 0).

Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>

diff -r e2f36d066b7b xen/arch/x86/setup.c
--- a/xen/arch/x86/setup.c      Mon Dec 22 13:48:40 2008 +0000
+++ b/xen/arch/x86/setup.c      Wed Jan 07 09:19:58 2009 -0800
@@ -868,6 +868,9 @@ void __init __start_xen(unsigned long mb
     nr_pages += (__pa(&_start) - xen_phys_start) >> PAGE_SHIFT;
     vesa_init();
 #endif
+#ifndef __x86_64__
+    xen_phys_start = __pa(&_start);
+#endif
     xenheap_phys_start = xen_phys_start;
     printk("Xen heap: %luMB (%lukB)\n",
            nr_pages >> (20 - PAGE_SHIFT),


* Re: xen_phys_start for 32b
  2009-01-07 21:37               ` Cihula, Joseph
@ 2009-01-07 23:27                 ` Keir Fraser
  0 siblings, 0 replies; 26+ messages in thread
From: Keir Fraser @ 2009-01-07 23:27 UTC (permalink / raw)
  To: Cihula, Joseph, xen-devel

On 07/01/2009 21:37, "Cihula, Joseph" <joseph.cihula@intel.com> wrote:

> OK, here is a very small and simple patch to "fix" this.  Note that I used a
> new '#ifdef' instead of adding an '#else' to the previous one, because this
> statement is logically distinct from the previous block and there is no
> difference in the generated code.
> 
> ----------------
> 
> For IA32 builds, set xen_phys_start (and by extension, xenheap_phys_start) to
> be the physical address of the start of xen (instead of the previous value,
> 0).

I'll consider it as part of your larger patch. By itself it's quite
pointless.

 -- Keir


* Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-07 15:40             ` Keir Fraser
  2009-01-07 21:37               ` Cihula, Joseph
@ 2009-01-08  0:17               ` Dan Magenheimer
  2009-01-08  9:04                 ` Keir Fraser
  1 sibling, 1 reply; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-08  0:17 UTC (permalink / raw)
  To: Keir Fraser, xen-devel; +Cc: Cihula, Joseph

> Xenheap will disappear entirely on x86/64 in future. So in the long
> term, i386 and x86/64 will actually diverge significantly in this
> area.

What's the ETA on this?  I've got a big patch in preparation built
on 3.3 that does gymnastics to get around xenheap limitations
and have been holding off updating it to unstable, hoping for
this xenheap change (to avoid re-re-duplicating the wheel).

Thanks,
Dan


* Re: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-08  0:17               ` Xenheap disappearance: (was: xen_phys_start for 32b) Dan Magenheimer
@ 2009-01-08  9:04                 ` Keir Fraser
  2009-01-08 17:53                   ` Dan Magenheimer
  2009-01-14 22:45                   ` Dan Magenheimer
  0 siblings, 2 replies; 26+ messages in thread
From: Keir Fraser @ 2009-01-08  9:04 UTC (permalink / raw)
  To: Dan Magenheimer, xen-devel; +Cc: Cihula, Joseph

On 08/01/2009 00:17, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

>> Xenheap will disappear entirely on x86/64 in future. So in the long
>> term, i386 and x86/64 will actually diverge significantly in this
>> area.
> 
> What's the ETA on this?  I've got a big patch in preparation built
> on 3.3 that does gymnastics to get around xenheap limitations
> and have been holding off updating it to unstable, hoping for
> this xenheap change (to avoid re-re-duplicating the wheel).

How difficult has it been to work around? Is it just pointing xmalloc() at
the domheap instead of xenheap?
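
A hedged sketch of what that redirection could look like
(xmalloc_pool_get() is the real TLSF backend hook; this body is an
assumed reworking, and Dan's concrete patches appear later in the
thread):

    /* Feed the TLSF pool backing xmalloc() from the domain heap
     * instead of the xen heap: */
    static void *xmalloc_pool_get(unsigned long size)
    {
        struct page_info *pi;

        ASSERT(size == PAGE_SIZE);
        if ( (pi = alloc_domheap_pages(NULL, 0, 0)) == NULL )
            return NULL;
        return mfn_to_virt(page_to_mfn(pi));
    }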

 -- Keir


* RE: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-08  9:04                 ` Keir Fraser
@ 2009-01-08 17:53                   ` Dan Magenheimer
  2009-01-14 22:45                   ` Dan Magenheimer
  1 sibling, 0 replies; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-08 17:53 UTC (permalink / raw)
  To: Keir Fraser, xen-devel; +Cc: Cihula, Joseph

> >> Xenheap will disappear entirely on x86/64 in future. So in the
> >> long term, i386 and x86/64 will actually diverge significantly in
> >> this area.
> > 
> > What's the ETA on this?  I've got a big patch in preparation built
> > on 3.3 that does gymnastics to get around xenheap limitations
> > and have been holding off updating it to unstable, hoping for
> > this xenheap change (to avoid re-re-duplicating the wheel).
> 
> How difficult has it been to work around? Is it just pointing 
> xmalloc() at
> the domheap instead of xenheap?

Not difficult.  I just do a lot of dynamic memory allocation in
my patch and those kinds of problems can be difficult to track
down, so I was hoping to avoid changing the interface twice.

I previously posted the patch I am currently using here:
http://lists.xensource.com/archives/html/xen-devel/2008-08/msg01142.html

However, after thinking on this a bit, I may just change all my code to
use domheap allocation and restrict usage to 64-bit hypervisors.  So
unless you plan to rewrite
the domheap interface when xenheap-in-64-bit goes away (or unless
I'm told that 32-bit hypervisor support is a must), I guess I can
remove my dependency on xenheap-in-64-bit going away.

Dan


* RE: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-08  9:04                 ` Keir Fraser
  2009-01-08 17:53                   ` Dan Magenheimer
@ 2009-01-14 22:45                   ` Dan Magenheimer
  2009-01-15  0:27                     ` Dan Magenheimer
  2009-01-15  8:38                     ` Keir Fraser
  1 sibling, 2 replies; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-14 22:45 UTC (permalink / raw)
  To: Keir Fraser, xen-devel

> How difficult has it been to work around? Is it just pointing 
> xmalloc() at
> the domheap instead of xenheap?
> 
>  -- Keir

Thinking about this a bit more, unless you plan to stop
supporting 32-bit Xen anytime soon, the semantic differences
probably warrant adding a second interface, let's call
it admalloc() (ad == anonymous domain), that should only be
used in 64-bit-only code where it can be guaranteed that
usage of pointers to the alloc'ed memory need not be bracketed
with (ugly) map/unmap_domain_page() calls.
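
A hedged illustration of the bracketing in question (the calls are
real Xen interfaces of this era, but the snippet is illustrative, not
from any posted patch):

    /* 32b: domheap pages are not permanently mapped, so access must
     * be bracketed: */
    struct page_info *pg = alloc_domheap_pages(NULL, 0, 0);
    void *v = map_domain_page(page_to_mfn(pg));
    /* ... touch the memory through v ... */
    unmap_domain_page(v);

    /* 64b: all RAM is covered by the direct map, so a pointer from
     * the proposed admalloc() could be used directly: */
    void *p = mfn_to_virt(page_to_mfn(pg));
    /* ... touch the memory through p; no bracketing needed ... */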

So I'd suggest adding _admalloc() and adfree() to xmalloc_tlsf.c
and when ifdef x86_64, _xmalloc and xfree simply get redefined
to _admalloc/adfree in xmalloc_tlsf.h.

If this sounds sensible, I will spin a patch as I'm the one keen
to get this settled.

Thanks,
Dan


* RE: RE: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-14 22:45                   ` Dan Magenheimer
@ 2009-01-15  0:27                     ` Dan Magenheimer
  2009-01-15  1:48                       ` Dan Magenheimer
  2009-01-15  8:38                     ` Keir Fraser
  1 sibling, 1 reply; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-15  0:27 UTC (permalink / raw)
  To: dan.magenheimer, Keir Fraser, xen-devel


> If this sounds sensible, I will spin a patch as I'm the one keen
> to get this settled.

Here's the patch I had in mind (compile-tested only).

[-- Attachment #2: xmalloc.patch --]

diff -r 4f6a2bbdff3f xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c	Tue Jan 13 15:53:47 2009 +0000
+++ b/xen/common/xmalloc_tlsf.c	Wed Jan 14 17:24:26 2009 -0700
@@ -496,6 +496,119 @@ void xmem_pool_free(void *ptr, struct xm
     spin_unlock(&pool->lock);
 }
 
+#ifdef __x86_64__
+/*
+ * Glue for 64-bit-only admalloc().
+ */
+
+static struct admem_pool *adpool;
+
+static void *admalloc_pool_get(unsigned long size)
+{
+    struct page_info *pi;
+
+    ASSERT(size == PAGE_SIZE);
+
+    if ((pi = alloc_domheap_pages(0,0,0)) == NULL)
+        return NULL;
+    return mfn_to_virt(page_to_mfn(pi));
+}
+
+static void admalloc_pool_put(void *p)
+{
+    free_domheap_pages(mfn_to_page(virt_to_mfn(p)),0);
+}
+
+static void *admalloc_whole_pages(unsigned long size)
+{
+    struct bhdr *b;
+    struct page_info *pi;
+    unsigned int pageorder = get_order_from_bytes(size + BHDR_OVERHEAD);
+
+    pi = alloc_domheap_pages(0,0,0);
+    if (pi == NULL)
+        return NULL;
+    b = mfn_to_virt(page_to_mfn(pi));
+
+    b->size = (1 << (pageorder + PAGE_SHIFT));
+    return (void *)b->ptr.buffer;
+}
+
+static void tlsf_init(void)
+{
+    INIT_LIST_HEAD(&pool_list_head);
+    spin_lock_init(&pool_list_lock);
+    adpool = admem_pool_create(
+        "admalloc", admalloc_pool_get, admalloc_pool_put,
+        PAGE_SIZE, 0, PAGE_SIZE);
+    BUG_ON(!adpool);
+}
+
+/*
+ * admalloc()
+ */
+
+void *_admalloc(unsigned long size, unsigned long align)
+{
+    void *p;
+    u32 pad;
+
+    ASSERT(!in_irq());
+
+    ASSERT((align & (align - 1)) == 0);
+    if ( align < MEM_ALIGN )
+        align = MEM_ALIGN;
+    size += align - MEM_ALIGN;
+
+    if ( !adpool )
+        tlsf_init();
+
+    if ( size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+        p = admalloc_whole_pages(size);
+    else
+        p = admem_pool_alloc(size, adpool);
+
+    /* Add alignment padding. */
+    if ( (pad = -(long)p & (align - 1)) != 0 )
+    {
+        char *q = (char *)p + pad;
+        struct bhdr *b = (struct bhdr *)(q - BHDR_OVERHEAD);
+        ASSERT(q > (char *)p);
+        b->size = pad | 1;
+        p = q;
+    }
+
+    ASSERT(((unsigned long)p & (align - 1)) == 0);
+    return p;
+}
+
+void adfree(void *p)
+{
+    struct bhdr *b;
+
+    ASSERT(!in_irq());
+
+    if ( p == NULL )
+        return;
+
+    /* Strip alignment padding. */
+    b = (struct bhdr *)((char *) p - BHDR_OVERHEAD);
+    if ( b->size & 1 )
+    {
+        p = (char *)p - (b->size & ~1u);
+        b = (struct bhdr *)((char *)p - BHDR_OVERHEAD);
+        ASSERT(!(b->size & 1));
+    }
+
+    if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+        free_domheap_pages(mfn_to_page(virt_to_mfn(p)),
+                           get_order_from_bytes(b->size));
+    else
+        admem_pool_free(p, adpool);
+}
+
+#else
+
 /*
  * Glue for xmalloc().
  */
@@ -597,3 +710,5 @@ void xfree(void *p)
     else
         xmem_pool_free(p, xenpool);
 }
+
+#endif
diff -r 4f6a2bbdff3f xen/include/xen/xmalloc.h
--- a/xen/include/xen/xmalloc.h	Tue Jan 13 15:53:47 2009 +0000
+++ b/xen/include/xen/xmalloc.h	Wed Jan 14 17:24:26 2009 -0700
@@ -5,6 +5,11 @@
 /*
  * Xen malloc/free-style interface.
  */
+
+#ifdef __x86_64__
+#define _xmalloc _admalloc
+#define xfree adfree
+#endif
 
 /* Allocate space for typed object. */
 #define xmalloc(_type) ((_type *)_xmalloc(sizeof(_type), __alignof__(_type)))



* RE: RE: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-15  0:27                     ` Dan Magenheimer
@ 2009-01-15  1:48                       ` Dan Magenheimer
  0 siblings, 0 replies; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-15  1:48 UTC (permalink / raw)
  To: dan.magenheimer, Keir Fraser, xen-devel


Oops, accidentally sent a stale patch.  This is the correct one
(and should compile on 64-bit :-)

Also, in case you need a signed-off-by line:

Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

> -----Original Message-----
> From: Dan Magenheimer 
> Sent: Wednesday, January 14, 2009 5:27 PM
> To: Dan Magenheimer; Keir Fraser; xen-devel@lists.xensource.com
> Subject: RE: [Xen-devel] RE: Xenheap disappearance: (was: 
> xen_phys_start
> for 32b)
> 
> 
> > If this sounds sensible, I will spin a patch as I'm the one keen
> > to get this settled.
> 
> Here's the patch I had in mind (compile-tested only).

[-- Attachment #2: xmalloc.patch --]

diff -r 4f6a2bbdff3f xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c	Tue Jan 13 15:53:47 2009 +0000
+++ b/xen/common/xmalloc_tlsf.c	Wed Jan 14 18:44:08 2009 -0700
@@ -294,15 +294,24 @@ struct xmem_pool *xmem_pool_create(
     struct xmem_pool *pool;
     void *region;
     int pool_bytes, pool_order;
+#ifdef __x86_64__
+    struct page_info *pi = 0;
+#endif
 
     BUG_ON(max_size && (max_size < init_size));
 
     pool_bytes = ROUNDUP_SIZE(sizeof(*pool));
     pool_order = get_order_from_bytes(pool_bytes);
 
+#ifdef __x86_64__
+    if ((pi = alloc_domheap_pages(0,pool_order,0)) == NULL)
+        return NULL;
+    pool = mfn_to_virt(page_to_mfn(pi));
+#else
     pool = (void *)alloc_xenheap_pages(pool_order);
     if ( pool == NULL )
         return NULL;
+#endif
     memset(pool, 0, pool_bytes);
 
     /* Round to next page boundary */
@@ -334,7 +343,11 @@ struct xmem_pool *xmem_pool_create(
     return pool;
 
  out_region:
+#ifdef __x86_64__
+    free_domheap_pages(pi, pool_order);
+#else
     free_xenheap_pages(pool, pool_order);
+#endif
     return NULL;
 }
 
@@ -496,6 +509,119 @@ void xmem_pool_free(void *ptr, struct xm
     spin_unlock(&pool->lock);
 }
 
+#ifdef __x86_64__
+/*
+ * Glue for 64-bit-only admalloc().
+ */
+
+static struct xmem_pool *adpool;
+
+static void *admalloc_pool_get(unsigned long size)
+{
+    struct page_info *pi;
+
+    ASSERT(size == PAGE_SIZE);
+
+    if ((pi = alloc_domheap_pages(0,0,0)) == NULL)
+        return NULL;
+    return mfn_to_virt(page_to_mfn(pi));
+}
+
+static void admalloc_pool_put(void *p)
+{
+    free_domheap_pages(mfn_to_page(virt_to_mfn(p)),0);
+}
+
+static void *admalloc_whole_pages(unsigned long size)
+{
+    struct bhdr *b;
+    struct page_info *pi;
+    unsigned int pageorder = get_order_from_bytes(size + BHDR_OVERHEAD);
+
+    pi = alloc_domheap_pages(0,pageorder,0);
+    if (pi == NULL)
+        return NULL;
+    b = mfn_to_virt(page_to_mfn(pi));
+
+    b->size = (1 << (pageorder + PAGE_SHIFT));
+    return (void *)b->ptr.buffer;
+}
+
+static void tlsf_init(void)
+{
+    INIT_LIST_HEAD(&pool_list_head);
+    spin_lock_init(&pool_list_lock);
+    adpool = xmem_pool_create(
+        "admalloc", admalloc_pool_get, admalloc_pool_put,
+        PAGE_SIZE, 0, PAGE_SIZE);
+    BUG_ON(!adpool);
+}
+
+/*
+ * admalloc()
+ */
+
+void *_admalloc(unsigned long size, unsigned long align)
+{
+    void *p;
+    u32 pad;
+
+    ASSERT(!in_irq());
+
+    ASSERT((align & (align - 1)) == 0);
+    if ( align < MEM_ALIGN )
+        align = MEM_ALIGN;
+    size += align - MEM_ALIGN;
+
+    if ( !adpool )
+        tlsf_init();
+
+    if ( size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+        p = admalloc_whole_pages(size);
+    else
+        p = xmem_pool_alloc(size, adpool);
+
+    /* Add alignment padding. */
+    if ( (pad = -(long)p & (align - 1)) != 0 )
+    {
+        char *q = (char *)p + pad;
+        struct bhdr *b = (struct bhdr *)(q - BHDR_OVERHEAD);
+        ASSERT(q > (char *)p);
+        b->size = pad | 1;
+        p = q;
+    }
+
+    ASSERT(((unsigned long)p & (align - 1)) == 0);
+    return p;
+}
+
+void adfree(void *p)
+{
+    struct bhdr *b;
+
+    ASSERT(!in_irq());
+
+    if ( p == NULL )
+        return;
+
+    /* Strip alignment padding. */
+    b = (struct bhdr *)((char *) p - BHDR_OVERHEAD);
+    if ( b->size & 1 )
+    {
+        p = (char *)p - (b->size & ~1u);
+        b = (struct bhdr *)((char *)p - BHDR_OVERHEAD);
+        ASSERT(!(b->size & 1));
+    }
+
+    if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+        free_domheap_pages(mfn_to_page(virt_to_mfn(p)),
+                           get_order_from_bytes(b->size));
+    else
+        xmem_pool_free(p, adpool);
+}
+
+#else
+
 /*
  * Glue for xmalloc().
  */
@@ -597,3 +723,5 @@ void xfree(void *p)
     else
         xmem_pool_free(p, xenpool);
 }
+
+#endif
diff -r 4f6a2bbdff3f xen/include/xen/xmalloc.h
--- a/xen/include/xen/xmalloc.h	Tue Jan 13 15:53:47 2009 +0000
+++ b/xen/include/xen/xmalloc.h	Wed Jan 14 18:44:08 2009 -0700
@@ -5,6 +5,11 @@
 /*
  * Xen malloc/free-style interface.
  */
+
+#ifdef __x86_64__
+#define _xmalloc _admalloc
+#define xfree adfree
+#endif
 
 /* Allocate space for typed object. */
 #define xmalloc(_type) ((_type *)_xmalloc(sizeof(_type), __alignof__(_type)))



* Re: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-14 22:45                   ` Dan Magenheimer
  2009-01-15  0:27                     ` Dan Magenheimer
@ 2009-01-15  8:38                     ` Keir Fraser
  2009-01-15  8:41                       ` Keir Fraser
  2009-01-15 18:15                       ` Dan Magenheimer
  1 sibling, 2 replies; 26+ messages in thread
From: Keir Fraser @ 2009-01-15  8:38 UTC (permalink / raw)
  To: Dan Magenheimer, xen-devel

On 14/01/2009 22:45, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> Thinking about this a bit more, unless you plan to stop
> supporting 32-bit Xen anytime soon, the semantic differences
> probably warrant adding a second interface, let's call
> it admalloc() (ad == anonymous domain), that should only be
> used in 64-bit-only code where it can be guaranteed that
> usage of pointers to the alloc'ed memory need not be bracketed
> with (ugly) map/unmap_domain_page() calls.
> 
> So I'd suggest adding _admalloc() and adfree() to xmalloc_tlsf.c
> and when ifdef x86_64, _xmalloc and xfree simply get redefined
> to _admalloc/adfree in xmalloc_tlsf.h.
> 
> If this sounds sensible, I will spin a patch as I'm the one keen
> to get this settled.

Xmalloc/xfree can use alloc_domheap_pages always on x86/64. A temporary
ifdef inside xmalloc is better than an extra xmalloc interface.

 -- Keir


* Re: Re: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-15  8:38                     ` Keir Fraser
@ 2009-01-15  8:41                       ` Keir Fraser
  2009-01-15 18:15                       ` Dan Magenheimer
  1 sibling, 0 replies; 26+ messages in thread
From: Keir Fraser @ 2009-01-15  8:41 UTC (permalink / raw)
  To: Keir Fraser, Dan Magenheimer, xen-devel

On 15/01/2009 08:38, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

>> If this sounds sensible, I will spin a patch as I'm the one keen
>> to get this settled.
> 
> Xmalloc/xfree can use alloc_domheap_pages always on x86/64. A temporary
> ifdef inside xmalloc is better than an extra xmalloc interface.

Also, this will be a small patch you can carry in your own patchset for now.

 -- Keir


* RE: Re: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-15  8:38                     ` Keir Fraser
  2009-01-15  8:41                       ` Keir Fraser
@ 2009-01-15 18:15                       ` Dan Magenheimer
  2009-01-15 18:51                         ` Keir Fraser
  1 sibling, 1 reply; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-15 18:15 UTC (permalink / raw)
  To: Keir Fraser, xen-devel


> Xmalloc/xfree can use alloc_domheap_pages always on x86/64.
> A temporary
> ifdef inside xmalloc is better than an extra xmalloc interface.

OK, I see.  So what you want is xmalloc to be the only interface.
And "temporary" means until Xen no longer supports 32-bit at all?

Will you take this patch then?  I think this patch meets your
objectives and is greatly simplified.

> Also, this will be a small patch you can carry in your own
> patchset for now.

I'm just trying to contribute to your stated objective:

> Xenheap will disappear entirely on x86/64 in future.

and trying to get the syntax/semantics pinned down.  Is
this not what you intended to implement for 3.4?  Or did
you have something entirely different in mind?

Thanks,
Dan

Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

[-- Attachment #2: xmalloc.patch --]

diff -r 4f6a2bbdff3f xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c	Tue Jan 13 15:53:47 2009 +0000
+++ b/xen/common/xmalloc_tlsf.c	Thu Jan 15 10:42:03 2009 -0700
@@ -283,6 +283,39 @@ static inline void ADD_REGION(void *regi
  * TLSF pool-based allocator start.
  */
 
+#ifdef __x86_64__
+extern void free_domheap_pages(struct page_info *pg, unsigned int order);
+extern struct page_info *alloc_domheap_pages(
+    struct domain *d, unsigned int order, unsigned int memflags);
+
+static inline void *alloc_xheap_pages(unsigned int order)
+{
+    struct page_info *pi = 0;
+
+    if ((pi = alloc_domheap_pages(0,order,0)) == NULL)
+        return NULL;
+    return mfn_to_virt(page_to_mfn(pi));
+}
+
+static inline void free_xheap_pages(void *p, unsigned int order)
+{
+    free_domheap_pages(mfn_to_page(virt_to_mfn(p)),order);
+}
+#else
+extern void free_xenheap_pages(void *p, unsigned int order);
+extern void *alloc_xenheap_pages(unsigned int order);
+
+static inline void *alloc_xheap_pages(unsigned int order)
+{
+    return (void *)alloc_xenheap_pages(order);
+}
+
+static inline void free_xheap_pages(void *p, unsigned int order)
+{
+    free_xenheap_pages(p,order);
+}
+#endif
+
 struct xmem_pool *xmem_pool_create(
     const char *name,
     xmem_pool_get_memory get_mem,
@@ -300,7 +333,7 @@ struct xmem_pool *xmem_pool_create(
     pool_bytes = ROUNDUP_SIZE(sizeof(*pool));
     pool_order = get_order_from_bytes(pool_bytes);
 
-    pool = (void *)alloc_xenheap_pages(pool_order);
+    pool = (void *)alloc_xheap_pages(pool_order);
     if ( pool == NULL )
         return NULL;
     memset(pool, 0, pool_bytes);
@@ -334,7 +367,7 @@ struct xmem_pool *xmem_pool_create(
     return pool;
 
  out_region:
-    free_xenheap_pages(pool, pool_order);
+    free_xheap_pages(pool, pool_order);
     return NULL;
 }
 
@@ -505,12 +538,12 @@ static void *xmalloc_pool_get(unsigned l
 static void *xmalloc_pool_get(unsigned long size)
 {
     ASSERT(size == PAGE_SIZE);
-    return alloc_xenheap_pages(0);
+    return alloc_xheap_pages(0);
 }
 
 static void xmalloc_pool_put(void *p)
 {
-    free_xenheap_pages(p,0);
+    free_xheap_pages(p,0);
 }
 
 static void *xmalloc_whole_pages(unsigned long size)
@@ -518,7 +551,7 @@ static void *xmalloc_whole_pages(unsigne
     struct bhdr *b;
     unsigned int pageorder = get_order_from_bytes(size + BHDR_OVERHEAD);
 
-    b = alloc_xenheap_pages(pageorder);
+    b = alloc_xheap_pages(pageorder);
     if ( b == NULL )
         return NULL;
 
@@ -593,7 +626,7 @@ void xfree(void *p)
     }
 
     if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
-        free_xenheap_pages((void *)b, get_order_from_bytes(b->size));
+        free_xheap_pages((void *)b, get_order_from_bytes(b->size));
     else
         xmem_pool_free(p, xenpool);
 }



* Re: Re: Xenheap disappearance: (was: xen_phys_start for 32b)
  2009-01-15 18:15                       ` Dan Magenheimer
@ 2009-01-15 18:51                         ` Keir Fraser
  2009-01-16  8:11                           ` Re: Xenheap disappearance: (was: xen_phys_start for32b) Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Keir Fraser @ 2009-01-15 18:51 UTC (permalink / raw)
  To: Dan Magenheimer, xen-devel

On 15/01/2009 18:15, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

>> Xmalloc/xfree can use alloc_domheap_pages always on x86/64.
>> A temporary
>> ifdef inside xmalloc is better than an extra xmalloc interface.
> 
> OK, I see.  So what you want is xmalloc to be the only interface.
> And "temporary" means until Xen no longer supports 32-bit at all?

I mean until I get rid of restricted xenheap for x86/64 (and you've caused
me to go look at that patch again now, so hopefully I can get it debugged
and in next week).

> Will you take this patch then?  I think this patch meets your
> objectives and is greatly simplified.

This is indeed the patch I had in mind.

>> Also, this will be a small patch you can carry in your own
>> patchset for now.
> 
> I'm just trying to contribute to your stated objective:
> 
>> Xenheap will disappear entirely on x86/64 in future.
> 
> and trying to get the syntax/semantics pinned down.  Is
> this not what you intended to implement for 3.4?  Or did
> you have something entirely different in mind?

I think the patch you attached will work just fine for you for now. If your
stuff goes in before getting rid of xenheap restrictions on x86/64, then I
would take this patch at that time. But I think that's unlikely. Well, I
hope it is, unless I stall on my xenheap patch again. :-)

 -- Keir


* Re: Re: Xenheap disappearance: (was: xen_phys_start for32b)
  2009-01-15 18:51                         ` Keir Fraser
@ 2009-01-16  8:11                           ` Jan Beulich
  2009-01-16 23:16                             ` Dan Magenheimer
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2009-01-16  8:11 UTC (permalink / raw)
  To: Keir Fraser, Dan Magenheimer; +Cc: xen-devel

>>> Keir Fraser <keir.fraser@eu.citrix.com> 15.01.09 19:51 >>>
>I think the patch you attached will work just fine for you for now. If your
>stuff goes in before getting rid of xenheap restrictions on x86/64, then I
>would take this patch at that time. But I think that's unlikely. Well, I
>hope it is, unless I stall on my xenheap patch again. :-)

I'd hope that too, because the patch as is must not go in, as it would break
other assumptions afaics: At present, the tools balloon out of Dom0 exactly
the amount needed for creating pv domains (I think there's some slack for
hvm ones), so the fact that the domain heap now serves xmalloc requires
that there always be some extra space available in Xen (also to serve
dynamic allocations). Additionally I think the minimum that even a temporary
patch like this should do is fall back to allocating from the Xen heap when
the domain heap is unable to supply the requested amount.
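
A hedged sketch of that fallback (hypothetical helper, not from any of
the posted patches):

    static void *alloc_pages_with_fallback(unsigned int order)
    {
        struct page_info *pi = alloc_domheap_pages(NULL, order, 0);

        if ( pi != NULL )
            return mfn_to_virt(page_to_mfn(pi));
        /* Domain heap exhausted: fall back to the Xen heap. */
        return alloc_xenheap_pages(order);
    }

The matching free would then need to know which heap the pages came
from, which is part of why even a temporary patch gets more involved.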

Jan


* RE: Re: Xenheap disappearance: (was: xen_phys_start for32b)
  2009-01-16  8:11                           ` Re: Xenheap disappearance: (was: xen_phys_start for32b) Jan Beulich
@ 2009-01-16 23:16                             ` Dan Magenheimer
  2009-01-19  8:22                               ` Re: Xenheap disappearance: (was: xen_phys_startfor32b) Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-16 23:16 UTC (permalink / raw)
  To: Jan Beulich, Keir Fraser; +Cc: xen-devel

 -----Original Message-----
> From: Jan Beulich [mailto:jbeulich@novell.com]
> Sent: Friday, January 16, 2009 1:12 AM
> >>> Keir Fraser <keir.fraser@eu.citrix.com> 15.01.09 19:51 >>>
> >I think the patch you attached will work just fine for you for now.
> >If your stuff goes in before getting rid of xenheap restrictions on
> >x86/64, then I would take this patch at that time. But I think
> >that's unlikely. Well, I hope it is, unless I stall on my xenheap
> >patch again. :-)
> 
> I'd hope that too, because the patch as is must not go in, as it
> would break other assumptions afaics: At present, the tools balloon
> out of Dom0 exactly the amount needed for creating pv domains (I
> think there's some slack for hvm ones), so the fact that the domain
> heap now serves xmalloc requires that there always be some extra
> space available in Xen (also to serve dynamic allocations).
> Additionally I think the minimum that even a temporary patch like
> this should do is fall back to allocating from the Xen heap when the
> domain heap is unable to supply the requested amount.

Jan --

I'm not sure what you are getting at.  Are you saying that
creating a domain takes (big)MB from domheap, then later
(little)KB from xenheap, and if we combine domheap and xenheap,
the tools might launch a domain when available memory is
greater than (big)MB but smaller than (big)MB+(little)KB,
and that will result in the tools thinking the domain
can launch but it won't?  I suppose that's possible,
but exceedingly unlikely.  And I think Keir's plan will
have the same problem.  Sounds like a tools bug, not a
reason to avoid modernizing Xen memory management.

Dan


* RE: Re: Xenheap disappearance: (was: xen_phys_startfor32b)
  2009-01-16 23:16                             ` Dan Magenheimer
@ 2009-01-19  8:22                               ` Jan Beulich
  2009-01-19 20:04                                 ` Dan Magenheimer
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2009-01-19  8:22 UTC (permalink / raw)
  To: Keir Fraser, Dan Magenheimer; +Cc: xen-devel

>>> Dan Magenheimer <dan.magenheimer@oracle.com> 17.01.09 00:16 >>>
>I'm not sure what you are getting at.  Are you saying that
>creating a domain takes (big)MB from domheap, then later
>(little)KB from xenheap, and if we combine domheap and xenheap,
>the tools might launch a domain when available memory is
>greater than (big)MB but smaller than (big)MB+(little)KB,
>and that will result in the tools thinking the domain
>can launch but it won't?  I suppose that's possible,

Yes, that's what I'm trying to say. And I think it's rather likely to happen,
as I frequently see systems with completely empty domain heaps.

>but exceedingly unlikely.  And I think Keir's plan will
>have the same problem.  Sounds like a tools bug, not a
>reason to avoid modernizing Xen memory management.

No, I wasn't arguing against making improvements in Xen - in fact, I've
been raising the scalability issue of the limited Xen heap for a long
time. I was just trying to point out that the Xen change *must* be
accompanied by a tools change in order to be usable in anything other
than development/test environments.

Jan


* RE: Re: Xenheap disappearance: (was: xen_phys_startfor32b)
  2009-01-19  8:22                               ` Re: Xenheap disappearance: (was: xen_phys_startfor32b) Jan Beulich
@ 2009-01-19 20:04                                 ` Dan Magenheimer
  2009-01-20  9:11                                   ` Re: Xenheap disappearance: (was:xen_phys_startfor32b) Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Dan Magenheimer @ 2009-01-19 20:04 UTC (permalink / raw)
  To: Jan Beulich, Keir Fraser; +Cc: xen-devel

> as I frequently see systems with completely empty domain heaps.

Really?  Oh, I see we probably have different default paradigms
for memory allocation.  On Oracle VM, dom0 is by default
launched with 256MB and all remaining memory is "free" and
guests are generally launched with memory==maxmem.  As a
result, it is very unusual to have an empty domheap due
to fragmentation.

I expect you are running with dom0_mem unset, thus
using auto-ballooning from dom0 by default.  And probably
either guests are usually launched with memory<<maxmem or
are using disk='file:...' (which results in dom0 filling
up its page cache) or both.  True?
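
For concreteness, the two setups being contrasted look roughly like
this on the Xen line of a GRUB entry (paths and the 256M value are
assumed examples):

    # Fixed dom0 size; the remaining RAM stays free for guests:
    kernel /boot/xen.gz dom0_mem=256M

    # dom0_mem unset: dom0 initially claims all RAM and the tools
    # auto-balloon it down as guests are created:
    kernel /boot/xen.gz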

> *must* be accompanied by a tools change in order to be usable

Yes, I see that it would be a must for your different paradigm,
but less important in the one I am accustomed to.

Thanks,
Dan

> >>> Dan Magenheimer <dan.magenheimer@oracle.com> 17.01.09 00:16 >>>
> >I'm not sure what you are getting at.  Are you saying that
> >creating a domain takes (big)MB from domheap, then later
> >(little)KB from xenheap, and if we combine domheap and xenheap,
> >the tools might launch a domain when available memory is
> >greater than (big)MB but smaller than (big)MB+(little)KB,
> >and that will result in the tools thinking the domain
> >can launch but it won't?  I suppose that's possible,
> 
> Yes, that's what I'm trying to say. And I think it's rather 
> likely to happen,
> as I frequently see systems with completely empty domain heaps.
> 
> >but exceedingly unlikely.  And I think Keir's plan will
> >have the same problem.  Sounds like a tools bug, not a
> >reason to avoid modernizing Xen memory management.
> 
> No, I wasn't arguing against making improvements in Xen - in fact,
> I've been raising the scalability issue of the limited Xen heap for
> a long time. I was just trying to point out that the Xen change
> *must* be accompanied by a tools change in order to be usable in
> anything other than development/test environments.
> 
> Jan


* RE: Re: Xenheap disappearance: (was:xen_phys_startfor32b)
  2009-01-19 20:04                                 ` Dan Magenheimer
@ 2009-01-20  9:11                                   ` Jan Beulich
  2009-01-20  9:16                                     ` Keir Fraser
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2009-01-20  9:11 UTC (permalink / raw)
  To: Keir Fraser, Dan Magenheimer; +Cc: xen-devel

>>> Dan Magenheimer <dan.magenheimer@oracle.com> 19.01.09 21:04 >>>
>> as I frequently see systems with completely empty domain heaps.
>
>Really?  Oh, I see we probably have different default paradigms
>for memory allocation.  On Oracle VM, dom0 is by default
>launched with 256MB and all remaining memory is "free" and
>guests are generally launched with memory==maxmem.  As a
>result, it is very unusual to have an empty domheap due
>to fragmentation.
>
>I expect you are running with dom0_mem unset, thus
>using auto-ballooning from dom0 by default.  And probably
>either guests are usually launched with memory<<maxmem or
>are using disk='file:...' (which results in dom0 filling
>up its page cache) or both.  True?

All of this indeed is correct for default SuSE installations, and close to
correct for my private way of setting up things (I do use dom0_mem,
but only to avoid auto ballooning for starting a single, average size VM,
as I rarely find myself running more than one at a time).

>> *must* be accompanied by a tools change in order to be usable
>
>Yes, I see that it would be a must for your different paradigm,
>but less important in the one I am accustomed to.

Since there's no dom0_mem used by default in -unstable, the 'must'
applies there too.

Jan


* Re: Re: Xenheap disappearance: (was:xen_phys_startfor32b)
  2009-01-20  9:11                                   ` Re: Xenheap disappearance: (was:xen_phys_startfor32b) Jan Beulich
@ 2009-01-20  9:16                                     ` Keir Fraser
  0 siblings, 0 replies; 26+ messages in thread
From: Keir Fraser @ 2009-01-20  9:16 UTC (permalink / raw)
  To: Jan Beulich, Dan Magenheimer; +Cc: xen-devel

On 20/01/2009 09:11, "Jan Beulich" <jbeulich@novell.com> wrote:

>>> *must* be accompanied by a tools change in order to be usable
>> 
>> Yes, I see that it would be a must for your different paradigm,
>> but less important in the one I am accustomed to.
> 
> Since there's no dom0_mem used by default in -unstable, the 'must'
> applies there too.

We do specify one for our internal automated tests however. Really we do not
test the auto-ballooner.

 -- Keir



Thread overview: 26+ messages
2009-01-06  6:36 xen_phys_start for 32b Cihula, Joseph
2009-01-06  8:42 ` Keir Fraser
2009-01-06 18:29   ` Cihula, Joseph
2009-01-06 22:12     ` Keir Fraser
2009-01-06 22:19       ` Cihula, Joseph
2009-01-07  8:54         ` Keir Fraser
2009-01-07 15:13           ` Cihula, Joseph
2009-01-07 15:40             ` Keir Fraser
2009-01-07 21:37               ` Cihula, Joseph
2009-01-07 23:27                 ` Keir Fraser
2009-01-08  0:17               ` Xenheap disappearance: (was: xen_phys_start for 32b) Dan Magenheimer
2009-01-08  9:04                 ` Keir Fraser
2009-01-08 17:53                   ` Dan Magenheimer
2009-01-14 22:45                   ` Dan Magenheimer
2009-01-15  0:27                     ` Dan Magenheimer
2009-01-15  1:48                       ` Dan Magenheimer
2009-01-15  8:38                     ` Keir Fraser
2009-01-15  8:41                       ` Keir Fraser
2009-01-15 18:15                       ` Dan Magenheimer
2009-01-15 18:51                         ` Keir Fraser
2009-01-16  8:11                           ` Re: Xenheap disappearance: (was: xen_phys_start for32b) Jan Beulich
2009-01-16 23:16                             ` Dan Magenheimer
2009-01-19  8:22                               ` Re: Xenheap disappearance: (was: xen_phys_startfor32b) Jan Beulich
2009-01-19 20:04                                 ` Dan Magenheimer
2009-01-20  9:11                                   ` Re: Xenheap disappearance: (was:xen_phys_startfor32b) Jan Beulich
2009-01-20  9:16                                     ` Keir Fraser
