* [RFC PATCH] Start PV guest faster
@ 2014-05-20  7:26 Frediano Ziglio
  2014-05-20  9:07 ` Andrew Cooper
  2014-05-20  9:30 ` Jan Beulich
  0 siblings, 2 replies; 6+ messages in thread
From: Frediano Ziglio @ 2014-05-20  7:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, Ian Campbell, Stefano Stabellini

Experimental patch that tries to allocate large chunks in order to start
PV guests more quickly.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 tools/libxc/xc_dom_x86.c | 51 ++++++++++++++++++++++++++++++------------------
 1 file changed, 32 insertions(+), 19 deletions(-)


A while ago I noticed that the time to start a large PV guest depends
on the amount of memory. For VMs with 64 or more GB of RAM the time can
become quite significant (around 20 seconds). Digging around I found that
a lot of time is spent populating RAM (from a single hypercall made by
xenguest).

xenguest allocates the memory asking for single pages in a single
hypercall. This patch tries to use larger chunks of memory instead. Note
that the order parameter used when populating pages has nothing to do with
superpages here; it only controls how the allocation is batched.
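
To illustrate the mechanism (just a sketch, not part of the patch): with
order 0 the extents[] array passed to xc_domain_populate_physmap_exact()
needs one entry per page, while with order N a single entry covers 2^N
contiguous pfns, so the same amount of RAM needs far fewer extents:

#include <xenctrl.h>

/* Populate nr_pages guest pages starting at first_pfn using chunks of
 * 2^order pages each.  nr_pages is assumed to be a multiple of 2^order.
 * On return Xen has filled extents[] with the first mfn of each chunk. */
static int populate_chunked(xc_interface *xch, domid_t domid,
                            xen_pfn_t first_pfn, xen_pfn_t nr_pages,
                            unsigned int order)
{
    xen_pfn_t nr_extents = nr_pages >> order;
    xen_pfn_t extents[nr_extents];
    xen_pfn_t i;

    for ( i = 0; i < nr_extents; i++ )
        extents[i] = first_pfn + (i << order);

    return xc_domain_populate_physmap_exact(xch, domid, nr_extents,
                                            order, 0, extents);
}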

The improvement is quite significant (the hypercall is more than 20
times faster for a machine with 3GB), however there are different things
to consider:
- should this optimization be done inside Xen? If the change is only in
userspace it surely keeps Xen simpler and safer, but on the other hand
Xen knows better whether allocating big chunks is a good idea or not
- can userspace request some memory statistics from Xen in order to make
better use of chunks?
- how is memory fragmentation affected? The original code requests single
pages, so Xen can decide to fill the gaps and keep large orders of pages
available for HVM guests (which can use superpages). On the other hand, if
I want to run 2 PV guests with 60GB each on a 128GB host I don't see
the point in not requesting large chunks.
- a debug Xen returns pages in reverse order while the chunks have to be
allocated sequentially. Is this a problem?

I didn't find any piece of code where superpages is turned on in
xc_dom_image, but I think that if the number of pages is not a multiple of
the superpage size the code allocates a bit less memory for the guest.
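
For example, assuming SUPERPAGE_PFN_SHIFT is 9 (2MiB superpages made of 512
4KiB pages) -- the numbers below are just an illustration:

    xen_pfn_t total_pages = 786432 + 100;        /* 3GiB plus 100 extra pages */
    int count = total_pages >> 9;                /* 1536 superpages           */
    xen_pfn_t populated = (xen_pfn_t)count << 9; /* 786432 pages              */
    /* total_pages - populated == 100: the trailing 100 pages (400KiB)
     * are silently never populated. */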


diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index e034d62..d09269a 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -756,10 +756,34 @@ static int x86_shadow(xc_interface *xch, domid_t domid)
     return rc;
 }
 
+static int populate_range(struct xc_dom_image *dom, xen_pfn_t extents[],
+                          xen_pfn_t pfn_start, unsigned page_order,
+                          xen_pfn_t num_pages)
+{
+    int rc;
+    xen_pfn_t pfn, mask;
+
+    for ( pfn = 0; pfn < num_pages; pfn += 1 << page_order )
+        extents[pfn >> page_order] = pfn + pfn_start;
+
+    rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
+                                           pfn >> page_order, page_order, 0,
+                                           extents);
+    if ( rc || page_order == 0 )
+        return rc;
+
+    /* convert to "normal" pages */
+    mask = (1ULL << page_order) - 1;
+    for ( pfn = num_pages; pfn-- > 0; )
+        extents[pfn] = extents[pfn >> page_order] + (pfn & mask);
+
+    return rc;
+}
+
 int arch_setup_meminit(struct xc_dom_image *dom)
 {
     int rc;
-    xen_pfn_t pfn, allocsz, i, j, mfn;
+    xen_pfn_t pfn, allocsz, i;
 
     rc = x86_compat(dom->xch, dom->guest_domid, dom->guest_type);
     if ( rc )
@@ -779,25 +803,12 @@ int arch_setup_meminit(struct xc_dom_image *dom)
     if ( dom->superpages )
     {
         int count = dom->total_pages >> SUPERPAGE_PFN_SHIFT;
-        xen_pfn_t extents[count];
 
         DOMPRINTF("Populating memory with %d superpages", count);
-        for ( pfn = 0; pfn < count; pfn++ )
-            extents[pfn] = pfn << SUPERPAGE_PFN_SHIFT;
-        rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
-                                               count, SUPERPAGE_PFN_SHIFT, 0,
-                                               extents);
+        rc = populate_range(dom, dom->p2m_host, 0, SUPERPAGE_PFN_SHIFT,
+                            dom->total_pages);
         if ( rc )
             return rc;
-
-        /* Expand the returned mfn into the p2m array */
-        pfn = 0;
-        for ( i = 0; i < count; i++ )
-        {
-            mfn = extents[i];
-            for ( j = 0; j < SUPERPAGE_NR_PFNS; j++, pfn++ )
-                dom->p2m_host[pfn] = mfn + j;
-        }
     }
     else
     {
@@ -820,9 +831,11 @@ int arch_setup_meminit(struct xc_dom_image *dom)
             allocsz = dom->total_pages - i;
             if ( allocsz > 1024*1024 )
                 allocsz = 1024*1024;
-            rc = xc_domain_populate_physmap_exact(
-                dom->xch, dom->guest_domid, allocsz,
-                0, 0, &dom->p2m_host[i]);
+            /* try big chunks of memory first */
+            if ( (allocsz & ((1<<10)-1)) == 0 )
+                rc = populate_range(dom, &dom->p2m_host[i], i, 10, allocsz);
+            if ( rc )
+                rc = populate_range(dom, &dom->p2m_host[i], i, 0, allocsz);
         }
 
         /* Ensure no unclaimed pages are left unused.
-- 
1.9.1


* Re: [RFC PATCH] Start PV guest faster
  2014-05-20  7:26 [RFC PATCH] Start PV guest faster Frediano Ziglio
@ 2014-05-20  9:07 ` Andrew Cooper
  2014-05-20  9:30 ` Jan Beulich
  1 sibling, 0 replies; 6+ messages in thread
From: Andrew Cooper @ 2014-05-20  9:07 UTC (permalink / raw)
  To: Frediano Ziglio; +Cc: xen-devel, Ian Jackson, Ian Campbell, Stefano Stabellini

On 20/05/14 08:26, Frediano Ziglio wrote:
> Experimental patch that tries to allocate large chunks in order to start
> PV guests more quickly.
>
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> ---
>  tools/libxc/xc_dom_x86.c | 51 ++++++++++++++++++++++++++++++------------------
>  1 file changed, 32 insertions(+), 19 deletions(-)
>
>
> A while ago I noticed that the time to start a large PV guest depends
> on the amount of memory. For VMs with 64 or more GB of RAM the time can
> become quite significant (around 20 seconds). Digging around I found that
> a lot of time is spent populating RAM (from a single hypercall made by
> xenguest).
>
> xenguest allocates the memory asking for single pages in a single
> hypercall. This patch tries to use larger chunks of memory instead. Note
> that the order parameter used when populating pages has nothing to do with
> superpages here; it only controls how the allocation is batched.

Here, you probably mean 'domain builder', which is a component of libxc.
`xenguest` is a XenServer-specific thing which invokes the domain
builder on behalf of Xapi.

>
> The improvement is quite significant (the hypercall is more than 20
> times faster for a machine with 3GB), however there are different things
> to consider:
> - should this optimization be done inside Xen? If the change is only in
> userspace it surely keeps Xen simpler and safer, but on the other hand
> Xen knows better whether allocating big chunks is a good idea or not

No - the whole reason for having the order field in the first place is
to allow userspace to batch like this.  Xen cannot guess at what
userspace is likely to ask for in the future.

> - can userspace request some memory statistics from Xen in order to make
> better use of chunks?
> - how is memory fragmentation affected? The original code requests single
> pages, so Xen can decide to fill the gaps and keep large orders of pages
> available for HVM guests (which can use superpages). On the other hand, if
> I want to run 2 PV guests with 60GB each on a 128GB host I don't see
> the point in not requesting large chunks.

Inside Xen, the order is broken down into individual pages inside
guest_physmap_add_entry().

The net improvement you are seeing is probably from not taking and
releasing the p2m lock for every single page.

~Andrew

> - a debug Xen returns pages in reverse order while the chunks have to be
> allocated sequentially. Is this a problem?
>
> I didn't find any piece of code where superpages is turned on in
> xc_dom_image, but I think that if the number of pages is not a multiple of
> the superpage size the code allocates a bit less memory for the guest.
>
>
> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> index e034d62..d09269a 100644
> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> @@ -756,10 +756,34 @@ static int x86_shadow(xc_interface *xch, domid_t domid)
>      return rc;
>  }
>  
> +static int populate_range(struct xc_dom_image *dom, xen_pfn_t extents[],
> +                          xen_pfn_t pfn_start, unsigned page_order,
> +                          xen_pfn_t num_pages)
> +{
> +    int rc;
> +    xen_pfn_t pfn, mask;
> +
> +    for ( pfn = 0; pfn < num_pages; pfn += 1 << page_order )
> +        extents[pfn >> page_order] = pfn + pfn_start;
> +
> +    rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
> +                                           pfn >> page_order, page_order, 0,
> +                                           extents);
> +    if ( rc || page_order == 0 )
> +        return rc;
> +
> +    /* convert to "normal" pages */
> +    mask = (1ULL << page_order) - 1;
> +    for ( pfn = num_pages; pfn-- > 0; )
> +        extents[pfn] = extents[pfn >> page_order] + (pfn & mask);
> +
> +    return rc;
> +}
> +
>  int arch_setup_meminit(struct xc_dom_image *dom)
>  {
>      int rc;
> -    xen_pfn_t pfn, allocsz, i, j, mfn;
> +    xen_pfn_t pfn, allocsz, i;
>  
>      rc = x86_compat(dom->xch, dom->guest_domid, dom->guest_type);
>      if ( rc )
> @@ -779,25 +803,12 @@ int arch_setup_meminit(struct xc_dom_image *dom)
>      if ( dom->superpages )
>      {
>          int count = dom->total_pages >> SUPERPAGE_PFN_SHIFT;
> -        xen_pfn_t extents[count];
>  
>          DOMPRINTF("Populating memory with %d superpages", count);
> -        for ( pfn = 0; pfn < count; pfn++ )
> -            extents[pfn] = pfn << SUPERPAGE_PFN_SHIFT;
> -        rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
> -                                               count, SUPERPAGE_PFN_SHIFT, 0,
> -                                               extents);
> +        rc = populate_range(dom, dom->p2m_host, 0, SUPERPAGE_PFN_SHIFT,
> +                            dom->total_pages);
>          if ( rc )
>              return rc;
> -
> -        /* Expand the returned mfn into the p2m array */
> -        pfn = 0;
> -        for ( i = 0; i < count; i++ )
> -        {
> -            mfn = extents[i];
> -            for ( j = 0; j < SUPERPAGE_NR_PFNS; j++, pfn++ )
> -                dom->p2m_host[pfn] = mfn + j;
> -        }
>      }
>      else
>      {
> @@ -820,9 +831,11 @@ int arch_setup_meminit(struct xc_dom_image *dom)
>              allocsz = dom->total_pages - i;
>              if ( allocsz > 1024*1024 )
>                  allocsz = 1024*1024;
> -            rc = xc_domain_populate_physmap_exact(
> -                dom->xch, dom->guest_domid, allocsz,
> -                0, 0, &dom->p2m_host[i]);
> +            /* try big chunks of memory first */
> +            if ( (allocsz & ((1<<10)-1)) == 0 )
> +                rc = populate_range(dom, &dom->p2m_host[i], i, 10, allocsz);
> +            if ( rc )
> +                rc = populate_range(dom, &dom->p2m_host[i], i, 0, allocsz);
>          }
>  
>          /* Ensure no unclaimed pages are left unused.


* Re: [RFC PATCH] Start PV guest faster
  2014-05-20  7:26 [RFC PATCH] Start PV guest faster Frediano Ziglio
  2014-05-20  9:07 ` Andrew Cooper
@ 2014-05-20  9:30 ` Jan Beulich
  2014-05-29 17:24   ` Frediano Ziglio
  1 sibling, 1 reply; 6+ messages in thread
From: Jan Beulich @ 2014-05-20  9:30 UTC (permalink / raw)
  To: Frediano Ziglio; +Cc: xen-devel, Ian Jackson, Ian Campbell, Stefano Stabellini

>>> On 20.05.14 at 09:26, <frediano.ziglio@citrix.com> wrote:
> Experimental patch that tries to allocate large chunks in order to start
> PV guests more quickly.

The fundamental idea is certainly welcome.

> A while ago I noticed that the time to start a large PV guest depends
> on the amount of memory. For VMs with 64 or more GB of RAM the time can
> become quite significant (around 20 seconds). Digging around I found that
> a lot of time is spent populating RAM (from a single hypercall made by
> xenguest).

Did you check whether - like noticed elsewhere - this is due to
excessive hypercall preemption/restart? I.e. whether making
the preemption checks less fine grained helps?

> The improvement is quite significant (the hypercall is more than 20
> times faster for a machine with 3GB), however there are different things
> to consider:
> - should this optimization be done inside Xen? If the change is only in
> userspace it surely keeps Xen simpler and safer, but on the other hand
> Xen knows better whether allocating big chunks is a good idea or not

Except that Xen has no way to tell what "better" here would be.

> - a debug Xen returns pages in reverse order while the chunks have to be
> allocated sequentially. Is this a problem?

I think the ability to populate guest memory with (largely, but not
necessarily entirely) discontiguous memory should be retained for
debugging purposes (see also below).

> I didn't find any piece of code where superpages is turned on in
> xc_dom_image, but I think that if the number of pages is not a multiple of
> the superpage size the code allocates a bit less memory for the guest.

I think that's expected - I wonder whether that code is really in use
by anyone...

> @@ -820,9 +831,11 @@ int arch_setup_meminit(struct xc_dom_image *dom)
>              allocsz = dom->total_pages - i;
>              if ( allocsz > 1024*1024 )
>                  allocsz = 1024*1024;
> -            rc = xc_domain_populate_physmap_exact(
> -                dom->xch, dom->guest_domid, allocsz,
> -                0, 0, &dom->p2m_host[i]);
> +            /* try big chunks of memory first */
> +            if ( (allocsz & ((1<<10)-1)) == 0 )
> +                rc = populate_range(dom, &dom->p2m_host[i], i, 10, allocsz);
> +            if ( rc )
> +                rc = populate_range(dom, &dom->p2m_host[i], i, 0, allocsz);

So on what basis was 10 chosen here? I wonder whether this
shouldn't be
(a) smaller by default,
(b) configurable (globally or even per guest),
(c) dependent on the total memory getting assigned to the guest,
(d) tried with sequentially decreasing order after failure.

Additionally you're certainly aware that allocation failures lead to
hypervisor log messages (as today already seen when HVM guests
can't have their order-18 or order-9 allocations fulfilled). We may
need to think about ways to suppress these messages for such
allocations where the caller intends to retry with a smaller order.
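
For (d), a rough and untested sketch reusing populate_range() from your
patch (the helper name and the exact policy here are made up):

static int populate_range_fallback(struct xc_dom_image *dom,
                                   xen_pfn_t extents[], xen_pfn_t pfn_start,
                                   unsigned max_order, xen_pfn_t num_pages)
{
    unsigned order;
    int rc = -1;

    for ( order = max_order; ; order-- )
    {
        /* Only try orders that evenly divide the range. */
        if ( (num_pages & ((1UL << order) - 1)) == 0 )
            rc = populate_range(dom, extents, pfn_start, order, num_pages);
        if ( rc == 0 || order == 0 )
            return rc;
    }
}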

Jan


* Re: [RFC PATCH] Start PV guest faster
  2014-05-20  9:30 ` Jan Beulich
@ 2014-05-29 17:24   ` Frediano Ziglio
  2014-05-30  6:54     ` Jan Beulich
  0 siblings, 1 reply; 6+ messages in thread
From: Frediano Ziglio @ 2014-05-29 17:24 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Ian Jackson, Ian Campbell, Stefano Stabellini

On Tue, 2014-05-20 at 10:30 +0100, Jan Beulich wrote:
> >>> On 20.05.14 at 09:26, <frediano.ziglio@citrix.com> wrote:
> > Experimental patch that tries to allocate large chunks in order to start
> > PV guests more quickly.
> 
> The fundamental idea is certainly welcome.
> 
> > A while ago I noticed that the time to start a large PV guest depends
> > on the amount of memory. For VMs with 64 or more GB of RAM the time can
> > become quite significant (around 20 seconds). Digging around I found that
> > a lot of time is spent populating RAM (from a single hypercall made by
> > xenguest).
> 
> Did you check whether - like noticed elsewhere - this is due to
> excessive hypercall preemption/restart? I.e. whether making
> the preemption checks less fine grained helps?
> 

Yes, you are right!

Sorry for the late reply, I only got some time now. Doing some tests with a
not so big machine (3GB) and using strace to see the amount of time spent:

| Xen preempt check | User allocation | Time all ioctls (sec) |
| yes               | single pages    | 0.262                 |
| no                | single pages    | 0.0612                |
| yes               | bunch of pages  | 0.0325                |
| no                | bunch of pages  | 0.0280                |

So yes, the preemption check (which I disabled entirely for the tests!) is
the main factor. Of course disabling it entirely is not the right solution.
Is there some way to understand how often to do it, some sort of
computation/timing?


> > The improvement is quite significant (the hypercall is more than 20
> > times faster for a machine with 3GB), however there are different things
> > to consider:
> > - should this optimization be done inside Xen? If the change is only in
> > userspace it surely keeps Xen simpler and safer, but on the other hand
> > Xen knows better whether allocating big chunks is a good idea or not
> 
> Except that Xen has no way to tell what "better" here would be.
> 
> > - a debug Xen returns pages in reverse order while the chunks have to be
> > allocated sequentially. Is this a problem?
> 
> I think the ability to populate guest memory with (largely, but not
> necessarily entirely) discontiguous memory should be retained for
> debugging purposes (see also below).
> 
> > I didn't find any piece of code where superpages is turned on in
> > xc_dom_image, but I think that if the number of pages is not a multiple of
> > the superpage size the code allocates a bit less memory for the guest.
> 
> I think that's expected - I wonder whether that code is really in use
> by anyone...
> 
> > @@ -820,9 +831,11 @@ int arch_setup_meminit(struct xc_dom_image *dom)
> >              allocsz = dom->total_pages - i;
> >              if ( allocsz > 1024*1024 )
> >                  allocsz = 1024*1024;
> > -            rc = xc_domain_populate_physmap_exact(
> > -                dom->xch, dom->guest_domid, allocsz,
> > -                0, 0, &dom->p2m_host[i]);
> > +            /* try big chunks of memory first */
> > +            if ( (allocsz & ((1<<10)-1)) == 0 )
> > +                rc = populate_range(dom, &dom->p2m_host[i], i, 10, allocsz);
> > +            if ( rc )
> > +                rc = populate_range(dom, &dom->p2m_host[i], i, 0, allocsz);
> 
> So on what basis was 10 chosen here? I wonder whether this
> shouldn't be
> (a) smaller by default,
> (b) configurable (globally or even per guest),
> (c) dependent on the total memory getting assigned to the guest,
> (d) tried with sequentially decreasing order after failure.
> 
> Additionally you're certainly aware that allocation failures lead to
> hypervisor log messages (as today already seen when HVM guests
> can't have their order-18 or order-9 allocations fulfilled). We may
> need to think about ways to suppress these messages for such
> allocations where the caller intends to retry with a smaller order.
> 
> Jan
> 

Well, the patch was mainly a test, and 10 was chosen just for testing. I
have no idea about the best algorithm to follow; it should surely handle
unaligned situations better and decrease the size in steps.

Tomorrow I hope to get some time to try multiple chunk sizes and
understand this better.

Frediano


* Re: [RFC PATCH] Start PV guest faster
  2014-05-29 17:24   ` Frediano Ziglio
@ 2014-05-30  6:54     ` Jan Beulich
  2014-05-30  8:37       ` Frediano Ziglio
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Beulich @ 2014-05-30  6:54 UTC (permalink / raw)
  To: frediano.ziglio; +Cc: xen-devel, ian.jackson, ian.campbell, stefano.stabellini

>>> Frediano Ziglio <frediano.ziglio@citrix.com> 05/29/14 7:24 PM >>>
>On Tue, 2014-05-20 at 10:30 +0100, Jan Beulich wrote:
>> >>> On 20.05.14 at 09:26, <frediano.ziglio@citrix.com> wrote:
>> > A while ago I noticed that the time to start a large PV guest depends
>> > on the amount of memory. For VMs with 64 or more GB of RAM the time can
>> > become quite significant (around 20 seconds). Digging around I found that
>> > a lot of time is spent populating RAM (from a single hypercall made by
>> > xenguest).
>> 
>> Did you check whether - like noticed elsewhere - this is due to
>> excessive hypercall preemption/restart? I.e. whether making
>> the preemption checks less fine grained helps?
>> 
>
>Yes, you are right!
>
>Sorry for the late reply, I only got some time now. Doing some tests with a
>not so big machine (3GB) and using strace to see the amount of time spent:
>
>| Xen preempt check | User allocation | Time all ioctls (sec) |
>| yes               | single pages    | 0.262                 |
>| no                | single pages    | 0.0612                |
>| yes               | bunch of pages  | 0.0325                |
>| no                | bunch of pages  | 0.0280                |
>
>So yes, the preemption check (which I disabled entirely for the tests!) is
>the main factor. Of course disabling it entirely is not the right solution.
>Is there some way to understand how often to do it, some sort of
>computation/timing?

If you look at other instances, it's mostly heuristic at this point. I suppose
you'd want to make the preemption granularity slightly allocation order
dependent, e.g. preempt every 64 pages allocated (unless, of course, the
allocation order is even higher than that).
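
Roughly, something like this (illustrative only, not actual Xen code; the
names are invented):

#define PREEMPT_PAGES_SHIFT 6   /* check roughly every 64 pages of work */

/* Whether to check for preemption before handling extent 'idx' of the given
 * order: every (64 >> order) extents, or on every extent once a single
 * extent already covers 64 or more pages. */
static inline int want_preempt_check(unsigned long idx, unsigned int order)
{
    unsigned int shift = order < PREEMPT_PAGES_SHIFT
                         ? PREEMPT_PAGES_SHIFT - order : 0;

    return idx != 0 && (idx & ((1UL << shift) - 1)) == 0;
}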

Generally a time based approach (say every millisecond) might be
reasonable too, but reading out the time on each iteration isn't without
cost, so I'd recommend against this.

Jan


* Re: [RFC PATCH] Start PV guest faster
  2014-05-30  6:54     ` Jan Beulich
@ 2014-05-30  8:37       ` Frediano Ziglio
  0 siblings, 0 replies; 6+ messages in thread
From: Frediano Ziglio @ 2014-05-30  8:37 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, ian.jackson, ian.campbell, stefano.stabellini

On Fri, 2014-05-30 at 07:54 +0100, Jan Beulich wrote:
> >>> Frediano Ziglio <frediano.ziglio@citrix.com> 05/29/14 7:24 PM >>>
> >On Tue, 2014-05-20 at 10:30 +0100, Jan Beulich wrote:
> >> >>> On 20.05.14 at 09:26, <frediano.ziglio@citrix.com> wrote:
> >> > A while ago I noticed that the time to start a large PV guest depends
> >> > on the amount of memory. For VMs with 64 or more GB of RAM the time can
> >> > become quite significant (around 20 seconds). Digging around I found that
> >> > a lot of time is spent populating RAM (from a single hypercall made by
> >> > xenguest).
> >> 
> >> Did you check whether - like noticed elsewhere - this is due to
> >> excessive hypercall preemption/restart? I.e. whether making
> >> the preemption checks less fine grained helps?
> >> 
> >
> >Yes, you are right!
> >
> >Sorry for the late reply, I only got some time now. Doing some tests with a
> >not so big machine (3GB) and using strace to see the amount of time spent:
> >
> >| Xen preempt check | User allocation | Time all ioctls (sec) |
> >| yes               | single pages    | 0.262                 |
> >| no                | single pages    | 0.0612                |
> >| yes               | bunch of pages  | 0.0325                |
> >| no                | bunch of pages  | 0.0280                |
> >
> >So yes, the preemption check (which I disabled entirely for the tests!) is
> >the main factor. Of course disabling it entirely is not the right solution.
> >Is there some way to understand how often to do it, some sort of
> >computation/timing?
> 
> If you look at other instances, it's mostly heuristic at this point. I suppose
> you'd want to make the preemption granularity slightly allocation order
> dependent, e.g. preempt every 64 pages allocated (unless, of course, the
> allocation order is even higher than that).
> 
> Generally a time based approach (say every millisecond) might be
> reasonable too, but reading out the time on each iteration isn't without
> cost, so I'd recommend against this.
> 
> Jan
> 

I was thinking of some adaptive approach where at the beginning you try
to figure out how long to wait. In the meantime I got the results from my
tests.

See
https://docs.google.com/spreadsheets/d/1u_93KXr1SGfn4Pz47tjPA71MMPnnQ4rTPF3SOANd_oY/edit?usp=sharing

For the tests I disabled preemption entirely.

Frediano

