From mboxrd@z Thu Jan 1 00:00:00 1970 From: Haitao Shan Subject: Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report) Date: Mon, 31 Jan 2011 16:57:58 +0800 Message-ID: References: <1295955798.14780.5930.camel@zakaz.uk.xensource.com> <1296039431.14780.6753.camel@zakaz.uk.xensource.com> <1296462612.20804.181.camel@localhost.localdomain> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="===============2052498262==" Return-path: In-Reply-To: <1296462612.20804.181.camel@localhost.localdomain> List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Sender: xen-devel-bounces@lists.xensource.com Errors-To: xen-devel-bounces@lists.xensource.com To: Ian Campbell Cc: "Zheng, Shaohui" , Keir Fraser , "xen-devel@lists.xensource.com" List-Id: xen-devel@lists.xenproject.org --===============2052498262== Content-Type: multipart/alternative; boundary=90e6ba4fbe2ec6f784049b209cea --90e6ba4fbe2ec6f784049b209cea Content-Type: text/plain; charset=ISO-8859-1 For example: The patch: void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages) { - void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages); + void *p = hypercall_buffer_cache_alloc(xch, nr_pages); - if (!p) + if ( !p ) + p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages); + + if ( !p ) return NULL; My code at hand: void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages) { size_t size = nr_pages * PAGE_SIZE; void *p; #if defined(_POSIX_C_SOURCE) && !defined(__sun__) int ret; ret = posix_memalign(&p, PAGE_SIZE, size); if (ret != 0) return NULL; #elif defined(__NetBSD__) || defined(__OpenBSD__) p = valloc(size); #else p = memalign(PAGE_SIZE, size); #endif if (!p) return NULL; #ifndef __sun__ if ( mlock(p, size) < 0 ) { free(p); return NULL; } #endif b->hbuf = p; memset(p, 0, size); return b->hbuf; } And BTW: I am using c/s 22846. Shan Haitao 2011/1/31 Ian Campbell > On Mon, 2011-01-31 at 03:06 +0000, Haitao Shan wrote: > > Hi, Ian, > > > > I can not apply your patch. Is this patch developed against more newer > > version of Xen than what I can have at hand? > > There was a typo below (an xc_hypercall_buffer_cache_release which > should have been xc__hypercall_buffer_cache_release) but it sounds like > you have trouble with actually applying the patch? > > It was against a recent xen-unstable on the day I posted it. What were > the rejects you saw? > > If you have a particular version you want to test against I can generate > a suitable patch for you. > > Ian. > > > > > Shan Haitao > > > > 2011/1/26 Ian Campbell : > > > On Wed, 2011-01-26 at 00:47 +0000, Haitao Shan wrote: > > >> I think it is basically the same idea as Keir introduced in 20841. I > > >> guess this bug would happen on platforms which has large number of > > >> physical CPUs, not only on EX system of Intel. > > >> If you can cook the patch, that would be great! Thanks!! > > > > > > Does this help? > > > > > > Ian. > > > > > > 8<--------------------------------------- > > > > > > # HG changeset patch > > > # User Ian Campbell > > > # Date 1296038761 0 > > > # Node ID 8b8b7e024f9d6f4c2ce1a4efbf38f07eeb460d91 > > > # Parent e4e69622dc95037eab6740f79ecf9c1d05bca529 > > > libxc: maintain a small, per-handle, cache of hypercall buffer memory > > > > > > Constantly m(un)locking memory can have significant overhead on > > > systems with large numbers of CPUs. 
This was previously fixed by > > > 20841:fbe8f32fa257 but this was dropped during the transition to > > > hypercall buffers. > > > > > > Introduce a small cache of single page hypercall buffer allocations > > > which can be reused to avoid this overhead. > > > > > > Add some statistics tracking to the hypercall buffer allocations. > > > > > > The cache size of 4 was chosen based on these statistics since they > > > indicated that 2 pages was sufficient to satisfy all concurrent single > > > page hypercall buffer allocations seen during "xl create", "xl > > > shutdown" and "xl destroy" of both a PV and HVM guest therefore 4 > > > pages should cover the majority of important cases. > > > > > > Signed-off-by: Ian Campbell > > > > > > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_hcall_buf.c > > > --- a/tools/libxc/xc_hcall_buf.c Wed Jan 26 10:22:42 2011 +0000 > > > +++ b/tools/libxc/xc_hcall_buf.c Wed Jan 26 10:46:01 2011 +0000 > > > @@ -18,6 +18,7 @@ > > > > > > #include > > > #include > > > +#include > > > > > > #include "xc_private.h" > > > #include "xg_private.h" > > > @@ -28,11 +29,108 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF > > > HYPERCALL_BUFFER_INIT_NO_BOUNCE > > > }; > > > > > > +pthread_mutex_t hypercall_buffer_cache_mutex = > PTHREAD_MUTEX_INITIALIZER; > > > + > > > +static void hypercall_buffer_cache_lock(xc_interface *xch) > > > +{ > > > + if ( xch->flags & XC_OPENFLAG_NON_REENTRANT ) > > > + return; > > > + pthread_mutex_lock(&hypercall_buffer_cache_mutex); > > > +} > > > + > > > +static void hypercall_buffer_cache_unlock(xc_interface *xch) > > > +{ > > > + if ( xch->flags & XC_OPENFLAG_NON_REENTRANT ) > > > + return; > > > + pthread_mutex_unlock(&hypercall_buffer_cache_mutex); > > > +} > > > + > > > +static void *hypercall_buffer_cache_alloc(xc_interface *xch, int > nr_pages) > > > +{ > > > + void *p = NULL; > > > + > > > + hypercall_buffer_cache_lock(xch); > > > + > > > + xch->hypercall_buffer_total_allocations++; > > > + xch->hypercall_buffer_current_allocations++; > > > + if ( xch->hypercall_buffer_current_allocations > > xch->hypercall_buffer_maximum_allocations ) > > > + xch->hypercall_buffer_maximum_allocations = > xch->hypercall_buffer_current_allocations; > > > + > > > + if ( nr_pages > 1 ) > > > + { > > > + xch->hypercall_buffer_cache_toobig++; > > > + } > > > + else if ( xch->hypercall_buffer_cache_nr > 0 ) > > > + { > > > + p = > xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr]; > > > + xch->hypercall_buffer_cache_hits++; > > > + } > > > + else > > > + { > > > + xch->hypercall_buffer_cache_misses++; > > > + } > > > + > > > + hypercall_buffer_cache_unlock(xch); > > > + > > > + return p; > > > +} > > > + > > > +static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int > nr_pages) > > > +{ > > > + int rc = 0; > > > + > > > + hypercall_buffer_cache_lock(xch); > > > + > > > + xch->hypercall_buffer_total_releases++; > > > + xch->hypercall_buffer_current_allocations--; > > > + > > > + if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < > HYPERCALL_BUFFER_CACHE_SIZE ) > > > + { > > > + xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] > = p; > > > + rc = 1; > > > + } > > > + > > > + hypercall_buffer_cache_unlock(xch); > > > + > > > + return rc; > > > +} > > > + > > > +void xc__hypercall_buffer_cache_release(xc_interface *xch) > > > +{ > > > + void *p; > > > + > > > + hypercall_buffer_cache_lock(xch); > > > + > > > + DBGPRINTF("hypercall buffer: total allocations:%d total > releases:%d", > > > + 
xch->hypercall_buffer_total_allocations, > > > + xch->hypercall_buffer_total_releases); > > > + DBGPRINTF("hypercall buffer: current allocations:%d maximum > allocations:%d", > > > + xch->hypercall_buffer_current_allocations, > > > + xch->hypercall_buffer_maximum_allocations); > > > + DBGPRINTF("hypercall buffer: cache current size:%d", > > > + xch->hypercall_buffer_cache_nr); > > > + DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d", > > > + xch->hypercall_buffer_cache_hits, > > > + xch->hypercall_buffer_cache_misses, > > > + xch->hypercall_buffer_cache_toobig); > > > + > > > + while ( xch->hypercall_buffer_cache_nr > 0 ) > > > + { > > > + p = > xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr]; > > > + xch->ops->u.privcmd.free_hypercall_buffer(xch, > xch->ops_handle, p, 1); > > > + } > > > + > > > + hypercall_buffer_cache_unlock(xch); > > > +} > > > + > > > void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, > xc_hypercall_buffer_t *b, int nr_pages) > > > { > > > - void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, > xch->ops_handle, nr_pages); > > > + void *p = hypercall_buffer_cache_alloc(xch, nr_pages); > > > > > > - if (!p) > > > + if ( !p ) > > > + p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, > xch->ops_handle, nr_pages); > > > + > > > + if ( !p ) > > > return NULL; > > > > > > b->hbuf = p; > > > @@ -47,7 +145,8 @@ void xc__hypercall_buffer_free_pages(xc_ > > > if ( b->hbuf == NULL ) > > > return; > > > > > > - xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, > b->hbuf, nr_pages); > > > + if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) ) > > > + xch->ops->u.privcmd.free_hypercall_buffer(xch, > xch->ops_handle, b->hbuf, nr_pages); > > > } > > > > > > struct allocation_header { > > > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.c > > > --- a/tools/libxc/xc_private.c Wed Jan 26 10:22:42 2011 +0000 > > > +++ b/tools/libxc/xc_private.c Wed Jan 26 10:46:01 2011 +0000 > > > @@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte > > > xch->error_handler = logger; xch->error_handler_tofree > = 0; > > > xch->dombuild_logger = dombuild_logger; > xch->dombuild_logger_tofree = 0; > > > > > > + xch->hypercall_buffer_cache_nr = 0; > > > + > > > + xch->hypercall_buffer_total_allocations = 0; > > > + xch->hypercall_buffer_total_releases = 0; > > > + xch->hypercall_buffer_current_allocations = 0; > > > + xch->hypercall_buffer_maximum_allocations = 0; > > > + xch->hypercall_buffer_cache_hits = 0; > > > + xch->hypercall_buffer_cache_misses = 0; > > > + xch->hypercall_buffer_cache_toobig = 0; > > > + > > > xch->ops_handle = XC_OSDEP_OPEN_ERROR; > > > xch->ops = NULL; > > > > > > @@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_ > > > static int xc_interface_close_common(xc_interface *xch) > > > { > > > int rc = 0; > > > + > > > + xc_hypercall_buffer_cache_release(xch); > > > > > > xtl_logger_destroy(xch->dombuild_logger_tofree); > > > xtl_logger_destroy(xch->error_handler_tofree); > > > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.h > > > --- a/tools/libxc/xc_private.h Wed Jan 26 10:22:42 2011 +0000 > > > +++ b/tools/libxc/xc_private.h Wed Jan 26 10:46:01 2011 +0000 > > > @@ -75,6 +75,28 @@ struct xc_interface_core { > > > FILE *dombuild_logger_file; > > > const char *currently_progress_reporting; > > > > > > + /* > > > + * A simple cache of unused, single page, hypercall buffers > > > + * > > > + * Protected by a global lock. 
> > > + */
> > > +#define HYPERCALL_BUFFER_CACHE_SIZE 4
> > > +    int hypercall_buffer_cache_nr;
> > > +    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
> > > +
> > > +    /*
> > > +     * Hypercall buffer statistics. All protected by the global
> > > +     * hypercall_buffer_cache lock.
> > > +     */
> > > +    int hypercall_buffer_total_allocations;
> > > +    int hypercall_buffer_total_releases;
> > > +    int hypercall_buffer_current_allocations;
> > > +    int hypercall_buffer_maximum_allocations;
> > > +    int hypercall_buffer_cache_hits;
> > > +    int hypercall_buffer_cache_misses;
> > > +    int hypercall_buffer_cache_toobig;
> > > +
> > > +    /* Low level OS interface */
> > >     xc_osdep_info_t  osdep;
> > >     xc_osdep_ops    *ops; /* backend operations */
> > >     xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
> > > @@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
> > > #define xc_hypercall_bounce_pre(_xch, _name)
> xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
> > > void xc__hypercall_bounce_post(xc_interface *xch,
> xc_hypercall_buffer_t *bounce);
> > > #define xc_hypercall_bounce_post(_xch, _name)
> xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
> > > +
> > > +/*
> > > + * Release hypercall buffer cache
> > > + */
> > > +void xc__hypercall_buffer_cache_release(xc_interface *xch);
> > >
> > >  /*
> > >  * Hypercall interfaces.
> > >
> > >
> > >
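For reference, the hunk above targets the osdep-based allocator in current xen-unstable, whereas the allocator at c/s 22846 (quoted at the top of this message) allocates and mlock()s the pages itself. The same caching idea would therefore need rebasing along roughly the following lines. This is only an untested sketch, not the rebase Ian offered to generate: it assumes the hypercall_buffer_cache_alloc()/hypercall_buffer_cache_free() helpers and the xc_interface_core fields from the patch above are already applied, and that the matching free path in that tree simply munlock()s and free()s the buffer.

/* Sketch of the cache rebased onto the direct-allocation variant of
 * xc_hcall_buf.c around c/s 22846 (no xch->ops->u.privcmd layer).
 * Untested illustration; the cache helpers come from the patch above.
 * Assumes the usual xc_hcall_buf.c includes (xc_private.h, <stdlib.h>,
 * <string.h>, <sys/mman.h>, and <malloc.h> where memalign() is used). */
void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
{
    size_t size = nr_pages * PAGE_SIZE;
    void *p = hypercall_buffer_cache_alloc(xch, nr_pages); /* try the cache first */

    if ( !p )
    {
#if defined(_POSIX_C_SOURCE) && !defined(__sun__)
        if ( posix_memalign(&p, PAGE_SIZE, size) != 0 )
            return NULL;
#elif defined(__NetBSD__) || defined(__OpenBSD__)
        p = valloc(size);
#else
        p = memalign(PAGE_SIZE, size);
#endif
        if ( !p )
            return NULL;

#ifndef __sun__
        /* Pages returned by the cache are still locked, so only lock fresh ones. */
        if ( mlock(p, size) < 0 )
        {
            free(p);
            return NULL;
        }
#endif
    }

    b->hbuf = p;
    memset(p, 0, size);
    return b->hbuf;
}

void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
{
    if ( b->hbuf == NULL )
        return;

    /* Keep single, still-locked pages for reuse; otherwise unlock and free. */
    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
    {
#ifndef __sun__
        (void) munlock(b->hbuf, nr_pages * PAGE_SIZE);
#endif
        free(b->hbuf);
    }
}

With this variant, xc__hypercall_buffer_cache_release() would likewise have to munlock() and free() the cached pages directly rather than calling xch->ops->u.privcmd.free_hypercall_buffer(), since the allocator above does not go through that layer.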