From: Haitao Shan <maillists.shan@gmail.com>
To: Ian Campbell <Ian.Campbell@eu.citrix.com>
Cc: "Zheng, Shaohui" <shaohui.zheng@intel.com>,
	Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
Date: Tue, 1 Feb 2011 12:40:52 +0800
Message-ID: <AANLkTimAupm1mX8EpmSc-H6=b_3j-WS=dCzA05+=Zter@mail.gmail.com>
In-Reply-To: <AANLkTi=_MfqQKkBtpt8XoZpK_9zypGLt7V9z9GjjOKRf@mail.gmail.com>


I have tested the patch, and it fixes the bug. Thanks!

So, Ian, please go ahead with submitting the patch.

Shan Haitao

2011/1/31 Haitao Shan <maillists.shan@gmail.com>

> Thanks, Ian.
> I will test tomorrow and let you know the result.
>
> Shan Haitao
>
> 2011/1/31 Ian Campbell <Ian.Campbell@eu.citrix.com>
>
>> On Mon, 2011-01-31 at 08:57 +0000, Haitao Shan wrote:
>>
>> > And BTW: I am using c/s 22846.
>>
>> Sorry, I didn't notice that a patch I was holding back for 4.2 was in
>> my queue ahead of this one.
>>
>> Version rebased onto 22846:52e928af3637 below.
>>
>> Ian.
>>
>> 8<---------------------------------------
>>
>> # HG changeset patch
>> # User Ian Campbell <ian.campbell@citrix.com>
>> # Date 1296466278 0
>> # Node ID 9e6175469e6f246ee9370ef57f987bb435b00bef
>> # Parent  5b6663ba2bb2c54e8fa6745afa16297ebe43328d
>> libxc: maintain a small, per-handle, cache of hypercall buffer memory
>>
>> Constantly m(un)locking memory can have significant overhead on
>> systems with large numbers of CPUs. This was previously fixed by
>> 20841:fbe8f32fa257, but that fix was dropped during the transition to
>> hypercall buffers.
>>
>> Introduce a small cache of single page hypercall buffer allocations
>> which can be reused to avoid this overhead.
>>
>> Add some statistics tracking to the hypercall buffer allocations.
>>
>> The cache size of 4 was chosen based on these statistics, since they
>> indicated that 2 pages were sufficient to satisfy all concurrent single
>> page hypercall buffer allocations seen during "xl create", "xl
>> shutdown" and "xl destroy" of both a PV and an HVM guest; therefore 4
>> pages should cover the majority of important cases.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>>
>> diff -r 5b6663ba2bb2 -r 9e6175469e6f tools/libxc/xc_hcall_buf.c
>> --- a/tools/libxc/xc_hcall_buf.c        Mon Jan 31 09:14:52 2011 +0000
>> +++ b/tools/libxc/xc_hcall_buf.c        Mon Jan 31 09:31:18 2011 +0000
>> @@ -18,6 +18,7 @@
>>
>>  #include <stdlib.h>
>>  #include <malloc.h>
>> +#include <pthread.h>
>>
>>  #include "xc_private.h"
>>  #include "xg_private.h"
>> @@ -28,31 +29,137 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
>>      HYPERCALL_BUFFER_INIT_NO_BOUNCE
>>  };
>>
>> +pthread_mutex_t hypercall_buffer_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
>> +
>> +static void hypercall_buffer_cache_lock(xc_interface *xch)
>> +{
>> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
>> +        return;
>> +    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
>> +}
>> +
>> +static void hypercall_buffer_cache_unlock(xc_interface *xch)
>> +{
>> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
>> +        return;
>> +    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
>> +}
>> +
>> +static void *hypercall_buffer_cache_alloc(xc_interface *xch, int nr_pages)
>> +{
>> +    void *p = NULL;
>> +
>> +    hypercall_buffer_cache_lock(xch);
>> +
>> +    xch->hypercall_buffer_total_allocations++;
>> +    xch->hypercall_buffer_current_allocations++;
>> +    if ( xch->hypercall_buffer_current_allocations > xch->hypercall_buffer_maximum_allocations )
>> +        xch->hypercall_buffer_maximum_allocations = xch->hypercall_buffer_current_allocations;
>> +
>> +    if ( nr_pages > 1 )
>> +    {
>> +        xch->hypercall_buffer_cache_toobig++;
>> +    }
>> +    else if ( xch->hypercall_buffer_cache_nr > 0 )
>> +    {
>> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
>> +        xch->hypercall_buffer_cache_hits++;
>> +    }
>> +    else
>> +    {
>> +        xch->hypercall_buffer_cache_misses++;
>> +    }
>> +
>> +    hypercall_buffer_cache_unlock(xch);
>> +
>> +    return p;
>> +}
>> +
>> +static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int nr_pages)
>> +{
>> +    int rc = 0;
>> +
>> +    hypercall_buffer_cache_lock(xch);
>> +
>> +    xch->hypercall_buffer_total_releases++;
>> +    xch->hypercall_buffer_current_allocations--;
>> +
>> +    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < HYPERCALL_BUFFER_CACHE_SIZE )
>> +    {
>> +        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] = p;
>> +        rc = 1;
>> +    }
>> +
>> +    hypercall_buffer_cache_unlock(xch);
>> +
>> +    return rc;
>> +}
>> +
>> +static void do_hypercall_buffer_free_pages(void *ptr, int nr_pages)
>> +{
>> +#ifndef __sun__
>> +    (void) munlock(ptr, nr_pages * PAGE_SIZE);
>> +#endif
>> +
>> +    free(ptr);
>> +}
>> +
>> +void xc__hypercall_buffer_cache_release(xc_interface *xch)
>> +{
>> +    void *p;
>> +
>> +    hypercall_buffer_cache_lock(xch);
>> +
>> +    DBGPRINTF("hypercall buffer: total allocations:%d total releases:%d",
>> +              xch->hypercall_buffer_total_allocations,
>> +              xch->hypercall_buffer_total_releases);
>> +    DBGPRINTF("hypercall buffer: current allocations:%d maximum
>> allocations:%d",
>> +              xch->hypercall_buffer_current_allocations,
>> +              xch->hypercall_buffer_maximum_allocations);
>> +    DBGPRINTF("hypercall buffer: cache current size:%d",
>> +              xch->hypercall_buffer_cache_nr);
>> +    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
>> +              xch->hypercall_buffer_cache_hits,
>> +              xch->hypercall_buffer_cache_misses,
>> +              xch->hypercall_buffer_cache_toobig);
>> +
>> +    while ( xch->hypercall_buffer_cache_nr > 0 )
>> +    {
>> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
>> +        do_hypercall_buffer_free_pages(p, 1);
>> +    }
>> +
>> +    hypercall_buffer_cache_unlock(xch);
>> +}
>> +
>>  void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
>>  {
>>      size_t size = nr_pages * PAGE_SIZE;
>> -    void *p;
>> +    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
>> +
>> +    if ( !p ) {
>>  #if defined(_POSIX_C_SOURCE) && !defined(__sun__)
>> -    int ret;
>> -    ret = posix_memalign(&p, PAGE_SIZE, size);
>> -    if (ret != 0)
>> -        return NULL;
>> +        int ret;
>> +        ret = posix_memalign(&p, PAGE_SIZE, size);
>> +        if (ret != 0)
>> +            return NULL;
>>  #elif defined(__NetBSD__) || defined(__OpenBSD__)
>> -    p = valloc(size);
>> +        p = valloc(size);
>>  #else
>> -    p = memalign(PAGE_SIZE, size);
>> +        p = memalign(PAGE_SIZE, size);
>>  #endif
>>
>> -    if (!p)
>> -        return NULL;
>> +        if (!p)
>> +            return NULL;
>>
>>  #ifndef __sun__
>> -    if ( mlock(p, size) < 0 )
>> -    {
>> -        free(p);
>> -        return NULL;
>> +        if ( mlock(p, size) < 0 )
>> +        {
>> +            free(p);
>> +            return NULL;
>> +        }
>> +#endif
>>     }
>> -#endif
>>
>>     b->hbuf = p;
>>
>> @@ -65,11 +172,8 @@ void xc__hypercall_buffer_free_pages(xc_
>>      if ( b->hbuf == NULL )
>>         return;
>>
>> -#ifndef __sun__
>> -    (void) munlock(b->hbuf, nr_pages * PAGE_SIZE);
>> -#endif
>> -
>> -    free(b->hbuf);
>> +    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
>> +        do_hypercall_buffer_free_pages(b->hbuf, nr_pages);
>>  }
>>
>>  struct allocation_header {
>> diff -r 5b6663ba2bb2 -r 9e6175469e6f tools/libxc/xc_private.c
>> --- a/tools/libxc/xc_private.c  Mon Jan 31 09:14:52 2011 +0000
>> +++ b/tools/libxc/xc_private.c  Mon Jan 31 09:31:18 2011 +0000
>> @@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
>>     xch->error_handler   = logger;           xch->error_handler_tofree   = 0;
>>     xch->dombuild_logger = dombuild_logger;  xch->dombuild_logger_tofree = 0;
>>
>> +    xch->hypercall_buffer_cache_nr = 0;
>> +
>> +    xch->hypercall_buffer_total_allocations = 0;
>> +    xch->hypercall_buffer_total_releases = 0;
>> +    xch->hypercall_buffer_current_allocations = 0;
>> +    xch->hypercall_buffer_maximum_allocations = 0;
>> +    xch->hypercall_buffer_cache_hits = 0;
>> +    xch->hypercall_buffer_cache_misses = 0;
>> +    xch->hypercall_buffer_cache_toobig = 0;
>> +
>>     xch->ops_handle = XC_OSDEP_OPEN_ERROR;
>>     xch->ops = NULL;
>>
>> @@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
>>  static int xc_interface_close_common(xc_interface *xch)
>>  {
>>     int rc = 0;
>> +
>> +    xc__hypercall_buffer_cache_release(xch);
>>
>>     xtl_logger_destroy(xch->dombuild_logger_tofree);
>>     xtl_logger_destroy(xch->error_handler_tofree);
>> diff -r 5b6663ba2bb2 -r 9e6175469e6f tools/libxc/xc_private.h
>> --- a/tools/libxc/xc_private.h  Mon Jan 31 09:14:52 2011 +0000
>> +++ b/tools/libxc/xc_private.h  Mon Jan 31 09:31:18 2011 +0000
>> @@ -75,6 +75,28 @@ struct xc_interface_core {
>>     FILE *dombuild_logger_file;
>>     const char *currently_progress_reporting;
>>
>> +    /*
>> +     * A simple cache of unused, single page, hypercall buffers
>> +     *
>> +     * Protected by a global lock.
>> +     */
>> +#define HYPERCALL_BUFFER_CACHE_SIZE 4
>> +    int hypercall_buffer_cache_nr;
>> +    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
>> +
>> +    /*
>> +     * Hypercall buffer statistics. All protected by the global
>> +     * hypercall_buffer_cache lock.
>> +     */
>> +    int hypercall_buffer_total_allocations;
>> +    int hypercall_buffer_total_releases;
>> +    int hypercall_buffer_current_allocations;
>> +    int hypercall_buffer_maximum_allocations;
>> +    int hypercall_buffer_cache_hits;
>> +    int hypercall_buffer_cache_misses;
>> +    int hypercall_buffer_cache_toobig;
>> +
>> +    /* Low level OS interface */
>>     xc_osdep_info_t  osdep;
>>     xc_osdep_ops    *ops; /* backend operations */
>>     xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
>> @@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
>>  #define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
>>  void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
>>  #define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
>> +
>> +/*
>> + * Release hypercall buffer cache
>> + */
>> +void xc__hypercall_buffer_cache_release(xc_interface *xch);
>>
>>  /*
>>  * Hypercall interfaces.
>>
>>
>>
>
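
For reference, here is a minimal, self-contained sketch of the caching idea
in the patch above. It is not the libxc code and all names in it are
hypothetical; it keeps a small fixed-size stack of recycled single-page
buffers behind a mutex and tracks the same hit/miss/toobig counters the
patch reports via DBGPRINTF, but uses plain malloc()/free() in place of the
page-aligned, mlock()ed allocations libxc needs.

/*
 * Hypothetical sketch of a per-handle single-page buffer cache.
 * Not the libxc implementation; plain malloc()/free() stand in for
 * page-aligned, mlock()ed hypercall buffers.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  4096
#define CACHE_SIZE 4            /* mirrors HYPERCALL_BUFFER_CACHE_SIZE */

static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;
static void *cache[CACHE_SIZE]; /* LIFO stack of recycled single pages */
static int cache_nr;
static int hits, misses, toobig;

/* Return a cached page if one is available, otherwise allocate fresh. */
static void *buf_alloc(int nr_pages)
{
    void *p = NULL;

    pthread_mutex_lock(&cache_mutex);
    if (nr_pages > 1)
        toobig++;               /* multi-page requests bypass the cache */
    else if (cache_nr > 0) {
        p = cache[--cache_nr];  /* pop the most recently freed page */
        hits++;
    } else
        misses++;
    pthread_mutex_unlock(&cache_mutex);

    return p ? p : malloc((size_t)nr_pages * PAGE_SIZE);
}

/* Push single pages back onto the cache while there is room. */
static void buf_free(void *p, int nr_pages)
{
    int cached = 0;

    pthread_mutex_lock(&cache_mutex);
    if (nr_pages == 1 && cache_nr < CACHE_SIZE) {
        cache[cache_nr++] = p;
        cached = 1;
    }
    pthread_mutex_unlock(&cache_mutex);

    if (!cached)
        free(p);
}

int main(void)
{
    /* Two allocations in a row: the second single page is a cache hit. */
    void *a = buf_alloc(1);
    buf_free(a, 1);
    void *b = buf_alloc(1);
    buf_free(b, 1);

    printf("hits:%d misses:%d toobig:%d\n", hits, misses, toobig);

    /* Drain the cache on shutdown, as the patch's cache_release does. */
    pthread_mutex_lock(&cache_mutex);
    while (cache_nr > 0)
        free(cache[--cache_nr]);
    pthread_mutex_unlock(&cache_mutex);

    return 0;
}

Built with something like "cc -pthread sketch.c", this prints
"hits:1 misses:1 toobig:0". In the patch itself the counters live in
struct xc_interface_core and are reported through DBGPRINTF when
xc__hypercall_buffer_cache_release() runs from xc_interface_close().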
