* Xen 4.1 rc1 test report
@ 2011-01-22  9:39 ` Zheng, Shaohui
  2011-01-24 17:05   ` Xen 4.1 rc1 test report (xl bits) Gianni Tedesco
  2011-01-25  6:24   ` Xen 4.1 rc1 test report Haitao Shan
  0 siblings, 2 replies; 32+ messages in thread
From: Zheng, Shaohui @ 2011-01-22  9:39 UTC (permalink / raw)
  To: xen-devel

Hi, All
	Intel QA conducted a full validation of Xen 4.1 rc1, covering VT-x, VT-d, SR-IOV, RAS, TXT and xl tools testing. 24 issues were exposed; please refer to the bug list below.

	We have already assigned 14 bugs to Intel developers (marked with an 'Intel' tag in the bug title); most of the remaining 10 bugs are related to the xl command. We need the community's help to fix those.

Version information:
Change Set: 22764:75b6287626ee
pvops dom0: 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68
ioemu : 1c304816043c0ffe14d20d6006d6165cb7fddb9b

Bug list:
VT-d (7 bugs)
1. Ubuntu PAE SMP guest has network problems with a NIC assigned (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1709
2. [VT-d] Xen panics in do_IRQ after many iterations of NIC pass-through (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1706
3. [VT-d] Running a guest with a NIC assigned sometimes hangs the system under PAE on Sandy Bridge (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1725
4. [VT-d] dom0 igb driver is too old to support 4-port NICs (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1708
5. [VT-d] Xen sometimes panics when running a guest with a NIC assigned (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=24
6. [VT-d] xl does not respond after passing through an IGD card (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1723
7. [VT-d] Fails to get an IP address after hot-plugging a VF 300 times (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1722

RAS (1 bug)
1. System hangs when running CPU offline (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1654

ACPI (1 bug)
1. System can't resume after suspend (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1707

Save/Restore(1 bug)
1. RHEL6 guest fails to save/restore (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1716

xl command (7 bugs)
1. xl does not check for duplicate config files and image files (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1711
2. [VT-d] Cannot detach a device that was assigned statically (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1717
3. Guest shows a white screen when booted with a NIC assigned (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1712
4. Memory corruption reported by "xl" with device pass-through (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1713
5. [VT-d] Fails to pass through two or more devices to a guest (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1710
6. Guest network broken after save/restore with xl (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1703
7. Too many error messages shown when destroying a nonexistent guest (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1714

Hypervisor(4 bugs)
1. Only two 1GB pages are allocated to a guest with 10GB of memory (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1721
2. Guest with vnif assigned fails to boot when the APIC is disabled (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1692
3. Dom0 crashes on Core2 when dom0_mem is no more than 1972MB (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1726
4. Guest does not disappear after powering it off (Community)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1720

Performance(1 bug)
1. Guest boots very slowly when the dom0 CPU count is not limited on EX platforms (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1719

X2APIC (1 bug)
1. Fails to boot on Sandy Bridge under PAE with x2apic enabled (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1718

Windows (1 bug)
1. All Windows UP guests fail to boot (Intel)
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1704

Thanks & Regards,
Shaohui


* Re: Xen 4.1 rc1 test report (xl bits)
  2011-01-22  9:39 ` Xen 4.1 rc1 test report Zheng, Shaohui
@ 2011-01-24 17:05   ` Gianni Tedesco
  2011-01-24 17:27     ` Christoph Egger
                       ` (2 more replies)
  2011-01-25  6:24   ` Xen 4.1 rc1 test report Haitao Shan
  1 sibling, 3 replies; 32+ messages in thread
From: Gianni Tedesco @ 2011-01-24 17:05 UTC (permalink / raw)
  To: Zheng, Shaohui
  Cc: Kamala Narasimhan, xen-devel, Ian Jackson, Stefano Stabellini

On Sat, 2011-01-22 at 09:39 +0000, Zheng, Shaohui wrote:
> Hi, All
> 	Intel QA conducted a full validation for xen 4.1 rc1, it includes VT-x, VT-d,  SRIOV, RAS, TXT and xl tools testing. 24 issues were exposed. Refer the bug list, please.
> 
> 	We already assigned 14 bugs to Intel developers (which has an 'Intel' tag in the bug title), most of the rest 10 bugs are related xl command.  For the these bugs, need community's help to fix them. 
> 
> Version information:
> Change Set: 22764:75b6287626ee
> pvops dom0: 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68
> ioemu : 1c304816043c0ffe14d20d6006d6165cb7fddb9b

Thanks.

> xl command(7 bugs)
> 1. xl does not check the duplicated configure file and image file (Community)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1711

Yes, this is kind of known about; we give the user a lot of rope. We had
already discussed that it's a bug for two domains to have the same name.
A check for this condition would catch the 'oops, I ran the same config
twice' case.
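
For illustration, a minimal sketch of what such a check could look like
(assuming only the libxl_name_to_domid() signature used in the patches
later in this thread; the helper name and the exact hook point in xl's
create path are hypothetical):

#include <stdint.h>
#include <stdio.h>
#include "libxl.h"

/* Returns non-zero if a running domain already carries this name. */
static int name_already_in_use(libxl_ctx *ctx, const char *name)
{
    uint32_t domid;

    /* libxl_name_to_domid() returns 0 when the lookup succeeds. */
    return libxl_name_to_domid(ctx, name, &domid) == 0;
}

/*
 * In xl's create path, before actually building the domain
 * (domain_name is a placeholder, not the real xl_cmdimpl.c variable):
 *
 *     if (name_already_in_use(&ctx, domain_name)) {
 *         fprintf(stderr, "a domain named '%s' already exists\n",
 *                 domain_name);
 *         exit(-1);
 *     }
 */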

As for going further and preventing two domains from sharing a disk
image, that would be a bit more far-reaching, and we'd need to think about
legitimate cases where people might use shared r/w disk images with a
separate synchronisation protocol...

> 2. [vt-d] Can not detach the device which was assigned statically (Community)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1717
> 3. guest shows white screen when boot guest with NIC assigned (Community)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1712
> 4. memory corruption was reported by "xl" with device pass-throu (Community)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1713
> 5. [vt-d] fail to passthrou two or more devices to guest (Community)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1710

We are well aware that PCI passthrough is buggy in xl. At the very
least, not all the safety checks are done. There are also problems with
IRQs becoming unmapped when one of two pass-through devices that share
one or more IRQs is detached. There are probably other issues too.

The heap corruption in xl looks particularly worrisome; I will try to
find a moment to take a look at that one.

> 6. Guest network broken after do SAVE/RESTOR with xl (Community)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1703

Sounds like a real clanger.

> 7. Too many error information showed when destroy an inexistent guest (Community)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1714

A three line fix?

Gianni


* Re: Xen 4.1 rc1 test report (xl bits)
  2011-01-24 17:05   ` Xen 4.1 rc1 test report (xl bits) Gianni Tedesco
@ 2011-01-24 17:27     ` Christoph Egger
  2011-01-24 17:43       ` [PATCH]: xl: Check a domain exists before destroying it Gianni Tedesco
  2011-01-24 18:35     ` Xen 4.1 rc1 test report (xl bits) Stefano Stabellini
  2011-01-25 14:04     ` Gianni Tedesco
  2 siblings, 1 reply; 32+ messages in thread
From: Christoph Egger @ 2011-01-24 17:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Gianni Tedesco


> > 7. Too many error information showed when destroy an inexistent guest
> > (Community) http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1714
>
> A three line fix?

Already posted to this list:
http://lists.xensource.com/archives/html/xen-devel/2011-01/msg01452.html

Christoph


-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85609 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo, Andrew Bowd
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


* [PATCH]: xl: Check a domain exists before destroying it
  2011-01-24 17:27     ` Christoph Egger
@ 2011-01-24 17:43       ` Gianni Tedesco
  2011-01-24 18:18         ` [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups Gianni Tedesco
  2011-01-25 17:07         ` [PATCH]: xl: Check a domain exists before destroying it Ian Jackson
  0 siblings, 2 replies; 32+ messages in thread
From: Gianni Tedesco @ 2011-01-24 17:43 UTC (permalink / raw)
  To: Christoph Egger; +Cc: Ian Jackson, xen-devel, Stefano Stabellini

On Mon, 2011-01-24 at 17:27 +0000, Christoph Egger wrote:
> > > 7. Too many error information showed when destroy an inexistent guest
> > > (Community) http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1714
> >
> > A three line fix?
> 
> Already posted to this list:
> http://lists.xensource.com/archives/html/xen-devel/2011-01/msg01452.html

This is quite a clever fix, but I think Ian Jackson's comments are
correct. We should do a libxl_domain_info() and bail out early in the
destroy path if that fails.

---
xl: Check a domain exists before destroying it

Also fix a mis-formatted error message in xl destroy command.

Signed-off-by: Gianni Tedesco <gianni.tedesco@citrix.com>

diff -r b59f04eb8978 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Fri Jan 21 18:06:23 2011 +0000
+++ b/tools/libxl/libxl.c	Mon Jan 24 17:39:33 2011 +0000
@@ -654,10 +654,21 @@ int libxl_event_get_disk_eject_info(libx
 int libxl_domain_destroy(libxl_ctx *ctx, uint32_t domid, int force)
 {
     libxl__gc gc = LIBXL_INIT_GC(ctx);
+    libxl_dominfo dominfo;
     char *dom_path;
     char *vm_path;
     int rc, dm_present;
 
+    rc = libxl_domain_info(ctx, &dominfo, domid);
+    switch(rc) {
+    case 0:
+        break;
+    case ERROR_INVAL:
+        LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "non-existant domain %d", domid);
+    default:
+        return rc;
+    }
+
     if (libxl__domain_is_hvm(ctx, domid)) {
         dm_present = 1;
     } else {
diff -r b59f04eb8978 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Fri Jan 21 18:06:23 2011 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Mon Jan 24 17:39:33 2011 +0000
@@ -2176,7 +2176,7 @@ static void destroy_domain(const char *p
         exit(-1);
     }
     rc = libxl_domain_destroy(&ctx, domid, 0);
-    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
+    if (rc) { fprintf(stderr,"destroy failed (rc=%d).\n",rc); exit(-1); }
 }
 
 static void shutdown_domain(const char *p, int wait)


* [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-24 17:43       ` [PATCH]: xl: Check a domain exists before destroying it Gianni Tedesco
@ 2011-01-24 18:18         ` Gianni Tedesco
  2011-01-24 18:39           ` Stefano Stabellini
  2011-01-25 17:28           ` Ian Jackson
  2011-01-25 17:07         ` [PATCH]: xl: Check a domain exists before destroying it Ian Jackson
  1 sibling, 2 replies; 32+ messages in thread
From: Gianni Tedesco @ 2011-01-24 18:18 UTC (permalink / raw)
  To: Christoph Egger; +Cc: Ian Jackson, xen-devel, Stefano Stabellini

On Mon, 2011-01-24 at 17:43 +0000, Gianni Tedesco wrote:
> On Mon, 2011-01-24 at 17:27 +0000, Christoph Egger wrote:
> > > > 7. Too many error information showed when destroy an inexistent guest
> > > > (Community) http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1714
> > >
> > > A three line fix?
> > 
> > Already posted to this list:
> > http://lists.xensource.com/archives/html/xen-devel/2011-01/msg01452.html
> 
> This is quite a clever fix but I think Ian Jacksons comments are
> correct. We should do a libxl_domain_info() and bail early in the
> destroy path if that fails.

It occurs to me that the last patch won't fix it for anything but
destroy. We should bail with a nice error for any command looking up a
domain that doesn't exist and be consistent with name vs. numeric ID.

--
xl: Check domain existance when doing domain identifier lookups

Also fix a mis-formatted error message in xl destroy command.

Signed-off-by: Gianni Tedesco <gianni.tedesco@citrix.com>

diff -r b59f04eb8978 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Fri Jan 21 18:06:23 2011 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Mon Jan 24 17:58:20 2011 +0000
@@ -143,11 +143,24 @@ static int qualifier_to_id(const char *p
 static int domain_qualifier_to_domid(const char *p, uint32_t *domid_r,
                                      int *was_name_r)
 {
-    int was_name;
+    libxl_dominfo dominfo;
+    int was_name, rc;
 
     was_name = qualifier_to_id(p, domid_r);
-    if (was_name_r) *was_name_r = was_name;
-    return was_name ? libxl_name_to_domid(&ctx, p, domid_r) : 0;
+    if (was_name_r)
+        *was_name_r = was_name;
+
+    if ( was_name ) {
+        rc = libxl_name_to_domid(&ctx, p, domid_r);
+        if ( rc )
+            return rc;
+    }
+
+    rc = libxl_domain_info(&ctx, &dominfo, *domid_r);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 static int cpupool_qualifier_to_cpupoolid(const char *p, uint32_t *poolid_r,
@@ -2176,7 +2189,7 @@ static void destroy_domain(const char *p
         exit(-1);
     }
     rc = libxl_domain_destroy(&ctx, domid, 0);
-    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
+    if (rc) { fprintf(stderr,"destroy failed (rc=%d).\n",rc); exit(-1); }
 }
 
 static void shutdown_domain(const char *p, int wait)


* Re: Xen 4.1 rc1 test report (xl bits)
  2011-01-24 17:05   ` Xen 4.1 rc1 test report (xl bits) Gianni Tedesco
  2011-01-24 17:27     ` Christoph Egger
@ 2011-01-24 18:35     ` Stefano Stabellini
  2011-01-25 14:04     ` Gianni Tedesco
  2 siblings, 0 replies; 32+ messages in thread
From: Stefano Stabellini @ 2011-01-24 18:35 UTC (permalink / raw)
  To: Gianni Tedesco; +Cc: xen-devel, Stabellini, Kamala Narasimhan, Ian Jackson

On Mon, 24 Jan 2011, Gianni Tedesco wrote:
> On Sat, 2011-01-22 at 09:39 +0000, Zheng, Shaohui wrote:
> > Hi, All
> > 	Intel QA conducted a full validation for xen 4.1 rc1, it includes VT-x, VT-d,  SRIOV, RAS, TXT and xl tools testing. 24 issues were exposed. Refer the bug list, please.
> > 
> > 	We already assigned 14 bugs to Intel developers (which has an 'Intel' tag in the bug title), most of the rest 10 bugs are related xl command.  For the these bugs, need community's help to fix them. 
> > 
> > Version information:
> > Change Set: 22764:75b6287626ee
> > pvops dom0: 862ef97190f6b54d35c76c93fb2b8fadd7ab7d68
> > ioemu : 1c304816043c0ffe14d20d6006d6165cb7fddb9b
> 
> Thanks.
> 
> > xl command(7 bugs)
> > 1. xl does not check the duplicated configure file and image file (Community)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1711
> 
> Yes this is kind of known about, we give the user a lot of rope. We had
> already discussed that it's a bug for two domains to have the same name.
> A check for this condition would catch the 'oops I ran the same config
> twice' case.
> 
> As for going everywhere and preventing two domains from sharing a disk
> image it would be a bit more far-reaching and we'd need to think about
> legit cases where people might use shared r/w disk images with a
> separate synchronisation protocol...

Yes.
I don't think the duplicated image file problem should be solved at the
xl level; however, we should at least try to avoid creating domains with
an already existing name.


> > 2. [vt-d] Can not detach the device which was assigned statically (Community)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1717
> > 3. guest shows white screen when boot guest with NIC assigned (Community)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1712
> > 4. memory corruption was reported by "xl" with device pass-throu (Community)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1713
> > 5. [vt-d] fail to passthrou two or more devices to guest (Community)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1710
> 
> We are well aware that pci passthrough is buggy in xl. At the very
> least, not all the safety checks are done. There are also problems with
> IRQ's becoming unmapped when one of two pass-through devices which share
> one or more IRQ's is detached. There are probably other issues too.

IanC is trying to fix some passthrough problems; hopefully the resulting
patch series will make it in time for the 4.1 release.


* Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-24 18:18         ` [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups Gianni Tedesco
@ 2011-01-24 18:39           ` Stefano Stabellini
  2011-01-24 18:53             ` Gianni Tedesco
  2011-01-25 17:28           ` Ian Jackson
  1 sibling, 1 reply; 32+ messages in thread
From: Stefano Stabellini @ 2011-01-24 18:39 UTC (permalink / raw)
  To: Gianni Tedesco
  Cc: Christoph Egger, xen-devel, Ian Jackson, Stefano Stabellini

On Mon, 24 Jan 2011, Gianni Tedesco wrote:
> It occurs to me that the last patch won't fix it for anything but
> destroy. We should bail with a nice error for any command looking up a
> domain that doesn't exist and be consistent with name vs. numeric ID.
> 
> --
> xl: Check domain existance when doing domain identifier lookups
> 
> Also fix a mis-formatted error message in xl destroy command.
> 
> Signed-off-by: Gianni Tedesco <gianni.tedesco@citrix.com>
> 
> diff -r b59f04eb8978 tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c	Fri Jan 21 18:06:23 2011 +0000
> +++ b/tools/libxl/xl_cmdimpl.c	Mon Jan 24 17:58:20 2011 +0000
> @@ -143,11 +143,24 @@ static int qualifier_to_id(const char *p
>  static int domain_qualifier_to_domid(const char *p, uint32_t *domid_r,
>                                       int *was_name_r)
>  {
> -    int was_name;
> +    libxl_dominfo dominfo;
> +    int was_name, rc;
>  
>      was_name = qualifier_to_id(p, domid_r);
> -    if (was_name_r) *was_name_r = was_name;
> -    return was_name ? libxl_name_to_domid(&ctx, p, domid_r) : 0;
> +    if (was_name_r)
> +        *was_name_r = was_name;
> +
> +    if ( was_name ) {
> +        rc = libxl_name_to_domid(&ctx, p, domid_r);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    rc = libxl_domain_info(&ctx, &dominfo, *domid_r);
> +    if ( rc )
> +        return rc;
> +
> +    return 0;

Is this really needed?
libxl_name_to_domid should have returned an error already if the domain doesn't exist.


* Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-24 18:39           ` Stefano Stabellini
@ 2011-01-24 18:53             ` Gianni Tedesco
  2011-01-24 19:05               ` Stefano Stabellini
  2011-01-25 17:29               ` Ian Jackson
  0 siblings, 2 replies; 32+ messages in thread
From: Gianni Tedesco @ 2011-01-24 18:53 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Christoph Egger, xen-devel, Ian Jackson

On Mon, 2011-01-24 at 18:39 +0000, Stefano Stabellini wrote:
> On Mon, 24 Jan 2011, Gianni Tedesco wrote:
> > It occurs to me that the last patch won't fix it for anything but
> > destroy. We should bail with a nice error for any command looking up a
> > domain that doesn't exist and be consistent with name vs. numeric ID.
> > 
> > --
> > xl: Check domain existance when doing domain identifier lookups
> > 
> > Also fix a mis-formatted error message in xl destroy command.
> > 
> > Signed-off-by: Gianni Tedesco <gianni.tedesco@citrix.com>
> > 
> > diff -r b59f04eb8978 tools/libxl/xl_cmdimpl.c
> > --- a/tools/libxl/xl_cmdimpl.c	Fri Jan 21 18:06:23 2011 +0000
> > +++ b/tools/libxl/xl_cmdimpl.c	Mon Jan 24 17:58:20 2011 +0000
> > @@ -143,11 +143,24 @@ static int qualifier_to_id(const char *p
> >  static int domain_qualifier_to_domid(const char *p, uint32_t *domid_r,
> >                                       int *was_name_r)
> >  {
> > -    int was_name;
> > +    libxl_dominfo dominfo;
> > +    int was_name, rc;
> >  
> >      was_name = qualifier_to_id(p, domid_r);
> > -    if (was_name_r) *was_name_r = was_name;
> > -    return was_name ? libxl_name_to_domid(&ctx, p, domid_r) : 0;
> > +    if (was_name_r)
> > +        *was_name_r = was_name;
> > +
> > +    if ( was_name ) {
> > +        rc = libxl_name_to_domid(&ctx, p, domid_r);
> > +        if ( rc )
> > +            return rc;
> > +    }
> > +
> > +    rc = libxl_domain_info(&ctx, &dominfo, *domid_r);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    return 0;
> 
> is this really needed?
> libxl_name_to_domid should have returned error already if the domain doesn't exist.

Only if was_name is true... If they pass a number then we need to check
explicitly. Perhaps it would be better to put that check in an else {}
statement though.

Gianni

--
xl: Check domain existance when doing domain identifier lookups

Also fix a mis-formatted error message in xl destroy command.

Signed-off-by: Gianni Tedesco <gianni.tedesco@citrix.com>

diff -r b59f04eb8978 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Fri Jan 21 18:06:23 2011 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Mon Jan 24 18:50:32 2011 +0000
@@ -143,11 +143,24 @@ static int qualifier_to_id(const char *p
 static int domain_qualifier_to_domid(const char *p, uint32_t *domid_r,
                                      int *was_name_r)
 {
-    int was_name;
+    int was_name, rc;
 
     was_name = qualifier_to_id(p, domid_r);
-    if (was_name_r) *was_name_r = was_name;
-    return was_name ? libxl_name_to_domid(&ctx, p, domid_r) : 0;
+    if (was_name_r)
+        *was_name_r = was_name;
+
+    if ( was_name ) {
+        rc = libxl_name_to_domid(&ctx, p, domid_r);
+        if ( rc )
+            return rc;
+    }else{
+        libxl_dominfo dominfo;
+        rc = libxl_domain_info(&ctx, &dominfo, *domid_r);
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
 }
 
 static int cpupool_qualifier_to_cpupoolid(const char *p, uint32_t *poolid_r,
@@ -2176,7 +2189,7 @@ static void destroy_domain(const char *p
         exit(-1);
     }
     rc = libxl_domain_destroy(&ctx, domid, 0);
-    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
+    if (rc) { fprintf(stderr,"destroy failed (rc=%d).\n",rc); exit(-1); }
 }
 
 static void shutdown_domain(const char *p, int wait)


* Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-24 18:53             ` Gianni Tedesco
@ 2011-01-24 19:05               ` Stefano Stabellini
  2011-01-25 17:29               ` Ian Jackson
  1 sibling, 0 replies; 32+ messages in thread
From: Stefano Stabellini @ 2011-01-24 19:05 UTC (permalink / raw)
  To: Gianni Tedesco
  Cc: Christoph Egger, xen-devel, Ian Jackson, Stefano Stabellini

On Mon, 24 Jan 2011, Gianni Tedesco wrote:
> Only if was_name is true... If they pass a number then we need to check
> explicitly. Perhaps it would be better to put that check in an else {}
> statement though.
> 

Good point.
Apart from the inconsistent code style:

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

> 
> --
> xl: Check domain existance when doing domain identifier lookups
> 
> Also fix a mis-formatted error message in xl destroy command.
> 
> Signed-off-by: Gianni Tedesco <gianni.tedesco@citrix.com>
> 
> diff -r b59f04eb8978 tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c	Fri Jan 21 18:06:23 2011 +0000
> +++ b/tools/libxl/xl_cmdimpl.c	Mon Jan 24 18:50:32 2011 +0000
> @@ -143,11 +143,24 @@ static int qualifier_to_id(const char *p
>  static int domain_qualifier_to_domid(const char *p, uint32_t *domid_r,
>                                       int *was_name_r)
>  {
> -    int was_name;
> +    int was_name, rc;
>  
>      was_name = qualifier_to_id(p, domid_r);
> -    if (was_name_r) *was_name_r = was_name;
> -    return was_name ? libxl_name_to_domid(&ctx, p, domid_r) : 0;
> +    if (was_name_r)
> +        *was_name_r = was_name;
> +
> +    if ( was_name ) {
> +        rc = libxl_name_to_domid(&ctx, p, domid_r);
> +        if ( rc )
> +            return rc;
> +    }else{
> +        libxl_dominfo dominfo;
> +        rc = libxl_domain_info(&ctx, &dominfo, *domid_r);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    return 0;
>  }
>  
>  static int cpupool_qualifier_to_cpupoolid(const char *p, uint32_t *poolid_r,
> @@ -2176,7 +2189,7 @@ static void destroy_domain(const char *p
>          exit(-1);
>      }
>      rc = libxl_domain_destroy(&ctx, domid, 0);
> -    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
> +    if (rc) { fprintf(stderr,"destroy failed (rc=%d).\n",rc); exit(-1); }
>  }
>  
>  static void shutdown_domain(const char *p, int wait)
> 
> 
> 


* Re: Xen 4.1 rc1 test report
  2011-01-22  9:39 ` Xen 4.1 rc1 test report Zheng, Shaohui
  2011-01-24 17:05   ` Xen 4.1 rc1 test report (xl bits) Gianni Tedesco
@ 2011-01-25  6:24   ` Haitao Shan
  2011-01-25  8:00     ` Zheng, Shaohui
                       ` (2 more replies)
  1 sibling, 3 replies; 32+ messages in thread
From: Haitao Shan @ 2011-01-25  6:24 UTC (permalink / raw)
  To: Zheng, Shaohui, Keir Fraser; +Cc: xen-devel

> Performance(1 bug)
> 1. guest boot very slowly without limit dom0 cpu number on EX (Intel)
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1719
>

This bug first appeared a year ago. Keir made a fix in c/s 20841,
which essentially keeps hold of the locked (and hence allocated)
hypercall page so that the next hypercall can reuse it without doing the
alloc and mlock again. By doing this, the overhead of reschedule IPIs
caused by frequent mlocking is greatly reduced.
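
For anyone not familiar with that fix, a minimal standalone sketch of
the caching idea (an illustration only, not the actual libxc code: no
locking, a single cached page, plain POSIX calls):

#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void *cached_page;       /* one mlock()ed page kept around for reuse */

static void *get_hcall_page(void)
{
    size_t sz = (size_t)sysconf(_SC_PAGESIZE);
    void *p;

    if (cached_page) {          /* fast path: no mlock(), no IPI storm */
        p = cached_page;
        cached_page = NULL;
        return p;
    }
    if (posix_memalign(&p, sz, sz) != 0)
        return NULL;
    if (mlock(p, sz) != 0) {    /* slow path: lock a fresh page */
        free(p);
        return NULL;
    }
    return p;
}

static void put_hcall_page(void *p)
{
    size_t sz = (size_t)sysconf(_SC_PAGESIZE);

    if (!cached_page) {         /* keep it locked for the next hypercall */
        cached_page = p;
        return;
    }
    munlock(p, sz);
    free(p);
}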

Late in 2010, libxc introduced a new mechanism called hypercall
buffers (see c/s 22288~22312). Keir's fix was dropped in this new
framework, and as a result the bug appears again.
Perhaps the new framework's author can pick up Keir's fix again?

Shan Haitao


* RE: Xen 4.1 rc1 test report
  2011-01-25  6:24   ` Xen 4.1 rc1 test report Haitao Shan
@ 2011-01-25  8:00     ` Zheng, Shaohui
  2011-01-25  8:43     ` Keir Fraser
  2011-01-25 11:43     ` Ian Campbell
  2 siblings, 0 replies; 32+ messages in thread
From: Zheng, Shaohui @ 2011-01-25  8:00 UTC (permalink / raw)
  To: ian.campbell, Haitao Shan, Keir Fraser; +Cc: xen-devel

Thanks for Haitao's clear explanation. I see that Ian is the author of the c/s 22288~22312 series, so I am adding Ian to the loop.

Thanks & Regards,
Shaohui

> -----Original Message-----
> From: Haitao Shan [mailto:maillists.shan@gmail.com]
> Sent: Tuesday, January 25, 2011 2:24 PM
> To: Zheng, Shaohui; Keir Fraser
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] Xen 4.1 rc1 test report
> 
> > Performance(1 bug)
> > 1. guest boot very slowly without limit dom0 cpu number on EX (Intel)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1719
> >
> 
> This bug happened 1 year before. Keir has made a fix with c/s 20841,
> which essentially holds the locked (and hence allocated) hypercall
> page so that next hypercall can reuse it without doing alloc and mlock
> again. By doing this, overhead of rschedule IPI as a result of
> frequent mlock is greatly reduced.
> 
> Late in year 2010, libxc introduced a new mechanism called hypercall
> buffers, as you can refer c/s 22288~22312. Keir's fix is dropped in
> this new framework. As a result, the bug appears again.
> Probably the new framework auther can pick up Keir's fix again?
> 
> Shan Haitao


* Re: Xen 4.1 rc1 test report
  2011-01-25  6:24   ` Xen 4.1 rc1 test report Haitao Shan
  2011-01-25  8:00     ` Zheng, Shaohui
@ 2011-01-25  8:43     ` Keir Fraser
  2011-01-25 11:43     ` Ian Campbell
  2 siblings, 0 replies; 32+ messages in thread
From: Keir Fraser @ 2011-01-25  8:43 UTC (permalink / raw)
  To: Haitao Shan, Zheng, Shaohui; +Cc: Ian Campbell, xen-devel

On 25/01/2011 06:24, "Haitao Shan" <maillists.shan@gmail.com> wrote:

>> Performance(1 bug)
>> 1. guest boot very slowly without limit dom0 cpu number on EX (Intel)
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1719
>> 
> 
> This bug happened 1 year before. Keir has made a fix with c/s 20841,
> which essentially holds the locked (and hence allocated) hypercall
> page so that next hypercall can reuse it without doing alloc and mlock
> again. By doing this, overhead of rschedule IPI as a result of
> frequent mlock is greatly reduced.
> 
> Late in year 2010, libxc introduced a new mechanism called hypercall
> buffers, as you can refer c/s 22288~22312. Keir's fix is dropped in
> this new framework. As a result, the bug appears again.
> Probably the new framework auther can pick up Keir's fix again?

Ian Campbell, I think (cc'ed).

 -- Keir

> Shan Haitao


* Re: Xen 4.1 rc1 test report
  2011-01-25  6:24   ` Xen 4.1 rc1 test report Haitao Shan
  2011-01-25  8:00     ` Zheng, Shaohui
  2011-01-25  8:43     ` Keir Fraser
@ 2011-01-25 11:43     ` Ian Campbell
  2011-01-26  0:47       ` Haitao Shan
  2 siblings, 1 reply; 32+ messages in thread
From: Ian Campbell @ 2011-01-25 11:43 UTC (permalink / raw)
  To: Haitao Shan; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

On Tue, 2011-01-25 at 06:24 +0000, Haitao Shan wrote:
> > Performance(1 bug)
> > 1. guest boot very slowly without limit dom0 cpu number on EX (Intel)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1719
> >
> 
> This bug happened 1 year before. Keir has made a fix with c/s 20841,
> which essentially holds the locked (and hence allocated) hypercall
> page so that next hypercall can reuse it without doing alloc and mlock
> again. By doing this, overhead of rschedule IPI as a result of
> frequent mlock is greatly reduced.
> 
> Late in year 2010, libxc introduced a new mechanism called hypercall
> buffers, as you can refer c/s 22288~22312. Keir's fix is dropped in
> this new framework. As a result, the bug appears again.
> Probably the new framework auther can pick up Keir's fix again?

I think it would make sense to keep a low-water mark of a small
number of pages (perhaps 4 or 8) which, instead of being freed, are kept
and reused in preference to future new allocations. These pages would
only finally be released by the xc_interface_close() call.

Is this something which you feel able to make a patch for?

Ian.


* Re: Xen 4.1 rc1 test report (xl bits)
  2011-01-24 17:05   ` Xen 4.1 rc1 test report (xl bits) Gianni Tedesco
  2011-01-24 17:27     ` Christoph Egger
  2011-01-24 18:35     ` Xen 4.1 rc1 test report (xl bits) Stefano Stabellini
@ 2011-01-25 14:04     ` Gianni Tedesco
  2011-01-26  3:47       ` Zhang, Yang Z
  2 siblings, 1 reply; 32+ messages in thread
From: Gianni Tedesco @ 2011-01-25 14:04 UTC (permalink / raw)
  To: Zheng, Shaohui
  Cc: yang.z.zhang, Kamala Narasimhan, xen-devel, Ian Jackson,
	Stefano Stabellini

On Mon, 2011-01-24 at 17:05 +0000, Gianni Tedesco wrote:
> > 6. Guest network broken after do SAVE/RESTOR with xl (Community)
> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1703
> 
> Sounds like a real clanger.

But I cannot repro this, at least with hvm guests.

What is the guest OS? Was the guest PV or HVM? What network driver was it
using? What is the host network setup, the standard xen bridging?

Thanks


* Re: [PATCH]: xl: Check a domain exists before destroying it
  2011-01-24 17:43       ` [PATCH]: xl: Check a domain exists before destroying it Gianni Tedesco
  2011-01-24 18:18         ` [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups Gianni Tedesco
@ 2011-01-25 17:07         ` Ian Jackson
  2011-01-25 17:17           ` Gianni Tedesco
  1 sibling, 1 reply; 32+ messages in thread
From: Ian Jackson @ 2011-01-25 17:07 UTC (permalink / raw)
  To: Gianni Tedesco; +Cc: Christoph Egger, xen-devel, Stefano Stabellini

Gianni Tedesco writes ("[PATCH]: xl: Check a domain exists before destroying it"):
> This is quite a clever fix but I think Ian Jacksons comments are
> correct. We should do a libxl_domain_info() and bail early in the
> destroy path if that fails.

Thanks, I have applied the libxl part of this patch.

The xl_cmdimpl part:
> Also fix a mis-formatted error message in xl destroy command.
misses three other similar mistakes.

$ egrep '\\n.\"' tools/libxl/*.c
tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"shutdown failed (rc=%d)\n.",rc);exit(-1); }
tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"reboot failed (rc=%d)\n.",rc);exit(-1); }
tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n.",rc);exit(-1); }
$ 

Also in general most of the messages from xl don't print full stops.
So I suggest the patch below instead.

Ian.


xl: fix up some minor mistakes in error messages

perl -i~ -pe 's/\\n.\"/\\n\"/' tools/libxl/*.c

Reported-by: Gianni Tedesco <gianni.tedesco@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff -r 787af706dadc tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Jan 25 17:02:47 2011 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Jan 25 17:06:46 2011 +0000
@@ -2176,7 +2176,7 @@ static void destroy_domain(const char *p
         exit(-1);
     }
     rc = libxl_domain_destroy(&ctx, domid, 0);
-    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
+    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n",rc); exit(-1); }
 }
 
 static void shutdown_domain(const char *p, int wait)
@@ -2185,7 +2185,7 @@ static void shutdown_domain(const char *
 
     find_domain(p);
     rc=libxl_domain_shutdown(&ctx, domid, 0);
-    if (rc) { fprintf(stderr,"shutdown failed (rc=%d)\n.",rc);exit(-1); }
+    if (rc) { fprintf(stderr,"shutdown failed (rc=%d)\n",rc);exit(-1); }
 
     if (wait) {
         libxl_waiter waiter;
@@ -2227,7 +2227,7 @@ static void reboot_domain(const char *p)
     int rc;
     find_domain(p);
     rc=libxl_domain_shutdown(&ctx, domid, 1);
-    if (rc) { fprintf(stderr,"reboot failed (rc=%d)\n.",rc);exit(-1); }
+    if (rc) { fprintf(stderr,"reboot failed (rc=%d)\n",rc);exit(-1); }
 }
 
 static void list_domains_details(const libxl_dominfo *info, int nb_domain)
@@ -2669,7 +2669,7 @@ static void core_dump_domain(const char 
     int rc;
     find_domain(domain_spec);
     rc=libxl_domain_core_dump(&ctx, domid, filename);
-    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n.",rc);exit(-1); }
+    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
 }
 
 static void migrate_receive(int debug, int daemonize)


* Re: [PATCH]: xl: Check a domain exists before destroying it
  2011-01-25 17:07         ` [PATCH]: xl: Check a domain exists before destroying it Ian Jackson
@ 2011-01-25 17:17           ` Gianni Tedesco
  2011-01-25 18:25             ` Ian Jackson
  0 siblings, 1 reply; 32+ messages in thread
From: Gianni Tedesco @ 2011-01-25 17:17 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Christoph Egger, xen-devel, Stefano Stabellini

On Tue, 2011-01-25 at 17:07 +0000, Ian Jackson wrote:
> Gianni Tedesco writes ("[PATCH]: xl: Check a domain exists before destroying it"):
> > This is quite a clever fix but I think Ian Jacksons comments are
> > correct. We should do a libxl_domain_info() and bail early in the
> > destroy path if that fails.
> 
> Thanks, I have applied the libxl part of this patch.
> 
> The xl_cmdimpl part:
> > Also fix a mis-formatted error message in xl destroy command.
> misses three other similar mistakes.
> 
> $ egrep '\\n.\"' tools/libxl/*.c
> tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
> tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"shutdown failed (rc=%d)\n.",rc);exit(-1); }
> tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"reboot failed (rc=%d)\n.",rc);exit(-1); }
> tools/libxl/xl_cmdimpl.c:    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n.",rc);exit(-1); }
> $ 
> 
> Also in general most of the messages from xl don't print full stops.
> So I suggest the patch below instead.

Good call, I must have barfed my regexp when I searched for the same
error...

Gianni


* Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-24 18:18         ` [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups Gianni Tedesco
  2011-01-24 18:39           ` Stefano Stabellini
@ 2011-01-25 17:28           ` Ian Jackson
  2011-01-25 17:35             ` Gianni Tedesco
  1 sibling, 1 reply; 32+ messages in thread
From: Ian Jackson @ 2011-01-25 17:28 UTC (permalink / raw)
  To: Gianni Tedesco; +Cc: Christoph Egger, xen-devel, Stefano Stabellini

Gianni Tedesco writes ("[PATCH, v2]: xl: Check domain existance when doing domain identifier lookups"):
> It occurs to me that the last patch won't fix it for anything but
> destroy. We should bail with a nice error for any command looking up a
> domain that doesn't exist and be consistent with name vs. numeric ID.

I think the destroy logic needs to be different, because if
libxl_domain_info fails for some reason other than ERROR_INVAL (i.e.
"domain does not exist"), destruction needs to try all of the other
destruction steps anyway.

So if you do a general change like this, the destroy case probably
needs to bypass it.

Ian.


* Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-24 18:53             ` Gianni Tedesco
  2011-01-24 19:05               ` Stefano Stabellini
@ 2011-01-25 17:29               ` Ian Jackson
  1 sibling, 0 replies; 32+ messages in thread
From: Ian Jackson @ 2011-01-25 17:29 UTC (permalink / raw)
  To: Gianni Tedesco; +Cc: Christoph Egger, xen-devel, Stefano Stabellini

Gianni Tedesco writes ("Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups"):
> Only if was_name is true... If they pass a number then we need to check
> explicitly. Perhaps it would be better to put that check in an else {}
> statement though.

I think it's fine to get a whole bunch of errors rather than just one,
if you say "xl something <numeric domain>" and the domain doesn't
exist.

Ian.


* Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-25 17:28           ` Ian Jackson
@ 2011-01-25 17:35             ` Gianni Tedesco
  2011-01-25 18:28               ` Ian Jackson
  0 siblings, 1 reply; 32+ messages in thread
From: Gianni Tedesco @ 2011-01-25 17:35 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Christoph Egger, xen-devel, Stefano Stabellini

On Tue, 2011-01-25 at 17:28 +0000, Ian Jackson wrote:
> Gianni Tedesco writes ("[PATCH, v2]: xl: Check domain existance when doing domain identifier lookups"):
> > It occurs to me that the last patch won't fix it for anything but
> > destroy. We should bail with a nice error for any command looking up a
> > domain that doesn't exist and be consistent with name vs. numeric ID.
> 
> I think the destroy logic needs to be different because if
> libxl_domain_info fails for some other reason than ERROR_INVAL (ie
> "domain does not exist"), destruction it needs to try all of the other
> destruction steps.
> 
> So if you do a general change like this the destroy case probably
> needs to bypass it.

Yes, I think you are right, except that we can make the same argument
for more than just destroy. In other words, let's change the logic
so that only a non-existent domain causes this to fail and other errors
are ignored.

--

xl: Check domain existance when doing domain identifier lookups

Also fix mis-formatted error messages in xl commands.

Signed-off-by: Gianni Tedesco <gianni.tedesco@citrix.com>

diff -r 5e91e7a6b130 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Jan 25 14:07:39 2011 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Jan 25 17:32:19 2011 +0000
@@ -143,11 +143,25 @@ static int qualifier_to_id(const char *p
 static int domain_qualifier_to_domid(const char *p, uint32_t *domid_r,
                                      int *was_name_r)
 {
-    int was_name;
+    libxl_dominfo dominfo;
+    int was_name, rc;
 
     was_name = qualifier_to_id(p, domid_r);
-    if (was_name_r) *was_name_r = was_name;
-    return was_name ? libxl_name_to_domid(&ctx, p, domid_r) : 0;
+    if (was_name_r)
+        *was_name_r = was_name;
+
+    if ( was_name ) {
+        rc = libxl_name_to_domid(&ctx, p, domid_r);
+        if ( rc )
+            return rc;
+    }else{
+        rc = libxl_domain_info(&ctx, &dominfo, *domid_r);
+        /* error only if domain does not exist */
+        if ( rc == ERROR_INVAL )
+            return rc;
+    }
+
+    return 0;
 }
 
 static int cpupool_qualifier_to_cpupoolid(const char *p, uint32_t *poolid_r,
@@ -2176,7 +2190,7 @@ static void destroy_domain(const char *p
         exit(-1);
     }
     rc = libxl_domain_destroy(&ctx, domid, 0);
-    if (rc) { fprintf(stderr,"destroy failed (rc=%d)\n.",rc); exit(-1); }
+    if (rc) { fprintf(stderr,"destroy failed (rc=%d).\n",rc); exit(-1); }
 }
 
 static void shutdown_domain(const char *p, int wait)
@@ -2185,7 +2199,7 @@ static void shutdown_domain(const char *
 
     find_domain(p);
     rc=libxl_domain_shutdown(&ctx, domid, 0);
-    if (rc) { fprintf(stderr,"shutdown failed (rc=%d)\n.",rc);exit(-1); }
+    if (rc) { fprintf(stderr,"shutdown failed (rc=%d)\n",rc);exit(-1); }
 
     if (wait) {
         libxl_waiter waiter;
@@ -2227,7 +2241,7 @@ static void reboot_domain(const char *p)
     int rc;
     find_domain(p);
     rc=libxl_domain_shutdown(&ctx, domid, 1);
-    if (rc) { fprintf(stderr,"reboot failed (rc=%d)\n.",rc);exit(-1); }
+    if (rc) { fprintf(stderr,"reboot failed (rc=%d)\n",rc);exit(-1); }
 }
 
 static void list_domains_details(const libxl_dominfo *info, int nb_domain)
@@ -2669,7 +2683,7 @@ static void core_dump_domain(const char 
     int rc;
     find_domain(domain_spec);
     rc=libxl_domain_core_dump(&ctx, domid, filename);
-    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n.",rc);exit(-1); }
+    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
 }
 
 static void migrate_receive(int debug, int daemonize)


* Re: [PATCH]: xl: Check a domain exists before destroying it
  2011-01-25 17:17           ` Gianni Tedesco
@ 2011-01-25 18:25             ` Ian Jackson
  0 siblings, 0 replies; 32+ messages in thread
From: Ian Jackson @ 2011-01-25 18:25 UTC (permalink / raw)
  To: Gianni Tedesco; +Cc: Christoph Egger, xen-devel, Stefano Stabellini

Gianni Tedesco writes ("Re: [PATCH]: xl: Check a domain exists before destroying it"):
> On Tue, 2011-01-25 at 17:07 +0000, Ian Jackson wrote:
> > Also in general most of the messages from xl don't print full stops.
> > So I suggest the patch below instead.
> 
> Good call, I must have barfed my regexp when I searched for the same
> error...

NP.  I'll take that as an ack, and have applied my patch.

Thanks,
Ian.


* Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups
  2011-01-25 17:35             ` Gianni Tedesco
@ 2011-01-25 18:28               ` Ian Jackson
  0 siblings, 0 replies; 32+ messages in thread
From: Ian Jackson @ 2011-01-25 18:28 UTC (permalink / raw)
  To: Gianni Tedesco; +Cc: Christoph Egger, xen-devel, Stefano Stabellini

Gianni Tedesco writes ("Re: [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups"):
> Yes I think you are right, except for that we can make the same argument
> against more than just destroy. In other words, let's change the logic
> such that only non-existant domain causes this to fail and other errors
> are ignored.

Hrm, yes.  Thanks.  I have applied your patch.

I corrected some minor code style problems (whitespace placement) in
your patch.

Regards,
Ian.


* Re: Xen 4.1 rc1 test report
  2011-01-25 11:43     ` Ian Campbell
@ 2011-01-26  0:47       ` Haitao Shan
  2011-01-26 10:57         ` libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report) Ian Campbell
  0 siblings, 1 reply; 32+ messages in thread
From: Haitao Shan @ 2011-01-26  0:47 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

I think it is basically the same idea as Keir introduced in 20841. I
guess this bug would happen on any platform which has a large number of
physical CPUs, not only on Intel EX systems.
If you can cook up the patch, that would be great! Thanks!!

Shan Haitao

2011/1/25 Ian Campbell <Ian.Campbell@citrix.com>:
> On Tue, 2011-01-25 at 06:24 +0000, Haitao Shan wrote:
>> > Performance(1 bug)
>> > 1. guest boot very slowly without limit dom0 cpu number on EX (Intel)
>> > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1719
>> >
>>
>> This bug happened 1 year before. Keir has made a fix with c/s 20841,
>> which essentially holds the locked (and hence allocated) hypercall
>> page so that next hypercall can reuse it without doing alloc and mlock
>> again. By doing this, overhead of rschedule IPI as a result of
>> frequent mlock is greatly reduced.
>>
>> Late in year 2010, libxc introduced a new mechanism called hypercall
>> buffers, as you can refer c/s 22288~22312. Keir's fix is dropped in
>> this new framework. As a result, the bug appears again.
>> Probably the new framework auther can pick up Keir's fix again?
>
> I think it would make sense to include a low water mark of a small
> number of pages (perhaps 4 or 8) which instead of being freed are kept
> and reused in preference to future new allocations. These pages would
> only finally be released by the xc_interface_close() call.
>
> Is this something which you feel able to make a patch for?
>
> Ian.
>
>
>


* RE: Xen 4.1 rc1 test report (xl bits)
  2011-01-25 14:04     ` Gianni Tedesco
@ 2011-01-26  3:47       ` Zhang, Yang Z
  0 siblings, 0 replies; 32+ messages in thread
From: Zhang, Yang Z @ 2011-01-26  3:47 UTC (permalink / raw)
  To: Gianni Tedesco, Zheng, Shaohui
  Cc: Kamala Narasimhan, xen-devel, Ian Jackson, Stefano Stabellini


> -----Original Message-----
> From: xen-devel-bounces@lists.xensource.com
> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Gianni Tedesco
> Sent: Tuesday, January 25, 2011 10:05 PM
> To: Zheng, Shaohui
> Cc: Zhang, Yang Z; Kamala Narasimhan; xen-devel@lists.xensource.com; Ian
> Jackson; Stefano Stabellini
> Subject: Re: [Xen-devel] Xen 4.1 rc1 test report (xl bits)
> 
> On Mon, 2011-01-24 at 17:05 +0000, Gianni Tedesco wrote:
> > > 6. Guest network broken after do SAVE/RESTOR with xl (Community)
> > > http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1703
> >
> > Sounds like a real clanger.
> 
> But I cannot repro this, at least with hvm guests.
> 
> What is guest OS? Was the guest PV or HVM? What network driver was it
> using? What is host network setup, the standard xen briding?
It's an HVM guest running rhel5.5-32e inside. We use the emulated network, and the host uses the standard xen bridging.



best regards
yang


> Thanks
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel



* libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-26  0:47       ` Haitao Shan
@ 2011-01-26 10:57         ` Ian Campbell
  2011-01-27  9:47           ` Ian Campbell
  2011-01-31  3:06           ` Haitao Shan
  0 siblings, 2 replies; 32+ messages in thread
From: Ian Campbell @ 2011-01-26 10:57 UTC (permalink / raw)
  To: Haitao Shan; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

On Wed, 2011-01-26 at 00:47 +0000, Haitao Shan wrote:
> I think it is basically the same idea as Keir introduced in 20841. I
> guess this bug would happen on platforms which has large number of
> physical CPUs, not only on EX system of Intel.
> If you can cook the patch, that would be great! Thanks!!

Does this help?

Ian.

8<---------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1296038761 0
# Node ID 8b8b7e024f9d6f4c2ce1a4efbf38f07eeb460d91
# Parent  e4e69622dc95037eab6740f79ecf9c1d05bca529
libxc: maintain a small, per-handle, cache of hypercall buffer memory

Constantly m(un)locking memory can have significant overhead on
systems with large numbers of CPUs. This was previously fixed by
20841:fbe8f32fa257 but this was dropped during the transition to
hypercall buffers.

Introduce a small cache of single page hypercall buffer allocations
which can be reused to avoid this overhead.

Add some statistics tracking to the hypercall buffer allocations.

The cache size of 4 was chosen based on these statistics since they
indicated that 2 pages was sufficient to satisfy all concurrent single
page hypercall buffer allocations seen during "xl create", "xl
shutdown" and "xl destroy" of both a PV and HVM guest therefore 4
pages should cover the majority of important cases.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_hcall_buf.c
--- a/tools/libxc/xc_hcall_buf.c	Wed Jan 26 10:22:42 2011 +0000
+++ b/tools/libxc/xc_hcall_buf.c	Wed Jan 26 10:46:01 2011 +0000
@@ -18,6 +18,7 @@
 
 #include <stdlib.h>
 #include <string.h>
+#include <pthread.h>
 
 #include "xc_private.h"
 #include "xg_private.h"
@@ -28,11 +29,108 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
     HYPERCALL_BUFFER_INIT_NO_BOUNCE
 };
 
+pthread_mutex_t hypercall_buffer_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+static void hypercall_buffer_cache_lock(xc_interface *xch)
+{
+    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
+        return;
+    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
+}
+
+static void hypercall_buffer_cache_unlock(xc_interface *xch)
+{
+    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
+        return;
+    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
+}
+
+static void *hypercall_buffer_cache_alloc(xc_interface *xch, int nr_pages)
+{
+    void *p = NULL;
+
+    hypercall_buffer_cache_lock(xch);
+
+    xch->hypercall_buffer_total_allocations++;
+    xch->hypercall_buffer_current_allocations++;
+    if ( xch->hypercall_buffer_current_allocations > xch->hypercall_buffer_maximum_allocations )
+        xch->hypercall_buffer_maximum_allocations = xch->hypercall_buffer_current_allocations;
+
+    if ( nr_pages > 1 )
+    {
+        xch->hypercall_buffer_cache_toobig++;
+    }
+    else if ( xch->hypercall_buffer_cache_nr > 0 )
+    {
+        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
+        xch->hypercall_buffer_cache_hits++;
+    }
+    else
+    {
+        xch->hypercall_buffer_cache_misses++;
+    }
+
+    hypercall_buffer_cache_unlock(xch);
+
+    return p;
+}
+
+static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int nr_pages)
+{
+    int rc = 0;
+
+    hypercall_buffer_cache_lock(xch);
+
+    xch->hypercall_buffer_total_releases++;
+    xch->hypercall_buffer_current_allocations--;
+
+    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < HYPERCALL_BUFFER_CACHE_SIZE )
+    {
+        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] = p;
+        rc = 1;
+    }
+
+    hypercall_buffer_cache_unlock(xch);
+
+    return rc;
+}
+
+void xc__hypercall_buffer_cache_release(xc_interface *xch)
+{
+    void *p;
+
+    hypercall_buffer_cache_lock(xch);
+
+    DBGPRINTF("hypercall buffer: total allocations:%d total releases:%d",
+              xch->hypercall_buffer_total_allocations,
+              xch->hypercall_buffer_total_releases);
+    DBGPRINTF("hypercall buffer: current allocations:%d maximum allocations:%d",
+              xch->hypercall_buffer_current_allocations,
+              xch->hypercall_buffer_maximum_allocations);
+    DBGPRINTF("hypercall buffer: cache current size:%d",
+              xch->hypercall_buffer_cache_nr);
+    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
+              xch->hypercall_buffer_cache_hits,
+              xch->hypercall_buffer_cache_misses,
+              xch->hypercall_buffer_cache_toobig);
+
+    while ( xch->hypercall_buffer_cache_nr > 0 )
+    {
+        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
+        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, p, 1);
+    }
+
+    hypercall_buffer_cache_unlock(xch);
+}
+
 void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
 {
-    void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
+    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
 
-    if (!p)
+    if ( !p )
+        p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
+
+    if ( !p )
         return NULL;
 
     b->hbuf = p;
@@ -47,7 +145,8 @@ void xc__hypercall_buffer_free_pages(xc_
     if ( b->hbuf == NULL )
         return;
 
-    xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
+    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
+        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
 }
 
 struct allocation_header {
diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Wed Jan 26 10:22:42 2011 +0000
+++ b/tools/libxc/xc_private.c	Wed Jan 26 10:46:01 2011 +0000
@@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
     xch->error_handler   = logger;           xch->error_handler_tofree   = 0;
     xch->dombuild_logger = dombuild_logger;  xch->dombuild_logger_tofree = 0;
 
+    xch->hypercall_buffer_cache_nr = 0;
+
+    xch->hypercall_buffer_total_allocations = 0;
+    xch->hypercall_buffer_total_releases = 0;
+    xch->hypercall_buffer_current_allocations = 0;
+    xch->hypercall_buffer_maximum_allocations = 0;
+    xch->hypercall_buffer_cache_hits = 0;
+    xch->hypercall_buffer_cache_misses = 0;
+    xch->hypercall_buffer_cache_toobig = 0;
+
     xch->ops_handle = XC_OSDEP_OPEN_ERROR;
     xch->ops = NULL;
 
@@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
 static int xc_interface_close_common(xc_interface *xch)
 {
     int rc = 0;
+
+    xc_hypercall_buffer_cache_release(xch);
 
     xtl_logger_destroy(xch->dombuild_logger_tofree);
     xtl_logger_destroy(xch->error_handler_tofree);
diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Wed Jan 26 10:22:42 2011 +0000
+++ b/tools/libxc/xc_private.h	Wed Jan 26 10:46:01 2011 +0000
@@ -75,6 +75,28 @@ struct xc_interface_core {
     FILE *dombuild_logger_file;
     const char *currently_progress_reporting;
 
+    /*
+     * A simple cache of unused, single page, hypercall buffers
+     *
+     * Protected by a global lock.
+     */
+#define HYPERCALL_BUFFER_CACHE_SIZE 4
+    int hypercall_buffer_cache_nr;
+    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
+
+    /*
+     * Hypercall buffer statistics. All protected by the global
+     * hypercall_buffer_cache lock.
+     */
+    int hypercall_buffer_total_allocations;
+    int hypercall_buffer_total_releases;
+    int hypercall_buffer_current_allocations;
+    int hypercall_buffer_maximum_allocations;
+    int hypercall_buffer_cache_hits;
+    int hypercall_buffer_cache_misses;
+    int hypercall_buffer_cache_toobig;
+
+    /* Low level OS interface */
     xc_osdep_info_t  osdep;
     xc_osdep_ops    *ops; /* backend operations */
     xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
@@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
 #define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
 void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
 #define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
+
+/*
+ * Release hypercall buffer cache
+ */
+void xc__hypercall_buffer_cache_release(xc_interface *xch);
 
 /*
  * Hypercall interfaces.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-26 10:57         ` libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report) Ian Campbell
@ 2011-01-27  9:47           ` Ian Campbell
  2011-01-31  0:58             ` Haitao Shan
  2011-01-31  3:06           ` Haitao Shan
  1 sibling, 1 reply; 32+ messages in thread
From: Ian Campbell @ 2011-01-27  9:47 UTC (permalink / raw)
  To: Haitao Shan; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

On Wed, 2011-01-26 at 10:57 +0000, Ian Campbell wrote:
> On Wed, 2011-01-26 at 00:47 +0000, Haitao Shan wrote:
> > I think it is basically the same idea as Keir introduced in 20841. I
> > guess this bug would happen on platforms which has large number of
> > physical CPUs, not only on EX system of Intel.
> > If you can cook the patch, that would be great! Thanks!!
> 
> Does this help?

On my four-way Intel system it is pretty negligible: in a very
unscientific experiment the average real time for 3 x "xl create" goes
from 0.333s to 0.330s, which is in the noise.

Perhaps on a much larger system it helps more?
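
For anyone who wants to try it on a bigger box, the pattern being
optimised is the burst of transient single-page hypercall buffers
allocated and freed during each xl operation. A minimal caller-side
sketch of that round trip (not part of the patch: the bare
xc_hypercall_buffer_t initialisation is simplified, real callers go via
the DECLARE_HYPERCALL_BUFFER macros, and the free_pages signature is
taken from the hunk in the patch quoted below) is:

#include "xc_private.h"

static void exercise_single_page_buffers(xc_interface *xch, int iterations)
{
    int i;

    for ( i = 0; i < iterations; i++ )
    {
        /* Simplified initialisation -- enough for a sketch only. */
        xc_hypercall_buffer_t buf = { .hbuf = NULL };

        /* The first pass misses the cache and falls back to
         * ops->u.privcmd.alloc_hypercall_buffer(); later passes should
         * be satisfied from the per-handle cache. */
        if ( xc__hypercall_buffer_alloc_pages(xch, &buf, 1) == NULL )
            break;

        /* ... issue a hypercall using buf.hbuf ... */

        /* Single-page buffers go back into the cache here instead of
         * being munlock()ed and freed every time. */
        xc__hypercall_buffer_free_pages(xch, &buf, 1);
    }
}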

However, looking at the bugzilla report, it seems to concern slowness in
the HVM BIOS before reaching grub, which cannot possibly (AFAIK) be
related to libxc or the previous 20841 fix.

The bugzilla report mentions a similar issue fixed by Xiaowei; is there
a reference to that fix?

I think this patch is a good idea in its own right, but unless there is
feedback that it helps in this specific case I think we should defer it
to 4.2.

(nb, if we do want it for 4.1 then I will resend, since there is an
xc_hypercall_buffer_cache_release call below which needs to become
xc__hypercall_buffer_cache_release; I didn't rebuild after a last-minute
change)

Ian.

> 
> Ian.
> 
> 8<---------------------------------------
> 
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1296038761 0
> # Node ID 8b8b7e024f9d6f4c2ce1a4efbf38f07eeb460d91
> # Parent  e4e69622dc95037eab6740f79ecf9c1d05bca529
> libxc: maintain a small, per-handle, cache of hypercall buffer memory
> 
> Constantly m(un)locking memory can have significant overhead on
> systems with large numbers of CPUs. This was previously fixed by
> 20841:fbe8f32fa257 but this was dropped during the transition to
> hypercall buffers.
> 
> Introduce a small cache of single page hypercall buffer allocations
> which can be reused to avoid this overhead.
> 
> Add some statistics tracking to the hypercall buffer allocations.
> 
> The cache size of 4 was chosen based on these statistics since they
> indicated that 2 pages was sufficient to satisfy all concurrent single
> page hypercall buffer allocations seen during "xl create", "xl
> shutdown" and "xl destroy" of both a PV and HVM guest therefore 4
> pages should cover the majority of important cases.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_hcall_buf.c
> --- a/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:22:42 2011 +0000
> +++ b/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:46:01 2011 +0000
> @@ -18,6 +18,7 @@
> 
>  #include <stdlib.h>
>  #include <string.h>
> +#include <pthread.h>
> 
>  #include "xc_private.h"
>  #include "xg_private.h"
> @@ -28,11 +29,108 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
>      HYPERCALL_BUFFER_INIT_NO_BOUNCE
>  };
> 
> +pthread_mutex_t hypercall_buffer_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
> +
> +static void hypercall_buffer_cache_lock(xc_interface *xch)
> +{
> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> +        return;
> +    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
> +}
> +
> +static void hypercall_buffer_cache_unlock(xc_interface *xch)
> +{
> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> +        return;
> +    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
> +}
> +
> +static void *hypercall_buffer_cache_alloc(xc_interface *xch, int nr_pages)
> +{
> +    void *p = NULL;
> +
> +    hypercall_buffer_cache_lock(xch);
> +
> +    xch->hypercall_buffer_total_allocations++;
> +    xch->hypercall_buffer_current_allocations++;
> +    if ( xch->hypercall_buffer_current_allocations > xch->hypercall_buffer_maximum_allocations )
> +        xch->hypercall_buffer_maximum_allocations = xch->hypercall_buffer_current_allocations;
> +
> +    if ( nr_pages > 1 )
> +    {
> +        xch->hypercall_buffer_cache_toobig++;
> +    }
> +    else if ( xch->hypercall_buffer_cache_nr > 0 )
> +    {
> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> +        xch->hypercall_buffer_cache_hits++;
> +    }
> +    else
> +    {
> +        xch->hypercall_buffer_cache_misses++;
> +    }
> +
> +    hypercall_buffer_cache_unlock(xch);
> +
> +    return p;
> +}
> +
> +static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int nr_pages)
> +{
> +    int rc = 0;
> +
> +    hypercall_buffer_cache_lock(xch);
> +
> +    xch->hypercall_buffer_total_releases++;
> +    xch->hypercall_buffer_current_allocations--;
> +
> +    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < HYPERCALL_BUFFER_CACHE_SIZE )
> +    {
> +        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] = p;
> +        rc = 1;
> +    }
> +
> +    hypercall_buffer_cache_unlock(xch);
> +
> +    return rc;
> +}
> +
> +void xc__hypercall_buffer_cache_release(xc_interface *xch)
> +{
> +    void *p;
> +
> +    hypercall_buffer_cache_lock(xch);
> +
> +    DBGPRINTF("hypercall buffer: total allocations:%d total releases:%d",
> +              xch->hypercall_buffer_total_allocations,
> +              xch->hypercall_buffer_total_releases);
> +    DBGPRINTF("hypercall buffer: current allocations:%d maximum allocations:%d",
> +              xch->hypercall_buffer_current_allocations,
> +              xch->hypercall_buffer_maximum_allocations);
> +    DBGPRINTF("hypercall buffer: cache current size:%d",
> +              xch->hypercall_buffer_cache_nr);
> +    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
> +              xch->hypercall_buffer_cache_hits,
> +              xch->hypercall_buffer_cache_misses,
> +              xch->hypercall_buffer_cache_toobig);
> +
> +    while ( xch->hypercall_buffer_cache_nr > 0 )
> +    {
> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, p, 1);
> +    }
> +
> +    hypercall_buffer_cache_unlock(xch);
> +}
> +
>  void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
>  {
> -    void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
> +    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
> 
> -    if (!p)
> +    if ( !p )
> +        p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
> +
> +    if ( !p )
>          return NULL;
> 
>      b->hbuf = p;
> @@ -47,7 +145,8 @@ void xc__hypercall_buffer_free_pages(xc_
>      if ( b->hbuf == NULL )
>          return;
> 
> -    xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
> +    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
> +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
>  }
> 
>  struct allocation_header {
> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.c
> --- a/tools/libxc/xc_private.c  Wed Jan 26 10:22:42 2011 +0000
> +++ b/tools/libxc/xc_private.c  Wed Jan 26 10:46:01 2011 +0000
> @@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
>      xch->error_handler   = logger;           xch->error_handler_tofree   = 0;
>      xch->dombuild_logger = dombuild_logger;  xch->dombuild_logger_tofree = 0;
> 
> +    xch->hypercall_buffer_cache_nr = 0;
> +
> +    xch->hypercall_buffer_total_allocations = 0;
> +    xch->hypercall_buffer_total_releases = 0;
> +    xch->hypercall_buffer_current_allocations = 0;
> +    xch->hypercall_buffer_maximum_allocations = 0;
> +    xch->hypercall_buffer_cache_hits = 0;
> +    xch->hypercall_buffer_cache_misses = 0;
> +    xch->hypercall_buffer_cache_toobig = 0;
> +
>      xch->ops_handle = XC_OSDEP_OPEN_ERROR;
>      xch->ops = NULL;
> 
> @@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
>  static int xc_interface_close_common(xc_interface *xch)
>  {
>      int rc = 0;
> +
> +    xc_hypercall_buffer_cache_release(xch);
> 
>      xtl_logger_destroy(xch->dombuild_logger_tofree);
>      xtl_logger_destroy(xch->error_handler_tofree);
> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.h
> --- a/tools/libxc/xc_private.h  Wed Jan 26 10:22:42 2011 +0000
> +++ b/tools/libxc/xc_private.h  Wed Jan 26 10:46:01 2011 +0000
> @@ -75,6 +75,28 @@ struct xc_interface_core {
>      FILE *dombuild_logger_file;
>      const char *currently_progress_reporting;
> 
> +    /*
> +     * A simple cache of unused, single page, hypercall buffers
> +     *
> +     * Protected by a global lock.
> +     */
> +#define HYPERCALL_BUFFER_CACHE_SIZE 4
> +    int hypercall_buffer_cache_nr;
> +    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
> +
> +    /*
> +     * Hypercall buffer statistics. All protected by the global
> +     * hypercall_buffer_cache lock.
> +     */
> +    int hypercall_buffer_total_allocations;
> +    int hypercall_buffer_total_releases;
> +    int hypercall_buffer_current_allocations;
> +    int hypercall_buffer_maximum_allocations;
> +    int hypercall_buffer_cache_hits;
> +    int hypercall_buffer_cache_misses;
> +    int hypercall_buffer_cache_toobig;
> +
> +    /* Low lovel OS interface */
>      xc_osdep_info_t  osdep;
>      xc_osdep_ops    *ops; /* backend operations */
>      xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
> @@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
>  #define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
>  void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
>  #define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
> +
> +/*
> + * Release hypercall buffer cache
> + */
> +void xc__hypercall_buffer_cache_release(xc_interface *xch);
> 
>  /*
>   * Hypercall interfaces.
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-27  9:47           ` Ian Campbell
@ 2011-01-31  0:58             ` Haitao Shan
  0 siblings, 0 replies; 32+ messages in thread
From: Haitao Shan @ 2011-01-31  0:58 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

Sorry for the late response; I happened to miss this important email. We
will do some tests to verify the patch.

I do know there was a similar bug that Xiaowei had investigated in depth.
I don't know where to find it in the public bugzilla, but I can post some
of our internal tracking on this case:
------- Comment #10 From Yang, Xiaowei 2010-01-18 21:58:27 (-) [reply] -------
Hard to say now.

Some update: found the frequent resched IPI is not triggered by process
migration. Rather, it's caused by one function, lru_add_drain_per_cpu(),
being inserted into each dom0 vcpu's work queue. As most of the vCPUs are
idle, they are notified by a resched IPI to execute the work queue.

There are several places that call lru_add_drain_per_cpu(): mlock, shm,
memory migration. It is yet to be found which is the culprit.

------- Comment #11 From Yang, Xiaowei 2010-01-25 15:11:33 (-) [reply] -------
Should be fixed with c/s 20841 - at least, my test on NHM-EP shows HVM guest
boot time is shortened by half (40s -> 20s) and is as fast as with the 2.6.18
dom0. Will have a test on NHM-EX later when grabbing the machine.
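
To make the mechanism Xiaowei describes a bit more concrete, here is a
paraphrased sketch (from my reading of the pvops dom0 sources of that
era, not an exact quote) of why each mlock() ends up interrupting every
vCPU:

/*
 * Paraphrased call chain, not the exact dom0 source:
 *
 *   sys_mlock()
 *       -> lru_add_drain_all()
 *           -> schedule_on_each_cpu(lru_add_drain_per_cpu)
 */
int lru_add_drain_all(void)
{
        /*
         * Queues lru_add_drain_per_cpu() on every CPU's workqueue and
         * waits for completion; the mostly-idle vCPUs are woken with a
         * resched IPI just to run it, which is the overhead c/s 20841
         * (and now the libxc cache) avoids triggering so often.
         */
        return schedule_on_each_cpu(lru_add_drain_per_cpu);
}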

Shan Haitao

2011/1/27 Ian Campbell <Ian.Campbell@citrix.com>:
> On Wed, 2011-01-26 at 10:57 +0000, Ian Campbell wrote:
>> On Wed, 2011-01-26 at 00:47 +0000, Haitao Shan wrote:
>> > I think it is basically the same idea as Keir introduced in 20841. I
>> > guess this bug would happen on platforms which has large number of
>> > physical CPUs, not only on EX system of Intel.
>> > If you can cook the patch, that would be great! Thanks!!
>>
>> Does this help?
>
> On my four way Intel system it is pretty negligible, in a very
> unscientific experiment the average real time for 3 x "xl create" goes
> from 0.333 to 0.330 which is in the noise.
>
> Perhaps on a much larger system it helps more?
>
> However looking at the bugzilla report it seems to concern slowness in
> the HVM BIOS before reaching grub which cannot possibly (AFAIK) be
> related to libxc or the previous 20841 fix.
>
> The bugzilla report mentions a similar issue fixed by Xiaowei, is there
> a reference to that fix?
>
> I think this patch is a good idea in its own right but unless there is
> feedback that this patch helps in this specific case I think we should
> defer it to 4.2.
>
> (nb, if we do want it for 4.1 then I will resend since there is an
> xc__hypercall_buffer_cache_release below which needs to becomes xc__blah
> I didn't rebuild after a last minute change)
>
> Ian.
>
>>
>> Ian.
>>
>> 8<---------------------------------------
>>
>> # HG changeset patch
>> # User Ian Campbell <ian.campbell@citrix.com>
>> # Date 1296038761 0
>> # Node ID 8b8b7e024f9d6f4c2ce1a4efbf38f07eeb460d91
>> # Parent  e4e69622dc95037eab6740f79ecf9c1d05bca529
>> libxc: maintain a small, per-handle, cache of hypercall buffer memory
>>
>> Constantly m(un)locking memory can have significant overhead on
>> systems with large numbers of CPUs. This was previously fixed by
>> 20841:fbe8f32fa257 but this was dropped during the transition to
>> hypercall buffers.
>>
>> Introduce a small cache of single page hypercall buffer allocations
>> which can be reused to avoid this overhead.
>>
>> Add some statistics tracking to the hypercall buffer allocations.
>>
>> The cache size of 4 was chosen based on these statistics since they
>> indicated that 2 pages was sufficient to satisfy all concurrent single
>> page hypercall buffer allocations seen during "xl create", "xl
>> shutdown" and "xl destroy" of both a PV and HVM guest therefore 4
>> pages should cover the majority of important cases.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>>
>> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_hcall_buf.c
>> --- a/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:22:42 2011 +0000
>> +++ b/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:46:01 2011 +0000
>> @@ -18,6 +18,7 @@
>>
>>  #include <stdlib.h>
>>  #include <string.h>
>> +#include <pthread.h>
>>
>>  #include "xc_private.h"
>>  #include "xg_private.h"
>> @@ -28,11 +29,108 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
>>      HYPERCALL_BUFFER_INIT_NO_BOUNCE
>>  };
>>
>> +pthread_mutex_t hypercall_buffer_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
>> +
>> +static void hypercall_buffer_cache_lock(xc_interface *xch)
>> +{
>> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
>> +        return;
>> +    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
>> +}
>> +
>> +static void hypercall_buffer_cache_unlock(xc_interface *xch)
>> +{
>> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
>> +        return;
>> +    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
>> +}
>> +
>> +static void *hypercall_buffer_cache_alloc(xc_interface *xch, int nr_pages)
>> +{
>> +    void *p = NULL;
>> +
>> +    hypercall_buffer_cache_lock(xch);
>> +
>> +    xch->hypercall_buffer_total_allocations++;
>> +    xch->hypercall_buffer_current_allocations++;
>> +    if ( xch->hypercall_buffer_current_allocations > xch->hypercall_buffer_maximum_allocations )
>> +        xch->hypercall_buffer_maximum_allocations = xch->hypercall_buffer_current_allocations;
>> +
>> +    if ( nr_pages > 1 )
>> +    {
>> +        xch->hypercall_buffer_cache_toobig++;
>> +    }
>> +    else if ( xch->hypercall_buffer_cache_nr > 0 )
>> +    {
>> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
>> +        xch->hypercall_buffer_cache_hits++;
>> +    }
>> +    else
>> +    {
>> +        xch->hypercall_buffer_cache_misses++;
>> +    }
>> +
>> +    hypercall_buffer_cache_unlock(xch);
>> +
>> +    return p;
>> +}
>> +
>> +static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int nr_pages)
>> +{
>> +    int rc = 0;
>> +
>> +    hypercall_buffer_cache_lock(xch);
>> +
>> +    xch->hypercall_buffer_total_releases++;
>> +    xch->hypercall_buffer_current_allocations--;
>> +
>> +    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < HYPERCALL_BUFFER_CACHE_SIZE )
>> +    {
>> +        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] = p;
>> +        rc = 1;
>> +    }
>> +
>> +    hypercall_buffer_cache_unlock(xch);
>> +
>> +    return rc;
>> +}
>> +
>> +void xc__hypercall_buffer_cache_release(xc_interface *xch)
>> +{
>> +    void *p;
>> +
>> +    hypercall_buffer_cache_lock(xch);
>> +
>> +    DBGPRINTF("hypercall buffer: total allocations:%d total releases:%d",
>> +              xch->hypercall_buffer_total_allocations,
>> +              xch->hypercall_buffer_total_releases);
>> +    DBGPRINTF("hypercall buffer: current allocations:%d maximum allocations:%d",
>> +              xch->hypercall_buffer_current_allocations,
>> +              xch->hypercall_buffer_maximum_allocations);
>> +    DBGPRINTF("hypercall buffer: cache current size:%d",
>> +              xch->hypercall_buffer_cache_nr);
>> +    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
>> +              xch->hypercall_buffer_cache_hits,
>> +              xch->hypercall_buffer_cache_misses,
>> +              xch->hypercall_buffer_cache_toobig);
>> +
>> +    while ( xch->hypercall_buffer_cache_nr > 0 )
>> +    {
>> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
>> +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, p, 1);
>> +    }
>> +
>> +    hypercall_buffer_cache_unlock(xch);
>> +}
>> +
>>  void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
>>  {
>> -    void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
>> +    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
>>
>> -    if (!p)
>> +    if ( !p )
>> +        p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
>> +
>> +    if ( !p )
>>          return NULL;
>>
>>      b->hbuf = p;
>> @@ -47,7 +145,8 @@ void xc__hypercall_buffer_free_pages(xc_
>>      if ( b->hbuf == NULL )
>>          return;
>>
>> -    xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
>> +    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
>> +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
>>  }
>>
>>  struct allocation_header {
>> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.c
>> --- a/tools/libxc/xc_private.c  Wed Jan 26 10:22:42 2011 +0000
>> +++ b/tools/libxc/xc_private.c  Wed Jan 26 10:46:01 2011 +0000
>> @@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
>>      xch->error_handler   = logger;           xch->error_handler_tofree   = 0;
>>      xch->dombuild_logger = dombuild_logger;  xch->dombuild_logger_tofree = 0;
>>
>> +    xch->hypercall_buffer_cache_nr = 0;
>> +
>> +    xch->hypercall_buffer_total_allocations = 0;
>> +    xch->hypercall_buffer_total_releases = 0;
>> +    xch->hypercall_buffer_current_allocations = 0;
>> +    xch->hypercall_buffer_maximum_allocations = 0;
>> +    xch->hypercall_buffer_cache_hits = 0;
>> +    xch->hypercall_buffer_cache_misses = 0;
>> +    xch->hypercall_buffer_cache_toobig = 0;
>> +
>>      xch->ops_handle = XC_OSDEP_OPEN_ERROR;
>>      xch->ops = NULL;
>>
>> @@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
>>  static int xc_interface_close_common(xc_interface *xch)
>>  {
>>      int rc = 0;
>> +
>> +    xc_hypercall_buffer_cache_release(xch);
>>
>>      xtl_logger_destroy(xch->dombuild_logger_tofree);
>>      xtl_logger_destroy(xch->error_handler_tofree);
>> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.h
>> --- a/tools/libxc/xc_private.h  Wed Jan 26 10:22:42 2011 +0000
>> +++ b/tools/libxc/xc_private.h  Wed Jan 26 10:46:01 2011 +0000
>> @@ -75,6 +75,28 @@ struct xc_interface_core {
>>      FILE *dombuild_logger_file;
>>      const char *currently_progress_reporting;
>>
>> +    /*
>> +     * A simple cache of unused, single page, hypercall buffers
>> +     *
>> +     * Protected by a global lock.
>> +     */
>> +#define HYPERCALL_BUFFER_CACHE_SIZE 4
>> +    int hypercall_buffer_cache_nr;
>> +    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
>> +
>> +    /*
>> +     * Hypercall buffer statistics. All protected by the global
>> +     * hypercall_buffer_cache lock.
>> +     */
>> +    int hypercall_buffer_total_allocations;
>> +    int hypercall_buffer_total_releases;
>> +    int hypercall_buffer_current_allocations;
>> +    int hypercall_buffer_maximum_allocations;
>> +    int hypercall_buffer_cache_hits;
>> +    int hypercall_buffer_cache_misses;
>> +    int hypercall_buffer_cache_toobig;
>> +
>> +    /* Low lovel OS interface */
>>      xc_osdep_info_t  osdep;
>>      xc_osdep_ops    *ops; /* backend operations */
>>      xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
>> @@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
>>  #define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
>>  void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
>>  #define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
>> +
>> +/*
>> + * Release hypercall buffer cache
>> + */
>> +void xc__hypercall_buffer_cache_release(xc_interface *xch);
>>
>>  /*
>>   * Hypercall interfaces.
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xensource.com
>> http://lists.xensource.com/xen-devel
>
>
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-26 10:57         ` libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report) Ian Campbell
  2011-01-27  9:47           ` Ian Campbell
@ 2011-01-31  3:06           ` Haitao Shan
  2011-01-31  8:30             ` Ian Campbell
  1 sibling, 1 reply; 32+ messages in thread
From: Haitao Shan @ 2011-01-31  3:06 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

Hi, Ian,

I cannot apply your patch. Is this patch developed against a newer
version of Xen than the one I have at hand?

Shan Haitao

2011/1/26 Ian Campbell <Ian.Campbell@eu.citrix.com>:
> On Wed, 2011-01-26 at 00:47 +0000, Haitao Shan wrote:
>> I think it is basically the same idea as Keir introduced in 20841. I
>> guess this bug would happen on platforms which has large number of
>> physical CPUs, not only on EX system of Intel.
>> If you can cook the patch, that would be great! Thanks!!
>
> Does this help?
>
> Ian.
>
> 8<---------------------------------------
>
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1296038761 0
> # Node ID 8b8b7e024f9d6f4c2ce1a4efbf38f07eeb460d91
> # Parent  e4e69622dc95037eab6740f79ecf9c1d05bca529
> libxc: maintain a small, per-handle, cache of hypercall buffer memory
>
> Constantly m(un)locking memory can have significant overhead on
> systems with large numbers of CPUs. This was previously fixed by
> 20841:fbe8f32fa257 but this was dropped during the transition to
> hypercall buffers.
>
> Introduce a small cache of single page hypercall buffer allocations
> which can be reused to avoid this overhead.
>
> Add some statistics tracking to the hypercall buffer allocations.
>
> The cache size of 4 was chosen based on these statistics since they
> indicated that 2 pages was sufficient to satisfy all concurrent single
> page hypercall buffer allocations seen during "xl create", "xl
> shutdown" and "xl destroy" of both a PV and HVM guest therefore 4
> pages should cover the majority of important cases.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>
> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_hcall_buf.c
> --- a/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:22:42 2011 +0000
> +++ b/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:46:01 2011 +0000
> @@ -18,6 +18,7 @@
>
>  #include <stdlib.h>
>  #include <string.h>
> +#include <pthread.h>
>
>  #include "xc_private.h"
>  #include "xg_private.h"
> @@ -28,11 +29,108 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
>     HYPERCALL_BUFFER_INIT_NO_BOUNCE
>  };
>
> +pthread_mutex_t hypercall_buffer_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
> +
> +static void hypercall_buffer_cache_lock(xc_interface *xch)
> +{
> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> +        return;
> +    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
> +}
> +
> +static void hypercall_buffer_cache_unlock(xc_interface *xch)
> +{
> +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> +        return;
> +    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
> +}
> +
> +static void *hypercall_buffer_cache_alloc(xc_interface *xch, int nr_pages)
> +{
> +    void *p = NULL;
> +
> +    hypercall_buffer_cache_lock(xch);
> +
> +    xch->hypercall_buffer_total_allocations++;
> +    xch->hypercall_buffer_current_allocations++;
> +    if ( xch->hypercall_buffer_current_allocations > xch->hypercall_buffer_maximum_allocations )
> +        xch->hypercall_buffer_maximum_allocations = xch->hypercall_buffer_current_allocations;
> +
> +    if ( nr_pages > 1 )
> +    {
> +        xch->hypercall_buffer_cache_toobig++;
> +    }
> +    else if ( xch->hypercall_buffer_cache_nr > 0 )
> +    {
> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> +        xch->hypercall_buffer_cache_hits++;
> +    }
> +    else
> +    {
> +        xch->hypercall_buffer_cache_misses++;
> +    }
> +
> +    hypercall_buffer_cache_unlock(xch);
> +
> +    return p;
> +}
> +
> +static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int nr_pages)
> +{
> +    int rc = 0;
> +
> +    hypercall_buffer_cache_lock(xch);
> +
> +    xch->hypercall_buffer_total_releases++;
> +    xch->hypercall_buffer_current_allocations--;
> +
> +    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < HYPERCALL_BUFFER_CACHE_SIZE )
> +    {
> +        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] = p;
> +        rc = 1;
> +    }
> +
> +    hypercall_buffer_cache_unlock(xch);
> +
> +    return rc;
> +}
> +
> +void xc__hypercall_buffer_cache_release(xc_interface *xch)
> +{
> +    void *p;
> +
> +    hypercall_buffer_cache_lock(xch);
> +
> +    DBGPRINTF("hypercall buffer: total allocations:%d total releases:%d",
> +              xch->hypercall_buffer_total_allocations,
> +              xch->hypercall_buffer_total_releases);
> +    DBGPRINTF("hypercall buffer: current allocations:%d maximum allocations:%d",
> +              xch->hypercall_buffer_current_allocations,
> +              xch->hypercall_buffer_maximum_allocations);
> +    DBGPRINTF("hypercall buffer: cache current size:%d",
> +              xch->hypercall_buffer_cache_nr);
> +    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
> +              xch->hypercall_buffer_cache_hits,
> +              xch->hypercall_buffer_cache_misses,
> +              xch->hypercall_buffer_cache_toobig);
> +
> +    while ( xch->hypercall_buffer_cache_nr > 0 )
> +    {
> +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, p, 1);
> +    }
> +
> +    hypercall_buffer_cache_unlock(xch);
> +}
> +
>  void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
>  {
> -    void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
> +    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
>
> -    if (!p)
> +    if ( !p )
> +        p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
> +
> +    if ( !p )
>         return NULL;
>
>     b->hbuf = p;
> @@ -47,7 +145,8 @@ void xc__hypercall_buffer_free_pages(xc_
>     if ( b->hbuf == NULL )
>         return;
>
> -    xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
> +    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
> +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
>  }
>
>  struct allocation_header {
> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.c
> --- a/tools/libxc/xc_private.c  Wed Jan 26 10:22:42 2011 +0000
> +++ b/tools/libxc/xc_private.c  Wed Jan 26 10:46:01 2011 +0000
> @@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
>     xch->error_handler   = logger;           xch->error_handler_tofree   = 0;
>     xch->dombuild_logger = dombuild_logger;  xch->dombuild_logger_tofree = 0;
>
> +    xch->hypercall_buffer_cache_nr = 0;
> +
> +    xch->hypercall_buffer_total_allocations = 0;
> +    xch->hypercall_buffer_total_releases = 0;
> +    xch->hypercall_buffer_current_allocations = 0;
> +    xch->hypercall_buffer_maximum_allocations = 0;
> +    xch->hypercall_buffer_cache_hits = 0;
> +    xch->hypercall_buffer_cache_misses = 0;
> +    xch->hypercall_buffer_cache_toobig = 0;
> +
>     xch->ops_handle = XC_OSDEP_OPEN_ERROR;
>     xch->ops = NULL;
>
> @@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
>  static int xc_interface_close_common(xc_interface *xch)
>  {
>     int rc = 0;
> +
> +    xc_hypercall_buffer_cache_release(xch);
>
>     xtl_logger_destroy(xch->dombuild_logger_tofree);
>     xtl_logger_destroy(xch->error_handler_tofree);
> diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.h
> --- a/tools/libxc/xc_private.h  Wed Jan 26 10:22:42 2011 +0000
> +++ b/tools/libxc/xc_private.h  Wed Jan 26 10:46:01 2011 +0000
> @@ -75,6 +75,28 @@ struct xc_interface_core {
>     FILE *dombuild_logger_file;
>     const char *currently_progress_reporting;
>
> +    /*
> +     * A simple cache of unused, single page, hypercall buffers
> +     *
> +     * Protected by a global lock.
> +     */
> +#define HYPERCALL_BUFFER_CACHE_SIZE 4
> +    int hypercall_buffer_cache_nr;
> +    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
> +
> +    /*
> +     * Hypercall buffer statistics. All protected by the global
> +     * hypercall_buffer_cache lock.
> +     */
> +    int hypercall_buffer_total_allocations;
> +    int hypercall_buffer_total_releases;
> +    int hypercall_buffer_current_allocations;
> +    int hypercall_buffer_maximum_allocations;
> +    int hypercall_buffer_cache_hits;
> +    int hypercall_buffer_cache_misses;
> +    int hypercall_buffer_cache_toobig;
> +
> +    /* Low lovel OS interface */
>     xc_osdep_info_t  osdep;
>     xc_osdep_ops    *ops; /* backend operations */
>     xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
> @@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
>  #define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
>  void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
>  #define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
> +
> +/*
> + * Release hypercall buffer cache
> + */
> +void xc__hypercall_buffer_cache_release(xc_interface *xch);
>
>  /*
>  * Hypercall interfaces.
>
>
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-31  3:06           ` Haitao Shan
@ 2011-01-31  8:30             ` Ian Campbell
  2011-01-31  8:57               ` Haitao Shan
  0 siblings, 1 reply; 32+ messages in thread
From: Ian Campbell @ 2011-01-31  8:30 UTC (permalink / raw)
  To: Haitao Shan; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

On Mon, 2011-01-31 at 03:06 +0000, Haitao Shan wrote: 
> Hi, Ian,
> 
> I can not apply your patch. Is this patch developed against more newer
> version of Xen than what I can have at hand?

There was a typo below (an xc_hypercall_buffer_cache_release which
should have been xc__hypercall_buffer_cache_release), but it sounds like
you are having trouble actually applying the patch?

It was against a recent xen-unstable on the day I posted it. What were
the rejects you saw?

If you have a particular version you want to test against I can generate
a suitable patch for you.

Ian.

> 
> Shan Haitao
> 
> 2011/1/26 Ian Campbell <Ian.Campbell@eu.citrix.com>:
> > On Wed, 2011-01-26 at 00:47 +0000, Haitao Shan wrote:
> >> I think it is basically the same idea as Keir introduced in 20841. I
> >> guess this bug would happen on platforms which has large number of
> >> physical CPUs, not only on EX system of Intel.
> >> If you can cook the patch, that would be great! Thanks!!
> >
> > Does this help?
> >
> > Ian.
> >
> > 8<---------------------------------------
> >
> > # HG changeset patch
> > # User Ian Campbell <ian.campbell@citrix.com>
> > # Date 1296038761 0
> > # Node ID 8b8b7e024f9d6f4c2ce1a4efbf38f07eeb460d91
> > # Parent  e4e69622dc95037eab6740f79ecf9c1d05bca529
> > libxc: maintain a small, per-handle, cache of hypercall buffer memory
> >
> > Constantly m(un)locking memory can have significant overhead on
> > systems with large numbers of CPUs. This was previously fixed by
> > 20841:fbe8f32fa257 but this was dropped during the transition to
> > hypercall buffers.
> >
> > Introduce a small cache of single page hypercall buffer allocations
> > which can be reused to avoid this overhead.
> >
> > Add some statistics tracking to the hypercall buffer allocations.
> >
> > The cache size of 4 was chosen based on these statistics since they
> > indicated that 2 pages was sufficient to satisfy all concurrent single
> > page hypercall buffer allocations seen during "xl create", "xl
> > shutdown" and "xl destroy" of both a PV and HVM guest therefore 4
> > pages should cover the majority of important cases.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_hcall_buf.c
> > --- a/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:22:42 2011 +0000
> > +++ b/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:46:01 2011 +0000
> > @@ -18,6 +18,7 @@
> >
> >  #include <stdlib.h>
> >  #include <string.h>
> > +#include <pthread.h>
> >
> >  #include "xc_private.h"
> >  #include "xg_private.h"
> > @@ -28,11 +29,108 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
> >     HYPERCALL_BUFFER_INIT_NO_BOUNCE
> >  };
> >
> > +pthread_mutex_t hypercall_buffer_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
> > +
> > +static void hypercall_buffer_cache_lock(xc_interface *xch)
> > +{
> > +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> > +        return;
> > +    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
> > +}
> > +
> > +static void hypercall_buffer_cache_unlock(xc_interface *xch)
> > +{
> > +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> > +        return;
> > +    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
> > +}
> > +
> > +static void *hypercall_buffer_cache_alloc(xc_interface *xch, int nr_pages)
> > +{
> > +    void *p = NULL;
> > +
> > +    hypercall_buffer_cache_lock(xch);
> > +
> > +    xch->hypercall_buffer_total_allocations++;
> > +    xch->hypercall_buffer_current_allocations++;
> > +    if ( xch->hypercall_buffer_current_allocations > xch->hypercall_buffer_maximum_allocations )
> > +        xch->hypercall_buffer_maximum_allocations = xch->hypercall_buffer_current_allocations;
> > +
> > +    if ( nr_pages > 1 )
> > +    {
> > +        xch->hypercall_buffer_cache_toobig++;
> > +    }
> > +    else if ( xch->hypercall_buffer_cache_nr > 0 )
> > +    {
> > +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> > +        xch->hypercall_buffer_cache_hits++;
> > +    }
> > +    else
> > +    {
> > +        xch->hypercall_buffer_cache_misses++;
> > +    }
> > +
> > +    hypercall_buffer_cache_unlock(xch);
> > +
> > +    return p;
> > +}
> > +
> > +static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int nr_pages)
> > +{
> > +    int rc = 0;
> > +
> > +    hypercall_buffer_cache_lock(xch);
> > +
> > +    xch->hypercall_buffer_total_releases++;
> > +    xch->hypercall_buffer_current_allocations--;
> > +
> > +    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < HYPERCALL_BUFFER_CACHE_SIZE )
> > +    {
> > +        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] = p;
> > +        rc = 1;
> > +    }
> > +
> > +    hypercall_buffer_cache_unlock(xch);
> > +
> > +    return rc;
> > +}
> > +
> > +void xc__hypercall_buffer_cache_release(xc_interface *xch)
> > +{
> > +    void *p;
> > +
> > +    hypercall_buffer_cache_lock(xch);
> > +
> > +    DBGPRINTF("hypercall buffer: total allocations:%d total releases:%d",
> > +              xch->hypercall_buffer_total_allocations,
> > +              xch->hypercall_buffer_total_releases);
> > +    DBGPRINTF("hypercall buffer: current allocations:%d maximum allocations:%d",
> > +              xch->hypercall_buffer_current_allocations,
> > +              xch->hypercall_buffer_maximum_allocations);
> > +    DBGPRINTF("hypercall buffer: cache current size:%d",
> > +              xch->hypercall_buffer_cache_nr);
> > +    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
> > +              xch->hypercall_buffer_cache_hits,
> > +              xch->hypercall_buffer_cache_misses,
> > +              xch->hypercall_buffer_cache_toobig);
> > +
> > +    while ( xch->hypercall_buffer_cache_nr > 0 )
> > +    {
> > +        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> > +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, p, 1);
> > +    }
> > +
> > +    hypercall_buffer_cache_unlock(xch);
> > +}
> > +
> >  void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
> >  {
> > -    void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
> > +    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
> >
> > -    if (!p)
> > +    if ( !p )
> > +        p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
> > +
> > +    if ( !p )
> >         return NULL;
> >
> >     b->hbuf = p;
> > @@ -47,7 +145,8 @@ void xc__hypercall_buffer_free_pages(xc_
> >     if ( b->hbuf == NULL )
> >         return;
> >
> > -    xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
> > +    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
> > +        xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
> >  }
> >
> >  struct allocation_header {
> > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.c
> > --- a/tools/libxc/xc_private.c  Wed Jan 26 10:22:42 2011 +0000
> > +++ b/tools/libxc/xc_private.c  Wed Jan 26 10:46:01 2011 +0000
> > @@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
> >     xch->error_handler   = logger;           xch->error_handler_tofree   = 0;
> >     xch->dombuild_logger = dombuild_logger;  xch->dombuild_logger_tofree = 0;
> >
> > +    xch->hypercall_buffer_cache_nr = 0;
> > +
> > +    xch->hypercall_buffer_total_allocations = 0;
> > +    xch->hypercall_buffer_total_releases = 0;
> > +    xch->hypercall_buffer_current_allocations = 0;
> > +    xch->hypercall_buffer_maximum_allocations = 0;
> > +    xch->hypercall_buffer_cache_hits = 0;
> > +    xch->hypercall_buffer_cache_misses = 0;
> > +    xch->hypercall_buffer_cache_toobig = 0;
> > +
> >     xch->ops_handle = XC_OSDEP_OPEN_ERROR;
> >     xch->ops = NULL;
> >
> > @@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
> >  static int xc_interface_close_common(xc_interface *xch)
> >  {
> >     int rc = 0;
> > +
> > +    xc_hypercall_buffer_cache_release(xch);
> >
> >     xtl_logger_destroy(xch->dombuild_logger_tofree);
> >     xtl_logger_destroy(xch->error_handler_tofree);
> > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.h
> > --- a/tools/libxc/xc_private.h  Wed Jan 26 10:22:42 2011 +0000
> > +++ b/tools/libxc/xc_private.h  Wed Jan 26 10:46:01 2011 +0000
> > @@ -75,6 +75,28 @@ struct xc_interface_core {
> >     FILE *dombuild_logger_file;
> >     const char *currently_progress_reporting;
> >
> > +    /*
> > +     * A simple cache of unused, single page, hypercall buffers
> > +     *
> > +     * Protected by a global lock.
> > +     */
> > +#define HYPERCALL_BUFFER_CACHE_SIZE 4
> > +    int hypercall_buffer_cache_nr;
> > +    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
> > +
> > +    /*
> > +     * Hypercall buffer statistics. All protected by the global
> > +     * hypercall_buffer_cache lock.
> > +     */
> > +    int hypercall_buffer_total_allocations;
> > +    int hypercall_buffer_total_releases;
> > +    int hypercall_buffer_current_allocations;
> > +    int hypercall_buffer_maximum_allocations;
> > +    int hypercall_buffer_cache_hits;
> > +    int hypercall_buffer_cache_misses;
> > +    int hypercall_buffer_cache_toobig;
> > +
> > +    /* Low lovel OS interface */
> >     xc_osdep_info_t  osdep;
> >     xc_osdep_ops    *ops; /* backend operations */
> >     xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
> > @@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
> >  #define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
> >  void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
> >  #define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
> > +
> > +/*
> > + * Release hypercall buffer cache
> > + */
> > +void xc__hypercall_buffer_cache_release(xc_interface *xch);
> >
> >  /*
> >  * Hypercall interfaces.
> >
> >
> >

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-31  8:30             ` Ian Campbell
@ 2011-01-31  8:57               ` Haitao Shan
  2011-01-31  9:32                 ` Ian Campbell
  0 siblings, 1 reply; 32+ messages in thread
From: Haitao Shan @ 2011-01-31  8:57 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 11740 bytes --]

For example:

The patch:

 void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
 {
-    void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
+    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);

-    if (!p)
+    if ( !p )
+        p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch, xch->ops_handle, nr_pages);
+
+    if ( !p )
         return NULL;

My code at hand:

void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
{
    size_t size = nr_pages * PAGE_SIZE;
    void *p;
#if defined(_POSIX_C_SOURCE) && !defined(__sun__)
    int ret;
    ret = posix_memalign(&p, PAGE_SIZE, size);
    if (ret != 0)
        return NULL;
#elif defined(__NetBSD__) || defined(__OpenBSD__)
    p = valloc(size);
#else
    p = memalign(PAGE_SIZE, size);
#endif

    if (!p)
        return NULL;

#ifndef __sun__
    if ( mlock(p, size) < 0 )
    {
        free(p);
        return NULL;
    }
#endif

    b->hbuf = p;

    memset(p, 0, size);
    return b->hbuf;
}

And BTW: I am using c/s 22846.

Shan Haitao

2011/1/31 Ian Campbell <Ian.Campbell@eu.citrix.com>

> On Mon, 2011-01-31 at 03:06 +0000, Haitao Shan wrote:
> > Hi, Ian,
> >
> > I can not apply your patch. Is this patch developed against more newer
> > version of Xen than what I can have at hand?
>
> There was a typo below (an xc_hypercall_buffer_cache_release which
> should have been xc__hypercall_buffer_cache_release) but it sounds like
> you have trouble with actually applying the patch?
>
> It was against a recent xen-unstable on the day I posted it. What were
> the rejects you saw?
>
> If you have a particular version you want to test against I can generate
> a suitable patch for you.
>
> Ian.
>
> >
> > Shan Haitao
> >
> > 2011/1/26 Ian Campbell <Ian.Campbell@eu.citrix.com>:
> > > On Wed, 2011-01-26 at 00:47 +0000, Haitao Shan wrote:
> > >> I think it is basically the same idea as Keir introduced in 20841. I
> > >> guess this bug would happen on platforms which has large number of
> > >> physical CPUs, not only on EX system of Intel.
> > >> If you can cook the patch, that would be great! Thanks!!
> > >
> > > Does this help?
> > >
> > > Ian.
> > >
> > > 8<---------------------------------------
> > >
> > > # HG changeset patch
> > > # User Ian Campbell <ian.campbell@citrix.com>
> > > # Date 1296038761 0
> > > # Node ID 8b8b7e024f9d6f4c2ce1a4efbf38f07eeb460d91
> > > # Parent  e4e69622dc95037eab6740f79ecf9c1d05bca529
> > > libxc: maintain a small, per-handle, cache of hypercall buffer memory
> > >
> > > Constantly m(un)locking memory can have significant overhead on
> > > systems with large numbers of CPUs. This was previously fixed by
> > > 20841:fbe8f32fa257 but this was dropped during the transition to
> > > hypercall buffers.
> > >
> > > Introduce a small cache of single page hypercall buffer allocations
> > > which can be reused to avoid this overhead.
> > >
> > > Add some statistics tracking to the hypercall buffer allocations.
> > >
> > > The cache size of 4 was chosen based on these statistics since they
> > > indicated that 2 pages was sufficient to satisfy all concurrent single
> > > page hypercall buffer allocations seen during "xl create", "xl
> > > shutdown" and "xl destroy" of both a PV and HVM guest therefore 4
> > > pages should cover the majority of important cases.
> > >
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > >
> > > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_hcall_buf.c
> > > --- a/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:22:42 2011 +0000
> > > +++ b/tools/libxc/xc_hcall_buf.c        Wed Jan 26 10:46:01 2011 +0000
> > > @@ -18,6 +18,7 @@
> > >
> > >  #include <stdlib.h>
> > >  #include <string.h>
> > > +#include <pthread.h>
> > >
> > >  #include "xc_private.h"
> > >  #include "xg_private.h"
> > > @@ -28,11 +29,108 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
> > >     HYPERCALL_BUFFER_INIT_NO_BOUNCE
> > >  };
> > >
> > > +pthread_mutex_t hypercall_buffer_cache_mutex =
> PTHREAD_MUTEX_INITIALIZER;
> > > +
> > > +static void hypercall_buffer_cache_lock(xc_interface *xch)
> > > +{
> > > +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> > > +        return;
> > > +    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
> > > +}
> > > +
> > > +static void hypercall_buffer_cache_unlock(xc_interface *xch)
> > > +{
> > > +    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
> > > +        return;
> > > +    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
> > > +}
> > > +
> > > +static void *hypercall_buffer_cache_alloc(xc_interface *xch, int
> nr_pages)
> > > +{
> > > +    void *p = NULL;
> > > +
> > > +    hypercall_buffer_cache_lock(xch);
> > > +
> > > +    xch->hypercall_buffer_total_allocations++;
> > > +    xch->hypercall_buffer_current_allocations++;
> > > +    if ( xch->hypercall_buffer_current_allocations >
> xch->hypercall_buffer_maximum_allocations )
> > > +        xch->hypercall_buffer_maximum_allocations =
> xch->hypercall_buffer_current_allocations;
> > > +
> > > +    if ( nr_pages > 1 )
> > > +    {
> > > +        xch->hypercall_buffer_cache_toobig++;
> > > +    }
> > > +    else if ( xch->hypercall_buffer_cache_nr > 0 )
> > > +    {
> > > +        p =
> xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> > > +        xch->hypercall_buffer_cache_hits++;
> > > +    }
> > > +    else
> > > +    {
> > > +        xch->hypercall_buffer_cache_misses++;
> > > +    }
> > > +
> > > +    hypercall_buffer_cache_unlock(xch);
> > > +
> > > +    return p;
> > > +}
> > > +
> > > +static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int
> nr_pages)
> > > +{
> > > +    int rc = 0;
> > > +
> > > +    hypercall_buffer_cache_lock(xch);
> > > +
> > > +    xch->hypercall_buffer_total_releases++;
> > > +    xch->hypercall_buffer_current_allocations--;
> > > +
> > > +    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr <
> HYPERCALL_BUFFER_CACHE_SIZE )
> > > +    {
> > > +        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++]
> = p;
> > > +        rc = 1;
> > > +    }
> > > +
> > > +    hypercall_buffer_cache_unlock(xch);
> > > +
> > > +    return rc;
> > > +}
> > > +
> > > +void xc__hypercall_buffer_cache_release(xc_interface *xch)
> > > +{
> > > +    void *p;
> > > +
> > > +    hypercall_buffer_cache_lock(xch);
> > > +
> > > +    DBGPRINTF("hypercall buffer: total allocations:%d total
> releases:%d",
> > > +              xch->hypercall_buffer_total_allocations,
> > > +              xch->hypercall_buffer_total_releases);
> > > +    DBGPRINTF("hypercall buffer: current allocations:%d maximum
> allocations:%d",
> > > +              xch->hypercall_buffer_current_allocations,
> > > +              xch->hypercall_buffer_maximum_allocations);
> > > +    DBGPRINTF("hypercall buffer: cache current size:%d",
> > > +              xch->hypercall_buffer_cache_nr);
> > > +    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
> > > +              xch->hypercall_buffer_cache_hits,
> > > +              xch->hypercall_buffer_cache_misses,
> > > +              xch->hypercall_buffer_cache_toobig);
> > > +
> > > +    while ( xch->hypercall_buffer_cache_nr > 0 )
> > > +    {
> > > +        p =
> xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
> > > +        xch->ops->u.privcmd.free_hypercall_buffer(xch,
> xch->ops_handle, p, 1);
> > > +    }
> > > +
> > > +    hypercall_buffer_cache_unlock(xch);
> > > +}
> > > +
> > >  void *xc__hypercall_buffer_alloc_pages(xc_interface *xch,
> xc_hypercall_buffer_t *b, int nr_pages)
> > >  {
> > > -    void *p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch,
> xch->ops_handle, nr_pages);
> > > +    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
> > >
> > > -    if (!p)
> > > +    if ( !p )
> > > +        p = xch->ops->u.privcmd.alloc_hypercall_buffer(xch,
> xch->ops_handle, nr_pages);
> > > +
> > > +    if ( !p )
> > >         return NULL;
> > >
> > >     b->hbuf = p;
> > > @@ -47,7 +145,8 @@ void xc__hypercall_buffer_free_pages(xc_
> > >     if ( b->hbuf == NULL )
> > >         return;
> > >
> > > -    xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle,
> b->hbuf, nr_pages);
> > > +    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
> > > +        xch->ops->u.privcmd.free_hypercall_buffer(xch,
> xch->ops_handle, b->hbuf, nr_pages);
> > >  }
> > >
> > >  struct allocation_header {
> > > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.c
> > > --- a/tools/libxc/xc_private.c  Wed Jan 26 10:22:42 2011 +0000
> > > +++ b/tools/libxc/xc_private.c  Wed Jan 26 10:46:01 2011 +0000
> > > @@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
> > >     xch->error_handler   = logger;           xch->error_handler_tofree
>   = 0;
> > >     xch->dombuild_logger = dombuild_logger;
>  xch->dombuild_logger_tofree = 0;
> > >
> > > +    xch->hypercall_buffer_cache_nr = 0;
> > > +
> > > +    xch->hypercall_buffer_total_allocations = 0;
> > > +    xch->hypercall_buffer_total_releases = 0;
> > > +    xch->hypercall_buffer_current_allocations = 0;
> > > +    xch->hypercall_buffer_maximum_allocations = 0;
> > > +    xch->hypercall_buffer_cache_hits = 0;
> > > +    xch->hypercall_buffer_cache_misses = 0;
> > > +    xch->hypercall_buffer_cache_toobig = 0;
> > > +
> > >     xch->ops_handle = XC_OSDEP_OPEN_ERROR;
> > >     xch->ops = NULL;
> > >
> > > @@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
> > >  static int xc_interface_close_common(xc_interface *xch)
> > >  {
> > >     int rc = 0;
> > > +
> > > +    xc_hypercall_buffer_cache_release(xch);
> > >
> > >     xtl_logger_destroy(xch->dombuild_logger_tofree);
> > >     xtl_logger_destroy(xch->error_handler_tofree);
> > > diff -r e4e69622dc95 -r 8b8b7e024f9d tools/libxc/xc_private.h
> > > --- a/tools/libxc/xc_private.h  Wed Jan 26 10:22:42 2011 +0000
> > > +++ b/tools/libxc/xc_private.h  Wed Jan 26 10:46:01 2011 +0000
> > > @@ -75,6 +75,28 @@ struct xc_interface_core {
> > >     FILE *dombuild_logger_file;
> > >     const char *currently_progress_reporting;
> > >
> > > +    /*
> > > +     * A simple cache of unused, single page, hypercall buffers
> > > +     *
> > > +     * Protected by a global lock.
> > > +     */
> > > +#define HYPERCALL_BUFFER_CACHE_SIZE 4
> > > +    int hypercall_buffer_cache_nr;
> > > +    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
> > > +
> > > +    /*
> > > +     * Hypercall buffer statistics. All protected by the global
> > > +     * hypercall_buffer_cache lock.
> > > +     */
> > > +    int hypercall_buffer_total_allocations;
> > > +    int hypercall_buffer_total_releases;
> > > +    int hypercall_buffer_current_allocations;
> > > +    int hypercall_buffer_maximum_allocations;
> > > +    int hypercall_buffer_cache_hits;
> > > +    int hypercall_buffer_cache_misses;
> > > +    int hypercall_buffer_cache_toobig;
> > > +
> > > +    /* Low lovel OS interface */
> > >     xc_osdep_info_t  osdep;
> > >     xc_osdep_ops    *ops; /* backend operations */
> > >     xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
> > > @@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
> > >  #define xc_hypercall_bounce_pre(_xch, _name)
> xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
> > >  void xc__hypercall_bounce_post(xc_interface *xch,
> xc_hypercall_buffer_t *bounce);
> > >  #define xc_hypercall_bounce_post(_xch, _name)
> xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
> > > +
> > > +/*
> > > + * Release hypercall buffer cache
> > > + */
> > > +void xc__hypercall_buffer_cache_release(xc_interface *xch);
> > >
> > >  /*
> > >  * Hypercall interfaces.
> > >
> > >
> > >
>
>
>

[-- Attachment #1.2: Type: text/html, Size: 18930 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-31  8:57               ` Haitao Shan
@ 2011-01-31  9:32                 ` Ian Campbell
  2011-01-31 12:07                   ` Haitao Shan
  0 siblings, 1 reply; 32+ messages in thread
From: Ian Campbell @ 2011-01-31  9:32 UTC (permalink / raw)
  To: Haitao Shan; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel

On Mon, 2011-01-31 at 08:57 +0000, Haitao Shan wrote:
> And BTW: I am using c/s 22846. 

Sorry, I didn't notice that a patch which I was holding off for 4.2 was
sitting in my queue before this one.

Version rebased onto 22846:52e928af3637 below.

Ian.

8<---------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1296466278 0
# Node ID 9e6175469e6f246ee9370ef57f987bb435b00bef
# Parent  5b6663ba2bb2c54e8fa6745afa16297ebe43328d
libxc: maintain a small, per-handle, cache of hypercall buffer memory

Constantly m(un)locking memory can have significant overhead on
systems with large numbers of CPUs. This was previously fixed by
20841:fbe8f32fa257, but that fix was dropped during the transition to
hypercall buffers.

Introduce a small cache of single-page hypercall buffer allocations
which can be reused to avoid this overhead.

Add some statistics tracking to the hypercall buffer allocations.

The cache size of 4 was chosen based on these statistics: they
indicated that 2 pages were sufficient to satisfy all concurrent
single-page hypercall buffer allocations seen during "xl create", "xl
shutdown" and "xl destroy" of both a PV and an HVM guest, so 4 pages
should cover the majority of important cases.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 5b6663ba2bb2 -r 9e6175469e6f tools/libxc/xc_hcall_buf.c
--- a/tools/libxc/xc_hcall_buf.c	Mon Jan 31 09:14:52 2011 +0000
+++ b/tools/libxc/xc_hcall_buf.c	Mon Jan 31 09:31:18 2011 +0000
@@ -18,6 +18,7 @@
 
 #include <stdlib.h>
 #include <malloc.h>
+#include <pthread.h>
 
 #include "xc_private.h"
 #include "xg_private.h"
@@ -28,31 +29,137 @@ xc_hypercall_buffer_t XC__HYPERCALL_BUFF
     HYPERCALL_BUFFER_INIT_NO_BOUNCE
 };
 
+pthread_mutex_t hypercall_buffer_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+static void hypercall_buffer_cache_lock(xc_interface *xch)
+{
+    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
+        return;
+    pthread_mutex_lock(&hypercall_buffer_cache_mutex);
+}
+
+static void hypercall_buffer_cache_unlock(xc_interface *xch)
+{
+    if ( xch->flags & XC_OPENFLAG_NON_REENTRANT )
+        return;
+    pthread_mutex_unlock(&hypercall_buffer_cache_mutex);
+}
+
+static void *hypercall_buffer_cache_alloc(xc_interface *xch, int nr_pages)
+{
+    void *p = NULL;
+
+    hypercall_buffer_cache_lock(xch);
+
+    xch->hypercall_buffer_total_allocations++;
+    xch->hypercall_buffer_current_allocations++;
+    if ( xch->hypercall_buffer_current_allocations > xch->hypercall_buffer_maximum_allocations )
+        xch->hypercall_buffer_maximum_allocations = xch->hypercall_buffer_current_allocations;
+
+    if ( nr_pages > 1 )
+    {
+        xch->hypercall_buffer_cache_toobig++;
+    }
+    else if ( xch->hypercall_buffer_cache_nr > 0 )
+    {
+        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
+        xch->hypercall_buffer_cache_hits++;
+    }
+    else
+    {
+        xch->hypercall_buffer_cache_misses++;
+    }
+
+    hypercall_buffer_cache_unlock(xch);
+
+    return p;
+}
+
+static int hypercall_buffer_cache_free(xc_interface *xch, void *p, int nr_pages)
+{
+    int rc = 0;
+
+    hypercall_buffer_cache_lock(xch);
+
+    xch->hypercall_buffer_total_releases++;
+    xch->hypercall_buffer_current_allocations--;
+
+    if ( nr_pages == 1 && xch->hypercall_buffer_cache_nr < HYPERCALL_BUFFER_CACHE_SIZE )
+    {
+        xch->hypercall_buffer_cache[xch->hypercall_buffer_cache_nr++] = p;
+        rc = 1;
+    }
+
+    hypercall_buffer_cache_unlock(xch);
+
+    return rc;
+}
+
+static void do_hypercall_buffer_free_pages(void *ptr, int nr_pages)
+{
+#ifndef __sun__
+    (void) munlock(ptr, nr_pages * PAGE_SIZE);
+#endif
+
+    free(ptr);
+}
+
+void xc__hypercall_buffer_cache_release(xc_interface *xch)
+{
+    void *p;
+
+    hypercall_buffer_cache_lock(xch);
+
+    DBGPRINTF("hypercall buffer: total allocations:%d total releases:%d",
+              xch->hypercall_buffer_total_allocations,
+              xch->hypercall_buffer_total_releases);
+    DBGPRINTF("hypercall buffer: current allocations:%d maximum allocations:%d",
+              xch->hypercall_buffer_current_allocations,
+              xch->hypercall_buffer_maximum_allocations);
+    DBGPRINTF("hypercall buffer: cache current size:%d",
+              xch->hypercall_buffer_cache_nr);
+    DBGPRINTF("hypercall buffer: cache hits:%d misses:%d toobig:%d",
+              xch->hypercall_buffer_cache_hits,
+              xch->hypercall_buffer_cache_misses,
+              xch->hypercall_buffer_cache_toobig);
+
+    while ( xch->hypercall_buffer_cache_nr > 0 )
+    {
+        p = xch->hypercall_buffer_cache[--xch->hypercall_buffer_cache_nr];
+        do_hypercall_buffer_free_pages(p, 1);
+    }
+
+    hypercall_buffer_cache_unlock(xch);
+}
+
 void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
 {
     size_t size = nr_pages * PAGE_SIZE;
-    void *p;
+    void *p = hypercall_buffer_cache_alloc(xch, nr_pages);
+
+    if ( !p ) {
 #if defined(_POSIX_C_SOURCE) && !defined(__sun__)
-    int ret;
-    ret = posix_memalign(&p, PAGE_SIZE, size);
-    if (ret != 0)
-        return NULL;
+        int ret;
+        ret = posix_memalign(&p, PAGE_SIZE, size);
+        if (ret != 0)
+            return NULL;
 #elif defined(__NetBSD__) || defined(__OpenBSD__)
-    p = valloc(size);
+        p = valloc(size);
 #else
-    p = memalign(PAGE_SIZE, size);
+        p = memalign(PAGE_SIZE, size);
 #endif
 
-    if (!p)
-        return NULL;
+        if (!p)
+            return NULL;
 
 #ifndef __sun__
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
+        if ( mlock(p, size) < 0 )
+        {
+            free(p);
+            return NULL;
+        }
+#endif
     }
-#endif
 
     b->hbuf = p;
 
@@ -65,11 +172,8 @@ void xc__hypercall_buffer_free_pages(xc_
     if ( b->hbuf == NULL )
         return;
 
-#ifndef __sun__
-    (void) munlock(b->hbuf, nr_pages * PAGE_SIZE);
-#endif
-
-    free(b->hbuf);
+    if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
+        do_hypercall_buffer_free_pages(b->hbuf, nr_pages);
 }
 
 struct allocation_header {
diff -r 5b6663ba2bb2 -r 9e6175469e6f tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Mon Jan 31 09:14:52 2011 +0000
+++ b/tools/libxc/xc_private.c	Mon Jan 31 09:31:18 2011 +0000
@@ -126,6 +126,16 @@ static struct xc_interface_core *xc_inte
     xch->error_handler   = logger;           xch->error_handler_tofree   = 0;
     xch->dombuild_logger = dombuild_logger;  xch->dombuild_logger_tofree = 0;
 
+    xch->hypercall_buffer_cache_nr = 0;
+
+    xch->hypercall_buffer_total_allocations = 0;
+    xch->hypercall_buffer_total_releases = 0;
+    xch->hypercall_buffer_current_allocations = 0;
+    xch->hypercall_buffer_maximum_allocations = 0;
+    xch->hypercall_buffer_cache_hits = 0;
+    xch->hypercall_buffer_cache_misses = 0;
+    xch->hypercall_buffer_cache_toobig = 0;
+
     xch->ops_handle = XC_OSDEP_OPEN_ERROR;
     xch->ops = NULL;
 
@@ -171,6 +181,8 @@ static int xc_interface_close_common(xc_
 static int xc_interface_close_common(xc_interface *xch)
 {
     int rc = 0;
+
+    xc__hypercall_buffer_cache_release(xch);
 
     xtl_logger_destroy(xch->dombuild_logger_tofree);
     xtl_logger_destroy(xch->error_handler_tofree);
diff -r 5b6663ba2bb2 -r 9e6175469e6f tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Jan 31 09:14:52 2011 +0000
+++ b/tools/libxc/xc_private.h	Mon Jan 31 09:31:18 2011 +0000
@@ -75,6 +75,28 @@ struct xc_interface_core {
     FILE *dombuild_logger_file;
     const char *currently_progress_reporting;
 
+    /*
+     * A simple cache of unused, single page, hypercall buffers
+     *
+     * Protected by a global lock.
+     */
+#define HYPERCALL_BUFFER_CACHE_SIZE 4
+    int hypercall_buffer_cache_nr;
+    void *hypercall_buffer_cache[HYPERCALL_BUFFER_CACHE_SIZE];
+
+    /*
+     * Hypercall buffer statistics. All protected by the global
+     * hypercall_buffer_cache lock.
+     */
+    int hypercall_buffer_total_allocations;
+    int hypercall_buffer_total_releases;
+    int hypercall_buffer_current_allocations;
+    int hypercall_buffer_maximum_allocations;
+    int hypercall_buffer_cache_hits;
+    int hypercall_buffer_cache_misses;
+    int hypercall_buffer_cache_toobig;
+
+    /* Low level OS interface */
     xc_osdep_info_t  osdep;
     xc_osdep_ops    *ops; /* backend operations */
     xc_osdep_handle  ops_handle; /* opaque data for xc_osdep_ops */
@@ -156,6 +178,11 @@ int xc__hypercall_bounce_pre(xc_interfac
 #define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
 void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
 #define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
+
+/*
+ * Release hypercall buffer cache
+ */
+void xc__hypercall_buffer_cache_release(xc_interface *xch);
 
 /*
  * Hypercall interfaces.

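For context, the case this cache targets is a caller that repeatedly
allocates and frees a single-page hypercall buffer around individual
hypercalls. The fragment below is only an illustrative sketch, assuming
the DECLARE_HYPERCALL_BUFFER and xc_hypercall_buffer_{alloc,free}_pages
macros exported by xenctrl.h; with the patch applied, every iteration
after the first should hit the cache instead of paying for a fresh
memalign()/mlock() and munlock()/free() pair.

#include <stdint.h>
#include <xenctrl.h>

/* Illustrative only -- macro names assumed from xenctrl.h.  Exercises
 * the single-page allocation path that the per-handle cache intercepts. */
static void hammer_single_page_buffers(xc_interface *xch)
{
    int i;

    for ( i = 0; i < 1000; i++ )
    {
        DECLARE_HYPERCALL_BUFFER(uint8_t, buf);

        /* First iteration falls through to memalign()+mlock(); later
         * iterations should be satisfied from the cache. */
        buf = xc_hypercall_buffer_alloc_pages(xch, buf, 1);
        if ( buf == NULL )
            break;

        /* ... issue whatever hypercall consumes the buffer ... */

        /* Single-page buffers are parked in the cache instead of being
         * munlock()ed and freed immediately. */
        xc_hypercall_buffer_free_pages(xch, buf, 1);
    }
}

Note that the cache lives in the xc_interface handle itself, so it is
drained (and its statistics logged) when xc_interface_close() tears the
handle down.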

* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-31  9:32                 ` Ian Campbell
@ 2011-01-31 12:07                   ` Haitao Shan
  2011-02-01  4:40                     ` Haitao Shan
  0 siblings, 1 reply; 32+ messages in thread
From: Haitao Shan @ 2011-01-31 12:07 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel



Thanks, Ian.
I will test tomorrow and let you know the result.

Shan Haitao


* Re: libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report)
  2011-01-31 12:07                   ` Haitao Shan
@ 2011-02-01  4:40                     ` Haitao Shan
  0 siblings, 0 replies; 32+ messages in thread
From: Haitao Shan @ 2011-02-01  4:40 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Zheng, Shaohui, Keir Fraser, xen-devel



I have tested the patch; it fixes the bug. Thanks!

So, Ian, please go ahead with submitting it.
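
For reference, one way to see the new statistics while testing is to
open the libxc handle with a debug-level stdio logger, so that the
DBGPRINTF lines printed by xc__hypercall_buffer_cache_release() appear
when the handle is closed. The sketch below is only illustrative: it
assumes the xtl_createlogger_stdiostream() helper from xentoollog.h and
that libxc reports DBGPRINTF output at debug level.

#include <stdio.h>
#include <xenctrl.h>
#include <xentoollog.h>

int main(void)
{
    xc_dominfo_t info;
    xentoollog_logger_stdiostream *lg;
    xc_interface *xch;

    /* Debug-level logger so the statistics logged at close time are
     * not filtered out (assumes DBGPRINTF maps to XTL_DEBUG). */
    lg = xtl_createlogger_stdiostream(stderr, XTL_DEBUG, 0);
    if ( lg == NULL )
        return 1;

    xch = xc_interface_open((xentoollog_logger *)lg, NULL, 0);
    if ( xch == NULL )
        return 1;

    /* Any call that bounces data through a hypercall buffer will bump
     * the counters; domain 0 always exists. */
    xc_domain_getinfo(xch, 0, 1, &info);

    /* Expect "hypercall buffer: cache hits:... misses:... toobig:..."
     * in the debug output just before the cached pages are released. */
    xc_interface_close(xch);
    xtl_logger_destroy((xentoollog_logger *)lg);

    return 0;
}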

Shan Haitao



Thread overview: 32+ messages
     [not found] <Acu6GEBstpnTfIH/TdeQZvf0FjUZ0Q==>
2011-01-22  9:39 ` Xen 4.1 rc1 test report Zheng, Shaohui
2011-01-24 17:05   ` Xen 4.1 rc1 test report (xl bits) Gianni Tedesco
2011-01-24 17:27     ` Christoph Egger
2011-01-24 17:43       ` [PATCH]: xl: Check a domain exists before destroying it Gianni Tedesco
2011-01-24 18:18         ` [PATCH, v2]: xl: Check domain existance when doing domain identifier lookups Gianni Tedesco
2011-01-24 18:39           ` Stefano Stabellini
2011-01-24 18:53             ` Gianni Tedesco
2011-01-24 19:05               ` Stefano Stabellini
2011-01-25 17:29               ` Ian Jackson
2011-01-25 17:28           ` Ian Jackson
2011-01-25 17:35             ` Gianni Tedesco
2011-01-25 18:28               ` Ian Jackson
2011-01-25 17:07         ` [PATCH]: xl: Check a domain exists before destroying it Ian Jackson
2011-01-25 17:17           ` Gianni Tedesco
2011-01-25 18:25             ` Ian Jackson
2011-01-24 18:35     ` Xen 4.1 rc1 test report (xl bits) Stefano Stabellini
2011-01-25 14:04     ` Gianni Tedesco
2011-01-26  3:47       ` Zhang, Yang Z
2011-01-25  6:24   ` Xen 4.1 rc1 test report Haitao Shan
2011-01-25  8:00     ` Zheng, Shaohui
2011-01-25  8:43     ` Keir Fraser
2011-01-25 11:43     ` Ian Campbell
2011-01-26  0:47       ` Haitao Shan
2011-01-26 10:57         ` libxc: maintain a small, per-handle, cache of hypercall buffer memory (Was: Re: Xen 4.1 rc1 test report) Ian Campbell
2011-01-27  9:47           ` Ian Campbell
2011-01-31  0:58             ` Haitao Shan
2011-01-31  3:06           ` Haitao Shan
2011-01-31  8:30             ` Ian Campbell
2011-01-31  8:57               ` Haitao Shan
2011-01-31  9:32                 ` Ian Campbell
2011-01-31 12:07                   ` Haitao Shan
2011-02-01  4:40                     ` Haitao Shan
