* How to use valgrind to detect xen hypervisor's memory leak
@ 2011-07-15  2:42 hellokitty
  2011-07-15  7:19 ` Ian Campbell
  0 siblings, 1 reply; 13+ messages in thread
From: hellokitty @ 2011-07-15  2:42 UTC (permalink / raw)
  To: xen-devel

Hi all,
      Thanks to Ian Campbell. I am a student, and I am currently doing research
on using valgrind to detect memory leaks in the Xen hypervisor.
      My procedure is as follows:
      First, I apply the patch
http://xen.1045712.n5.nabble.com/PATCHv2-valgrind-support-for-Xen-privcmd-ioctls-hypercalls-tc2640861.html#a4568310
so that valgrind supports the Xen privcmd ioctls/hypercalls.
      Second, I compile the valgrind source code (with the above patch
applied).
      Third, I use the valgrind tool to detect the hypervisor's memory leaks.

      The issues I currently have are as follows:
	1. I am now at the second step, and I run the following commands to compile
the source code:
            1)     ./configure --with-xen=/usr/include/xen/
            2)     make && make install
            Here I encounter the error messages below. How do I fix this?
           
--------------------------------------------------------------------------------------
		echo "# This is a generated file, composed of the following suppression
		rules:" > default.supp
		echo "# " exp-ptrcheck.supp xfree-3.supp xfree-4.supp glibc-2.X-drd.supp
		glibc-2.34567-NPTL-helgrind.supp glibc-2.X.supp  >> default.supp
		cat exp-ptrcheck.supp xfree-3.supp xfree-4.supp glibc-2.X-drd.supp
		glibc-2.34567-NPTL-helgrind.supp glibc-2.X.supp  >> default.supp
		make  all-recursive
		make[1]: Entering directory `/home/popo/valgrind-3.6.1'
		Making all in include
		make[2]: Entering directory `/home/popo/valgrind-3.6.1/include'
		make[2]: Nothing to be done for `all'.
		make[2]: Leaving directory `/home/popo/valgrind-3.6.1/include'
		Making all in VEX
		make[2]: Entering directory `/home/popo/valgrind-3.6.1/VEX'
		make  all-am
		make[3]: Entering directory `/home/popo/valgrind-3.6.1/VEX'
		make[3]: Nothing to be done for `all-am'.
		make[3]: Leaving directory `/home/popo/valgrind-3.6.1/VEX'
		make[2]: Leaving directory `/home/popo/valgrind-3.6.1/VEX'
		Making all in coregrind
		make[2]: Entering directory `/home/popo/valgrind-3.6.1/coregrind'
		make  all-am
		make[3]: Entering directory `/home/popo/valgrind-3.6.1/coregrind'
		gcc -DHAVE_CONFIG_H -I. -I..  -I.. -I../include -I../VEX/pub -DVGA_x86=1
		-DVGO_linux=1 -DVGP_x86_linux=1 -I../coregrind
		-DVG_LIBDIR="\"/usr/local/lib/valgrind"\" -DVG_PLATFORM="\"x86-linux\"" 
		-m32 -mpreferred-stack-boundary=2 -O2 -g -Wall -Wmissing-prototypes
-Wshadow
		-Wpointer-arith -Wstrict-prototypes -Wmissing-declarations
		-Wno-format-zero-length -fno-strict-aliasing @XEN_CFLAGS@ -Wno-long-long 
		-Wno-pointer-sign -fno-stack-protector -MT
		libcoregrind_x86_linux_a-m_debuglog.o -MD -MP -MF
		.deps/libcoregrind_x86_linux_a-m_debuglog.Tpo -c -o
		libcoregrind_x86_linux_a-m_debuglog.o `test -f 'm_debuglog.c' || echo
'./'`m_debuglog.c
		gcc: @XEN_CFLAGS@doesn't exist the file or directory
		make[3]: *** [libcoregrind_x86_linux_a-m_debuglog.o] error 1
		make[3]: Leaving directory `/home/popo/valgrind-3.6.1/coregrind'
		make[2]: *** [all] error 2
		make[2]: Leaving directory `/home/popo/valgrind-3.6.1/coregrind'
		make[1]: *** [all-recursive] error 1
		make[1]: Leaving directory `/home/popo/valgrind-3.6.1'
		make: *** [all] error 2
           
--------------------------------------------------------------------------------------
 	2. Suppose I reach the third step; is it correct to use the following
command to do the detection?
        "valgrind --tool=memcheck --leak-check=yes ./xen" (here xen is the
binary built from the xen-4.0.1 source code)
        3. Has anyone here detected memory leaks in the Xen hypervisor
before? If someone already has valgrind support for the hypervisor working
well, could you send it to me?

 Thank You & Best Wishes!

--
View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4589174.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15  2:42 How to use valgrind to detect xen hypervisor's memory leak hellokitty
@ 2011-07-15  7:19 ` Ian Campbell
  2011-07-15  9:15   ` hellokitty
  0 siblings, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2011-07-15  7:19 UTC (permalink / raw)
  To: hellokitty; +Cc: xen-devel

On Fri, 2011-07-15 at 03:42 +0100, hellokitty wrote:
> Hi all,
>       Thanks to Ian Campbell. I am a student, and I am currently doing research
> on using valgrind to detect memory leaks in the Xen hypervisor.
>       My procedure is as follows:
>       First, I apply the patch
> http://xen.1045712.n5.nabble.com/PATCHv2-valgrind-support-for-Xen-privcmd-ioctls-hypercalls-tc2640861.html#a4568310
> so that valgrind supports the Xen privcmd ioctls/hypercalls.
>       Second, I compile the valgrind source code (with the above patch
> applied).

Please post your modified version of the patch.

>       Third, I use the valgrind tool to detect the hypervisor's memory leaks.
>       
>       And the issues now i have are as follows:
> 	1. I am now at the second step, and I run the following commands to compile
> the source code:

You seem to have missed step 0) which is to regenerate configure and
Makefile.* using autoconf/automake as I described in a previous mail.
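
Concretely, from the top of the patched valgrind source tree, that is
something along the lines of:
        aclocal && autoheader && automake -a && autoconf
and then re-running ./configure and make.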

>             1)     ./configure --with-xen=/usr/include/xen/
>             2)     make && make install
>             Here I encounter the error messages below. How do I fix this?
>            
> --------------------------------------------------------------------------------------
[..]
> 		gcc: @XEN_CFLAGS@doesn't exist the file or directory

The configure script should have substituted this out, but if you didn't
regenerate it after applying the patch then it won't know to do this.
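
A quick way to check whether the substitution has happened is, for example:
        grep XEN_CFLAGS coregrind/Makefile
If that still shows a literal @XEN_CFLAGS@ then configure does not know about
the macro; once the build system has been regenerated from the patched
configure.in and Makefile.am, the marker should be replaced by the actual -I
flag you passed to --with-xen.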

> [...]

> --------------------------------------------------------------------------------------
>  	2. Suppose I reach the third step; is it correct to use the following
> command to do the detection?
>         "valgrind --tool=memcheck --leak-check=yes ./xen" (here xen is the
> binary built from the xen-4.0.1 source code)

Wait, are you trying to use valgrind on the hypervisor itself?

The Xen hypervisor is an "Operating System" and runs on bare metal --
you can't run it as a process under Linux and therefore you cannot run a
tool like valgrind on it.

The valgrind support in my patch is useful for debugging the Xen
toolstack (e.g. "xl"), but not the hypervisor itself.

>         3. Has anyone here detected memory leaks in the Xen hypervisor
> before? If someone already has valgrind support for the hypervisor working
> well, could you send it to me?

Either Valgrind or Xen (or both) does not work the way you seem to think it
does. I think you should consult your advisor before trying to progress
this approach any further.

You might want to investigate the Linux kernel's "kmemleak" functionality; I
suppose something like that could be ported to Xen (although I expect it
would be a non-trivial amount of work).
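
For reference, on Linux kmemleak looks roughly like this (assuming a kernel
built with CONFIG_DEBUG_KMEMLEAK=y and debugfs available):
        mount -t debugfs none /sys/kernel/debug    # if not already mounted
        echo scan > /sys/kernel/debug/kmemleak     # trigger a scan
        cat /sys/kernel/debug/kmemleak             # list suspected leaks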

Ian.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15  7:19 ` Ian Campbell
@ 2011-07-15  9:15   ` hellokitty
  2011-07-15  9:43     ` Ian Campbell
  0 siblings, 1 reply; 13+ messages in thread
From: hellokitty @ 2011-07-15  9:15 UTC (permalink / raw)
  To: xen-devel

> Please post your modified version of the patch. 
My patch is in the attached file named xen_patch.patch:
http://xen.1045712.n5.nabble.com/file/n4589993/xen_patch.patch

> You seem to have missed step 0) which is to regenerate configure and
> Makefile.* using autoconf/automake as I described in a previous mail. 
Oh, before I ran
                  1)     ./configure --with-xen=/usr/include/xen/     &&  
                  2)     make && make install 
I had already run the automake/autoconf tools, and the errors are still the
same as what I got before. So how do I fix this?

> Wait, are you trying to use valgrind on the hypervisor itself?
> The Xen hypervisor is an "Operating System" and runs on bare metal --
> you can't run it as a process under Linux and therefore you cannot run a
> tool like valgrind on it.

> The valgrind support in my patch is useful for debugging the Xen
> toolstack (e.g. "xl"), but not the hypervisor itself.

Oh, I know that; maybe I didn't express myself correctly. Now it is
clearer to me. Yes, what I want to detect are memory leaks in the Xen
toolstack, not in the hypervisor itself.

So, suppose I reach the third step; is it correct to use the following
command to do the detection?
"valgrind --tool=memcheck --leak-check=yes xm"


And also, if someone already has valgrind support for the hypervisor working
well, could you send it to me?

Thank you and best wishes.


--
View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4589993.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15  9:15   ` hellokitty
@ 2011-07-15  9:43     ` Ian Campbell
  2011-07-15 13:39       ` hellokitty
  0 siblings, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2011-07-15  9:43 UTC (permalink / raw)
  To: hellokitty; +Cc: xen-devel

On Fri, 2011-07-15 at 10:15 +0100, hellokitty wrote:
> > Please post your modified version of the patch.
> My patch is in the attached file named xen_patch.patch:
> http://xen.1045712.n5.nabble.com/file/n4589993/xen_patch.patch
> 
> > You seem to have missed step 0) which is to regenerate configure and
> > Makefile.* using autoconf/automake as I described in a previous mail. 
> Oh, before I ran
>                   1)     ./configure --with-xen=/usr/include/xen/     &&  
>                   2)     make && make install 
> I had already run the automake/autoconf tools, and the errors are still the
> same as what I got before. So how do I fix this?

I'm not sure to be honest. It would help if you would post the actual
patch you are using (including your modifications etc). Also which
specific baseline valgrind are you using?

I'm afraid you will most likely need to roll your sleeves up and get
stuck into the build system to figure out why the @XEN_CFLAGS@ macro
isn't getting substituted.

> > Wait, are you trying to use valgrind on the hypervisor itself?
> > The Xen hypervisor is an "Operating System" and runs on bare metal --
> > you can't run it as a process under Linux and therefore you cannot run a
> > tool like valgrind on it.
> 
> > The valgrind support in my patch is useful for debugging the Xen
> > toolstack (e.g. "xl"), but not the hypervisor itself.
> 
> Oh, I know that; maybe I didn't express myself correctly. Now it is
> clearer to me. Yes, what I want to detect are memory leaks in the Xen
> toolstack, not in the hypervisor itself.
> 
> So, suppose I reach the third step; is it correct to use the following
> command to do the detection?
> "valgrind --tool=memcheck --leak-check=yes xm"

I think you can use any of the valgrind options in the normal way.

Personally I was using (with xl)
	--track-origins=yes --trace-children=yes --leak-check=full --show-reachable=yes
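
So, in full, a command line would be something like this (with guest.cfg
standing in for your own guest configuration file):
        valgrind --tool=memcheck --track-origins=yes --trace-children=yes \
                 --leak-check=full --show-reachable=yes xl create guest.cfg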

I've not got any experience with running valgrind on python programs, but
since python is interpreted and garbage collected I expect that what you
will actually end up tracking is bugs in the python runtime, and what you
will miss is any kind of leak due to something not getting garbage
collected when you might expect, etc. I'm not sure what kind of tools are
available for tracking memory "leaks" of this sort in python. (In other
words, I'm not sure there is much utility in running python programs
under valgrind, but I suppose you may know differently.)

Note that xm is really just an RPC client to the xend server, so you
won't actually be checking the toolstack by running valgrind on xm, only
the client RPC implementation. To actually measure anything useful you
would need to measure xend itself (also note that xend is not generally
well supported these days and xl is generally preferred for new
development).
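
(If you do want to valgrind xend, a very rough sketch would be to run the
daemon itself under valgrind, e.g. something like
        valgrind --trace-children=yes --leak-check=full --show-reachable=yes \
                 /usr/sbin/xend start
though the exact path and invocation may differ on your system, and I have
not tried this myself.)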

My patch was written to support all the hypercalls done by "xl create"
on my specific guest configuration -- you may find you need to implement
support within Valgrind for other hypercalls if you step outside this
limited usage.

> And also, if someone already has valgrind support for the hypervisor working
> well, could you send it to me?

Um, I think I explained in my previous mail why this isn't possible and
that this request doesn't make sense. (plus you stated right above that
you don't want to do this on the hypervisor!)

Ian.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15  9:43     ` Ian Campbell
@ 2011-07-15 13:39       ` hellokitty
  2011-07-15 15:15         ` Ian Campbell
  2011-07-15 15:35         ` Ian Campbell
  0 siblings, 2 replies; 13+ messages in thread
From: hellokitty @ 2011-07-15 13:39 UTC (permalink / raw)
  To: xen-devel

> It would help if you would post the actual patch you are using (including
> your modifications etc). Also which specific baseline valgrind are you
> using?
The patch I uploaded is the actual one I am using; I only modified some
lines that did not match the valgrind source code, such as the 1608 in "@@
-1608,6 +1608,11 @@", and the version of valgrind is 3.6.1.


Thanks to your advice, now I know I should focus on detecting leaks in
xend (xl create), but first I have to apply the uploaded patch to the
valgrind source code and build it, and that is exactly where I am stuck;
I can't fix the build. Sigh...

> And also, if someone already has valgrind support for the hypervisor working
> well, could you send it to me?

> Um, I think I explained in my previous mail why this isn't possible and
> that this request doesn't make sense

Sorry, I understand that now. But can you tell me which version of
valgrind your patch corresponds to?

Thank you and Best Wishes!

--
View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4590704.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15 13:39       ` hellokitty
@ 2011-07-15 15:15         ` Ian Campbell
  2011-07-16  2:33           ` hellokitty
  2011-07-15 15:35         ` Ian Campbell
  1 sibling, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2011-07-15 15:15 UTC (permalink / raw)
  To: hellokitty; +Cc: xen-devel

On Fri, 2011-07-15 at 14:39 +0100, hellokitty wrote:
> > It would help if you would post the actual patch you are using (including
> > your modifications etc). Also which specific baseline valgrind are you
> > using?
> The patch I uploaded 

Uploaded where? Please post the patch you are using as an attachment to an
email on this list.

> is the actual one I am using; I only modified some
> lines that did not match the valgrind source code, such as the 1608 in "@@
> -1608,6 +1608,11 @@", and the version of valgrind is 3.6.1.
> 
> 
> Thanks to your advice, now I know I should focus on detecting leaks in
> xend (xl create), but first I have to apply the uploaded patch to the
> valgrind source code and build it, and that is exactly where I am stuck;
> I can't fix the build. Sigh...
> 
> > And also, if someone already has valgrind support for the hypervisor working
> > well, could you send it to me?
> 
> > Um, I think I explained in my previous mail why this isn't possible and
> > that this request doesn't make sense
> 
> Sorry, I understand that now. But can you tell me which version of
> valgrind your patch corresponds to?

My patch was based on r11231 from the valgrind subversion repository.
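
If you want to start from exactly that baseline, something like the
following should fetch it (assuming the usual valgrind SVN URL):
        svn checkout -r 11231 svn://svn.valgrind.org/valgrind/trunk valgrind-r11231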

> 
> Thank you and Best Wishes!
> 
> --
> View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4590704.html
> Sent from the Xen - Dev mailing list archive at Nabble.com.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15 13:39       ` hellokitty
  2011-07-15 15:15         ` Ian Campbell
@ 2011-07-15 15:35         ` Ian Campbell
  2011-07-16  6:19           ` hellokitty
  1 sibling, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2011-07-15 15:35 UTC (permalink / raw)
  To: hellokitty; +Cc: xen-devel

[-- Attachment #1: Type: text/plain, Size: 1047 bytes --]

On Fri, 2011-07-15 at 14:39 +0100, hellokitty wrote:
> > It would help if you would post the actual patch you are using (including
> your modifications etc). Also which specific baseline valgrind are you
> using? 
> The patch i upload is the actual one i am using , and i just modify some
> lines that do not match the valgrind source code . such as 1608 of "@@
> -1608,6 +1608,11 @@ " , and the version of valgrind is 3.6.1 . 

I downloaded 3.6.1, applied the attached patch and ran:
        aclocal && autoheader && automake -a && autoconf
(per the autogen.sh in valgrind SVN)

Then I ran:
        ./configure --with-xen && make
and it built fine (since it defaults to looking in /usr/include for
headers).

I also tried "./configure --with-xen=/usr/include" which also worked.

However --with-xen=/usr/include/xen (as you had) did not work because
the path is wrong and should not include the final /xen (since the
#includes in the code are of the form <xen/thing.h>), although my error
messages in this case were not the same as yours.
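
To summarise, the sequence that worked for me here was roughly (with
xen.patch standing in for the attached patch saved locally):
        cd valgrind-3.6.1
        patch -p1 < xen.patch
        aclocal && autoheader && automake -a && autoconf
        ./configure --with-xen=/usr/include
        make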

Ian.


[-- Attachment #2: X --]
[-- Type: text/plain, Size: 35883 bytes --]

commit c86b82b0c6bfceb7bd15b024fd24b938c4c07b81
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Thu Aug 19 13:51:04 2010 +0100

    patch xen.patch

diff --git a/configure.in b/configure.in
index 62e1837..e71ecd6 100644
--- a/configure.in
+++ b/configure.in
@@ -1558,6 +1558,11 @@ elif test x$VGCONF_PLATFORM_SEC_CAPS = xPPC32_AIX5 ; then
   mflag_secondary=-q32
 fi
 
+AC_ARG_WITH(xen,
+   [  --with-xen=             Specify location of Xen headers],
+   XEN_CFLAGS=-I$withval
+)
+AC_SUBST(XEN_CFLAGS)
 
 AC_ARG_WITH(mpicc,
    [  --with-mpicc=           Specify name of MPI2-ised C compiler],
diff --git a/coregrind/Makefile.am b/coregrind/Makefile.am
index d9d1bca..d7216f9 100644
--- a/coregrind/Makefile.am
+++ b/coregrind/Makefile.am
@@ -211,6 +211,7 @@ noinst_HEADERS = \
 	m_syswrap/priv_syswrap-aix5.h \
 	m_syswrap/priv_syswrap-darwin.h \
 	m_syswrap/priv_syswrap-main.h \
+	m_syswrap/priv_syswrap-xen.h \
 	m_ume/priv_ume.h
 
 #----------------------------------------------------------------------------
@@ -338,6 +339,7 @@ COREGRIND_SOURCES_COMMON = \
 	m_syswrap/syswrap-ppc64-aix5.c \
 	m_syswrap/syswrap-x86-darwin.c \
 	m_syswrap/syswrap-amd64-darwin.c \
+	m_syswrap/syswrap-xen.c \
 	m_ume/elf.c \
 	m_ume/macho.c \
 	m_ume/main.c \
@@ -350,7 +352,7 @@ nodist_libcoregrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_SOURCES = \
 libcoregrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS = \
     $(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
 libcoregrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS = \
-    $(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
+    $(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) @XEN_CFLAGS@
 libcoregrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CCASFLAGS = \
     $(AM_CCASFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
 if VGCONF_HAVE_PLATFORM_SEC
diff --git a/coregrind/m_debuginfo/debuginfo.c b/coregrind/m_debuginfo/debuginfo.c
index 08babd0..5272fae 100644
--- a/coregrind/m_debuginfo/debuginfo.c
+++ b/coregrind/m_debuginfo/debuginfo.c
@@ -637,6 +637,11 @@ ULong VG_(di_notify_mmap)( Addr a, Bool allow_SkFileV )
    if (!filename)
       return 0;
 
+   if (strncmp(filename, "/proc/xen/", 10) == 0) {
+      //VG_(printf)("ignoring mmap of %s\n", filename);
+      return 0;
+   }
+
    if (debug)
       VG_(printf)("di_notify_mmap-2: %s\n", filename);
 
diff --git a/coregrind/m_syswrap/priv_syswrap-xen.h b/coregrind/m_syswrap/priv_syswrap-xen.h
new file mode 100644
index 0000000..42505bb
--- /dev/null
+++ b/coregrind/m_syswrap/priv_syswrap-xen.h
@@ -0,0 +1,13 @@
+#ifndef __PRIV_SYSWRAP_XEN_H
+#define __PRIV_SYSWRAP_XEN_H
+
+DECL_TEMPLATE(xen, ioctl_privcmd_hypercall);
+DECL_TEMPLATE(xen, ioctl_privcmd_mmap);
+DECL_TEMPLATE(xen, ioctl_privcmd_mmapbatch);
+DECL_TEMPLATE(xen, ioctl_privcmd_mmapbatch_v2);
+
+#endif   // __PRIV_SYSWRAP_XEN_H
+
+/*--------------------------------------------------------------------*/
+/*--- end                                                          ---*/
+/*--------------------------------------------------------------------*/
diff --git a/coregrind/m_syswrap/syswrap-linux.c b/coregrind/m_syswrap/syswrap-linux.c
index 247402d..baa33c2 100644
--- a/coregrind/m_syswrap/syswrap-linux.c
+++ b/coregrind/m_syswrap/syswrap-linux.c
@@ -57,7 +57,7 @@
 #include "priv_types_n_macros.h"
 #include "priv_syswrap-generic.h"
 #include "priv_syswrap-linux.h"
-
+#include "priv_syswrap-xen.h"
 
 // Run a thread from beginning to end and return the thread's
 // scheduler-return-code.
@@ -4821,6 +4821,20 @@ PRE(sys_ioctl)
       }
       break;
 
+
+   case VKI_XEN_IOCTL_PRIVCMD_HYPERCALL:
+      WRAPPER_PRE_NAME(xen, ioctl_privcmd_hypercall)(tid, layout, arrghs, status, flags);
+      break;
+   case VKI_XEN_IOCTL_PRIVCMD_MMAP:
+      WRAPPER_PRE_NAME(xen, ioctl_privcmd_mmap)(tid, layout, arrghs, status, flags);
+      break;
+   case VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH:
+      WRAPPER_PRE_NAME(xen, ioctl_privcmd_mmapbatch)(tid, layout, arrghs, status, flags);
+      break;
+   case VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2:
+      WRAPPER_PRE_NAME(xen, ioctl_privcmd_mmapbatch_v2)(tid, layout, arrghs, status, flags);
+      break;
+
    default:
       /* EVIOC* are variable length and return size written on success */
       switch (ARG2 & ~(_VKI_IOC_SIZEMASK << _VKI_IOC_SIZESHIFT)) {
@@ -5633,6 +5647,19 @@ POST(sys_ioctl)
       }
       break;
 
+   case VKI_XEN_IOCTL_PRIVCMD_HYPERCALL:
+      WRAPPER_POST_NAME(xen, ioctl_privcmd_hypercall)(tid, arrghs, status);
+      break;
+   case VKI_XEN_IOCTL_PRIVCMD_MMAP:
+      WRAPPER_POST_NAME(xen, ioctl_privcmd_mmap)(tid, arrghs, status);
+      break;
+   case VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH:
+      WRAPPER_POST_NAME(xen, ioctl_privcmd_mmapbatch)(tid, arrghs, status);
+      break;
+   case VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2:
+      WRAPPER_POST_NAME(xen, ioctl_privcmd_mmapbatch_v2)(tid, arrghs, status);
+      break;
+
    default:
       /* EVIOC* are variable length and return size written on success */
       switch (ARG2 & ~(_VKI_IOC_SIZEMASK << _VKI_IOC_SIZESHIFT)) {
diff --git a/coregrind/m_syswrap/syswrap-xen.c b/coregrind/m_syswrap/syswrap-xen.c
new file mode 100644
index 0000000..0713618
--- /dev/null
+++ b/coregrind/m_syswrap/syswrap-xen.c
@@ -0,0 +1,750 @@
+#include "pub_core_basics.h"
+#include "pub_core_vki.h"
+#include "pub_core_vkiscnums.h"
+#include "pub_core_threadstate.h"
+#include "pub_core_aspacemgr.h"
+#include "pub_core_debuginfo.h"    // VG_(di_notify_*)
+#include "pub_core_transtab.h"     // VG_(discard_translations)
+#include "pub_core_xarray.h"
+#include "pub_core_clientstate.h"
+#include "pub_core_debuglog.h"
+#include "pub_core_libcbase.h"
+#include "pub_core_libcassert.h"
+#include "pub_core_libcfile.h"
+#include "pub_core_libcprint.h"
+#include "pub_core_libcproc.h"
+#include "pub_core_libcsignal.h"
+#include "pub_core_mallocfree.h"
+#include "pub_core_tooliface.h"
+#include "pub_core_options.h"
+#include "pub_core_scheduler.h"
+#include "pub_core_signals.h"
+#include "pub_core_syscall.h"
+#include "pub_core_syswrap.h"
+
+#include "priv_types_n_macros.h"
+#include "priv_syswrap-generic.h"
+#include "priv_syswrap-xen.h"
+
+#include <stdint.h>
+
+#define __XEN_TOOLS__
+
+#include <xen/xen.h>
+#include <xen/sysctl.h>
+#include <xen/domctl.h>
+#include <xen/memory.h>
+#include <xen/event_channel.h>
+#include <xen/version.h>
+
+#include <xen/hvm/hvm_op.h>
+
+#define PRE(name)       DEFN_PRE_TEMPLATE(xen, name)
+#define POST(name)      DEFN_POST_TEMPLATE(xen, name)
+
+PRE(ioctl_privcmd_hypercall)
+{
+   struct vki_xen_privcmd_hypercall *args = (struct vki_xen_privcmd_hypercall *)(ARG3);
+
+   if (!args)
+      return;
+
+
+   switch (args->op) {
+   case __HYPERVISOR_memory_op:
+      PRINT("__HYPERVISOR_memory_op ( %lld, %llx )", args->arg[0], args->arg[1]);
+
+      switch (args->arg[0]) {
+      case XENMEM_set_memory_map: {
+	 xen_foreign_memory_map_t *arg = (xen_foreign_memory_map_t *)(unsigned int)args->arg[1];
+	 PRE_MEM_READ("XENMEM_set_memory_map", (Addr)&arg->domid, sizeof(arg->domid));
+	 PRE_MEM_READ("XENMEM_set_memory_map", (Addr)&arg->map, sizeof(arg->map));
+	 break;
+      }
+      case XENMEM_increase_reservation:
+      case XENMEM_decrease_reservation:
+      case XENMEM_populate_physmap: {
+	 struct xen_memory_reservation *memory_reservation = (struct xen_memory_reservation *)(unsigned int)args->arg[1];
+	 char *which;
+
+	 switch (args->arg[0]) {
+	 case XENMEM_increase_reservation:
+	    which = "XENMEM_increase_reservation";
+	    break;
+	 case XENMEM_decrease_reservation:
+	    which = "XENMEM_decrease_reservation";
+	    PRE_MEM_READ(which, (Addr)memory_reservation->extent_start.p, sizeof(xen_pfn_t) * memory_reservation->nr_extents);
+	 case XENMEM_populate_physmap:
+	    which = "XENMEM_populate_physmap";
+	    PRE_MEM_READ(which, (Addr)memory_reservation->extent_start.p, sizeof(xen_pfn_t) * memory_reservation->nr_extents);
+	    break;
+	 default:
+	    which = "XENMEM_unknown";
+	    break;
+	 }
+
+	 PRE_MEM_READ(which, (Addr)&memory_reservation->extent_start, sizeof(memory_reservation->extent_start));
+	 PRE_MEM_READ(which, (Addr)&memory_reservation->nr_extents, sizeof(memory_reservation->nr_extents));
+	 PRE_MEM_READ(which, (Addr)&memory_reservation->extent_order, sizeof(memory_reservation->extent_order));
+	 PRE_MEM_READ(which, (Addr)&memory_reservation->mem_flags, sizeof(memory_reservation->mem_flags));
+	 PRE_MEM_READ(which, (Addr)&memory_reservation->domid, sizeof(memory_reservation->domid));
+
+	 break;
+      }
+
+      default:
+	 VG_(printf)("pre __HYPERVISOR_memory_op unknown command %lld\n", args->arg[0]);
+	 break;
+      }
+      break;
+
+   case __HYPERVISOR_mmuext_op: {
+	   mmuext_op_t *ops = (void *)(unsigned int)args->arg[0];
+	   unsigned int i, nr = args->arg[1];
+	   //unsigned int *pdone = (void *)(unsigned int)args->arg[2];
+	   //unsigned int foreigndom = args->arg[3];
+	   //VG_(printf)("HYPERVISOR_mmuext_op %d ops at %p on dom%d done at %p\n", nr, ops, foreigndom, pdone);
+	   for (i=0; i<nr; i++) {
+		   mmuext_op_t *op = ops + i;
+		   PRE_MEM_READ("__HYPERVISOR_MMUEXT_OP", (Addr)&op->cmd, sizeof(op->cmd));
+		   switch(op->cmd) {
+		   case MMUEXT_PIN_L1_TABLE:
+		   case MMUEXT_PIN_L2_TABLE:
+		   case MMUEXT_PIN_L3_TABLE:
+		   case MMUEXT_PIN_L4_TABLE:
+		   case MMUEXT_UNPIN_TABLE:
+		   case MMUEXT_NEW_BASEPTR:
+		   case MMUEXT_CLEAR_PAGE:
+		   case MMUEXT_COPY_PAGE:
+		   case MMUEXT_MARK_SUPER:
+		   case MMUEXT_UNMARK_SUPER:
+			   PRE_MEM_READ("__HYPERVISOR_MMUEXT_OP arg1.mfn", (Addr)&op->arg1.mfn, sizeof(op->arg1.mfn));
+			   break;
+
+		   case MMUEXT_INVLPG_LOCAL:
+		   case MMUEXT_INVLPG_ALL:
+		   case MMUEXT_SET_LDT:
+			   PRE_MEM_READ("__HYPERVISOR_MMUEXT_OP arg1.mfn", (Addr)&op->arg1.linear_addr, sizeof(op->arg1.linear_addr));
+			   break;
+
+		   case MMUEXT_TLB_FLUSH_LOCAL:
+		   case MMUEXT_TLB_FLUSH_MULTI:
+		   case MMUEXT_INVLPG_MULTI:
+		   case MMUEXT_TLB_FLUSH_ALL:
+		   case MMUEXT_FLUSH_CACHE:
+		   case MMUEXT_NEW_USER_BASEPTR:
+		   case MMUEXT_FLUSH_CACHE_GLOBAL:
+			   /* None */
+			   break;
+		   }
+
+		   switch(op->cmd) {
+		   case MMUEXT_SET_LDT:
+			   PRE_MEM_READ("__HYPERVISOR_MMUEXT_OP arg2.nr_ents", (Addr)&op->arg2.nr_ents, sizeof(op->arg2.nr_ents));
+			   break;
+
+		   case MMUEXT_TLB_FLUSH_MULTI:
+		   case MMUEXT_INVLPG_MULTI:
+			   /* How many??? */
+			   PRE_MEM_READ("__HYPERVISOR_MMUEXT_OP arg2.vcpumask", (Addr)&op->arg2.vcpumask, sizeof(op->arg2.vcpumask));
+			   break;
+
+		   case MMUEXT_COPY_PAGE:
+			   PRE_MEM_READ("__HYPERVISOR_MMUEXT_OP arg2.src_mfn", (Addr)&op->arg2.src_mfn, sizeof(op->arg2.src_mfn));
+			   break;
+
+		   case MMUEXT_PIN_L1_TABLE:
+		   case MMUEXT_PIN_L2_TABLE:
+		   case MMUEXT_PIN_L3_TABLE:
+		   case MMUEXT_PIN_L4_TABLE:
+		   case MMUEXT_UNPIN_TABLE:
+		   case MMUEXT_NEW_BASEPTR:
+		   case MMUEXT_TLB_FLUSH_LOCAL:
+		   case MMUEXT_INVLPG_LOCAL:
+		   case MMUEXT_TLB_FLUSH_ALL:
+		   case MMUEXT_INVLPG_ALL:
+		   case MMUEXT_FLUSH_CACHE:
+		   case MMUEXT_NEW_USER_BASEPTR:
+		   case MMUEXT_CLEAR_PAGE:
+		   case MMUEXT_FLUSH_CACHE_GLOBAL:
+		   case MMUEXT_MARK_SUPER:
+		   case MMUEXT_UNMARK_SUPER:
+			   /* None */
+			   break;
+		   }
+	   }
+	   break;
+   }
+
+   case __HYPERVISOR_event_channel_op:
+   case __HYPERVISOR_event_channel_op_compat: {
+      __vki_u32 cmd;
+      void *arg;
+      int compat = 0;
+
+      if (args->op == __HYPERVISOR_event_channel_op) {
+	 cmd = args->arg[0];
+	 arg = (void *)(unsigned int)args->arg[1];
+      } else {
+	 struct evtchn_op *evtchn = (struct evtchn_op *)(unsigned int)args->arg[0];
+	 cmd = evtchn->cmd;
+	 arg = &evtchn->u;
+	 compat = 1;
+      }
+      PRINT("__HYPERVISOR_event_channel_op ( %d, %p )%s", cmd, arg, compat ? " compat" : "");
+
+      switch (cmd) {
+      case EVTCHNOP_alloc_unbound: {
+	 struct evtchn_alloc_unbound *alloc_unbound = arg;
+	 PRE_MEM_READ("EVTCHNOP_alloc_unbound", (Addr)&alloc_unbound->dom, sizeof(alloc_unbound->dom));
+	 PRE_MEM_READ("EVTCHNOP_alloc_unbound", (Addr)&alloc_unbound->remote_dom, sizeof(alloc_unbound->remote_dom));
+	 break;
+      }
+      default:
+	 VG_(printf)("pre __HYPERVISOR_event_channel_op unknown command %d\n", cmd);
+	 break;
+      }
+      break;
+   }
+
+   case __HYPERVISOR_xen_version:
+      PRINT("__HYPERVISOR_xen_version ( %lld, %llx )", args->arg[0], args->arg[1]);
+
+      switch (args->arg[0]) {
+      case XENVER_version:
+      case XENVER_extraversion:
+      case XENVER_compile_info:
+      case XENVER_capabilities:
+      case XENVER_changeset:
+      case XENVER_platform_parameters:
+      case XENVER_get_features:
+      case XENVER_pagesize:
+      case XENVER_guest_handle:
+      case XENVER_commandline:
+	 /* No inputs */
+	 break;
+
+      default:
+	 VG_(printf)("pre __HYPERVISOR_xen_version unknown command %lld\n", args->arg[0]);
+	 break;
+      }
+      break;
+      break;
+   case __HYPERVISOR_sysctl: {
+      struct xen_sysctl *sysctl = (struct xen_sysctl *)(unsigned int)args->arg[0];
+
+      PRINT("__HYPERVISOR_sysctl ( %d )", sysctl->cmd);
+
+      /* Single argument hypercall */
+      PRE_MEM_READ("hypercall", ARG3, 8 + ( 8 * 1 ) );
+
+      /*
+       * Common part of xen_sysctl:
+       *    uint32_t cmd;
+       *    uint32_t interface_version;
+       */
+      PRE_MEM_READ("__HYPERVISOR_sysctl", args->arg[0], sizeof(uint32_t) + sizeof(uint32_t));
+
+      if (!sysctl || sysctl->interface_version != XEN_SYSCTL_INTERFACE_VERSION)
+	 /* BUG ? */
+	 return;
+
+#define __PRE_XEN_SYSCTL_READ(_sysctl, _union, _field) PRE_MEM_READ("XEN_SYSCTL_" # _sysctl, \
+							 (Addr)&sysctl->u._union._field, \
+							 sizeof(sysctl->u._union._field))
+#define PRE_XEN_SYSCTL_READ(_sysctl, _field) __PRE_XEN_SYSCTL_READ(_sysctl, _sysctl, _field)
+      switch (sysctl->cmd) {
+      case XEN_SYSCTL_getdomaininfolist:
+	 PRE_XEN_SYSCTL_READ(getdomaininfolist, first_domain);
+	 PRE_XEN_SYSCTL_READ(getdomaininfolist, max_domains);
+	 PRE_XEN_SYSCTL_READ(getdomaininfolist, buffer);
+	 break;
+
+      case XEN_SYSCTL_cpupool_op:
+	 PRE_XEN_SYSCTL_READ(cpupool_op, op);
+
+	 switch(sysctl->u.cpupool_op.op) {
+	 case XEN_SYSCTL_CPUPOOL_OP_CREATE:
+	 case XEN_SYSCTL_CPUPOOL_OP_DESTROY:
+	 case XEN_SYSCTL_CPUPOOL_OP_INFO:
+	 case XEN_SYSCTL_CPUPOOL_OP_ADDCPU:
+	 case XEN_SYSCTL_CPUPOOL_OP_RMCPU:
+	 case XEN_SYSCTL_CPUPOOL_OP_MOVEDOMAIN:
+	    PRE_XEN_SYSCTL_READ(cpupool_op, cpupool_id);
+	 }
+
+	 if (sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_CREATE)
+	    PRE_XEN_SYSCTL_READ(cpupool_op, sched_id);
+
+	 if (sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_MOVEDOMAIN)
+	    PRE_XEN_SYSCTL_READ(cpupool_op, domid);
+
+	 if (sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_ADDCPU ||
+	     sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_RMCPU)
+	    PRE_XEN_SYSCTL_READ(cpupool_op, cpu);
+
+	 break;
+
+      case XEN_SYSCTL_physinfo:
+	 /* No input params */
+	 break;
+
+      default:
+	 VG_(printf)("pre sysctl version %x unknown cmd %d\n",
+		     sysctl->interface_version, sysctl->cmd);
+	 break;
+      }
+#undef PRE_XEN_SYSCTL_READ
+#undef __PRE_XEN_SYSCTL_READ
+   }
+      break;
+
+   case __HYPERVISOR_domctl: {
+      struct xen_domctl *domctl = (struct xen_domctl *)(unsigned int)args->arg[0];
+
+      PRINT("__HYPERVISOR_domctl ( %d )", domctl->cmd);
+
+      /* Single argument hypercall */
+      PRE_MEM_READ("hypercall", ARG3, 8 + ( 8 * 1 ) );
+
+      /*
+       * Common part of xen_domctl:
+       *    uint32_t cmd;
+       *    uint32_t interface_version;
+       *    domid_t  domain;
+       */
+      PRE_MEM_READ("__HYPERVISOR_domctl", args->arg[0], sizeof(uint32_t) + sizeof(uint32_t) + sizeof(domid_t));
+
+      if (!domctl || domctl->interface_version != XEN_DOMCTL_INTERFACE_VERSION)
+	 /* BUG ? */
+	 return;
+
+      //PRE_REG_READ1(long, "__HYPERVISOR_domctl",);
+#define __PRE_XEN_DOMCTL_READ(_domctl, _union, _field) PRE_MEM_READ("XEN_DOMCTL_" # _domctl, \
+							 (Addr)&domctl->u._union._field, \
+							 sizeof(domctl->u._union._field))
+#define PRE_XEN_DOMCTL_READ(_domctl, _field) __PRE_XEN_DOMCTL_READ(_domctl, _domctl, _field)
+
+      switch (domctl->cmd) {
+      case XEN_DOMCTL_destroydomain:
+      case XEN_DOMCTL_pausedomain:
+      case XEN_DOMCTL_max_vcpus:
+      case XEN_DOMCTL_get_address_size:
+      case XEN_DOMCTL_gettscinfo:
+      case XEN_DOMCTL_getdomaininfo:
+      case XEN_DOMCTL_unpausedomain:
+	 /* No input fields. */
+	 break;
+
+      case XEN_DOMCTL_createdomain:
+	 PRE_XEN_DOMCTL_READ(createdomain, ssidref);
+	 PRE_XEN_DOMCTL_READ(createdomain, handle);
+	 PRE_XEN_DOMCTL_READ(createdomain, flags);
+	 break;
+
+      case XEN_DOMCTL_max_mem:
+	 PRE_XEN_DOMCTL_READ(max_mem, max_memkb);
+	 break;
+
+      case XEN_DOMCTL_set_address_size:
+	 __PRE_XEN_DOMCTL_READ(set_address_size, address_size, size);
+	 break;
+
+      case XEN_DOMCTL_settscinfo:
+	 __PRE_XEN_DOMCTL_READ(settscinfo, tsc_info, info.tsc_mode);
+	 __PRE_XEN_DOMCTL_READ(settscinfo, tsc_info, info.gtsc_khz);
+	 __PRE_XEN_DOMCTL_READ(settscinfo, tsc_info, info.incarnation);
+	 __PRE_XEN_DOMCTL_READ(settscinfo, tsc_info, info.elapsed_nsec);
+	 break;
+
+      case XEN_DOMCTL_hypercall_init:
+	 PRE_XEN_DOMCTL_READ(hypercall_init, gmfn);
+	 break;
+
+      case XEN_DOMCTL_getvcpuinfo:
+	 PRE_XEN_DOMCTL_READ(getvcpuinfo, vcpu);
+	 break;
+
+      case XEN_DOMCTL_getvcpuaffinity:
+	 __PRE_XEN_DOMCTL_READ(getvcpuaffinity, vcpuaffinity, vcpu);
+	 break;
+
+      case XEN_DOMCTL_setvcpuaffinity:
+	 __PRE_XEN_DOMCTL_READ(setvcpuaffinity, vcpuaffinity, vcpu);
+	 PRE_MEM_READ("XEN_DOMCTL_setvcpuaffinity",
+		      (Addr)domctl->u.vcpuaffinity.cpumap.bitmap.p,
+			domctl->u.vcpuaffinity.cpumap.nr_cpus / 8);
+	 break;
+
+      case XEN_DOMCTL_getvcpucontext:
+	 __PRE_XEN_DOMCTL_READ(getvcpucontext, vcpucontext, vcpu);
+	 break;
+
+      case XEN_DOMCTL_setvcpucontext:
+	 __PRE_XEN_DOMCTL_READ(setvcpucontext, vcpucontext, vcpu);
+	 __PRE_XEN_DOMCTL_READ(setvcpucontext, vcpucontext, ctxt.p);
+	 break;
+
+      case XEN_DOMCTL_set_cpuid:
+	 PRE_MEM_READ("XEN_DOMCTL_set_cpuid", (Addr)&domctl->u.cpuid, sizeof(domctl->u.cpuid));
+	 break;
+      default:
+	 VG_(printf)("pre domctl version %x unknown cmd %d on domain %d\n",
+		     domctl->interface_version, domctl->cmd, domctl->domain);
+	 break;
+      }
+#undef PRE_XEN_DOMCTL_READ
+#undef __PRE_XEN_DOMCTL_READ
+   }
+      break;
+
+   case __HYPERVISOR_hvm_op: {
+      unsigned long op = args->arg[0];
+      void *arg = (void *)(unsigned long)args->arg[1];
+
+      PRINT("__HYPERVISOR_hvm_op ( %ld, %p )", op, arg);
+
+      //PRE_REG_READ1(long, "__HYPERVISOR_hvm_op",);
+#define __PRE_XEN_HVMOP_READ(_hvm_op, _type, _field) PRE_MEM_READ("XEN_HVMOP_" # _hvm_op, \
+								   (Addr)&((_type*)arg)->_field, \
+							 sizeof(((_type*)arg)->_field))
+#define PRE_XEN_HVMOP_READ(_hvm_op, _field) __PRE_XEN_HVMOP_READ(_hvm_op, "xen_hvm_" # _hvm_op "_t", _field)
+
+      switch (op) {
+      case HVMOP_set_param:
+	 __PRE_XEN_HVMOP_READ(set_param, xen_hvm_param_t, domid);
+	 __PRE_XEN_HVMOP_READ(set_param, xen_hvm_param_t, index);
+	 __PRE_XEN_HVMOP_READ(set_param, xen_hvm_param_t, value);
+	 break;
+
+      case HVMOP_get_param:
+	 __PRE_XEN_HVMOP_READ(get_param, xen_hvm_param_t, domid);
+	 __PRE_XEN_HVMOP_READ(get_param, xen_hvm_param_t, index);
+	 break;
+
+      default:
+	 VG_(printf)("pre hvm_op unknown OP %ld\n", op);
+	 break;
+#undef __PRE_XEN_HVMOP_READ
+#undef PRE_XEN_HVMOP_READ
+      }
+   }
+      break;
+
+   default:
+      VG_(printf)("pre unknown hypercall %lld ( %#llx, %#llx, %#llx, %#llx, %#llx )\n",
+		  args->op, args->arg[0], args->arg[1], args->arg[2], args->arg[3], args->arg[4]);
+   }
+}
+
+POST(ioctl_privcmd_hypercall)
+{
+   struct vki_xen_privcmd_hypercall *args = (struct vki_xen_privcmd_hypercall *)(ARG3);
+
+   if (!args)
+      return;
+
+   switch (args->op) {
+   case __HYPERVISOR_memory_op:
+      switch (args->arg[0]) {
+      case XENMEM_set_memory_map:
+      case XENMEM_decrease_reservation:
+	 /* No outputs */
+	 break;
+      case XENMEM_increase_reservation:
+      case XENMEM_populate_physmap: {
+	 struct xen_memory_reservation *memory_reservation = (struct xen_memory_reservation *)(unsigned int)args->arg[1];
+
+	 POST_MEM_WRITE((Addr)memory_reservation->extent_start.p, sizeof(xen_pfn_t) * ARG1);
+      }
+	 break;
+
+      default:
+	 VG_(printf)("post __HYPERVISOR_memory_op unknown command %lld\n", args->arg[0]);
+	 break;
+      }
+      break;
+
+   case __HYPERVISOR_mmuext_op: {
+	   //mmuext_op_t *ops = (void *)(unsigned int)args->arg[0];
+	   //unsigned int nr = args->arg[1];
+	   unsigned int *pdone = (void *)(unsigned int)args->arg[2];
+	   //unsigned int foreigndom = args->arg[3];
+	   /* simplistic */
+	   POST_MEM_WRITE((Addr)pdone, sizeof(*pdone));
+	   break;
+   }
+
+   case __HYPERVISOR_event_channel_op:
+   case __HYPERVISOR_event_channel_op_compat: {
+      __vki_u32 cmd;
+      void *arg;
+
+      if (args->op == __HYPERVISOR_event_channel_op) {
+	 cmd = args->arg[0];
+	 arg = (void *)(unsigned int)args->arg[1];
+      } else {
+	 struct evtchn_op *evtchn = (struct evtchn_op *)(unsigned int)args->arg[0];
+	 cmd = evtchn->cmd;
+	 arg = &evtchn->u;
+      }
+      switch (cmd) {
+      case EVTCHNOP_alloc_unbound: {
+	 struct evtchn_alloc_unbound *alloc_unbound = arg;
+	 POST_MEM_WRITE((Addr)&alloc_unbound->port, sizeof(alloc_unbound->port));
+	 break;
+      }
+      default:
+	 VG_(printf)("post __HYPERVISOR_event_channel_op unknown command %d\n", cmd);
+	 break;
+      }
+      break;
+
+   }
+
+   case __HYPERVISOR_xen_version:
+      switch (args->arg[0]) {
+      case XENVER_version:
+	 /* No outputs */
+	 break;
+      case XENVER_extraversion:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_extraversion_t));
+	 break;
+      case XENVER_compile_info:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_compile_info_t));
+	 break;
+      case XENVER_capabilities:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_capabilities_info_t));
+	 break;
+      case XENVER_changeset:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_changeset_info_t));
+	 break;
+      case XENVER_platform_parameters:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_platform_parameters_t));
+	 break;
+      case XENVER_get_features:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_feature_info_t));
+	 break;
+      case XENVER_pagesize:
+	 /* No outputs */
+	 break;
+      case XENVER_guest_handle:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_domain_handle_t));
+	 break;
+      case XENVER_commandline:
+	 POST_MEM_WRITE((Addr)args->arg[1], sizeof(xen_commandline_t));
+	 break;
+      default:
+	 VG_(printf)("post __HYPERVISOR_xen_version unknown command %lld\n", args->arg[0]);
+	 break;
+      }
+      break;
+
+   case __HYPERVISOR_sysctl: {
+      struct xen_sysctl *sysctl = (struct xen_sysctl *)(unsigned int)args->arg[0];
+
+      if (!sysctl || sysctl->interface_version != XEN_SYSCTL_INTERFACE_VERSION)
+	 return;
+
+#define __POST_XEN_SYSCTL_WRITE(_sysctl, _union, _field) POST_MEM_WRITE((Addr)&sysctl->u._union._field, sizeof(sysctl->u._union._field));
+#define POST_XEN_SYSCTL_WRITE(_sysctl, _field) __POST_XEN_SYSCTL_WRITE(_sysctl, _sysctl, _field)
+      switch (sysctl->cmd) {
+      case XEN_SYSCTL_getdomaininfolist:
+	 POST_XEN_SYSCTL_WRITE(getdomaininfolist, num_domains);
+	 POST_MEM_WRITE((Addr)sysctl->u.getdomaininfolist.buffer.p,
+			sizeof(xen_domctl_getdomaininfo_t) * sysctl->u.getdomaininfolist.num_domains);
+	 break;
+
+      case XEN_SYSCTL_cpupool_op:
+	 if (sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_CREATE ||
+	     sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_INFO)
+	    POST_XEN_SYSCTL_WRITE(cpupool_op, cpupool_id);
+	 if (sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_INFO) {
+	    POST_XEN_SYSCTL_WRITE(cpupool_op, sched_id);
+	    POST_XEN_SYSCTL_WRITE(cpupool_op, n_dom);
+	 }
+	 if (sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_INFO ||
+	     sysctl->u.cpupool_op.op == XEN_SYSCTL_CPUPOOL_OP_FREEINFO)
+	    POST_XEN_SYSCTL_WRITE(cpupool_op, cpumap);
+	 break;
+
+      case XEN_SYSCTL_physinfo:
+	 POST_XEN_SYSCTL_WRITE(physinfo, threads_per_core);
+	 POST_XEN_SYSCTL_WRITE(physinfo, cores_per_socket);
+	 POST_XEN_SYSCTL_WRITE(physinfo, nr_cpus);
+	 POST_XEN_SYSCTL_WRITE(physinfo, max_cpu_id);
+	 POST_XEN_SYSCTL_WRITE(physinfo, nr_nodes);
+	 POST_XEN_SYSCTL_WRITE(physinfo, max_node_id);
+ POST_XEN_SYSCTL_WRITE(physinfo, cpu_khz);
+	 POST_XEN_SYSCTL_WRITE(physinfo, total_pages);
+	 POST_XEN_SYSCTL_WRITE(physinfo, free_pages);
+	 POST_XEN_SYSCTL_WRITE(physinfo, scrub_pages);
+	 POST_XEN_SYSCTL_WRITE(physinfo, hw_cap[8]);
+	 POST_XEN_SYSCTL_WRITE(physinfo, capabilities);
+	 break;
+
+      default:
+	 VG_(printf)("post sysctl version %x cmd %d\n",
+		     sysctl->interface_version, sysctl->cmd);
+	 break;
+      }
+#undef POST_XEN_SYSCTL_WRITE
+#undef __POST_XEN_SYSCTL_WRITE
+      break;
+   }
+
+   case __HYPERVISOR_domctl: {
+      struct xen_domctl *domctl = (struct xen_domctl *)(unsigned int)args->arg[0];
+
+      if (!domctl || domctl->interface_version != XEN_DOMCTL_INTERFACE_VERSION)
+	 return;
+
+#define __POST_XEN_DOMCTL_WRITE(_domctl, _union, _field) POST_MEM_WRITE((Addr)&domctl->u._union._field, sizeof(domctl->u._union._field));
+#define POST_XEN_DOMCTL_WRITE(_domctl, _field) __POST_XEN_DOMCTL_WRITE(_domctl, _domctl, _field)
+      switch (domctl->cmd) {
+      case XEN_DOMCTL_createdomain:
+      case XEN_DOMCTL_destroydomain:
+      case XEN_DOMCTL_pausedomain:
+      case XEN_DOMCTL_max_mem:
+      case XEN_DOMCTL_set_address_size:
+      case XEN_DOMCTL_settscinfo:
+      case XEN_DOMCTL_hypercall_init:
+      case XEN_DOMCTL_setvcpuaffinity:
+      case XEN_DOMCTL_setvcpucontext:
+      case XEN_DOMCTL_set_cpuid:
+      case XEN_DOMCTL_unpausedomain:
+	 /* No output fields */
+	 break;
+
+      case XEN_DOMCTL_max_vcpus:
+	 POST_XEN_DOMCTL_WRITE(max_vcpus, max);
+
+      case XEN_DOMCTL_get_address_size:
+	 __POST_XEN_DOMCTL_WRITE(get_address_size, address_size, size);
+	 break;
+
+      case XEN_DOMCTL_gettscinfo:
+	 __POST_XEN_DOMCTL_WRITE(settscinfo, tsc_info, info.tsc_mode);
+	 __POST_XEN_DOMCTL_WRITE(settscinfo, tsc_info, info.gtsc_khz);
+	 __POST_XEN_DOMCTL_WRITE(settscinfo, tsc_info, info.incarnation);
+	 __POST_XEN_DOMCTL_WRITE(settscinfo, tsc_info, info.elapsed_nsec);
+	 break;
+
+      case XEN_DOMCTL_getvcpuinfo:
+	 POST_XEN_DOMCTL_WRITE(getvcpuinfo, online);
+	 POST_XEN_DOMCTL_WRITE(getvcpuinfo, blocked);
+	 POST_XEN_DOMCTL_WRITE(getvcpuinfo, running);
+	 POST_XEN_DOMCTL_WRITE(getvcpuinfo, cpu_time);
+	 POST_XEN_DOMCTL_WRITE(getvcpuinfo, cpu);
+	 break;
+
+      case XEN_DOMCTL_getvcpuaffinity:
+	 POST_MEM_WRITE((Addr)domctl->u.vcpuaffinity.cpumap.bitmap.p,
+			domctl->u.vcpuaffinity.cpumap.nr_cpus / 8);
+	 break;
+
+      case XEN_DOMCTL_getdomaininfo:
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, domain);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, flags);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, tot_pages);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, max_pages);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, shr_pages);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, shared_info_frame);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, cpu_time);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, nr_online_vcpus);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, max_vcpu_id);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, ssidref);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, handle);
+	 POST_XEN_DOMCTL_WRITE(getdomaininfo, cpupool);
+	 break;
+
+      case XEN_DOMCTL_getvcpucontext:
+	 __POST_XEN_DOMCTL_WRITE(getvcpucontext, vcpucontext, ctxt.p);
+	 break;
+
+      default:
+	 VG_(printf)("post domctl version %x cmd %d on domain %d\n",
+		     domctl->interface_version, domctl->cmd, domctl->domain);
+	 break;
+      }
+#undef POST_XEN_DOMCTL_WRITE
+#undef __POST_XEN_DOMCTL_WRITE
+      break;
+   }
+
+
+   case __HYPERVISOR_hvm_op: {
+      unsigned long op = args->arg[0];
+      void *arg = (void *)(unsigned long)args->arg[1];
+
+#define __POST_XEN_HVMOP_WRITE(_hvm_op, _type, _field) POST_MEM_WRITE((Addr)&((_type*)arg)->_field, \
+								       sizeof(((_type*)arg)->_field))
+#define POST_XEN_HVMOP_WRITE(_hvm_op, _field) __PRE_XEN_HVMOP_READ(_hvm_op, "xen_hvm_" # _hvm_op "_t", _field)
+      switch (op) {
+      case HVMOP_set_param:
+	 /* No output paramters */
+	 break;
+
+      case HVMOP_get_param:
+	 __POST_XEN_HVMOP_WRITE(get_param, xen_hvm_param_t, value);
+	 break;
+
+      default:
+	 VG_(printf)("post hvm_op unknown OP %ld\n", op);
+	 break;
+#undef __POST_XEN_HVMOP_WRITE
+#undef POST_XEN_HVMOP_WRITE
+      }
+   }
+      break;
+
+   default:
+      VG_(printf)("post unknown hypercall %lld ( %#llx, %#llx, %#llx, %#llx, %#llx )\n",
+		  args->op, args->arg[0], args->arg[1], args->arg[2], args->arg[3], args->arg[4]);
+      break;
+   }
+}
+
+
+PRE(ioctl_privcmd_mmap)
+{
+   struct vki_xen_privcmd_mmap *args = (struct vki_xen_privcmd_mmap *)(ARG3);
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP", (Addr)&args->num, sizeof(args->num));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP", (Addr)&args->dom, sizeof(args->dom));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP", (Addr)args->entry, sizeof(*(args->entry)) * args->num);
+}
+
+PRE(ioctl_privcmd_mmapbatch)
+{
+   struct vki_xen_privcmd_mmapbatch *args = (struct vki_xen_privcmd_mmapbatch *)(ARG3);
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH", (Addr)&args->num, sizeof(args->num));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH", (Addr)&args->dom, sizeof(args->dom));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH", (Addr)&args->addr, sizeof(args->addr));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH", (Addr)args->arr, sizeof(*(args->arr)) * args->num);
+}
+
+PRE(ioctl_privcmd_mmapbatch_v2)
+{
+   struct vki_xen_privcmd_mmapbatch_v2 *args = (struct vki_xen_privcmd_mmapbatch_v2 *)(ARG3);
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2", (Addr)&args->num, sizeof(args->num));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2", (Addr)&args->dom, sizeof(args->dom));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2", (Addr)&args->addr, sizeof(args->addr));
+   PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2", (Addr)args->arr, sizeof(*(args->arr)) * args->num);
+}
+
+POST(ioctl_privcmd_mmap)
+{
+   //struct vki_xen_privcmd_mmap *args = (struct vki_xen_privcmd_mmap *)(ARG3);
+}
+
+POST(ioctl_privcmd_mmapbatch)
+{
+   struct vki_xen_privcmd_mmapbatch *args = (struct vki_xen_privcmd_mmapbatch *)(ARG3);
+   POST_MEM_WRITE((Addr)args->arr, sizeof(*(args->arr)) * args->num);
+}
+
+POST(ioctl_privcmd_mmapbatch_v2)
+{
+   struct vki_xen_privcmd_mmapbatch_v2 *args = (struct vki_xen_privcmd_mmapbatch_v2 *)(ARG3);
+   POST_MEM_WRITE((Addr)args->err, sizeof(*(args->err)) * args->num);
+}
diff --git a/glibc-2.7.supp b/glibc-2.7.supp
index 1079dcf..886cfc0 100644
--- a/glibc-2.7.supp
+++ b/glibc-2.7.supp
@@ -28,3 +28,105 @@
    obj:/lib*/ld-2.7*.so*
    obj:/lib*/ld-2.7*.so*
 }
+
+{
+   ijc-dl-hack-pthread_cancel-1
+   Memcheck:Leak
+   fun:malloc
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/libc-*.so
+   obj:*/ld-*.so
+   fun:__libc_dlopen_mode
+   fun:pthread_cancel_init
+   fun:pthread_cancel
+   fun:xs_daemon_close
+}
+
+{
+   ijc-dl-hack-pthread_cancel-2
+   Memcheck:Leak
+   fun:malloc
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/libc-*.so
+   obj:*/ld-*.so
+   fun:__libc_dlopen_mode
+   fun:pthread_cancel_init
+   fun:pthread_cancel
+}
+
+{
+   ijc-dl-hack-pthread_cancel-3
+   Memcheck:Leak
+   fun:malloc
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/libc-*.so
+   obj:*/ld-*.so
+   fun:__libc_dlopen_mode
+   fun:pthread_cancel_init
+   fun:pthread_cancel
+   fun:xs_daemon_close
+   fun:libxl_ctx_free
+}
+
+{
+   ijc-dl-hack-pthread_cancel-4
+   Memcheck:Leak
+   fun:calloc
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/libc-*.so
+   obj:*/ld-*.so
+   fun:__libc_dlopen_mode
+   fun:pthread_cancel_init
+   fun:pthread_cancel
+   fun:xs_daemon_close
+   fun:libxl_ctx_free
+}
+
+{
+   ijc-dl-hack-pthread_cancel-4
+   Memcheck:Leak
+   fun:calloc
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/libc-*.so
+   obj:*/ld-*.so
+   fun:__libc_dlopen_mode
+   fun:pthread_cancel_init
+   fun:pthread_cancel
+}
+
+{
+   ijc-dl-hack-pthread_cancel-5
+   Memcheck:Leak
+   fun:calloc
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/ld-*.so
+   obj:*/libc-*.so
+   obj:*/ld-*.so
+   fun:__libc_dlopen_mode
+   fun:pthread_cancel_init
+   fun:pthread_cancel
+}
+
diff --git a/include/Makefile.am b/include/Makefile.am
index 33d0857..22bffa7 100644
--- a/include/Makefile.am
+++ b/include/Makefile.am
@@ -54,7 +54,8 @@ nobase_pkginclude_HEADERS = \
 	vki/vki-scnums-ppc64-linux.h	\
 	vki/vki-scnums-x86-linux.h	\
 	vki/vki-scnums-arm-linux.h	\
-	vki/vki-scnums-darwin.h
+	vki/vki-scnums-darwin.h		
+	vki/vki-xen.h
 
 noinst_HEADERS = \
 	vki/vki-ppc32-aix5.h		\
diff --git a/include/pub_tool_vki.h b/include/pub_tool_vki.h
index 73a4174..c4c117f 100644
--- a/include/pub_tool_vki.h
+++ b/include/pub_tool_vki.h
@@ -47,6 +47,7 @@
 
 #if defined(VGO_linux)
 #  include "vki/vki-linux.h"
+#  include "vki/vki-xen.h"
 #elif defined(VGP_ppc32_aix5)
 #  include "vki/vki-ppc32-aix5.h"
 #elif defined(VGP_ppc64_aix5)
diff --git a/include/vki/vki-linux.h b/include/vki/vki-linux.h
index beff378..1214300 100644
--- a/include/vki/vki-linux.h
+++ b/include/vki/vki-linux.h
@@ -2709,6 +2709,51 @@ struct vki_getcpu_cache {
 #define VKI_EV_MAX		0x1f
 #define VKI_EV_CNT		(VKI_EV_MAX+1)
 
+//----------------------------------------------------------------------
+// Xen privcmd IOCTL
+//----------------------------------------------------------------------
+
+typedef unsigned long __vki_xen_pfn_t;
+
+struct vki_xen_privcmd_hypercall {
+	__vki_u64 op;
+	__vki_u64 arg[5];
+};
+
+struct vki_xen_privcmd_mmap_entry {
+        __vki_u64 va;
+        __vki_u64 mfn;
+        __vki_u64 npages;
+};
+
+struct vki_xen_privcmd_mmap {
+        int num;
+        __vki_u16 dom; /* target domain */
+        struct vki_xen_privcmd_mmap_entry *entry;
+};
+
+struct vki_xen_privcmd_mmapbatch {
+        int num;     /* number of pages to populate */
+        __vki_u16 dom; /* target domain */
+        __vki_u64 addr;  /* virtual address */
+        __vki_xen_pfn_t *arr; /* array of mfns - top nibble set on err */
+};
+
+struct vki_xen_privcmd_mmapbatch_v2 {
+        unsigned int num; /* number of pages to populate */
+        __vki_u16 dom;      /* target domain */
+        __vki_u64 addr;       /* virtual address */
+        const __vki_xen_pfn_t *arr; /* array of mfns */
+        int __user *err;  /* array of error codes */
+};
+
+#define VKI_XEN_IOCTL_PRIVCMD_HYPERCALL    _VKI_IOC(_VKI_IOC_NONE, 'P', 0, sizeof(struct vki_xen_privcmd_hypercall))
+#define VKI_XEN_IOCTL_PRIVCMD_MMAP         _VKI_IOC(_VKI_IOC_NONE, 'P', 2, sizeof(struct vki_xen_privcmd_mmap))
+
+#define VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH    _VKI_IOC(_VKI_IOC_NONE, 'P', 3, sizeof(struct vki_xen_privcmd_mmapbatch))
+#define VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2 _VKI_IOC(_VKI_IOC_NONE, 'P', 4, sizeof(struct vki_xen_privcmd_mmapbatch_v2))
+
+
 #endif // __VKI_LINUX_H
 
 /*--------------------------------------------------------------------*/
diff --git a/include/vki/vki-xen.h b/include/vki/vki-xen.h
new file mode 100644
index 0000000..7842cc9
--- /dev/null
+++ b/include/vki/vki-xen.h
@@ -0,0 +1,8 @@
+#ifndef __VKI_XEN_H
+#define __VKI_XEN_H
+
+#endif // __VKI_XEN_H
+
+/*--------------------------------------------------------------------*/
+/*--- end                                                          ---*/
+/*--------------------------------------------------------------------*/

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15 15:15         ` Ian Campbell
@ 2011-07-16  2:33           ` hellokitty
  0 siblings, 0 replies; 13+ messages in thread
From: hellokitty @ 2011-07-16  2:33 UTC (permalink / raw)
  To: xen-devel

Here is the patch; did you not receive it?
http://xen.1045712.n5.nabble.com/file/n4592915/xen_patch.patch

--
View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4592915.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-15 15:35         ` Ian Campbell
@ 2011-07-16  6:19           ` hellokitty
  2011-07-16  6:23             ` Ian Campbell
  0 siblings, 1 reply; 13+ messages in thread
From: hellokitty @ 2011-07-16  6:19 UTC (permalink / raw)
  To: xen-devel

> I downloaded 3.6.1, applied the attached patch and ran:
>   aclocal && autoheader && automake -a && autoconf (per the autogen.sh in
> valgrind SVN)
> Then I ran:
>         ./configure --with-xen && make
> and it built fine (since it defaults to looking in /usr/include for
> headers).
> I also tried "./configure --with-xen=/usr/include" which also worked.
> However --with-xen=/usr/include/xen (as you had) did not work because
> the path is wrong and should not include the final /xen (since the
> #includes in the code are of the form <xen/thing.h>), although my
> error
> messages in this case were not the same as yours. 

Here I followed your steps: I used the "X" patch you uploaded and ran
"aclocal && autoheader && automake -a && autoconf", which went fine, and then ran
"./configure --with-xen && make". When it comes to make, it produces the
errors below:

Making all in coregrind
make[2]: Entering directory `/home/popo/valgrind-3.6.1/coregrind'
make  all-am
make[3]: Entering directory `/home/popo/valgrind-3.6.1/coregrind'
gcc -DHAVE_CONFIG_H -I. -I..  -I.. -I../include -I../VEX/pub -DVGA_x86=1
-DVGO_linux=1 -DVGP_x86_linux=1 -I../coregrind
-DVG_LIBDIR="\"/usr/local/lib/valgrind"\" -DVG_PLATFORM="\"x86-linux\"" 
-m32 -mpreferred-stack-boundary=2 -O2 -g -Wall -Wmissing-prototypes -Wshadow
-Wpointer-arith -Wstrict-prototypes -Wmissing-declarations
-Wno-format-zero-length -fno-strict-aliasing -Iyes -Wno-long-long 
-Wno-pointer-sign -fno-stack-protector -MT
libcoregrind_x86_linux_a-syswrap-xen.o -MD -MP -MF
.deps/libcoregrind_x86_linux_a-syswrap-xen.Tpo -c -o
libcoregrind_x86_linux_a-syswrap-xen.o `test -f 'm_syswrap/syswrap-xen.c' ||
echo './'`m_syswrap/syswrap-xen.c
m_syswrap/syswrap-xen.c: In function
‘vgSysWrap_xen_ioctl_privcmd_hypercall_before’:
m_syswrap/syswrap-xen.c:119: error: ‘MMUEXT_MARK_SUPER’ undeclared (first
use in this function)
m_syswrap/syswrap-xen.c:119: error: (Each undeclared identifier is reported
only once
m_syswrap/syswrap-xen.c:119: error: for each function it appears in.)
m_syswrap/syswrap-xen.c:120: error: ‘MMUEXT_UNMARK_SUPER’ undeclared (first
use in this function)
m_syswrap/syswrap-xen.c:136: error: ‘MMUEXT_FLUSH_CACHE_GLOBAL’ undeclared
(first use in this function)
m_syswrap/syswrap-xen.c:263: error: ‘XEN_SYSCTL_cpupool_op’ undeclared
(first use in this function)
m_syswrap/syswrap-xen.c:264: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:264: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:266: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:267: error: ‘XEN_SYSCTL_CPUPOOL_OP_CREATE’
undeclared (first use in this function)
m_syswrap/syswrap-xen.c:268: error: ‘XEN_SYSCTL_CPUPOOL_OP_DESTROY’
undeclared (first use in this function)
m_syswrap/syswrap-xen.c:269: error: ‘XEN_SYSCTL_CPUPOOL_OP_INFO’ undeclared
(first use in this function)
m_syswrap/syswrap-xen.c:270: error: ‘XEN_SYSCTL_CPUPOOL_OP_ADDCPU’
undeclared (first use in this function)
m_syswrap/syswrap-xen.c:271: error: ‘XEN_SYSCTL_CPUPOOL_OP_RMCPU’ undeclared
(first use in this function)
m_syswrap/syswrap-xen.c:272: error: ‘XEN_SYSCTL_CPUPOOL_OP_MOVEDOMAIN’
undeclared (first use in this function)
m_syswrap/syswrap-xen.c:273: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:273: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:276: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:277: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:277: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:279: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:280: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:280: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:282: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:283: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:284: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:284: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c: In function
‘vgSysWrap_xen_ioctl_privcmd_hypercall_after’:
m_syswrap/syswrap-xen.c:558: error: ‘XEN_SYSCTL_cpupool_op’ undeclared
(first use in this function)
m_syswrap/syswrap-xen.c:559: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:559: error: ‘XEN_SYSCTL_CPUPOOL_OP_CREATE’
undeclared (first use in this function)
m_syswrap/syswrap-xen.c:560: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:560: error: ‘XEN_SYSCTL_CPUPOOL_OP_INFO’ undeclared
(first use in this function)
m_syswrap/syswrap-xen.c:561: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:561: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:562: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:563: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:563: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:564: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:564: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:566: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:567: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:567: error: ‘XEN_SYSCTL_CPUPOOL_OP_FREEINFO’
undeclared (first use in this function)
m_syswrap/syswrap-xen.c:568: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:568: error: ‘union <anonymous>’ has no member named
‘cpupool_op’
m_syswrap/syswrap-xen.c:576: error: ‘struct xen_sysctl_physinfo’ has no
member named ‘nr_nodes’
m_syswrap/syswrap-xen.c:576: error: ‘struct xen_sysctl_physinfo’ has no
member named ‘nr_nodes’
m_syswrap/syswrap-xen.c:658: error: ‘struct xen_domctl_getdomaininfo’ has no
member named ‘cpupool’
m_syswrap/syswrap-xen.c:658: error: ‘struct xen_domctl_getdomaininfo’ has no
member named ‘cpupool’
make[3]: *** [libcoregrind_x86_linux_a-syswrap-xen.o] Error 1
make[3]: Leaving directory `/home/popo/valgrind-3.6.1/coregrind'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/home/popo/valgrind-3.6.1/coregrind'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/popo/valgrind-3.6.1'
make: *** [all] Error 2


How about you? I don't know how to fix this yet.

--
View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4593243.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-16  6:19           ` hellokitty
@ 2011-07-16  6:23             ` Ian Campbell
  2011-07-16  6:40               ` hellokitty
  0 siblings, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2011-07-16  6:23 UTC (permalink / raw)
  To: hellokitty; +Cc: xen-devel

On Sat, 2011-07-16 at 07:19 +0100, hellokitty wrote:

> ‘vgSysWrap_xen_ioctl_privcmd_hypercall_before’:
> m_syswrap/syswrap-xen.c:119: error: ‘MMUEXT_MARK_SUPER’ undeclared (first
> use in this function)

The patch requires Xen 4.1 or later.

Ian.
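
For anyone hitting the same wall, a small probe can show whether an installed set of headers is new enough. This is only a sketch, assuming MMUEXT_MARK_SUPER and MMUEXT_FLUSH_CACHE_GLOBAL are plain #defines in the Xen public headers (which is what the "undeclared" errors above suggest):

/* xen_header_probe.c -- sketch: report whether the installed Xen public
 * headers carry the identifiers the valgrind wrappers reference.
 * Build: gcc -I/usr/include -o xen_header_probe xen_header_probe.c */
#include <stdio.h>
#include <stdint.h>
#include <xen/xen.h>

int main(void)
{
#if defined(MMUEXT_MARK_SUPER) && defined(MMUEXT_FLUSH_CACHE_GLOBAL)
    puts("superpage/flush MMU ops present -> headers look like Xen >= 4.1");
#else
    puts("MMU ops missing -> headers predate Xen 4.1, so the patch will not build");
#endif
    return 0;
}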

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-16  6:23             ` Ian Campbell
@ 2011-07-16  6:40               ` hellokitty
  2011-07-16  6:43                 ` Ian Campbell
  0 siblings, 1 reply; 13+ messages in thread
From: hellokitty @ 2011-07-16  6:40 UTC (permalink / raw)
  To: xen-devel

>  The patch requires Xen 4.1 or later. 
So, in short, is this a version problem? And do you have a version of the
valgrind patch that works with Xen 3.3.0?

--
View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4593261.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-16  6:40               ` hellokitty
@ 2011-07-16  6:43                 ` Ian Campbell
  2011-07-16 15:24                   ` hellokitty
  0 siblings, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2011-07-16  6:43 UTC (permalink / raw)
  To: hellokitty; +Cc: xen-devel

On Sat, 2011-07-16 at 07:40 +0100, hellokitty wrote:
> >  The patch requires Xen 4.1 or later. 
> So, in short, is this a version problem? And do you have a version of the
> valgrind patch that works with Xen 3.3.0?

No I don't.

3.3.0 is a pretty ancient version of Xen (released in 2008, 3 major
releases ago). If you don't want to base your work on something newer
then I'm afraid you will need to backport the patch yourself.

Ian.
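
One shape such a backport could take is to guard the newer cases so the wrappers still compile against older headers. The example below is purely illustrative and not the actual patch code; it assumes MMUEXT_MARK_SUPER and MMUEXT_UNMARK_SUPER are preprocessor defines present only in newer Xen public headers, and the real syswrap-xen.c of course switches on the hypercall arguments rather than a bare integer.

/* backport_sketch.c -- illustration of conditional compilation for a
 * backport; not the actual valgrind patch code. */
#include <stdint.h>
#include <xen/xen.h>

static void check_mmuext_op(unsigned int op)
{
    switch (op) {
#ifdef MMUEXT_MARK_SUPER            /* present only in newer Xen headers */
    case MMUEXT_MARK_SUPER:
    case MMUEXT_UNMARK_SUPER:
        /* pre/post checks for the superpage ops would go here */
        break;
#endif
    default:
        /* ops that exist in all supported header versions */
        break;
    }
}

int main(void)
{
    check_mmuext_op(0);
    return 0;
}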

> 
> --
> View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4593261.html
> Sent from the Xen - Dev mailing list archive at Nabble.com.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: How to use valgrind to detect xen hypervisor's memory leak
  2011-07-16  6:43                 ` Ian Campbell
@ 2011-07-16 15:24                   ` hellokitty
  0 siblings, 0 replies; 13+ messages in thread
From: hellokitty @ 2011-07-16 15:24 UTC (permalink / raw)
  To: xen-devel

Thank you, Ian. I finally got Xen 4.1.1, patched valgrind with your
attachment "X", and it works!

Thank you so much; now I can use the patched valgrind to get on with my work!

Thank you.

--
View this message in context: http://xen.1045712.n5.nabble.com/How-to-use-valgrind-to-detect-xen-hypervisor-s-memory-leak-tp4589174p4594076.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2011-07-16 15:24 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-07-15  2:42 How to use valgrind to detect xen hypervisor's memory leak hellokitty
2011-07-15  7:19 ` Ian Campbell
2011-07-15  9:15   ` hellokitty
2011-07-15  9:43     ` Ian Campbell
2011-07-15 13:39       ` hellokitty
2011-07-15 15:15         ` Ian Campbell
2011-07-16  2:33           ` hellokitty
2011-07-15 15:35         ` Ian Campbell
2011-07-16  6:19           ` hellokitty
2011-07-16  6:23             ` Ian Campbell
2011-07-16  6:40               ` hellokitty
2011-07-16  6:43                 ` Ian Campbell
2011-07-16 15:24                   ` hellokitty
