* [PATCH] libxc: Replace alloca() with mmap() in linux_privcmd_map_foreign_bulk()
@ 2012-04-19 22:24 Aravindh Puthiyaparambil
  2012-04-20 12:36 ` Ian Campbell
  0 siblings, 1 reply; 5+ messages in thread
From: Aravindh Puthiyaparambil @ 2012-04-19 22:24 UTC (permalink / raw)
  To: xen-devel; +Cc: Andres Lagar-Cavilla

When mapping in large numbers of pages (in the GB range) from a guest into Dom0 using xc_map_foreign_bulk(), a segfault occurs in the libxc client application. This is because the pfn array in linux_privcmd_map_foreign_bulk() is allocated with alloca(), and the subsequent memcpy blows the stack. This patch replaces the alloca() with mmap().
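
For a sense of scale (assuming 64-bit xen_pfn_t entries): mapping 4GB of a guest is roughly one million 4KB pages, so the pfn array alone is about 8MB -- on the order of a typical default stack limit -- before the memcpy ever runs.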

Signed-off-by: Aravindh Puthiyaparambil <aravindh@virtuata.com>
Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff -r 7c777cb8f705 -r 2f68aefc46c3 tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Wed Apr 18 16:49:55 2012 +0100
+++ b/tools/libxc/xc_linux_osdep.c	Thu Apr 19 15:21:43 2012 -0700
@@ -39,6 +39,7 @@
 #include "xenctrl.h"
 #include "xenctrlosdep.h"
 
+#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #define ERROR(_m, _a...)  xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m , ## _a )
 #define PERROR(_m, _a...) xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m \
                   " (%d = %s)", ## _a , errno, xc_strerror(xch, errno))
@@ -286,7 +287,14 @@ static void *linux_privcmd_map_foreign_b
          * IOCTL_PRIVCMD_MMAPBATCH.
          */
         privcmd_mmapbatch_t ioctlx;
-        xen_pfn_t *pfn = alloca(num * sizeof(*pfn));
+        xen_pfn_t *pfn = mmap(NULL, ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT),
+                              PROT_READ | PROT_WRITE,
+                              MAP_PRIVATE | MAP_ANON | MAP_POPULATE, -1, 0);
+        if ( pfn == MAP_FAILED )
+        {
+            PERROR("xc_map_foreign_bulk: mmap of pfn array failed");
+            return NULL;
+        }
 
         memcpy(pfn, arr, num * sizeof(*arr));
 
@@ -328,6 +336,8 @@ static void *linux_privcmd_map_foreign_b
             break;
         }
 
+        munmap(pfn, ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT));
+
         if ( rc == -ENOENT && i == num )
             rc = 0;
         else if ( rc )


* Re: [PATCH] libxc: Replace alloca() with mmap() in linux_privcmd_map_foreign_bulk()
  2012-04-19 22:24 [PATCH] libxc: Replace alloca() with mmap() in linux_privcmd_map_foreign_bulk() Aravindh Puthiyaparambil
@ 2012-04-20 12:36 ` Ian Campbell
  2012-04-20 14:08   ` Andres Lagar-Cavilla
  0 siblings, 1 reply; 5+ messages in thread
From: Ian Campbell @ 2012-04-20 12:36 UTC (permalink / raw)
  To: Aravindh Puthiyaparambil; +Cc: xen-devel, Andres Lagar-Cavilla, Santosh Jodh

On Thu, 2012-04-19 at 23:24 +0100, Aravindh Puthiyaparambil wrote:
> When mapping in large amounts of pages (in the GB range) from a guest
> in to Dom0 using xc_map_foreign_bulk(), a segfault occurs in the libxc
> client application. This is because the pfn array in
> linux_privcmd_map_foreign_bulk() is being allocated using alloca() and
> the subsequent memcpy causes the stack to blow. This patch replaces
> the alloca() with mmap().

The original reason for switching from malloc to alloca was that the
similar gntmap path was a hot path, and it seemed that if alloca was
reasonable there it would be reasonable here too.

So I have a couple of questions...

Is it possible that we also introduced this same issue at the other
callsites which were changed in that patch or are there other
constraints on the size of the allocation which make it not a problem?

Where does this mmap approach fall in terms of performance relative to
just switching back to malloc? I presume that alloca is faster than
both, but if it doesn't work reliably then it's not an option.

If mmap and malloc have roughly equivalent performance properties, or
the faster one is still too slow for Santosh's use case, then maybe we
need to have a think about other options.

e.g. use alloca for small numbers of mappings and mmap/malloc for larger
ones. Or do large numbers of mappings in multiple batches.
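
For the batching idea, a rough caller-side sketch (assuming the usual
xc_map_foreign_bulk() signature; the batch size and the map_in_batches()
helper are purely illustrative, not existing libxc API):

#include <stdint.h>
#include "xenctrl.h"

#define BATCH_PFNS (256 * 1024)  /* illustrative: ~2MB of xen_pfn_t per batch */

/*
 * Hypothetical helper: map a large guest range in bounded batches so
 * that no single xc_map_foreign_bulk() call needs a huge temporary
 * pfn array.  Each batch comes back as its own mapping, so the caller
 * keeps one pointer per batch and must munmap each one separately.
 */
static int map_in_batches(xc_interface *xch, uint32_t dom, int prot,
                          const xen_pfn_t *arr, int *err,
                          unsigned int num, void **batches)
{
    unsigned int done, b = 0;

    for ( done = 0; done < num; done += BATCH_PFNS, b++ )
    {
        unsigned int n = num - done < BATCH_PFNS ? num - done : BATCH_PFNS;

        batches[b] = xc_map_foreign_bulk(xch, dom, prot,
                                         arr + done, err + done, n);
        if ( batches[b] == NULL )
            return -1;
    }

    return 0;
}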

Ian.

> 
> Signed-off-by: Aravindh Puthiyaparambil <aravindh@virtuata.com>
> Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> diff -r 7c777cb8f705 -r 2f68aefc46c3 tools/libxc/xc_linux_osdep.c
> --- a/tools/libxc/xc_linux_osdep.c	Wed Apr 18 16:49:55 2012 +0100
> +++ b/tools/libxc/xc_linux_osdep.c	Thu Apr 19 15:21:43 2012 -0700
> @@ -39,6 +39,7 @@
>  #include "xenctrl.h"
>  #include "xenctrlosdep.h"
>  
> +#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
>  #define ERROR(_m, _a...)  xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m , ## _a )
>  #define PERROR(_m, _a...) xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m \
>                    " (%d = %s)", ## _a , errno, xc_strerror(xch, errno))
> @@ -286,7 +287,14 @@ static void *linux_privcmd_map_foreign_b
>           * IOCTL_PRIVCMD_MMAPBATCH.
>           */
>          privcmd_mmapbatch_t ioctlx;
> -        xen_pfn_t *pfn = alloca(num * sizeof(*pfn));
> +        xen_pfn_t *pfn = mmap(NULL, ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT),
> +                              PROT_READ | PROT_WRITE,
> +                              MAP_PRIVATE | MAP_ANON | MAP_POPULATE, -1, 0);
> +        if ( pfn == MAP_FAILED )
> +        {
> +            PERROR("xc_map_foreign_bulk: mmap of pfn array failed");
> +            return NULL;
> +        }
>  
>          memcpy(pfn, arr, num * sizeof(*arr));
>  
> @@ -328,6 +336,8 @@ static void *linux_privcmd_map_foreign_b
>              break;
>          }
>  
> +        munmap(pfn, ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT));
> +
>          if ( rc == -ENOENT && i == num )
>              rc = 0;
>          else if ( rc )
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: [PATCH] libxc: Replace alloca() with mmap() in linux_privcmd_map_foreign_bulk()
  2012-04-20 12:36 ` Ian Campbell
@ 2012-04-20 14:08   ` Andres Lagar-Cavilla
  2012-04-20 14:15     ` Ian Campbell
  0 siblings, 1 reply; 5+ messages in thread
From: Andres Lagar-Cavilla @ 2012-04-20 14:08 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Aravindh Puthiyaparambil, xen-devel, Santosh Jodh

> On Thu, 2012-04-19 at 23:24 +0100, Aravindh Puthiyaparambil wrote:
>> When mapping in large amounts of pages (in the GB range) from a guest
>> in to Dom0 using xc_map_foreign_bulk(), a segfault occurs in the libxc
>> client application. This is because the pfn array in
>> linux_privcmd_map_foreign_bulk() is being allocated using alloca() and
>> the subsequent memcpy causes the stack to blow. This patch replaces
>> the alloca() with mmap().
>
> The original reason for switching to alloca from malloc was because the
> similar gntmap path was a hot path and it seemed like if it was
> reasonable there it would be reasonable here too.
>
> So I have a couple of questions...
>
> Is it possible that we also introduced this same issue at the other
> callsites which were changed in that patch or are there other
> constraints on the size of the allocation which make it not a problem?

I don't know how arbitrary or large the buffer sizes can be at the other
callsites. It would seem that they can be large enough.
>
> Where does this mmap approach fall in terms of performance relative to
> just switching back to malloc? I presume that alloca is faster than
> both, but if it doesn't work reliably then it's not an option.

It's hard to dig out exactly how alloca is implemented, but it would seem
to simply move the stack pointer and return the old value (it's arguable
whether there is a buggy check for limits or a buggy missing check against
the guard page -- the man page says "behavior undefined"). So, if the stack
pointer motion means new stack pages are needed, those pages will be
demand-allocated.

mmap just short-circuits straight to page allocation. Note that malloc may
or may not wind up calling mmap when it needs more pages, depending on its
internal machinations.
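
As an aside on glibc specifically: malloc switches to mmap for requests
above a tunable threshold (M_MMAP_THRESHOLD, 128KB by default), so the
cut-over point can be forced if one ever wanted malloc to behave like the
explicit mmap here. A glibc-only sketch:

#include <malloc.h>   /* glibc-specific: mallopt(), M_MMAP_THRESHOLD */

static void prefer_mmap_for_large_buffers(void)
{
    /*
     * Ask glibc to satisfy any allocation of 64KB or more directly via
     * mmap rather than by growing the heap, so big pfn arrays get (and,
     * on free, give back) their own pages, much like the explicit mmap
     * in the patch.  The 64KB figure is arbitrary.
     */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);
}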

MAP_POPULATE tells the mmap syscall to allocate the pages right now,
avoiding demand allocation. Presumably this batches all the page
allocations into a single syscall and avoids page faults down the line.

Short story: I suggested mmap with MAP_POPULATE because it does the page
allocation in the most efficient fashion, even better than malloc or
alloca. But it's only worth it if the buffer is big enough that new pages
will need to be allocated.

I think a reasonable compromise would be to use alloca if the buffer is
less than one page (a fairly common case -- single-pfn buffers, etc.) and
mmap(MAP_POPULATE) for buffers larger than a page.
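
A minimal sketch of that compromise, reusing the ROUNDUP macro from the
patch and assuming the XC_PAGE_* constants from xenctrl.h; the helper
names (alloc_pfn_array/free_pfn_array) are only illustrative:

#include <alloca.h>
#include <sys/mman.h>
#include "xenctrl.h"

/* As in the patch above. */
#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))

#define PFN_BYTES(num)    ROUNDUP((num) * sizeof(xen_pfn_t), XC_PAGE_SHIFT)
#define PFN_ON_STACK(num) (PFN_BYTES(num) <= XC_PAGE_SIZE)

/*
 * Sub-page pfn arrays come from the caller's stack via alloca() (cheap,
 * and covers the common single-pfn case); anything larger gets a
 * pre-faulted anonymous mapping so huge arrays cannot blow the stack.
 * alloca() has to stay inside a macro so the allocation lives in the
 * caller's frame.  Callers still need to check for MAP_FAILED when the
 * mmap path is taken.
 */
#define alloc_pfn_array(num)                                          \
    (PFN_ON_STACK(num)                                                \
     ? alloca((num) * sizeof(xen_pfn_t))                              \
     : mmap(NULL, PFN_BYTES(num), PROT_READ | PROT_WRITE,             \
            MAP_PRIVATE | MAP_ANON | MAP_POPULATE, -1, 0))

static inline void free_pfn_array(xen_pfn_t *pfn, unsigned int num)
{
    if ( !PFN_ON_STACK(num) )
        munmap(pfn, PFN_BYTES(num));
}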

Andres

>
> If mmap and malloc have roughly equivalent performance properties, or
> the fast one is still too slow for Santosh's use case, then maybe we
> need to have a think about other options.
>
> e.g. use alloca for small numbers of mappings and mmap/malloc for larger
> ones. Or do large numbers of mappings in multiple batches.
>
> Ian.
>
>>
>> Signed-off-by: Aravindh Puthiyaparambil <aravindh@virtuata.com>
>> Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>>
>> diff -r 7c777cb8f705 -r 2f68aefc46c3 tools/libxc/xc_linux_osdep.c
>> --- a/tools/libxc/xc_linux_osdep.c	Wed Apr 18 16:49:55 2012 +0100
>> +++ b/tools/libxc/xc_linux_osdep.c	Thu Apr 19 15:21:43 2012 -0700
>> @@ -39,6 +39,7 @@
>>  #include "xenctrl.h"
>>  #include "xenctrlosdep.h"
>>
>> +#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) &
>> ~((1UL<<(_w))-1))
>>  #define ERROR(_m, _a...)
>> xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m , ## _a )
>>  #define PERROR(_m, _a...)
>> xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m \
>>                    " (%d = %s)", ## _a , errno, xc_strerror(xch, errno))
>> @@ -286,7 +287,14 @@ static void *linux_privcmd_map_foreign_b
>>           * IOCTL_PRIVCMD_MMAPBATCH.
>>           */
>>          privcmd_mmapbatch_t ioctlx;
>> -        xen_pfn_t *pfn = alloca(num * sizeof(*pfn));
>> +        xen_pfn_t *pfn = mmap(NULL, ROUNDUP((num * sizeof(*pfn)),
>> XC_PAGE_SHIFT),
>> +                              PROT_READ | PROT_WRITE,
>> +                              MAP_PRIVATE | MAP_ANON | MAP_POPULATE,
>> -1, 0);
>> +        if ( pfn == MAP_FAILED )
>> +        {
>> +            PERROR("xc_map_foreign_bulk: mmap of pfn array failed");
>> +            return NULL;
>> +        }
>>
>>          memcpy(pfn, arr, num * sizeof(*arr));
>>
>> @@ -328,6 +336,8 @@ static void *linux_privcmd_map_foreign_b
>>              break;
>>          }
>>
>> +        munmap(pfn, ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT));
>> +
>>          if ( rc == -ENOENT && i == num )
>>              rc = 0;
>>          else if ( rc )
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>
>


* Re: [PATCH] libxc: Replace alloca() with mmap() in linux_privcmd_map_foreign_bulk()
  2012-04-20 14:08   ` Andres Lagar-Cavilla
@ 2012-04-20 14:15     ` Ian Campbell
  2012-04-20 15:12       ` Aravindh Puthiyaparambil
  0 siblings, 1 reply; 5+ messages in thread
From: Ian Campbell @ 2012-04-20 14:15 UTC (permalink / raw)
  To: andres; +Cc: Aravindh Puthiyaparambil, xen-devel, Santosh Jodh

On Fri, 2012-04-20 at 15:08 +0100, Andres Lagar-Cavilla wrote:
> > On Thu, 2012-04-19 at 23:24 +0100, Aravindh Puthiyaparambil wrote:
> >> When mapping in large amounts of pages (in the GB range) from a guest
> >> in to Dom0 using xc_map_foreign_bulk(), a segfault occurs in the libxc
> >> client application. This is because the pfn array in
> >> linux_privcmd_map_foreign_bulk() is being allocated using alloca() and
> >> the subsequent memcpy causes the stack to blow. This patch replaces
> >> the alloca() with mmap().
> >
> > The original reason for switching to alloca from malloc was because the
> > similar gntmap path was a hot path and it seemed like if it was
> > reasonable there it would be reasonable here too.
> >
> > So I have a couple of questions...
> >
> > Is it possible that we also introduced this same issue at the other
> > callsites which were changed in that patch or are there other
> > constraints on the size of the allocation which make it not a problem?
> 
> I don't know how arbitrary or large can the buffer sizes be in the other
> callsites. It would seem that they can be large enough.
> >
> > Where does this mmap approach fall in terms of performance relative to
> > just switching back to malloc? I presume that alloca is faster than
> > both, but if it doesn't work reliably then it's not an option.
> 
> It's hard to dig out exactly how alloca is implemented, but it would seem
> to simply move the stack pointer and return the old one (it's arguable
> whether there is a buggy check for limits or a buggy lack of check against
> the guard page going on -- man page says "behavior undefined"). So, if you
> need new pages in the stack as a result of the stack pointer motion, those
> pages will be demand allocated.
> 
> mmap just short circuits straight to page allocation. Note that malloc may
> or may not wind up calling mmap if it needs more pages, depending on its
> internal machinations.
> 
> MAP_POPULATE tells the mmap syscall to allocate right now, avoiding demand
> allocation. Presumably this will batch all page allocations in the single
> syscall, and avoid page faults down the line.
> 
> Short story, I suggested mmap with MAP_POPULATE because it will do the
> allocation of pages in the most optimal fashion, even better than malloc
> or alloca. But it's only worth if the buffer is big enough such that new
> pages will need to be allocated.

I think in these cases the whole buffer will always be used, since they
are sized precisely for what they need to contain.

> I think a reasonable compromise would be to do alloca if the buffer is
> less than one page (a fairly common case, single pfn buffers, etc), and do
> mmap(MAP_POPULATE) for buffers larger than a page.

I agree.

Ian.

> 
> Andres
> 
> >
> > If mmap and malloc have roughly equivalent performance properties, or
> > the fast one is still too slow for Santosh's use case, then maybe we
> > need to have a think about other options.
> >
> > e.g. use alloca for small numbers of mappings and mmap/malloc for larger
> > ones. Or do large numbers of mappings in multiple batches.
> >
> > Ian.
> >
> >>
> >> Signed-off-by: Aravindh Puthiyaparambil <aravindh@virtuata.com>
> >> Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> >>
> >> diff -r 7c777cb8f705 -r 2f68aefc46c3 tools/libxc/xc_linux_osdep.c
> >> --- a/tools/libxc/xc_linux_osdep.c	Wed Apr 18 16:49:55 2012 +0100
> >> +++ b/tools/libxc/xc_linux_osdep.c	Thu Apr 19 15:21:43 2012 -0700
> >> @@ -39,6 +39,7 @@
> >>  #include "xenctrl.h"
> >>  #include "xenctrlosdep.h"
> >>
> >> +#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) &
> >> ~((1UL<<(_w))-1))
> >>  #define ERROR(_m, _a...)
> >> xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m , ## _a )
> >>  #define PERROR(_m, _a...)
> >> xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m \
> >>                    " (%d = %s)", ## _a , errno, xc_strerror(xch, errno))
> >> @@ -286,7 +287,14 @@ static void *linux_privcmd_map_foreign_b
> >>           * IOCTL_PRIVCMD_MMAPBATCH.
> >>           */
> >>          privcmd_mmapbatch_t ioctlx;
> >> -        xen_pfn_t *pfn = alloca(num * sizeof(*pfn));
> >> +        xen_pfn_t *pfn = mmap(NULL, ROUNDUP((num * sizeof(*pfn)),
> >> XC_PAGE_SHIFT),
> >> +                              PROT_READ | PROT_WRITE,
> >> +                              MAP_PRIVATE | MAP_ANON | MAP_POPULATE,
> >> -1, 0);
> >> +        if ( pfn == MAP_FAILED )
> >> +        {
> >> +            PERROR("xc_map_foreign_bulk: mmap of pfn array failed");
> >> +            return NULL;
> >> +        }
> >>
> >>          memcpy(pfn, arr, num * sizeof(*arr));
> >>
> >> @@ -328,6 +336,8 @@ static void *linux_privcmd_map_foreign_b
> >>              break;
> >>          }
> >>
> >> +        munmap(pfn, ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT));
> >> +
> >>          if ( rc == -ENOENT && i == num )
> >>              rc = 0;
> >>          else if ( rc )
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org
> >> http://lists.xen.org/xen-devel
> >
> >
> >
> 
> 


* Re: [PATCH] libxc: Replace alloca() with mmap() in linux_privcmd_map_foreign_bulk()
  2012-04-20 14:15     ` Ian Campbell
@ 2012-04-20 15:12       ` Aravindh Puthiyaparambil
  0 siblings, 0 replies; 5+ messages in thread
From: Aravindh Puthiyaparambil @ 2012-04-20 15:12 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel, andres, Santosh Jodh



On Apr 20, 2012 7:15 AM, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
>
> On Fri, 2012-04-20 at 15:08 +0100, Andres Lagar-Cavilla wrote:
> > > On Thu, 2012-04-19 at 23:24 +0100, Aravindh Puthiyaparambil wrote:
> > >> When mapping in large amounts of pages (in the GB range) from a guest
> > >> in to Dom0 using xc_map_foreign_bulk(), a segfault occurs in the libxc
> > >> client application. This is because the pfn array in
> > >> linux_privcmd_map_foreign_bulk() is being allocated using alloca() and
> > >> the subsequent memcpy causes the stack to blow. This patch replaces
> > >> the alloca() with mmap().
> > >
> > > The original reason for switching to alloca from malloc was because the
> > > similar gntmap path was a hot path and it seemed like if it was
> > > reasonable there it would be reasonable here too.
> > >
> > > So I have a couple of questions...
> > >
> > > Is it possible that we also introduced this same issue at the other
> > > callsites which were changed in that patch or are there other
> > > constraints on the size of the allocation which make it not a problem?
> >
> > I don't know how arbitrary or large can the buffer sizes be in the other
> > callsites. It would seem that they can be large enough.
> > >
> > > Where does this mmap approach fall in terms of performance relative to
> > > just switching back to malloc? I presume that alloca is faster than
> > > both, but if it doesn't work reliably then it's not an option.
> >
> > It's hard to dig out exactly how alloca is implemented, but it would seem
> > to simply move the stack pointer and return the old one (it's arguable
> > whether there is a buggy check for limits or a buggy lack of check against
> > the guard page going on -- man page says "behavior undefined"). So, if you
> > need new pages in the stack as a result of the stack pointer motion, those
> > pages will be demand allocated.
> >
> > mmap just short circuits straight to page allocation. Note that malloc may
> > or may not wind up calling mmap if it needs more pages, depending on its
> > internal machinations.
> >
> > MAP_POPULATE tells the mmap syscall to allocate right now, avoiding demand
> > allocation. Presumably this will batch all page allocations in the single
> > syscall, and avoid page faults down the line.
> >
> > Short story, I suggested mmap with MAP_POPULATE because it will do the
> > allocation of pages in the most optimal fashion, even better than malloc
> > or alloca. But it's only worth if the buffer is big enough such that new
> > pages will need to be allocated.
>
> I think in these cases the whole buffer will always be used, since they
> are sized precisely for what they need to contain.
>
> > I think a reasonable compromise would be to do alloca if the buffer is
> > less than one page (a fairly common case, single pfn buffers, etc), and do
> > mmap(MAP_POPULATE) for buffers larger than a page.
>
> I agree.

Sounds good. I will send out a patch that does this.

> Ian.
>
> >
> > Andres
> >
> > >
> > > If mmap and malloc have roughly equivalent performance properties, or
> > > the fast one is still too slow for Santosh's use case, then maybe we
> > > need to have a think about other options.
> > >
> > > e.g. use alloca for small numbers of mappings and mmap/malloc for larger
> > > ones. Or do large numbers of mappings in multiple batches.
> > >
> > > Ian.
> > >
> > >>
> > >> Signed-off-by: Aravindh Puthiyaparambil <aravindh@virtuata.com>
> > >> Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > >>
> > >> diff -r 7c777cb8f705 -r 2f68aefc46c3 tools/libxc/xc_linux_osdep.c
> > >> --- a/tools/libxc/xc_linux_osdep.c Wed Apr 18 16:49:55 2012 +0100
> > >> +++ b/tools/libxc/xc_linux_osdep.c Thu Apr 19 15:21:43 2012 -0700
> > >> @@ -39,6 +39,7 @@
> > >>  #include "xenctrl.h"
> > >>  #include "xenctrlosdep.h"
> > >>
> > >> +#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) &
> > >> ~((1UL<<(_w))-1))
> > >>  #define ERROR(_m, _a...)
> > >> xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m , ## _a )
> > >>  #define PERROR(_m, _a...)
> > >> xc_osdep_log(xch,XTL_ERROR,XC_INTERNAL_ERROR,_m \
> > >>                    " (%d = %s)", ## _a , errno, xc_strerror(xch, errno))
> > >> @@ -286,7 +287,14 @@ static void *linux_privcmd_map_foreign_b
> > >>           * IOCTL_PRIVCMD_MMAPBATCH.
> > >>           */
> > >>          privcmd_mmapbatch_t ioctlx;
> > >> -        xen_pfn_t *pfn = alloca(num * sizeof(*pfn));
> > >> +        xen_pfn_t *pfn = mmap(NULL, ROUNDUP((num * sizeof(*pfn)),
> > >> XC_PAGE_SHIFT),
> > >> +                              PROT_READ | PROT_WRITE,
> > >> +                              MAP_PRIVATE | MAP_ANON | MAP_POPULATE,
> > >> -1, 0);
> > >> +        if ( pfn == MAP_FAILED )
> > >> +        {
> > >> +            PERROR("xc_map_foreign_bulk: mmap of pfn array failed");
> > >> +            return NULL;
> > >> +        }
> > >>
> > >>          memcpy(pfn, arr, num * sizeof(*arr));
> > >>
> > >> @@ -328,6 +336,8 @@ static void *linux_privcmd_map_foreign_b
> > >>              break;
> > >>          }
> > >>
> > >> +        munmap(pfn, ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT));
> > >> +
> > >>          if ( rc == -ENOENT && i == num )
> > >>              rc = 0;
> > >>          else if ( rc )
> > >>
> > >> _______________________________________________
> > >> Xen-devel mailing list
> > >> Xen-devel@lists.xen.org
> > >> http://lists.xen.org/xen-devel
> > >
> > >
> > >
> >
> >
>
>

