xen-devel.lists.xenproject.org archive mirror
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [PATCH] libxc: fix memory leak in migration v2
Date: Fri, 24 Jul 2015 19:00:41 -0400	[thread overview]
Message-ID: <30ADA84E-7C85-4927-9E64-CFCB44E19245@oracle.com> (raw)
In-Reply-To: <1437777627-15758-1-git-send-email-wei.liu2@citrix.com>

On July 24, 2015 6:40:27 PM EDT, Wei Liu <wei.liu2@citrix.com> wrote:
>Originally there was only one counter to keep track of pages. It was
>used erroneously to keep track of how many pages were mapped and how
>many pages needed to be send. In the


s/send/sent/

> end munmap(2) always has 0 as the
>length argument, which resulted in leaking the mapping.
>
>This problem is discovered on 32bit toolstack because 32bit application

s/is/was/

>has notably smaller address space. In fact this bug affects 64bit
>toolstack too.

s/has/have/

>
>Use a separate counter to keep track of how many pages to send to solve
>this issue.
>
>Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>---
>Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>Cc: Ian Campbell <ian.campbell@citrix.com>
>Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>---
> tools/libxc/xc_sr_save.c | 18 ++++++++++--------
> 1 file changed, 10 insertions(+), 8 deletions(-)
>
>diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
>index d63b783..bbcfeb6 100644
>--- a/tools/libxc/xc_sr_save.c
>+++ b/tools/libxc/xc_sr_save.c
>@@ -84,7 +84,7 @@ static int write_batch(struct xc_sr_context *ctx)
>     void **guest_data = NULL;
>     void **local_pages = NULL;
>     int *errors = NULL, rc = -1;
>-    unsigned i, p, nr_pages = 0;
>+    unsigned i, p, nr_pages = 0, nr_pages_to_send;
>     unsigned nr_pfns = ctx->save.nr_batch_pfns;
>     void *page, *orig_page;
>     uint64_t *rec_pfns = NULL;
>@@ -151,10 +151,12 @@ static int write_batch(struct xc_sr_context *ctx)
>         mfns[nr_pages++] = mfns[i];
>     }
> 
>-    if ( nr_pages > 0 )
>+    nr_pages_to_send = nr_pages;
>+
>+    if ( nr_pages_to_send > 0 )
>     {
>         guest_mapping = xc_map_foreign_bulk(
>-            xch, ctx->domid, PROT_READ, mfns, errors, nr_pages);
>+            xch, ctx->domid, PROT_READ, mfns, errors, nr_pages_to_send);
>         if ( !guest_mapping )
>         {
>             PERROR("Failed to map guest pages");
>@@ -191,7 +193,7 @@ static int write_batch(struct xc_sr_context *ctx)
>                     set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
>                     ++ctx->save.nr_deferred_pages;
>                     types[i] = XEN_DOMCTL_PFINFO_XTAB;
>-                    --nr_pages;
>+                    --nr_pages_to_send;
>                 }
>                 else
>                     goto err;
>@@ -216,7 +218,7 @@ static int write_batch(struct xc_sr_context *ctx)
> 
>     rec.length = sizeof(hdr);
>     rec.length += nr_pfns * sizeof(*rec_pfns);
>-    rec.length += nr_pages * PAGE_SIZE;
>+    rec.length += nr_pages_to_send * PAGE_SIZE;
> 
>     for ( i = 0; i < nr_pfns; ++i )
>         rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->save.batch_pfns[i];
>@@ -235,7 +237,7 @@ static int write_batch(struct xc_sr_context *ctx)
> 
>     iovcnt = 4;
> 
>-    if ( nr_pages )
>+    if ( nr_pages_to_send )
>     {
>         for ( i = 0; i < nr_pfns; ++i )
>         {
>@@ -244,7 +246,7 @@ static int write_batch(struct xc_sr_context *ctx)
>                 iov[iovcnt].iov_base = guest_data[i];
>                 iov[iovcnt].iov_len = PAGE_SIZE;
>                 iovcnt++;
>-                --nr_pages;
>+                --nr_pages_to_send;
>             }
>         }
>     }
>@@ -256,7 +258,7 @@ static int write_batch(struct xc_sr_context *ctx)
>     }
> 
>     /* Sanity check we have sent all the pages we expected to. */
>-    assert(nr_pages == 0);
>+    assert(nr_pages_to_send == 0);
>     rc = ctx->save.nr_batch_pfns = 0;
> 
>  err:


Thread overview: 4+ messages
2015-07-24 22:40 [PATCH] libxc: fix memory leak in migration v2 Wei Liu
2015-07-24 23:00 ` Konrad Rzeszutek Wilk [this message]
2015-07-26 16:47 ` Andrew Cooper
2015-07-27  8:04   ` Ian Campbell
