From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Wei Yang <richardw.yang@linux.intel.com>
Cc: qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [PATCH 0/6] migration/postcopy: enable compress during postcopy
Date: Thu, 7 Nov 2019 12:06:23 +0000 [thread overview]
Message-ID: <20191107120623.GH2816@work-vm> (raw)
In-Reply-To: <20191107120332.GA25593@richard>
* Wei Yang (richardw.yang@linux.intel.com) wrote:
> On Thu, Nov 07, 2019 at 09:15:44AM +0000, Dr. David Alan Gilbert wrote:
> >* Wei Yang (richardw.yang@linux.intel.com) wrote:
> >> On Wed, Nov 06, 2019 at 08:11:44PM +0000, Dr. David Alan Gilbert wrote:
> >> >* Wei Yang (richardw.yang@linux.intel.com) wrote:
> >> >> This patch set tries to enable compress during postcopy.
> >> >>
> >> >> Postcopy requires placing a whole host page at once, while the migration
> >> >> thread migrates memory in target-page-sized chunks. This means postcopy
> >> >> needs to collect all target pages of one host page before placing them
> >> >> via userfaultfd.
> >> >>
> >> >> To enable compress during postcopy, there are two problems to solve:
> >> >>
> >> >> 1. Target pages may arrive in random order
> >> >> 2. The target pages of one host page must arrive without being
> >> >> interleaved with target pages from other host pages
> >> >>
> >> >> The first one is handled by counting the number of target pages that
> >> >> have arrived, instead of relying on the last target page arriving last.
> >> >>
> >> >> The second one is handled by:
> >> >>
> >> >> 1. Flush the compress threads for each host page
> >> >> 2. Wait for the decompress threads before placing a host page
> >> >>
> >> >> With the combination of these two changes, compress is enabled during
> >> >> postcopy.
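[Editorial note: a minimal sketch of the counting idea in solution 1. All names here are hypothetical, for illustration only; they are not the actual QEMU code.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-host-page receive state. */
typedef struct {
    size_t target_page_size;   /* e.g. 4K */
    size_t host_page_size;     /* e.g. 2M with hugetlbfs */
    size_t target_pages_seen;  /* target pages received for the current host page */
} HostPageState;

/*
 * Called once per incoming target page.  Returns true when every
 * target page of the current host page has arrived - regardless of
 * the order in which they arrived - so the whole host page can be
 * placed via userfaultfd.
 */
static bool host_page_complete(HostPageState *s)
{
    size_t pages_per_host_page = s->host_page_size / s->target_page_size;

    if (++s->target_pages_seen == pages_per_host_page) {
        s->target_pages_seen = 0;  /* reset for the next host page */
        return true;
    }
    return false;
}
```

Counting arrivals rather than watching for the "last" page is what makes random arrival order (problem 1) harmless.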
> >> >
> >> >What have you tested this with? 2MB huge pages I guess?
> >> >
> >>
> >> I tried with this qemu option:
> >>
> >> -object memory-backend-file,id=mem1,mem-path=/dev/hugepages/guest2,size=4G \
> >> -device pc-dimm,id=dimm1,memdev=mem1
> >>
> >> /dev/hugepages/guest2 is a file under hugetlbfs
> >>
> >> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
> >
> >OK, yes that should be fine.
> >I suspect on Power/ARM, where normal memory uses 16K/64K pages, the
> >cost of the flush will mean compression is more expensive in postcopy
> >mode; but it still makes it possible.
> >
>
> I don't quite get your point about it being more expensive. You mean more
> expensive on ARM/Power?
Yes; you're doing a flush at the end of each host page. On x86 without
hugepages you don't do anything extra, while on ARM/Power you'll need to
do a flush at the end of each of their normal pages - so that's a bit
more expensive.
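[Editorial note: a back-of-envelope illustration of that cost, assuming a 4K target page; the function name is invented for this sketch.]

```c
/* Postcopy with compress flushes the compress threads once per host
 * page, so the flush count for a given amount of guest memory is set
 * by the host page size.  On x86 without hugepages the host page
 * equals the target page, so no flush is needed at all. */
static unsigned long flushes_per_gb(unsigned long host_page_size)
{
    return (1024UL * 1024 * 1024) / host_page_size;
}

/* x86 2M hugepages: flushes_per_gb(2UL << 20)  == 512
 * ARM 64K pages:    flushes_per_gb(64UL << 10) == 16384
 * Power 16K pages:  flushes_per_gb(16UL << 10) == 65536 */
```

The smaller host page on ARM/Power means one to two orders of magnitude more flushes than the 2M-hugepage case, which is where the extra expense comes from.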
> If the solution looks good to you, I would prepare v2.
Yes; I think it is OK.
Dave
> >Dave
> >
> >> >Dave
> >> >
> >> >> Wei Yang (6):
> >> >> migration/postcopy: reduce memset when it is zero page and
> >> >> matches_target_page_size
> >> >> migration/postcopy: wait for decompress thread in precopy
> >> >> migration/postcopy: count target page number to decide the
> >> >> place_needed
> >> >> migration/postcopy: set all_zero to true on the first target page
> >> >> migration/postcopy: enable random order target page arrival
> >> >> migration/postcopy: enable compress during postcopy
> >> >>
> >> >> migration/migration.c | 11 --------
> >> >> migration/ram.c | 65 ++++++++++++++++++++++++++++++-------------
> >> >> 2 files changed, 45 insertions(+), 31 deletions(-)
> >> >>
> >> >> --
> >> >> 2.17.1
> >> >>
> >> >--
> >> >Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >>
> >> --
> >> Wei Yang
> >> Help you, Help me
> >--
> >Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
> --
> Wei Yang
> Help you, Help me
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Thread overview: 22+ messages
2019-10-18 0:48 [PATCH 0/6] migration/postcopy: enable compress during postcopy Wei Yang
2019-10-18 0:48 ` [PATCH 1/6] migration/postcopy: reduce memset when it is zero page and matches_target_page_size Wei Yang
2019-11-06 18:18 ` Dr. David Alan Gilbert
2019-10-18 0:48 ` [PATCH 2/6] migration/postcopy: wait for decompress thread in precopy Wei Yang
2019-11-06 19:57 ` Dr. David Alan Gilbert
2019-10-18 0:48 ` [PATCH 3/6] migration/postcopy: count target page number to decide the place_needed Wei Yang
2019-11-06 19:59 ` Dr. David Alan Gilbert
2019-10-18 0:48 ` [PATCH 4/6] migration/postcopy: set all_zero to true on the first target page Wei Yang
2019-11-06 20:04 ` Dr. David Alan Gilbert
2019-10-18 0:48 ` [PATCH 5/6] migration/postcopy: enable random order target page arrival Wei Yang
2019-11-06 20:08 ` Dr. David Alan Gilbert
2019-11-07 6:00 ` Wei Yang
2019-11-07 9:14 ` Dr. David Alan Gilbert
2019-10-18 0:48 ` [PATCH 6/6] migration/postcopy: enable compress during postcopy Wei Yang
2019-11-06 19:55 ` Dr. David Alan Gilbert
2019-10-18 16:50 ` [PATCH 0/6] " no-reply
2019-10-19 0:15 ` Wei Yang
2019-11-06 20:11 ` Dr. David Alan Gilbert
2019-11-07 6:02 ` Wei Yang
2019-11-07 9:15 ` Dr. David Alan Gilbert
2019-11-07 12:03 ` Wei Yang
2019-11-07 12:06 ` Dr. David Alan Gilbert [this message]