From: Peter Xu <peterx@redhat.com>
To: "Liu, Yuan1" <yuan1.liu@intel.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>,
	"farosas@suse.de" <farosas@suse.de>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"hao.xiang@bytedance.com" <hao.xiang@bytedance.com>,
	"bryan.zhang@bytedance.com" <bryan.zhang@bytedance.com>,
	"Zou, Nanhai" <nanhai.zou@intel.com>
Subject: Re: [PATCH v5 5/7] migration/multifd: implement initialization of qpl compression
Date: Thu, 28 Mar 2024 11:16:04 -0400	[thread overview]
Message-ID: <ZgWJtFtbpXDvelvh@x1n> (raw)
In-Reply-To: <PH7PR11MB59417CED1514B574523D6B1CA33B2@PH7PR11MB5941.namprd11.prod.outlook.com>

On Thu, Mar 28, 2024 at 02:32:37AM +0000, Liu, Yuan1 wrote:
> > -----Original Message-----
> > From: Peter Xu <peterx@redhat.com>
> > Sent: Thursday, March 28, 2024 3:26 AM
> > To: Liu, Yuan1 <yuan1.liu@intel.com>
> > Cc: Daniel P. Berrangé <berrange@redhat.com>; farosas@suse.de; qemu-
> > devel@nongnu.org; hao.xiang@bytedance.com; bryan.zhang@bytedance.com; Zou,
> > Nanhai <nanhai.zou@intel.com>
> > Subject: Re: [PATCH v5 5/7] migration/multifd: implement initialization of
> > qpl compression
> > 
> > On Fri, Mar 22, 2024 at 12:40:32PM -0400, Peter Xu wrote:
> > > > > void multifd_recv_zero_page_process(MultiFDRecvParams *p)
> > > > > {
> > > > >     for (int i = 0; i < p->zero_num; i++) {
> > > > >         void *page = p->host + p->zero[i];
> > > > >         if (!buffer_is_zero(page, p->page_size)) {
> > > > >             memset(page, 0, p->page_size);
> > > > >         }
> > > > >     }
> > > > > }
> > >
> > > It may not matter much (where I also see your below comments), but just
> > > to mention another solution to avoid this read: we can maintain
> > > RAMBlock->receivedmap for precopy (especially multifd; afaiu multifd
> > > doesn't yet update this bitmap, even if normal precopy does), then here,
> > > instead of scanning every time, maybe we can do:
> > >
> > >   /*
> > >    * If it's the 1st time receiving it, no need to clear it as it must be
> > >    * all zeros now.
> > >    */
> > >   if (bitmap_test(rb->receivedmap, page_offset)) {
> > >       memset(page, 0, ...);
> > >   } else {
> > >       bitmap_set(rb->receivedmap, page_offset);
> > >   }
> > >
> > > And we also always set the bit when !zero too.
> > >
> > > My rationale is that it's unlikely to be a zero page if it has been sent
> > > once or more, while OTOH for the 1st time we receive it, it must be a zero
> > > page, so no need to scan for the 1st round.
> > 
> > Thinking about this, I'm wondering whether we should have this regardless.
> > IIUC multifd currently always requires two page faults on the destination
> > for anonymous guest memory (I suppose shmem/hugetlb is fine, as there are no
> > zero pages in those worlds).  Even though it should be faster than DMA
> > faults, it is still unwanted.
> > 
> > I'll take a note myself as todo to do some measurements in the future
> > first.  However if anyone thinks that makes sense and want to have a look,
> > please say so.  It'll be more than welcomed.
> 
> Yes, I think this is a worthwhile improvement to avoid the two page faults.
> I can test the performance impact of this change on SVM-capable devices and
> share some data later. As we saw before, the IOTLB flush occurs via COW; with
> this change, the impact of the COW should be gone.
> 
> If you need more testing and analysis on this, please let me know.

Nothing more than that.  Just a heads-up that Xiang once mentioned a test
case where Richard suggested dropping the zero check:

https://lore.kernel.org/r/CAAYibXib+TWnJpV22E=adncdBmwXJRqgRjJXK7X71J=bDfaxDg@mail.gmail.com

AFAIU this should be resolved if we have the bitmap maintained, but we can
double check.  IIUC that's exactly the case for an idle guest; in that case
it should be even faster, since we can skip the memcmp when the bit is clear.
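To make the idea concrete, here is a minimal self-contained C sketch of the
bitmap-based zero-page handling discussed above.  This is an illustrative
model, not QEMU code: the helper names (recv_zero_page, recv_normal_page),
the byte-per-page receivedmap, and the clears counter are all assumptions
for demonstration; QEMU's actual receivedmap is a per-RAMBlock bit array.

```c
#include <assert.h>
#include <string.h>

enum { PAGE_SIZE = 4096, NPAGES = 8 };

static unsigned char receivedmap[NPAGES];          /* one byte per page: seen before? */
static unsigned char guest_mem[NPAGES * PAGE_SIZE]; /* models anonymous guest RAM */
static int clears;                                  /* how often we had to memset */

/* Receive a zero page for page 'idx'. */
static void recv_zero_page(int idx)
{
    unsigned char *page = &guest_mem[idx * PAGE_SIZE];

    if (receivedmap[idx]) {
        /* Page was written by an earlier pass: must clear it now. */
        memset(page, 0, PAGE_SIZE);
        clears++;
    } else {
        /*
         * First time we receive this page: anonymous memory is zero-filled
         * on first fault anyway, so skip the write (and the write fault),
         * and only mark the page as received.
         */
        receivedmap[idx] = 1;
    }
}

/* A normal (non-zero) page also sets the bit, as noted in the thread. */
static void recv_normal_page(int idx, const unsigned char *data)
{
    memcpy(&guest_mem[idx * PAGE_SIZE], data, PAGE_SIZE);
    receivedmap[idx] = 1;
}
```

The point of the sketch is the asymmetry: the first receive of a zero page
costs nothing (no read, no write), and only repeated receives pay for the
memset.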

If you're going to post the patches, feel free to post that as a standalone
small series first; then it can be considered for merge even earlier.

Thanks a lot for doing this.

-- 
Peter Xu



Thread overview: 45+ messages
2024-03-19 16:45 [PATCH v5 0/7] Live Migration With IAA Yuan Liu
2024-03-19 16:45 ` [PATCH v5 1/7] docs/migration: add qpl compression feature Yuan Liu
2024-03-26 17:58   ` Peter Xu
2024-03-27  2:14     ` Liu, Yuan1
2024-03-19 16:45 ` [PATCH v5 2/7] migration/multifd: put IOV initialization into compression method Yuan Liu
2024-03-20 15:18   ` Fabiano Rosas
2024-03-20 15:32     ` Liu, Yuan1
2024-03-19 16:45 ` [PATCH v5 3/7] configure: add --enable-qpl build option Yuan Liu
2024-03-20  8:55   ` Thomas Huth
2024-03-20  8:56     ` Thomas Huth
2024-03-20 14:34       ` Liu, Yuan1
2024-03-20 10:31   ` Daniel P. Berrangé
2024-03-20 14:42     ` Liu, Yuan1
2024-03-19 16:45 ` [PATCH v5 4/7] migration/multifd: add qpl compression method Yuan Liu
2024-03-27 19:49   ` Peter Xu
2024-03-28  3:03     ` Liu, Yuan1
2024-03-19 16:45 ` [PATCH v5 5/7] migration/multifd: implement initialization of qpl compression Yuan Liu
2024-03-20 10:42   ` Daniel P. Berrangé
2024-03-20 15:02     ` Liu, Yuan1
2024-03-20 15:20       ` Daniel P. Berrangé
2024-03-20 16:04         ` Liu, Yuan1
2024-03-20 15:34       ` Peter Xu
2024-03-20 16:23         ` Liu, Yuan1
2024-03-20 20:31           ` Peter Xu
2024-03-21  1:37             ` Liu, Yuan1
2024-03-21 15:28               ` Peter Xu
2024-03-22  2:06                 ` Liu, Yuan1
2024-03-22 14:47                   ` Liu, Yuan1
2024-03-22 16:40                     ` Peter Xu
2024-03-27 19:25                       ` Peter Xu
2024-03-28  2:32                         ` Liu, Yuan1
2024-03-28 15:16                           ` Peter Xu [this message]
2024-03-29  2:04                             ` Liu, Yuan1
2024-03-19 16:45 ` [PATCH v5 6/7] migration/multifd: implement qpl compression and decompression Yuan Liu
2024-03-19 16:45 ` [PATCH v5 7/7] tests/migration-test: add qpl compression test Yuan Liu
2024-03-20 10:45   ` Daniel P. Berrangé
2024-03-20 15:30     ` Liu, Yuan1
2024-03-20 15:39       ` Daniel P. Berrangé
2024-03-20 16:26         ` Liu, Yuan1
2024-03-26 20:30 ` [PATCH v5 0/7] Live Migration With IAA Peter Xu
2024-03-27  3:20   ` Liu, Yuan1
2024-03-27 19:46     ` Peter Xu
2024-03-28  3:02       ` Liu, Yuan1
2024-03-28 15:22         ` Peter Xu
2024-03-29  3:33           ` Liu, Yuan1
