From: Juan Quintela <quintela@redhat.com>
To: guangrong.xiao@gmail.com
Cc: kvm@vger.kernel.org, mst@redhat.com, mtosatti@redhat.com,
	Xiao Guangrong <xiaoguangrong@tencent.com>,
	dgilbert@redhat.com, peterx@redhat.com, qemu-devel@nongnu.org,
	wei.w.wang@intel.com, jiang.biao2@zte.com.cn,
	pbonzini@redhat.com
Subject: Re: [PATCH v5 1/4] migration: do not flush_compressed_data at the end of each iteration
Date: Mon, 03 Sep 2018 18:38:08 +0200	[thread overview]
Message-ID: <87ftyq35an.fsf@trasno.org> (raw)
In-Reply-To: <20180903092644.25812-2-xiaoguangrong@tencent.com> (guangrong xiao's message of "Mon, 3 Sep 2018 17:26:41 +0800")

guangrong.xiao@gmail.com wrote:
> From: Xiao Guangrong <xiaoguangrong@tencent.com>
>
> flush_compressed_data() needs to wait for all compression threads to
> finish their work; after that, all threads are idle until the
> migration feeds new requests to them, so reducing the number of calls
> improves throughput and uses CPU resources more effectively.
> We do not need to flush all threads at the end of each iteration: the
> data can be kept locally until the memory block changes or memory
> migration starts over, in which case we may meet a dirtied page that
> still exists in a compression thread's ring.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>

I am not so sure about this patch.

Right now, we guarantee that after each iteration, all data is written
before we start a new round.
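
To make concrete what that guarantee costs, here is a standalone toy
model of the flush-all pattern (plain pthreads; all names here are made
up for illustration, this is not the code in flush_compressed_data()):

    /* Toy model (not QEMU code): flush blocks until *every* worker has
     * reported done, idle workers included. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NTHREADS 4

    static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;
    static bool done[NTHREADS];

    /* A worker calls this when its compression request finishes. */
    static void worker_finished(int i)
    {
        pthread_mutex_lock(&done_lock);
        done[i] = true;
        pthread_cond_broadcast(&done_cond);
        pthread_mutex_unlock(&done_lock);
    }

    /* The flush pattern under discussion: wait for all workers before
     * the next round may start. */
    static void flush_all(void)
    {
        pthread_mutex_lock(&done_lock);
        for (int i = 0; i < NTHREADS; i++) {
            while (!done[i]) {
                pthread_cond_wait(&done_cond, &done_lock);
            }
        }
        pthread_mutex_unlock(&done_lock);
        /* ...here each worker's output would go to the stream... */
    }

    int main(void)
    {
        for (int i = 0; i < NTHREADS; i++) {
            worker_finished(i);  /* pretend every worker just finished */
        }
        flush_all();
        puts("all workers flushed");
        return 0;
    }

Every call pays a full synchronization with all the threads, which is
why reducing the number of calls matters.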

This patch changes that to only "flush" the compression threads when we
have "synched" with the kvm migration bitmap.  The idea is good, but as
far as I can see:

- we already call flush_compressed_data() inside find_dirty_block() if
  we synchronize the bitmap.  So, at least, we need to update
  dirty_sync_count there.

- queued pages are "interesting", but I am not sure if compression and
  postcopy work well together.

So, if we don't need to call flush_compressed_data() every round, then
the one inside find_dirty_block() should be enough.  Otherwise, I can't
see why we need this other one.
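
To illustrate why that one flush could be sufficient, a toy model of
the round logic (again, made-up names; the real find_dirty_block() is
more involved):

    /* Toy model (not QEMU code): pages are scanned block by block; the
     * scan wrapping past the last block is the one point where a stale
     * compressed copy could race with a freshly dirtied page, so that
     * is the one point that needs a flush. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NBLOCKS 3

    static int cur_block;

    static void flush_compressed_data_stub(void)
    {
        puts("flush: drain all compression threads");
    }

    /* Advance the scan; returns true when a new round has started. */
    static bool next_block(void)
    {
        cur_block++;
        if (cur_block < NBLOCKS) {
            return false;
        }
        cur_block = 0;
        flush_compressed_data_stub();  /* new round: flush exactly here */
        return true;
    }

    int main(void)
    {
        for (int i = 0; i < 7; i++) {
            next_block();  /* flushes only twice in seven advances */
        }
        return 0;
    }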


Independent of this patch:
- We always send data for every compression thread without testing
  whether there is any data queued there.
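
Something like the following toy model is what I mean: skip the send
for a thread whose buffer is empty (made-up structures, not the real
ones):

    /* Toy model (not QEMU code): drain a worker's buffer only when it
     * actually holds data, instead of sending for every thread. */
    #include <stdio.h>

    #define NTHREADS 4

    struct comp_thread {
        size_t buf_len;  /* bytes of compressed data waiting to be sent */
    };

    static struct comp_thread threads[NTHREADS];

    static void send_buffer(struct comp_thread *t)
    {
        printf("send %zu bytes\n", t->buf_len);
        t->buf_len = 0;
    }

    static void flush_threads(void)
    {
        for (int i = 0; i < NTHREADS; i++) {
            if (threads[i].buf_len == 0) {
                continue;  /* nothing queued: skip this thread */
            }
            send_buffer(&threads[i]);
        }
    }

    int main(void)
    {
        threads[1].buf_len = 4096;  /* only one thread has pending data */
        flush_threads();  /* prints a single line */
        return 0;
    }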


> @@ -3212,7 +3225,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>          }
>          i++;
>      }
> -    flush_compressed_data(rs);
>      rcu_read_unlock();
>  
>      /*

Why is it not enough to just remove this call to flush_compressed_data()?

Later, Juan.


Thread overview: 24+ messages

2018-09-03  9:26 [PATCH v5 0/4] migration: compression optimization guangrong.xiao
2018-09-03  9:26 ` [PATCH v5 1/4] migration: do not flush_compressed_data at the end of each iteration guangrong.xiao
2018-09-03 16:38   ` Juan Quintela [this message]
2018-09-04  3:54     ` Xiao Guangrong
2018-09-04  4:00       ` Xiao Guangrong
2018-09-04  9:28       ` Juan Quintela
2018-09-03  9:26 ` [PATCH v5 2/4] migration: fix calculating xbzrle_counters.cache_miss_rate guangrong.xiao
2018-09-03 17:19   ` Juan Quintela
2018-09-03  9:26 ` [PATCH v5 3/4] migration: show the statistics of compression guangrong.xiao
2018-09-03 17:22   ` Juan Quintela
2018-09-03  9:26 ` [PATCH v5 4/4] migration: handle the error condition properly guangrong.xiao
2018-09-03 17:28   ` Juan Quintela
