From: Peter Xu <peterx@redhat.com>
To: huangy81@chinatelecom.cn
Cc: "Eduardo Habkost" <eduardo@habkost.net>,
	"David Hildenbrand" <david@redhat.com>,
	"Juan Quintela" <quintela@redhat.com>,
	"Richard Henderson" <richard.henderson@linaro.org>,
	qemu-devel <qemu-devel@nongnu.org>,
	"Markus ArmBruster" <armbru@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH v11 1/4] migration/dirtyrate: refactor dirty page rate calculation
Date: Mon, 17 Jan 2022 10:19:14 +0800
Message-ID: <YeTSIh2Osx7Yrjle@xz-m1.local>
In-Reply-To: <7cc032ae98e29471de57c00d3c0bd0fc5129ae23.1641316375.git.huangy81@chinatelecom.cn>

On Wed, Jan 05, 2022 at 01:14:06AM +0800, huangy81@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> 
> abstract out dirty log change logic into function
> global_dirty_log_change.
> 
> abstract out dirty page rate calculation logic via
> dirty-ring into function vcpu_calculate_dirtyrate.
> 
> abstract out mathematical dirty page rate calculation
> into do_calculate_dirtyrate, decouple it from DirtyStat.
> 
> rename set_sample_page_period to dirty_stat_wait, which
> is well-understood and will be reused in dirtylimit.
> 
> add cpu_list_lock to protect cpu list before walking
> through it in case of race against cpu hotplug/unplug.
> 
> export util functions outside migration.
> 
> Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> ---
>  include/sysemu/dirtyrate.h |  29 ++++++
>  migration/dirtyrate.c      | 220 ++++++++++++++++++++++++++++-----------------
>  migration/dirtyrate.h      |   7 +-
>  3 files changed, 171 insertions(+), 85 deletions(-)
>  create mode 100644 include/sysemu/dirtyrate.h
> 
> diff --git a/include/sysemu/dirtyrate.h b/include/sysemu/dirtyrate.h
> new file mode 100644
> index 0000000..cb6f02b
> --- /dev/null
> +++ b/include/sysemu/dirtyrate.h
> @@ -0,0 +1,29 @@
> +/*
> + * dirty page rate helper functions
> + *
> + * Copyright (c) 2022 CHINA TELECOM CO.,LTD.
> + *
> + * Authors:
> + *  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef QEMU_DIRTYRATE_H
> +#define QEMU_DIRTYRATE_H
> +
> +typedef struct VcpuStat {
> +    int nvcpu; /* number of vcpu */
> +    DirtyRateVcpu *rates; /* array of dirty rate for each vcpu */
> +} VcpuStat;
> +
> +int64_t vcpu_calculate_dirtyrate(int64_t calc_time_ms,
> +                                 int64_t init_time_ms,
> +                                 VcpuStat *stat,
> +                                 unsigned int flag,
> +                                 bool one_shot);
> +
> +void global_dirty_log_change(unsigned int flag,
> +                             bool start);
> +#endif
> diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
> index d65e744..1407455 100644
> --- a/migration/dirtyrate.c
> +++ b/migration/dirtyrate.c
> @@ -46,7 +46,7 @@ static struct DirtyRateStat DirtyStat;
>  static DirtyRateMeasureMode dirtyrate_mode =
>                  DIRTY_RATE_MEASURE_MODE_PAGE_SAMPLING;
>  
> -static int64_t set_sample_page_period(int64_t msec, int64_t initial_time)
> +static int64_t dirty_stat_wait(int64_t msec, int64_t initial_time)
>  {
>      int64_t current_time;
>  
> @@ -60,6 +60,128 @@ static int64_t set_sample_page_period(int64_t msec, int64_t initial_time)
>      return msec;
>  }
>  
> +static inline void record_dirtypages(DirtyPageRecord *dirty_pages,
> +                                     CPUState *cpu, bool start)
> +{
> +    if (start) {
> +        dirty_pages[cpu->cpu_index].start_pages = cpu->dirty_pages;
> +    } else {
> +        dirty_pages[cpu->cpu_index].end_pages = cpu->dirty_pages;
> +    }
> +}
> +
> +static int64_t do_calculate_dirtyrate(DirtyPageRecord dirty_pages,
> +                                      int64_t calc_time_ms)
> +{
> +    uint64_t memory_size_MB;
> +    uint64_t increased_dirty_pages =
> +        dirty_pages.end_pages - dirty_pages.start_pages;
> +
> +    memory_size_MB = (increased_dirty_pages * TARGET_PAGE_SIZE) >> 20;
> +
> +    return memory_size_MB * 1000 / calc_time_ms;
> +}
> +
> +void global_dirty_log_change(unsigned int flag, bool start)
> +{
> +    qemu_mutex_lock_iothread();
> +    if (start) {
> +        memory_global_dirty_log_start(flag);
> +    } else {
> +        memory_global_dirty_log_stop(flag);
> +    }
> +    qemu_mutex_unlock_iothread();
> +}
> +
> +/*
> + * global_dirty_log_sync
> + * 1. sync dirty log from kvm
> + * 2. stop dirty tracking if needed.
> + */
> +static void global_dirty_log_sync(unsigned int flag, bool one_shot)
> +{
> +    qemu_mutex_lock_iothread();
> +    memory_global_dirty_log_sync();
> +    if (one_shot) {
> +        memory_global_dirty_log_stop(flag);
> +    }
> +    qemu_mutex_unlock_iothread();
> +}
> +
> +static DirtyPageRecord *vcpu_dirty_stat_alloc(VcpuStat *stat)
> +{
> +    CPUState *cpu;
> +    DirtyPageRecord *records;
> +    int nvcpu = 0;
> +
> +    CPU_FOREACH(cpu) {
> +        nvcpu++;
> +    }
> +
> +    stat->nvcpu = nvcpu;
> +    stat->rates = g_malloc0(sizeof(DirtyRateVcpu) * nvcpu);
> +
> +    records = g_malloc0(sizeof(DirtyPageRecord) * nvcpu);
> +
> +    return records;
> +}
> +
> +static void vcpu_dirty_stat_collect(VcpuStat *stat,
> +                                    DirtyPageRecord *records,
> +                                    bool start)
> +{
> +    CPUState *cpu;
> +
> +    CPU_FOREACH(cpu) {
> +        if (!start && cpu->cpu_index >= stat->nvcpu) {
> +            /*
> +             * Never go there unless cpu is hot-plugged,
> +             * just ignore in this case.
> +             */
> +            continue;
> +        }

As commented before, I think the only way to do it right is to not allow cpu
plug/unplug during measurement.

Say, even if the index didn't get out of range, an unplug event would still
generate very strange output for the unplugged cpu.  Please see more below.

> +        record_dirtypages(records, cpu, start);
> +    }
> +}
> +
> +int64_t vcpu_calculate_dirtyrate(int64_t calc_time_ms,
> +                                 int64_t init_time_ms,
> +                                 VcpuStat *stat,
> +                                 unsigned int flag,
> +                                 bool one_shot)
> +{
> +    DirtyPageRecord *records;
> +    int64_t duration;
> +    int64_t dirtyrate;
> +    int i = 0;
> +
> +    cpu_list_lock();
> +    records = vcpu_dirty_stat_alloc(stat);
> +    vcpu_dirty_stat_collect(stat, records, true);
> +    cpu_list_unlock();

Continuing from the above - then I'm wondering whether we should just keep
holding the lock until the second vcpu_dirty_stat_collect().

Yes, we could be holding the lock for a long time because of the sleep, but
the main thread doing the plug/unplug will just wait for the measurement to
complete, and it is at least not, e.g., a deadlock.
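
Roughly, as an untested sketch of that (the ordering between the cpu list
lock and the BQL taken inside global_dirty_log_sync() would need a double
check):

    cpu_list_lock();
    records = vcpu_dirty_stat_alloc(stat);
    vcpu_dirty_stat_collect(stat, records, true);

    /* keep the lock across the sleep so the cpu list cannot change */
    duration = dirty_stat_wait(calc_time_ms, init_time_ms);

    global_dirty_log_sync(flag, one_shot);

    vcpu_dirty_stat_collect(stat, records, false);
    cpu_list_unlock();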

The other solution is to keep dropping the lock with cpu_list_unlock() like
this, but introduce a cpu_list_generation_id and bump it after any
plug/unplug of a cpu, i.e. whenever the cpu list changes.

Then we record the cpu generation ID at the entry of this function and retry
the whole measurement if at some point we find the generation ID has changed
(we need to fetch the gen ID after taking the lock, of course).  That would
avoid taking the cpu list lock during dirty_stat_wait(), but it will start to
complicate the cpu list locking rules.
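
Something like this, as an untested sketch (cpu_list_get_generation_id() is
a hypothetical helper returning the new counter; the interaction between
one_shot and a retry is glossed over here):

    int gen_id;

    retry:
        cpu_list_lock();
        gen_id = cpu_list_get_generation_id();
        records = vcpu_dirty_stat_alloc(stat);
        vcpu_dirty_stat_collect(stat, records, true);
        cpu_list_unlock();

        duration = dirty_stat_wait(calc_time_ms, init_time_ms);

        global_dirty_log_sync(flag, one_shot);

        cpu_list_lock();
        if (cpu_list_get_generation_id() != gen_id) {
            /* cpu list changed under us: drop the data and redo */
            cpu_list_unlock();
            g_free(stat->rates);
            g_free(records);
            goto retry;
        }
        vcpu_dirty_stat_collect(stat, records, false);
        cpu_list_unlock();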

The simpler way is still just to take the lock, imho.

The rest looks good, thanks.

> +
> +    duration = dirty_stat_wait(calc_time_ms, init_time_ms);
> +
> +    global_dirty_log_sync(flag, one_shot);
> +
> +    cpu_list_lock();
> +    vcpu_dirty_stat_collect(stat, records, false);
> +    cpu_list_unlock();
> +
> +    for (i = 0; i < stat->nvcpu; i++) {
> +        dirtyrate = do_calculate_dirtyrate(records[i], duration);
> +
> +        stat->rates[i].id = i;
> +        stat->rates[i].dirty_rate = dirtyrate;
> +
> +        trace_dirtyrate_do_calculate_vcpu(i, dirtyrate);
> +    }
> +
> +    g_free(records);
> +
> +    return duration;
> +}
> +
>  static bool is_sample_period_valid(int64_t sec)
>  {
>      if (sec < MIN_FETCH_DIRTYRATE_TIME_SEC ||
> @@ -396,44 +518,6 @@ static bool compare_page_hash_info(struct RamblockDirtyInfo *info,
>      return true;
>  }
>  
> -static inline void record_dirtypages(DirtyPageRecord *dirty_pages,
> -                                     CPUState *cpu, bool start)
> -{
> -    if (start) {
> -        dirty_pages[cpu->cpu_index].start_pages = cpu->dirty_pages;
> -    } else {
> -        dirty_pages[cpu->cpu_index].end_pages = cpu->dirty_pages;
> -    }
> -}
> -
> -static void dirtyrate_global_dirty_log_start(void)
> -{
> -    qemu_mutex_lock_iothread();
> -    memory_global_dirty_log_start(GLOBAL_DIRTY_DIRTY_RATE);
> -    qemu_mutex_unlock_iothread();
> -}
> -
> -static void dirtyrate_global_dirty_log_stop(void)
> -{
> -    qemu_mutex_lock_iothread();
> -    memory_global_dirty_log_sync();
> -    memory_global_dirty_log_stop(GLOBAL_DIRTY_DIRTY_RATE);
> -    qemu_mutex_unlock_iothread();
> -}
> -
> -static int64_t do_calculate_dirtyrate_vcpu(DirtyPageRecord dirty_pages)
> -{
> -    uint64_t memory_size_MB;
> -    int64_t time_s;
> -    uint64_t increased_dirty_pages =
> -        dirty_pages.end_pages - dirty_pages.start_pages;
> -
> -    memory_size_MB = (increased_dirty_pages * TARGET_PAGE_SIZE) >> 20;
> -    time_s = DirtyStat.calc_time;
> -
> -    return memory_size_MB / time_s;
> -}
> -
>  static inline void record_dirtypages_bitmap(DirtyPageRecord *dirty_pages,
>                                              bool start)
>  {
> @@ -444,11 +528,6 @@ static inline void record_dirtypages_bitmap(DirtyPageRecord *dirty_pages,
>      }
>  }
>  
> -static void do_calculate_dirtyrate_bitmap(DirtyPageRecord dirty_pages)
> -{
> -    DirtyStat.dirty_rate = do_calculate_dirtyrate_vcpu(dirty_pages);
> -}
> -
>  static inline void dirtyrate_manual_reset_protect(void)
>  {
>      RAMBlock *block = NULL;
> @@ -492,71 +571,52 @@ static void calculate_dirtyrate_dirty_bitmap(struct DirtyRateConfig config)
>      DirtyStat.start_time = start_time / 1000;
>  
>      msec = config.sample_period_seconds * 1000;
> -    msec = set_sample_page_period(msec, start_time);
> +    msec = dirty_stat_wait(msec, start_time);
>      DirtyStat.calc_time = msec / 1000;
>  
>      /*
> -     * dirtyrate_global_dirty_log_stop do two things.
> +     * do two things.
>       * 1. fetch dirty bitmap from kvm
>       * 2. stop dirty tracking
>       */
> -    dirtyrate_global_dirty_log_stop();
> +    global_dirty_log_sync(GLOBAL_DIRTY_DIRTY_RATE, true);
>  
>      record_dirtypages_bitmap(&dirty_pages, false);
>  
> -    do_calculate_dirtyrate_bitmap(dirty_pages);
> +    DirtyStat.dirty_rate = do_calculate_dirtyrate(dirty_pages, msec);
>  }
>  
>  static void calculate_dirtyrate_dirty_ring(struct DirtyRateConfig config)
>  {
> -    CPUState *cpu;
> -    int64_t msec = 0;
>      int64_t start_time;
> +    int64_t duration;
>      uint64_t dirtyrate = 0;
>      uint64_t dirtyrate_sum = 0;
> -    DirtyPageRecord *dirty_pages;
> -    int nvcpu = 0;
>      int i = 0;
>  
> -    CPU_FOREACH(cpu) {
> -        nvcpu++;
> -    }
> -
> -    dirty_pages = malloc(sizeof(*dirty_pages) * nvcpu);
> -
> -    DirtyStat.dirty_ring.nvcpu = nvcpu;
> -    DirtyStat.dirty_ring.rates = malloc(sizeof(DirtyRateVcpu) * nvcpu);
> -
> -    dirtyrate_global_dirty_log_start();
> -
> -    CPU_FOREACH(cpu) {
> -        record_dirtypages(dirty_pages, cpu, true);
> -    }
> +    /* start log sync */
> +    global_dirty_log_change(GLOBAL_DIRTY_DIRTY_RATE, true);
>  
>      start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>      DirtyStat.start_time = start_time / 1000;
>  
> -    msec = config.sample_period_seconds * 1000;
> -    msec = set_sample_page_period(msec, start_time);
> -    DirtyStat.calc_time = msec / 1000;
> +    /* calculate vcpu dirtyrate */
> +    duration = vcpu_calculate_dirtyrate(config.sample_period_seconds * 1000,
> +                                        start_time,
> +                                        &DirtyStat.dirty_ring,
> +                                        GLOBAL_DIRTY_DIRTY_RATE,
> +                                        true);
>  
> -    dirtyrate_global_dirty_log_stop();
> -
> -    CPU_FOREACH(cpu) {
> -        record_dirtypages(dirty_pages, cpu, false);
> -    }
> +    DirtyStat.calc_time = duration / 1000;
>  
> +    /* calculate vm dirtyrate */
>      for (i = 0; i < DirtyStat.dirty_ring.nvcpu; i++) {
> -        dirtyrate = do_calculate_dirtyrate_vcpu(dirty_pages[i]);
> -        trace_dirtyrate_do_calculate_vcpu(i, dirtyrate);
> -
> -        DirtyStat.dirty_ring.rates[i].id = i;
> +        dirtyrate = DirtyStat.dirty_ring.rates[i].dirty_rate;
>          DirtyStat.dirty_ring.rates[i].dirty_rate = dirtyrate;
>          dirtyrate_sum += dirtyrate;
>      }
>  
>      DirtyStat.dirty_rate = dirtyrate_sum;
> -    free(dirty_pages);
>  }
>  
>  static void calculate_dirtyrate_sample_vm(struct DirtyRateConfig config)
> @@ -574,7 +634,7 @@ static void calculate_dirtyrate_sample_vm(struct DirtyRateConfig config)
>      rcu_read_unlock();
>  
>      msec = config.sample_period_seconds * 1000;
> -    msec = set_sample_page_period(msec, initial_time);
> +    msec = dirty_stat_wait(msec, initial_time);
>      DirtyStat.start_time = initial_time / 1000;
>      DirtyStat.calc_time = msec / 1000;
>  
> diff --git a/migration/dirtyrate.h b/migration/dirtyrate.h
> index 69d4c5b..594a5c0 100644
> --- a/migration/dirtyrate.h
> +++ b/migration/dirtyrate.h
> @@ -13,6 +13,8 @@
>  #ifndef QEMU_MIGRATION_DIRTYRATE_H
>  #define QEMU_MIGRATION_DIRTYRATE_H
>  
> +#include "sysemu/dirtyrate.h"
> +
>  /*
>   * Sample 512 pages per GB as default.
>   */
> @@ -65,11 +67,6 @@ typedef struct SampleVMStat {
>      uint64_t total_block_mem_MB; /* size of total sampled pages in MB */
>  } SampleVMStat;
>  
> -typedef struct VcpuStat {
> -    int nvcpu; /* number of vcpu */
> -    DirtyRateVcpu *rates; /* array of dirty rate for each vcpu */
> -} VcpuStat;
> -
>  /*
>   * Store calculation statistics for each measure.
>   */
> -- 
> 1.8.3.1
> 

-- 
Peter Xu


