From: Umesh Deshpande <udeshpan@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org, quintela@redhat.com
Subject: Re: [RFC PATCH v3 3/4] lock to protect memslots
Date: Mon, 15 Aug 2011 16:27:02 -0400
Message-ID: <4E498116.9030800@redhat.com>
In-Reply-To: <4E4929B8.2010509@redhat.com>

On 08/15/2011 10:14 AM, Paolo Bonzini wrote:
> On 08/15/2011 12:26 AM, Marcelo Tosatti wrote:
>> Actually the previous patchset does not traverse the ramlist without
>> qemu_mutex locked, which is safe versus the most-recently-used-block
>> optimization.
> Actually it does:
>
>       bytes_transferred_last = bytes_transferred;
>       bwidth = qemu_get_clock_ns(rt_clock);
>
> +    if (stage != 3) {
> +        qemu_mutex_lock_ramlist();
> +        qemu_mutex_unlock_iothread();
> +    }
> +
>       while (!qemu_file_rate_limit(f)) {
>           int bytes_sent;
>
>           /* ram_save_block does traverse memory.  */
>           bytes_sent = ram_save_block(f);
>           bytes_transferred += bytes_sent;
>           if (bytes_sent == 0) { /* no more blocks */
>               break;
>           }
>       }
>
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +        qemu_mutex_unlock_ramlist();
> +    }
> +
>       bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
>       bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;
>
>
> What Umesh is doing is using "either ramlist mutex or iothread mutex" when reading
> the ramlist, and "both" when writing the ramlist; similar to rwlocks done with a
> regular mutex per CPU---clever!  So this:
>
> +                qemu_mutex_lock_ramlist();
>                   QLIST_REMOVE(block, next);
>                   QLIST_INSERT_HEAD(&ram_list.blocks, block, next);
> +                qemu_mutex_unlock_ramlist();
>
> is effectively upgrading the lock from read-side to write-side, assuming that
> qemu_get_ram_ptr is never called from the migration thread (which is true).
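(For reference, the "either lock to read, both locks to write" pattern
in a minimal self-contained form; plain pthread mutexes with
illustrative names, not QEMU's actual wrappers:)

#include <pthread.h>

static pthread_mutex_t iothread_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t ramlist_mutex  = PTHREAD_MUTEX_INITIALIZER;

/* A reader holds either one of the two mutexes. */
static void read_from_iothread(void)
{
    pthread_mutex_lock(&iothread_mutex);
    /* ... safely traverse the list ... */
    pthread_mutex_unlock(&iothread_mutex);
}

static void read_from_migration_thread(void)
{
    pthread_mutex_lock(&ramlist_mutex);
    /* ... safely traverse the list ... */
    pthread_mutex_unlock(&ramlist_mutex);
}

/* A writer holds both mutexes, and therefore excludes every reader,
 * whichever mutex that reader chose. */
static void write_list(void)
{
    pthread_mutex_lock(&iothread_mutex);
    pthread_mutex_lock(&ramlist_mutex);
    /* ... safely modify the list ... */
    pthread_mutex_unlock(&ramlist_mutex);
    pthread_mutex_unlock(&iothread_mutex);
}

As long as everyone who takes both mutexes takes them in the same
order, the writers cannot deadlock against each other.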
>
> However, I propose that you put the MRU order in a separate list. You would
> still need two locks: the IO thread lock to protect the new list, and a new
> lock to protect the other fields in the ram_list. For simplicity you may
> skip the new lock if you assume that the migration and I/O threads never
> modify the list concurrently, which is true.
Yes, the MRU list patch would obviate the need to hold the ram_list 
mutex in qemu_get_ram_ptr.
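(For illustration, a sketch of such a split; the next_mru field and
the blocks_mru list name are assumptions, not necessarily what the
actual patch uses:)

typedef struct RAMBlock {
    /* ... */
    QLIST_ENTRY(RAMBlock) next;      /* stable order: ram_list mutex   */
    QLIST_ENTRY(RAMBlock) next_mru;  /* MRU order: iothread mutex only */
} RAMBlock;

typedef struct RAMList {
    QLIST_HEAD(, RAMBlock) blocks;      /* walked by migration        */
    QLIST_HEAD(, RAMBlock) blocks_mru;  /* walked by qemu_get_ram_ptr */
} RAMList;

/* The MRU touch in qemu_get_ram_ptr(), already under the iothread
 * mutex, then needs no ram_list mutex, because migration never walks
 * the MRU list: */
QLIST_REMOVE(block, next_mru);
QLIST_INSERT_HEAD(&ram_list.blocks_mru, block, next_mru);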
Also, I was planning to protect the whole migration thread with the 
iothread mutex and the ram_list mutex, i.e. to hold the ram_list mutex 
while sleeping between two iterations, when we release the iothread 
mutex. This would prevent memslot block removal altogether during 
migration. Do you see any problem with this?
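(Concretely, the live phase I have in mind would look roughly like
this; apart from the qemu_mutex_* wrappers and the functions already
shown in the diff above, every name here is illustrative:)

static void *migration_thread(void *opaque)
{
    QEMUFile *f = opaque;
    bool done = false;

    /* Held across the whole live phase, including the sleeps, so no
     * RAMBlock can be removed while migration might still touch it. */
    qemu_mutex_lock_ramlist();

    while (!done) {
        qemu_mutex_lock_iothread();
        /* ... sync the dirty bitmap, decide whether to stop ... */
        done = migration_has_converged();  /* hypothetical check */
        qemu_mutex_unlock_iothread();

        while (!qemu_file_rate_limit(f)) {
            if (ram_save_block(f) == 0) {  /* no more blocks */
                break;
            }
        }

        /* Sleep/throttle here, still holding only the ram_list
         * mutex: the VM keeps running, but blocks cannot go away. */
    }

    qemu_mutex_unlock_ramlist();
    return NULL;
}

The cost is that a memslot removal would block until migration
completes, which is exactly the behavior described above.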

> And more importantly, the MRU and migration code absolutely do not
> affect each other, because indeed the migration thread does not do MRU accesses.
> See the attachment for an untested patch.
>
> Paolo
Thanks
Umesh
