From: Paolo Bonzini
Date: Fri, 10 May 2013 09:47:00 +0200
Subject: Re: [Qemu-devel] [RFC PATCH v5 3/3] Force auto-convergence of live migration
To: Chegu Vinod
Cc: Igor Mammedov, owasserm@redhat.com, qemu-devel@nongnu.org, anthony@codemonkey.ws, quintela@redhat.com

On 10/05/2013 01:00, Chegu Vinod wrote:
> On 5/9/2013 1:24 PM, Igor Mammedov wrote:
>> On Thu, 9 May 2013 12:43:20 -0700
>> Chegu Vinod wrote:
>>
>>> If a user chooses to turn on the auto-converge migration capability,
>>> these changes detect the lack of convergence and throttle down the
>>> guest, i.e. force the VCPUs out of the guest for some duration and
>>> let the migration thread catch up and help the migration converge.
>>>
>> [...]
>>> +
>>> +static void mig_delay_vcpu(void)
>>> +{
>>> +    qemu_mutex_unlock_iothread();
>>> +    g_usleep(50*1000);
>>> +    qemu_mutex_lock_iothread();
>>> +}
>>> +
>>> +/* Stub used for getting the vcpu out of the VM and into qemu via
>>> +   run_on_cpu() */
>>> +static void mig_kick_cpu(void *opq)
>>> +{
>>> +    mig_delay_vcpu();
>>> +    return;
>>> +}
>>> +
>>> +/* To reduce the dirty rate, explicitly disallow the VCPUs from
>>> +   spending much time in the VM.  The migration thread will try to
>>> +   catch up.  The workload will experience a performance drop.
>>> +*/
>>> +void migration_throttle_down(void)
>>> +{
>>> +    if (throttling_needed()) {
>>> +        CPUArchState *penv = first_cpu;
>>> +        while (penv) {
>>> +            qemu_mutex_lock_iothread();
>> Locking it here and then unlocking it inside of the queued work
>> doesn't look nice.
> Yes... but see below.

Actually, no. :)  It looks strange, but it is correct and perfectly
fine.  The queued work runs in a completely different thread.

run_on_cpu work items run under the BQL, so mig_delay_vcpu needs to
unlock.  On the other hand, migration_throttle_down runs in the
migration thread, outside the BQL.  It needs to lock because the
first_cpu list can change through hotplug at any time.
qemu_for_each_cpu would also need the BQL for the same reason.
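
To make the two contexts concrete, here is a minimal sketch of the
locking pattern (not the actual patch: it folds mig_kick_cpu into the
work item, reconstructs the loop body that was snipped above, uses
made-up *_sketch names, and assumes a per-target file on a 1.5-era
tree where ENV_GET_CPU() and the listed headers are available):

#include <glib.h>            /* g_usleep() */
#include "cpu.h"             /* CPUArchState, ENV_GET_CPU() */
#include "qemu/main-loop.h"  /* qemu_mutex_lock/unlock_iothread() */
#include "qom/cpu.h"         /* CPUState, run_on_cpu() */

/* VCPU-thread side: a run_on_cpu() work item is entered with the BQL
 * held, so it must drop the lock while sleeping, or the delay would
 * stall every other VCPU and the iothread as well. */
static void mig_delay_vcpu_sketch(void *opaque)
{
    qemu_mutex_unlock_iothread();
    g_usleep(50 * 1000);         /* keep this VCPU out of the guest */
    qemu_mutex_lock_iothread();  /* restore the invariant on return */
}

/* Migration-thread side: runs outside the BQL, so it must take the
 * lock around every use of the first_cpu list, which CPU hotplug can
 * modify at any time. */
static void migration_throttle_down_sketch(void)
{
    CPUArchState *penv = first_cpu;

    while (penv) {
        qemu_mutex_lock_iothread();
        run_on_cpu(ENV_GET_CPU(penv), mig_delay_vcpu_sketch, NULL);
        penv = penv->next_cpu;   /* advance while holding the lock */
        qemu_mutex_unlock_iothread();
    }
}

The asymmetry is the whole point: the work item starts out owning the
BQL and temporarily gives it up, while the migration thread starts out
without it and takes it only to walk the list and queue the work.

Paolo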