From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 9 May 2013 22:24:39 +0200
From: Igor Mammedov
Message-ID: <20130509222439.0e1c5041@thinkpad>
In-Reply-To: <1368128600-30721-4-git-send-email-chegu_vinod@hp.com>
References: <1368128600-30721-1-git-send-email-chegu_vinod@hp.com>
	<1368128600-30721-4-git-send-email-chegu_vinod@hp.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [RFC PATCH v5 3/3] Force auto-convegence of live migration
To: Chegu Vinod
Cc: quintela@redhat.com, qemu-devel@nongnu.org, owasserm@redhat.com,
	anthony@codemonkey.ws, pbonzini@redhat.com

On Thu, 9 May 2013 12:43:20 -0700
Chegu Vinod wrote:

> If a user chooses to turn on the auto-converge migration capability
> these changes detect the lack of convergence and throttle down the
> guest. i.e. force the VCPUs out of the guest for some duration
> and let the migration thread catchup and help converge.
>
[...]
> +
> +static void mig_delay_vcpu(void)
> +{
> +    qemu_mutex_unlock_iothread();
> +    g_usleep(50*1000);
> +    qemu_mutex_lock_iothread();
> +}
> +
> +/* Stub used for getting the vcpu out of VM and into qemu via
> +   run_on_cpu()*/
> +static void mig_kick_cpu(void *opq)
> +{
> +    mig_delay_vcpu();
> +    return;
> +}
> +
> +/* To reduce the dirty rate explicitly disallow the VCPUs from spending
> +   much time in the VM. The migration thread will try to catchup.
> +   Workload will experience a performance drop.
> +*/
> +void migration_throttle_down(void)
> +{
> +    if (throttling_needed()) {
> +        CPUArchState *penv = first_cpu;
> +        while (penv) {
> +            qemu_mutex_lock_iothread();

Locking it here and then unlocking it inside of the queued work doesn't
look nice. What exactly are you protecting with this lock?
(A rough sketch of what I would expect instead is below, after my sig.)

> +            async_run_on_cpu(ENV_GET_CPU(penv), mig_kick_cpu, NULL);
> +            qemu_mutex_unlock_iothread();
> +            penv = penv->next_cpu;
> +        }
> +    }
> +}

--
Regards,
  Igor
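
P.S. For illustration only, a rough sketch of the shape I would expect
if the lock really is needed at all (say, to guard the per-CPU queued
work list, which is exactly what I'm asking about): take the iothread
lock once around the whole loop instead of locking and unlocking it for
every vCPU. The names (throttling_needed, mig_kick_cpu, first_cpu, ...)
are the ones from your patch; this is an untested sketch, not a
definitive fix.

void migration_throttle_down(void)
{
    if (throttling_needed()) {
        CPUArchState *penv = first_cpu;

        /* one lock/unlock pair for the whole pass over the vCPUs */
        qemu_mutex_lock_iothread();
        while (penv) {
            /* queue the delay stub; it runs later in the vCPU thread */
            async_run_on_cpu(ENV_GET_CPU(penv), mig_kick_cpu, NULL);
            penv = penv->next_cpu;
        }
        qemu_mutex_unlock_iothread();
    }
}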