From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <558963FC.6030308@greensocs.com>
Date: Tue, 23 Jun 2015 15:49:48 +0200
From: Frederic Konrad
MIME-Version: 1.0
References: <1434646046-27150-1-git-send-email-pbonzini@redhat.com>
 <1434646046-27150-2-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1434646046-27150-2-git-send-email-pbonzini@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH 1/9] main-loop: use qemu_mutex_lock_iothread
 consistently
To: Paolo Bonzini, qemu-devel@nongnu.org
Cc: jan.kiszka@siemens.com, Mark Burton, Guillaume Delbergue

On 18/06/2015 18:47, Paolo Bonzini wrote:
> The next patch will require the BQL to be always taken with
> qemu_mutex_lock_iothread(), while right now this isn't the case.
>
> Outside TCG mode this is not a problem. In TCG mode, we need to be
> careful and avoid the "prod out of compiled code" step if already
> in a VCPU thread. This is easily done with a check on current_cpu,
> i.e. qemu_in_vcpu_thread().
>
> Hopefully, multithreaded TCG will get rid of the whole logic to kick
> VCPUs whenever an I/O event occurs!

Hopefully :), this means dropping the iothread mutex as soon as possible
and removing the iothread_requesting_mutex, I guess.
Fred

> Signed-off-by: Paolo Bonzini
> ---
>  cpus.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index de6469f..2e807f9 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -924,7 +924,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>      CPUState *cpu = arg;
>      int r;
>
> -    qemu_mutex_lock(&qemu_global_mutex);
> +    qemu_mutex_lock_iothread();
>      qemu_thread_get_self(cpu->thread);
>      cpu->thread_id = qemu_get_thread_id();
>      cpu->can_do_io = 1;
> @@ -1004,10 +1004,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>  {
>      CPUState *cpu = arg;
>
> +    qemu_mutex_lock_iothread();
>      qemu_tcg_init_cpu_signals();
>      qemu_thread_get_self(cpu->thread);
>
> -    qemu_mutex_lock(&qemu_global_mutex);
>      CPU_FOREACH(cpu) {
>          cpu->thread_id = qemu_get_thread_id();
>          cpu->created = true;
> @@ -1118,7 +1118,11 @@ bool qemu_in_vcpu_thread(void)
>
>  void qemu_mutex_lock_iothread(void)
>  {
>      atomic_inc(&iothread_requesting_mutex);
> -    if (!tcg_enabled() || !first_cpu || !first_cpu->thread) {
> +    /* In the simple case there is no need to bump the VCPU thread out of
> +     * TCG code execution.
> +     */
> +    if (!tcg_enabled() || qemu_in_vcpu_thread() ||
> +        !first_cpu || !first_cpu->thread) {
>          qemu_mutex_lock(&qemu_global_mutex);
>          atomic_dec(&iothread_requesting_mutex);
>      } else {
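For readers following along: the last hunk boils down to "if the caller is already the VCPU thread (or TCG is off), just take the global mutex; only a cross-thread acquisition needs the extra 'prod the VCPU out of generated code' step". A minimal pthreads sketch of that pattern, where all names (lock_iothread, in_vcpu_thread, tcg_mode, kick_count) are hypothetical stand-ins and not QEMU's actual API:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-ins modeling QEMU's state; this is a sketch of the
 * locking pattern in the patch, not QEMU's real implementation. */
static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;  /* "BQL" */
static __thread bool in_vcpu_thread;   /* models qemu_in_vcpu_thread() */
static bool tcg_mode = true;           /* models tcg_enabled() */
static int kick_count;                 /* counts modeled "prod out of TCG" events */

/* Simplified lock_iothread(): when the caller is the VCPU thread itself,
 * it is by definition not stuck executing translated code, so it can take
 * the mutex directly.  Only another thread has to kick the VCPU first. */
static void lock_iothread(void)
{
    if (!tcg_mode || in_vcpu_thread) {
        /* Fast path: no self-kick needed. */
        pthread_mutex_lock(&global_mutex);
    } else {
        /* Model the "prod the VCPU out of compiled code" step. */
        kick_count++;
        pthread_mutex_lock(&global_mutex);
    }
}

static void unlock_iothread(void)
{
    pthread_mutex_unlock(&global_mutex);
}
```

The point of the qemu_in_vcpu_thread() check is exactly this fast path: once the BQL must always be taken through qemu_mutex_lock_iothread(), VCPU-thread callers would otherwise pointlessly try to kick themselves.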