From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alex Bennée <alex.bennee@linaro.org>
Date: Thu, 11 Aug 2016 16:24:14 +0100
Message-Id: <1470929064-4092-19-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1470929064-4092-1-git-send-email-alex.bennee@linaro.org>
References: <1470929064-4092-1-git-send-email-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [Qemu-devel] [RFC v4 18/28] tcg: remove global exit_request
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
	fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org,
	bobby.prani@gmail.com, nikunj@linux.vnet.ibm.com
Cc: mark.burton@greensocs.com, pbonzini@redhat.com, jan.kiszka@siemens.com,
	serge.fdrv@gmail.com, rth@twiddle.net, peter.maydell@linaro.org,
	claudio.fontana@huawei.com, Alex Bennée <alex.bennee@linaro.org>,
	Peter Crosthwaite

There are now only two uses of the global exit_request left.

The first ensures we exit the run_loop when we first start to process
pending work, and again in the kick handler. This is just as easily done
by setting the first_cpu->exit_request flag.

The second use is in the round robin kick routine. The global
exit_request ensured every vCPU would set its local exit_request and
cause a full exit of the loop. Now that the iothread isn't being held
while running, we can just rely on the kick handler to push us out as
intended.

We lightly refactor the main vCPU thread to ensure that
cpu->exit_request causes us to exit the main loop and process any IO
requests that might come along.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v4
  - moved to after iothread unlocking patch
  - needed to remove kick exit_request as well.
  - remove extraneous cpu->exit_request check
  - remove stray exit_request setting
  - remove needless atomic operation
---
 cpu-exec-common.c       |  1 -
 cpu-exec.c              |  9 ++-------
 cpus.c                  | 18 ++++++++++--------
 include/exec/exec-all.h |  1 -
 4 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/cpu-exec-common.c b/cpu-exec-common.c
index e220ba7..e29cf6d 100644
--- a/cpu-exec-common.c
+++ b/cpu-exec-common.c
@@ -23,7 +23,6 @@
 #include "exec/exec-all.h"
 #include "exec/memory-internal.h"
 
-bool exit_request;
 int tcg_pending_threads;
 
 /* exit the current TB, but without causing any exception to be raised */
diff --git a/cpu-exec.c b/cpu-exec.c
index 317ad8e..d2e8b9e 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -531,9 +531,8 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
             /* Something asked us to stop executing
              * chained TBs; just continue round the main
              * loop. Whatever requested the exit will also
-             * have set something else (eg exit_request or
-             * interrupt_request) which we will handle
-             * next time around the loop.  But we need to
+             * have set something else (eg interrupt_request) which we
+             * will handle next time around the loop.  But we need to
              * ensure the tcg_exit_req read in generated code
              * comes before the next read of cpu->exit_request
              * or cpu->interrupt_request.
@@ -589,10 +588,6 @@ int cpu_exec(CPUState *cpu)
 
     rcu_read_lock();
 
-    if (unlikely(atomic_mb_read(&exit_request))) {
-        cpu->exit_request = 1;
-    }
-
     cc->cpu_exec_enter(cpu);
 
     /* Calculate difference between guest clock and host clock.
diff --git a/cpus.c b/cpus.c
index 5ed6dc0..8a40a08 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1215,7 +1215,6 @@ static CPUState *tcg_current_rr_cpu;
 static void qemu_cpu_kick_rr_cpu(void)
 {
     CPUState *cpu;
-    atomic_mb_set(&exit_request, 1);
     do {
         cpu = atomic_mb_read(&tcg_current_rr_cpu);
         if (cpu) {
@@ -1265,11 +1264,11 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
         timer_mod(kick_timer, qemu_tcg_next_kick());
     }
 
-    /* process any pending work */
-    atomic_mb_set(&exit_request, 1);
-
     cpu = first_cpu;
 
+    /* process any pending work */
+    cpu->exit_request = 1;
+
     while (1) {
         /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
         qemu_account_warp_timer();
@@ -1278,7 +1277,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
             cpu = first_cpu;
         }
 
-        for (; cpu != NULL && !exit_request; cpu = CPU_NEXT(cpu)) {
+        while (cpu && !cpu->exit_request) {
             atomic_mb_set(&tcg_current_rr_cpu, cpu);
 
             qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
@@ -1300,12 +1299,15 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 break;
             }
 
-        } /* for cpu.. */
+            cpu = CPU_NEXT(cpu);
+        } /* while (cpu && !cpu->exit_request).. */
+
         /* Does not need atomic_mb_set because a spurious wakeup is okay.  */
         atomic_set(&tcg_current_rr_cpu, NULL);
 
-        /* Pairs with smp_wmb in qemu_cpu_kick.  */
-        atomic_mb_set(&exit_request, 0);
+        if (cpu && cpu->exit_request) {
+            atomic_mb_set(&cpu->exit_request, 0);
+        }
 
         handle_icount_deadline();
 
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 3547dfa..27d1a24 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -409,7 +409,6 @@ extern int singlestep;
 
 /* cpu-exec.c, accessed with atomic_mb_read/atomic_mb_set */
 extern int tcg_pending_threads;
-extern bool exit_request;
 
 /**
  * qemu_work_cond - condition to wait for CPU work items completion
-- 
2.7.4
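
P.S. For readers following the scheduling change: the pattern the patch
moves to can be sketched outside of QEMU. The stand-alone C11 program
below is a hypothetical illustration, not QEMU code: CPUState, run_cpu(),
kick_thread() and the timings are invented stand-ins, and <stdatomic.h>
operations take the place of QEMU's atomic_mb_read/atomic_mb_set. It
shows a kick flagging only the *currently running* CPU's exit_request to
break a round-robin scheduler out of its inner loop, with no global flag.

/* rr_sketch.c - per-CPU exit_request round-robin sketch (not QEMU code).
 * Build: cc -std=c11 -pthread rr_sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

typedef struct CPUState {
    int index;
    atomic_bool exit_request;          /* per-CPU flag, no global twin */
    struct CPUState *next;
} CPUState;

static CPUState cpus[2];
static CPUState *first_cpu = &cpus[0];

/* The CPU currently executing a slice; NULL between slices. */
static _Atomic(CPUState *) tcg_current_rr_cpu;

/* Stand-in for running a slice of guest code on one vCPU. */
static void run_cpu(CPUState *cpu)
{
    printf("running cpu %d\n", cpu->index);
    usleep(10 * 1000);
}

/* Kick: flag only the CPU that is currently running.  Reading NULL
 * just means nothing was mid-slice, which is harmless. */
static void kick_rr_cpu(void)
{
    CPUState *cpu = atomic_load(&tcg_current_rr_cpu);
    if (cpu) {
        atomic_store(&cpu->exit_request, true);
    }
}

static void *kick_thread(void *arg)
{
    (void)arg;
    usleep(15 * 1000);   /* land the kick while a CPU is mid-slice;
                          * exact interleaving may vary with scheduling */
    kick_rr_cpu();
    return NULL;
}

int main(void)
{
    cpus[0].index = 0;
    cpus[0].next = &cpus[1];
    cpus[1].index = 1;

    pthread_t kicker;
    pthread_create(&kicker, NULL, kick_thread, NULL);

    CPUState *cpu = first_cpu;
    /* Process any pending work first, as the patch does on startup. */
    atomic_store(&cpu->exit_request, true);

    for (int pass = 0; pass < 3; pass++) {
        if (!cpu) {
            cpu = first_cpu;
        }
        /* Inner loop: run CPUs round robin until one has been flagged. */
        while (cpu && !atomic_load(&cpu->exit_request)) {
            atomic_store(&tcg_current_rr_cpu, cpu);
            run_cpu(cpu);
            cpu = cpu->next;
        }
        atomic_store(&tcg_current_rr_cpu, NULL);
        if (cpu && atomic_load(&cpu->exit_request)) {
            atomic_store(&cpu->exit_request, false);
        }
        printf("outer loop: handle IO, deadlines, etc.\n");
    }

    pthread_join(kicker, NULL);
    return 0;
}

The point mirrored from the patch: because the kick flags the vCPU the
scheduler is currently running (or about to check), a per-CPU
exit_request is enough to pop the thread back out to the outer loop
where IO and deadlines are serviced, and the global flag with its
atomic_mb_set(&exit_request, 0) handshake can go away.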