From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
peterx@redhat.com, Richard Henderson <rth@twiddle.net>
Subject: [PATCH 4/8] cpus: Introduce qemu_cond_timedwait_iothread()
Date: Tue, 21 Apr 2020 12:21:04 -0400 [thread overview]
Message-ID: <20200421162108.594796-5-peterx@redhat.com> (raw)
In-Reply-To: <20200421162108.594796-1-peterx@redhat.com>
This is the sister function of qemu_cond_wait_iothread(). Use it in the only
place where we call cond_timedwait() on the BQL.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
cpus.c | 9 +++++++--
include/qemu/main-loop.h | 7 +++++++
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/cpus.c b/cpus.c
index 00f6e361af..263eddc8b0 100644
--- a/cpus.c
+++ b/cpus.c
@@ -726,8 +726,8 @@ static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
endtime_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + sleeptime_ns;
while (sleeptime_ns > 0 && !cpu->stop) {
if (sleeptime_ns > SCALE_MS) {
- qemu_cond_timedwait(cpu->halt_cond, &qemu_global_mutex,
- sleeptime_ns / SCALE_MS);
+ qemu_cond_timedwait_iothread(cpu->halt_cond,
+ sleeptime_ns / SCALE_MS);
} else {
qemu_mutex_unlock_iothread();
g_usleep(sleeptime_ns / SCALE_US);
@@ -1844,6 +1844,11 @@ void qemu_cond_wait_iothread(QemuCond *cond)
qemu_cond_wait(cond, &qemu_global_mutex);
}
+void qemu_cond_timedwait_iothread(QemuCond *cond, int ms)
+{
+ qemu_cond_timedwait(cond, &qemu_global_mutex, ms);
+}
+
static bool all_vcpus_paused(void)
{
CPUState *cpu;
diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index a6d20b0719..a0e59c5563 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -303,6 +303,13 @@ void qemu_mutex_unlock_iothread(void);
*/
void qemu_cond_wait_iothread(QemuCond *cond);
+/*
+ * qemu_cond_timedwait_iothread: Wait on condition for the main loop mutex
+ *
+ * This is the same as qemu_cond_wait_iothread() but gives up waiting
+ * after @ms milliseconds.
+ */
+void qemu_cond_timedwait_iothread(QemuCond *cond, int ms);
+
/* internal interfaces */
void qemu_fd_register(int fd);
--
2.24.1
Thread overview: 10+ messages
2020-04-21 16:21 [PATCH 0/8] memory: Sanity checks memory transaction when releasing BQL Peter Xu
2020-04-21 16:21 ` [PATCH 1/8] memory: Introduce memory_region_transaction_{push|pop}() Peter Xu
2020-04-21 16:21 ` [PATCH 2/8] memory: Don't do topology update in memory finalize() Peter Xu
2020-04-21 16:21 ` [PATCH 3/8] cpus: Use qemu_cond_wait_iothread() where proper Peter Xu
2020-04-21 16:21 ` Peter Xu [this message]
2020-04-21 16:21 ` [PATCH 5/8] cpus: Remove the mutex parameter from do_run_on_cpu() Peter Xu
2020-04-21 16:21 ` [PATCH 6/8] cpus: Introduce qemu_mutex_unlock_iothread_prepare() Peter Xu
2020-04-21 16:21 ` [PATCH 7/8] memory: Assert on no ongoing memory transaction before release BQL Peter Xu
2020-04-21 16:21 ` [PATCH 8/8] memory: Delay the transaction pop() until commit completed Peter Xu
2020-05-23 20:30 ` [PATCH 0/8] memory: Sanity checks memory transaction when releasing BQL Peter Xu