* [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll
       [not found] <1E8E204.8000201@redhat.com>
@ 2013-07-20 18:06 ` Alex Bligh
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers Alex Bligh
                     ` (7 more replies)
  0 siblings, 8 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

This patch series adds support for timers attached to an AioContext clock,
whose callbacks are run from within aio_poll.

In doing so, it removes alarm timers and moves to using ppoll where possible.

This patch set 'sort of' passes make check (see the caveat below),
including a new test harness for the AioContext timers, but has not been
tested much beyond that. In particular, the win32 changes have not
even been compile-tested.

Caveat: make check fails one test only with:

ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))

As far as I can tell, this check is incorrect: it asserts that aio_poll
makes progress when in fact it should not. I fixed an issue where aio_poll
was (as far as I can tell) wrongly returning true on a timeout, and that
fix is what triggers this error.
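
For illustration, here is a minimal sketch of the usage this series enables
(modelled on the test harness in patch 7; setup and error handling omitted):

    static void my_cb(void *opaque)
    {
        /* called from within aio_poll / aio_dispatch */
    }

    AioContext *ctx = aio_context_new();
    QEMUTimer *t = qemu_new_timer_ns(ctx->clock, my_cb, NULL);
    qemu_mod_timer(t, qemu_get_clock_ns(ctx->clock) + 750 * SCALE_MS);
    aio_poll(ctx, true);    /* sleeps no longer than the timer deadline */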

Alex Bligh (7):
  aio / timers: Remove alarm timers
  aio / timers: qemu-timer.c utility functions and add list of clocks
  aio / timers: add ppoll support with qemu_g_poll_ns
  aio / timers: Make qemu_run_timers and qemu_run_all_timers return
    progress
  aio / timers: Add a clock to AioContext
  aio / timers: Switch to ppoll, run AioContext timers in
    aio_poll/aio_dispatch
  aio / timers: Add test harness for AioContext timers

 aio-posix.c          |   20 +-
 aio-win32.c          |   20 +-
 async.c              |   18 +-
 configure            |   19 ++
 include/block/aio.h  |    5 +
 include/qemu/timer.h |   25 +-
 main-loop.c          |   47 ++--
 qemu-timer.c         |  619 +++++++++-----------------------------------------
 tests/test-aio.c     |  124 +++++++++-
 vl.c                 |    5 +-
 10 files changed, 363 insertions(+), 539 deletions(-)

-- 
1.7.9.5


* [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
@ 2013-07-20 18:06   ` Alex Bligh
  2013-07-25  9:10     ` Stefan Hajnoczi
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks Alex Bligh
                     ` (6 subsequent siblings)
  7 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Remove alarm timers from qemu-timer.c in anticipation of using
timeouts for g_poll / ppoll instead.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    2 -
 main-loop.c          |    4 -
 qemu-timer.c         |  501 +-------------------------------------------------
 vl.c                 |    5 +-
 4 files changed, 7 insertions(+), 505 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 9dd206c..8638d36 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -55,9 +55,7 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
 void qemu_run_timers(QEMUClock *clock);
 void qemu_run_all_timers(void);
-void configure_alarms(char const *opt);
 void init_clocks(void);
-int init_timer_alarm(void);
 
 int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
diff --git a/main-loop.c b/main-loop.c
index a44fff6..8918dd1 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -131,10 +131,6 @@ int qemu_init_main_loop(void)
     GSource *src;
 
     init_clocks();
-    if (init_timer_alarm() < 0) {
-        fprintf(stderr, "could not initialize alarm timer\n");
-        exit(1);
-    }
 
     ret = qemu_signal_init();
     if (ret) {
diff --git a/qemu-timer.c b/qemu-timer.c
index b2d95e2..062af38 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -33,10 +33,6 @@
 #include <pthread.h>
 #endif
 
-#ifdef _WIN32
-#include <mmsystem.h>
-#endif
-
 /***********************************************************/
 /* timers */
 
@@ -63,170 +59,11 @@ struct QEMUTimer {
     int scale;
 };
 
-struct qemu_alarm_timer {
-    char const *name;
-    int (*start)(struct qemu_alarm_timer *t);
-    void (*stop)(struct qemu_alarm_timer *t);
-    void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
-#if defined(__linux__)
-    timer_t timer;
-    int fd;
-#elif defined(_WIN32)
-    HANDLE timer;
-#endif
-    bool expired;
-    bool pending;
-};
-
-static struct qemu_alarm_timer *alarm_timer;
-
 static bool qemu_timer_expired_ns(QEMUTimer *timer_head, int64_t current_time)
 {
     return timer_head && (timer_head->expire_time <= current_time);
 }
 
-static int64_t qemu_next_alarm_deadline(void)
-{
-    int64_t delta = INT64_MAX;
-    int64_t rtdelta;
-
-    if (!use_icount && vm_clock->enabled && vm_clock->active_timers) {
-        delta = vm_clock->active_timers->expire_time -
-                     qemu_get_clock_ns(vm_clock);
-    }
-    if (host_clock->enabled && host_clock->active_timers) {
-        int64_t hdelta = host_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(host_clock);
-        if (hdelta < delta) {
-            delta = hdelta;
-        }
-    }
-    if (rt_clock->enabled && rt_clock->active_timers) {
-        rtdelta = (rt_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(rt_clock));
-        if (rtdelta < delta) {
-            delta = rtdelta;
-        }
-    }
-
-    return delta;
-}
-
-static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
-{
-    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
-    if (nearest_delta_ns < INT64_MAX) {
-        t->rearm(t, nearest_delta_ns);
-    }
-}
-
-/* TODO: MIN_TIMER_REARM_NS should be optimized */
-#define MIN_TIMER_REARM_NS 250000
-
-#ifdef _WIN32
-
-static int mm_start_timer(struct qemu_alarm_timer *t);
-static void mm_stop_timer(struct qemu_alarm_timer *t);
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-static int win32_start_timer(struct qemu_alarm_timer *t);
-static void win32_stop_timer(struct qemu_alarm_timer *t);
-static void win32_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#else
-
-static int unix_start_timer(struct qemu_alarm_timer *t);
-static void unix_stop_timer(struct qemu_alarm_timer *t);
-static void unix_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#ifdef __linux__
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t);
-static void dynticks_stop_timer(struct qemu_alarm_timer *t);
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#endif /* __linux__ */
-
-#endif /* _WIN32 */
-
-static struct qemu_alarm_timer alarm_timers[] = {
-#ifndef _WIN32
-#ifdef __linux__
-    {"dynticks", dynticks_start_timer,
-     dynticks_stop_timer, dynticks_rearm_timer},
-#endif
-    {"unix", unix_start_timer, unix_stop_timer, unix_rearm_timer},
-#else
-    {"mmtimer", mm_start_timer, mm_stop_timer, mm_rearm_timer},
-    {"dynticks", win32_start_timer, win32_stop_timer, win32_rearm_timer},
-#endif
-    {NULL, }
-};
-
-static void show_available_alarms(void)
-{
-    int i;
-
-    printf("Available alarm timers, in order of precedence:\n");
-    for (i = 0; alarm_timers[i].name; i++)
-        printf("%s\n", alarm_timers[i].name);
-}
-
-void configure_alarms(char const *opt)
-{
-    int i;
-    int cur = 0;
-    int count = ARRAY_SIZE(alarm_timers) - 1;
-    char *arg;
-    char *name;
-    struct qemu_alarm_timer tmp;
-
-    if (is_help_option(opt)) {
-        show_available_alarms();
-        exit(0);
-    }
-
-    arg = g_strdup(opt);
-
-    /* Reorder the array */
-    name = strtok(arg, ",");
-    while (name) {
-        for (i = 0; i < count && alarm_timers[i].name; i++) {
-            if (!strcmp(alarm_timers[i].name, name))
-                break;
-        }
-
-        if (i == count) {
-            fprintf(stderr, "Unknown clock %s\n", name);
-            goto next;
-        }
-
-        if (i < cur)
-            /* Ignore */
-            goto next;
-
-	/* Swap */
-        tmp = alarm_timers[i];
-        alarm_timers[i] = alarm_timers[cur];
-        alarm_timers[cur] = tmp;
-
-        cur++;
-next:
-        name = strtok(NULL, ",");
-    }
-
-    g_free(arg);
-
-    if (cur) {
-        /* Disable remaining timers */
-        for (i = cur; i < count; i++)
-            alarm_timers[i].name = NULL;
-    } else {
-        show_available_alarms();
-        exit(1);
-    }
-}
-
 QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
@@ -245,11 +82,7 @@ static QEMUClock *qemu_new_clock(int type)
 
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
-    bool old = clock->enabled;
     clock->enabled = enabled;
-    if (enabled && !old) {
-        qemu_rearm_alarm_timer(alarm_timer);
-    }
 }
 
 int64_t qemu_clock_has_timers(QEMUClock *clock)
@@ -340,10 +173,9 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
 
     /* Rearm if necessary  */
     if (pt == &ts->clock->active_timers) {
-        if (!alarm_timer->pending) {
-            qemu_rearm_alarm_timer(alarm_timer);
-        }
-        /* Interrupt execution to force deadline recalculation.  */
+        /* Interrupt execution to force deadline recalculation.
+         * FIXME: Do we need to do this now?
+         */
         qemu_clock_warp(ts->clock);
         if (use_icount) {
             qemu_notify_event();
@@ -446,335 +278,8 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
 
 void qemu_run_all_timers(void)
 {
-    alarm_timer->pending = false;
-
     /* vm time timers */
     qemu_run_timers(vm_clock);
     qemu_run_timers(rt_clock);
     qemu_run_timers(host_clock);
-
-    /* rearm timer, if not periodic */
-    if (alarm_timer->expired) {
-        alarm_timer->expired = false;
-        qemu_rearm_alarm_timer(alarm_timer);
-    }
-}
-
-#ifdef _WIN32
-static void CALLBACK host_alarm_handler(PVOID lpParam, BOOLEAN unused)
-#else
-static void host_alarm_handler(int host_signum)
-#endif
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t)
-	return;
-
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
 }
-
-#if defined(__linux__)
-
-#include "qemu/compatfd.h"
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigevent ev;
-    timer_t host_timer;
-    struct sigaction act;
-
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-
-    /* 
-     * Initialize ev struct to 0 to avoid valgrind complaining
-     * about uninitialized data in timer_create call
-     */
-    memset(&ev, 0, sizeof(ev));
-    ev.sigev_value.sival_int = 0;
-    ev.sigev_notify = SIGEV_SIGNAL;
-#ifdef CONFIG_SIGEV_THREAD_ID
-    if (qemu_signalfd_available()) {
-        ev.sigev_notify = SIGEV_THREAD_ID;
-        ev._sigev_un._tid = qemu_get_thread_id();
-    }
-#endif /* CONFIG_SIGEV_THREAD_ID */
-    ev.sigev_signo = SIGALRM;
-
-    if (timer_create(CLOCK_REALTIME, &ev, &host_timer)) {
-        perror("timer_create");
-        return -1;
-    }
-
-    t->timer = host_timer;
-
-    return 0;
-}
-
-static void dynticks_stop_timer(struct qemu_alarm_timer *t)
-{
-    timer_t host_timer = t->timer;
-
-    timer_delete(host_timer);
-}
-
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t,
-                                 int64_t nearest_delta_ns)
-{
-    timer_t host_timer = t->timer;
-    struct itimerspec timeout;
-    int64_t current_ns;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    /* check whether a timer is already running */
-    if (timer_gettime(host_timer, &timeout)) {
-        perror("gettime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    current_ns = timeout.it_value.tv_sec * 1000000000LL + timeout.it_value.tv_nsec;
-    if (current_ns && current_ns <= nearest_delta_ns)
-        return;
-
-    timeout.it_interval.tv_sec = 0;
-    timeout.it_interval.tv_nsec = 0; /* 0 for one-shot timer */
-    timeout.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    timeout.it_value.tv_nsec = nearest_delta_ns % 1000000000;
-    if (timer_settime(host_timer, 0 /* RELATIVE */, &timeout, NULL)) {
-        perror("settime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-#endif /* defined(__linux__) */
-
-#if !defined(_WIN32)
-
-static int unix_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigaction act;
-
-    /* timer signal */
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-    return 0;
-}
-
-static void unix_rearm_timer(struct qemu_alarm_timer *t,
-                             int64_t nearest_delta_ns)
-{
-    struct itimerval itv;
-    int err;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    itv.it_interval.tv_sec = 0;
-    itv.it_interval.tv_usec = 0; /* 0 for one-shot timer */
-    itv.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    itv.it_value.tv_usec = (nearest_delta_ns % 1000000000) / 1000;
-    err = setitimer(ITIMER_REAL, &itv, NULL);
-    if (err) {
-        perror("setitimer");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-static void unix_stop_timer(struct qemu_alarm_timer *t)
-{
-    struct itimerval itv;
-
-    memset(&itv, 0, sizeof(itv));
-    setitimer(ITIMER_REAL, &itv, NULL);
-}
-
-#endif /* !defined(_WIN32) */
-
-
-#ifdef _WIN32
-
-static MMRESULT mm_timer;
-static TIMECAPS mm_tc;
-
-static void CALLBACK mm_alarm_handler(UINT uTimerID, UINT uMsg,
-                                      DWORD_PTR dwUser, DWORD_PTR dw1,
-                                      DWORD_PTR dw2)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t) {
-        return;
-    }
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-static int mm_start_timer(struct qemu_alarm_timer *t)
-{
-    timeGetDevCaps(&mm_tc, sizeof(mm_tc));
-    return 0;
-}
-
-static void mm_stop_timer(struct qemu_alarm_timer *t)
-{
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-}
-
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta)
-{
-    int64_t nearest_delta_ms = delta / 1000000;
-    if (nearest_delta_ms < mm_tc.wPeriodMin) {
-        nearest_delta_ms = mm_tc.wPeriodMin;
-    } else if (nearest_delta_ms > mm_tc.wPeriodMax) {
-        nearest_delta_ms = mm_tc.wPeriodMax;
-    }
-
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-    mm_timer = timeSetEvent((UINT)nearest_delta_ms,
-                            mm_tc.wPeriodMin,
-                            mm_alarm_handler,
-                            (DWORD_PTR)t,
-                            TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
-
-    if (!mm_timer) {
-        fprintf(stderr, "Failed to re-arm win32 alarm timer\n");
-        timeEndPeriod(mm_tc.wPeriodMin);
-        exit(1);
-    }
-}
-
-static int win32_start_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer;
-    BOOLEAN success;
-
-    /* If you call ChangeTimerQueueTimer on a one-shot timer (its period
-       is zero) that has already expired, the timer is not updated.  Since
-       creating a new timer is relatively expensive, set a bogus one-hour
-       interval in the dynticks case.  */
-    success = CreateTimerQueueTimer(&hTimer,
-                          NULL,
-                          host_alarm_handler,
-                          t,
-                          1,
-                          3600000,
-                          WT_EXECUTEINTIMERTHREAD);
-
-    if (!success) {
-        fprintf(stderr, "Failed to initialize win32 alarm timer: %ld\n",
-                GetLastError());
-        return -1;
-    }
-
-    t->timer = hTimer;
-    return 0;
-}
-
-static void win32_stop_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer = t->timer;
-
-    if (hTimer) {
-        DeleteTimerQueueTimer(NULL, hTimer, NULL);
-    }
-}
-
-static void win32_rearm_timer(struct qemu_alarm_timer *t,
-                              int64_t nearest_delta_ns)
-{
-    HANDLE hTimer = t->timer;
-    int64_t nearest_delta_ms;
-    BOOLEAN success;
-
-    nearest_delta_ms = nearest_delta_ns / 1000000;
-    if (nearest_delta_ms < 1) {
-        nearest_delta_ms = 1;
-    }
-    /* ULONG_MAX can be 32 bit */
-    if (nearest_delta_ms > ULONG_MAX) {
-        nearest_delta_ms = ULONG_MAX;
-    }
-    success = ChangeTimerQueueTimer(NULL,
-                                    hTimer,
-                                    (unsigned long) nearest_delta_ms,
-                                    3600000);
-
-    if (!success) {
-        fprintf(stderr, "Failed to rearm win32 alarm timer: %ld\n",
-                GetLastError());
-        exit(-1);
-    }
-
-}
-
-#endif /* _WIN32 */
-
-static void quit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    alarm_timer = NULL;
-    t->stop(t);
-}
-
-#ifdef CONFIG_POSIX
-static void reinit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    t->stop(t);
-    if (t->start(t)) {
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    qemu_rearm_alarm_timer(t);
-}
-#endif /* CONFIG_POSIX */
-
-int init_timer_alarm(void)
-{
-    struct qemu_alarm_timer *t = NULL;
-    int i, err = -1;
-
-    if (alarm_timer) {
-        return 0;
-    }
-
-    for (i = 0; alarm_timers[i].name; i++) {
-        t = &alarm_timers[i];
-
-        err = t->start(t);
-        if (!err)
-            break;
-    }
-
-    if (err) {
-        err = -ENOENT;
-        goto fail;
-    }
-
-    atexit(quit_timers);
-#ifdef CONFIG_POSIX
-    pthread_atfork(NULL, NULL, reinit_timers);
-#endif
-    alarm_timer = t;
-    return 0;
-
-fail:
-    return err;
-}
-
diff --git a/vl.c b/vl.c
index 25b8f2f..612c609 100644
--- a/vl.c
+++ b/vl.c
@@ -3714,7 +3714,10 @@ int main(int argc, char **argv, char **envp)
                 old_param = 1;
                 break;
             case QEMU_OPTION_clock:
-                configure_alarms(optarg);
+                /* Once upon a time we did:
+                 *   configure_alarms(optarg);
+                 * here. This is stubbed out for compatibility.
+                 */
                 break;
             case QEMU_OPTION_startdate:
                 configure_rtc_date_offset(optarg, 1);
-- 
1.7.9.5


* [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers Alex Bligh
@ 2013-07-20 18:06   ` Alex Bligh
  2013-07-23 21:09     ` Richard Henderson
                       ` (2 more replies)
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 3/7] aio / timers: add ppoll support with qemu_g_poll_ns Alex Bligh
                     ` (5 subsequent siblings)
  7 siblings, 3 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add utility functions to qemu-timer.c for nanosecond timing.

Ensure we keep track of all QEMUClocks on a list.

Add qemu_clock_deadline_ns and qemu_clock_deadline_all_ns to
calculate deadlines to nanosecond accuracy.

Add utility function qemu_soonest_timeout to calculate the sooner of two
deadlines.

Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
milliseconds for when ppoll is not used.
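
As a worked example of the conventions (illustrative only, not part of the
patch): -1 means "no deadline / infinite", qemu_soonest_timeout preserves it
via the unsigned-cast trick, and qemu_timeout_ns_to_ms rounds up:

    int64_t a = qemu_soonest_timeout(-1, 250000); /* 250000: -1 casts to UINT64_MAX */
    int64_t b = qemu_soonest_timeout(-1, -1);     /* -1: both infinite */
    int ms = qemu_timeout_ns_to_ms(250000);       /* 1: rounds up to avoid busy-wait */
    int inf = qemu_timeout_ns_to_ms(-1);          /* -1: infinite stays infinite */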

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |   19 ++++++++++++
 qemu-timer.c         |   83 ++++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 96 insertions(+), 6 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 8638d36..2f1b609 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -11,6 +11,10 @@
 #define SCALE_US 1000
 #define SCALE_NS 1
 
+#define QEMU_CLOCK_REALTIME 0
+#define QEMU_CLOCK_VIRTUAL  1
+#define QEMU_CLOCK_HOST     2
+
 typedef struct QEMUClock QEMUClock;
 typedef void QEMUTimerCB(void *opaque);
 
@@ -32,10 +36,16 @@ extern QEMUClock *vm_clock;
    the virtual clock. */
 extern QEMUClock *host_clock;
 
+QEMUClock *qemu_new_clock(int type);
+void qemu_free_clock(QEMUClock *clock);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
+int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int64_t qemu_clock_deadline_all_ns(void);
+int qemu_timeout_ns_to_ms(int64_t ns);
+gint qemu_g_poll_ns(GPollFD *fds, guint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
@@ -61,6 +71,15 @@ int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
 void cpu_disable_ticks(void);
 
+static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
+{
+    /* we can abuse the fact that -1 (which means infinite) is a maximal
+     * value when cast to unsigned. As this is disgusting, it's kept in
+     * one inline function.
+     */
+    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;
+}
+
 static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
diff --git a/qemu-timer.c b/qemu-timer.c
index 062af38..2150782 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -36,10 +36,6 @@
 /***********************************************************/
 /* timers */
 
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST     2
-
 struct QEMUClock {
     QEMUTimer *active_timers;
 
@@ -48,6 +44,8 @@ struct QEMUClock {
 
     int type;
     bool enabled;
+
+    QLIST_ENTRY(QEMUClock) list;
 };
 
 struct QEMUTimer {
@@ -59,6 +57,9 @@ struct QEMUTimer {
     int scale;
 };
 
+static QLIST_HEAD(, QEMUClock) qemu_clocks =
+    QLIST_HEAD_INITIALIZER(qemu_clocks);
+
 static bool qemu_timer_expired_ns(QEMUTimer *timer_head, int64_t current_time)
 {
     return timer_head && (timer_head->expire_time <= current_time);
@@ -68,7 +69,7 @@ QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
 
-static QEMUClock *qemu_new_clock(int type)
+QEMUClock *qemu_new_clock(int type)
 {
     QEMUClock *clock;
 
@@ -77,9 +78,16 @@ static QEMUClock *qemu_new_clock(int type)
     clock->enabled = true;
     clock->last = INT64_MIN;
     notifier_list_init(&clock->reset_notifiers);
+    QLIST_INSERT_HEAD(&qemu_clocks, clock, list);
     return clock;
 }
 
+void qemu_free_clock(QEMUClock *clock)
+{
+    QLIST_REMOVE(clock, list);
+    g_free(clock);
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     clock->enabled = enabled;
@@ -101,7 +109,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->active_timers) {
+    if (clock->enabled && clock->active_timers) {
         delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
     }
     if (delta < 0) {
@@ -110,6 +118,69 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+/*
+ * As above, but return -1 for no deadline, and do not cap to 2^32
+ * as we know the result is always positive.
+ */
+
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    int64_t delta;
+
+    if (!clock->enabled || !clock->active_timers) {
+        return -1;
+    }
+
+    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+
+    if (delta <= 0) {
+        return 0;
+    }
+
+    return delta;
+}
+
+/* Return the minimum deadline across all clocks or -1 for no
+ * deadline
+ */
+int64_t qemu_clock_deadline_all_ns(void)
+{
+    QEMUClock *clock;
+    int64_t ret = -1;
+    QLIST_FOREACH(clock, &qemu_clocks, list) {
+        ret = qemu_soonest_timeout(ret, qemu_clock_deadline_ns(clock));
+    }
+    return ret;
+}
+
+/* Transition function to convert a nanosecond timeout to ms
+ * This is used where a system does not support ppoll
+ */
+int qemu_timeout_ns_to_ms(int64_t ns)
+{
+    int64_t ms;
+    if (ns < 0) {
+        return -1;
+    }
+
+    if (!ns) {
+        return 0;
+    }
+
+    /* Always round up, because it's better to wait too long than to wait too
+     * little and effectively busy-wait
+     */
+    ms = (ns + SCALE_MS - 1) / SCALE_MS;
+
+    /* To avoid overflow problems, limit this to 2^31, i.e. approx 25 days */
+    if (ms > (int64_t) INT32_MAX) {
+        ms = INT32_MAX;
+    }
+
+    return (int) ms;
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5


* [Qemu-devel] [PATCHv2] [RFC 3/7] aio / timers: add ppoll support with qemu_g_poll_ns
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers Alex Bligh
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks Alex Bligh
@ 2013-07-20 18:06   ` Alex Bligh
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 4/7] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add qemu_g_poll_ns, which works like g_poll but takes a nanosecond
timeout.
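
The intended call pattern is roughly the following sketch, where fds and
nfds stand for the caller's existing poll set (patch 6 adds the real users):

    int64_t timeout_ns = qemu_clock_deadline_all_ns();  /* -1 if no timer pending */
    gint ret = qemu_g_poll_ns(fds, nfds, timeout_ns);   /* ppoll if available,
                                                           else g_poll with ms timeout */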

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure    |   19 +++++++++++++++++++
 qemu-timer.c |   24 ++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/configure b/configure
index 9e1cd19..b491c00 100755
--- a/configure
+++ b/configure
@@ -2801,6 +2801,22 @@ if compile_prog "" "" ; then
   dup3=yes
 fi
 
+# check for ppoll support
+ppoll=no
+cat > $TMPC << EOF
+#include <poll.h>
+
+int main(void)
+{
+    struct pollfd pfd = { .fd = 0, .events = 0, .revents = 0 };
+    ppoll(&pfd, 1, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  ppoll=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3792,6 +3808,9 @@ fi
 if test "$dup3" = "yes" ; then
   echo "CONFIG_DUP3=y" >> $config_host_mak
 fi
+if test "$ppoll" = "yes" ; then
+  echo "CONFIG_PPOLL=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/qemu-timer.c b/qemu-timer.c
index 2150782..5c576b4 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -33,6 +33,10 @@
 #include <pthread.h>
 #endif
 
+#ifdef CONFIG_PPOLL
+#include <poll.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -181,6 +185,26 @@ int qemu_timeout_ns_to_ms(int64_t ns)
 }
 
 
+/* qemu implementation of g_poll which uses a nanosecond timeout but is
+ * otherwise identical to g_poll
+ */
+gint qemu_g_poll_ns(GPollFD *fds, guint nfds, int64_t timeout)
+{
+#ifdef CONFIG_PPOLL
+    if (timeout < 0) {
+        return ppoll((struct pollfd *)fds, nfds, NULL, NULL);
+    } else {
+        struct timespec ts;
+        ts.tv_sec = timeout / 1000000000LL;
+        ts.tv_nsec = timeout % 1000000000LL;
+        return ppoll((struct pollfd *)fds, nfds, &ts, NULL);
+    }
+#else
+    return g_poll(fds, nfds, qemu_timeout_ns_to_ms(timeout));
+#endif
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5


* [Qemu-devel] [PATCHv2] [RFC 4/7] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                     ` (2 preceding siblings ...)
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 3/7] aio / timers: add ppoll support with qemu_g_poll_ns Alex Bligh
@ 2013-07-20 18:06   ` Alex Bligh
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 5/7] aio / timers: Add a clock to AioContext Alex Bligh
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Make qemu_run_timers and qemu_run_all_timers return progress
so that aio_poll etc. can determine whether a timer has been
run.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    4 ++--
 qemu-timer.c         |   17 +++++++++++------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 2f1b609..e0922e6 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -63,8 +63,8 @@ bool qemu_timer_pending(QEMUTimer *ts);
 bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
-void qemu_run_timers(QEMUClock *clock);
-void qemu_run_all_timers(void);
+bool qemu_run_timers(QEMUClock *clock);
+bool qemu_run_all_timers(void);
 void init_clocks(void);
 
 int64_t cpu_get_ticks(void);
diff --git a/qemu-timer.c b/qemu-timer.c
index 5c576b4..4cba055 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -299,13 +299,14 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-void qemu_run_timers(QEMUClock *clock)
+bool qemu_run_timers(QEMUClock *clock)
 {
     QEMUTimer *ts;
     int64_t current_time;
+    bool progress = false;
    
     if (!clock->enabled)
-        return;
+        return progress;
 
     current_time = qemu_get_clock_ns(clock);
     for(;;) {
@@ -319,7 +320,9 @@ void qemu_run_timers(QEMUClock *clock)
 
         /* run the callback (the timer list can be modified) */
         ts->cb(ts->opaque);
+        progress = true;
     }
+    return progress;
 }
 
 int64_t qemu_get_clock_ns(QEMUClock *clock)
@@ -371,10 +374,12 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
     return qemu_timer_pending(ts) ? ts->expire_time : -1;
 }
 
-void qemu_run_all_timers(void)
+bool qemu_run_all_timers(void)
 {
     /* vm time timers */
-    qemu_run_timers(vm_clock);
-    qemu_run_timers(rt_clock);
-    qemu_run_timers(host_clock);
+    bool progress = false;
+    progress |= qemu_run_timers(vm_clock);
+    progress |= qemu_run_timers(rt_clock);
+    progress |= qemu_run_timers(host_clock);
+    return progress;
 }
-- 
1.7.9.5


* [Qemu-devel] [PATCHv2] [RFC 5/7] aio / timers: Add a clock to AioContext
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                     ` (3 preceding siblings ...)
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 4/7] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
@ 2013-07-20 18:06   ` Alex Bligh
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch Alex Bligh
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add a clock to each AioContext and delete it when freed.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c             |    2 ++
 include/block/aio.h |    5 +++++
 2 files changed, 7 insertions(+)

diff --git a/async.c b/async.c
index 90fe906..0d41431 100644
--- a/async.c
+++ b/async.c
@@ -177,6 +177,7 @@ aio_ctx_finalize(GSource     *source)
     aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     g_array_free(ctx->pollfds, TRUE);
+    qemu_free_clock(ctx->clock);
 }
 
 static GSourceFuncs aio_source_funcs = {
@@ -215,6 +216,7 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
+    ctx->clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
 
     return ctx;
 }
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..0835a4d 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -41,6 +41,8 @@ typedef struct AioHandler AioHandler;
 typedef void QEMUBHFunc(void *opaque);
 typedef void IOHandler(void *opaque);
 
+typedef struct QEMUClock QEMUClock;
+
 typedef struct AioContext {
     GSource source;
 
@@ -69,6 +71,9 @@ typedef struct AioContext {
 
     /* Thread pool for performing work and receiving completion callbacks */
     struct ThreadPool *thread_pool;
+
+    /* Clock for calling timers */
+    QEMUClock *clock;
 } AioContext;
 
 /* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
-- 
1.7.9.5


* [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                     ` (4 preceding siblings ...)
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 5/7] aio / timers: Add a clock to AioContext Alex Bligh
@ 2013-07-20 18:06   ` Alex Bligh
  2013-07-25  9:33     ` Stefan Hajnoczi
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 7/7] aio / timers: Add test harness for AioContext timers Alex Bligh
  2013-07-25  9:37   ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Stefan Hajnoczi
  7 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Switch to ppoll (or rather qemu_g_poll_ns, which will use ppoll if available).

Set timeouts for aio, g_source, and the main loop from the earliest timer
deadline.

Run timers for the AioContext (only) in aio_poll/aio_dispatch.
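
The resulting aio_poll shape is roughly the following sketch (see the
aio-posix.c hunk below for the real change):

    /* Sleep no longer than the earliest timer deadline, then dispatch
     * unconditionally so that expired timers run even when no fd is readable.
     */
    ret = qemu_g_poll_ns((GPollFD *)ctx->pollfds->data, ctx->pollfds->len,
                         blocking ? qemu_clock_deadline_all_ns() : 0);
    progress |= aio_dispatch(ctx);    /* also runs ctx->clock's timers */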

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 aio-posix.c |   20 +++++++++++++-------
 aio-win32.c |   20 ++++++++++++++++++--
 async.c     |   16 ++++++++++++++--
 main-loop.c |   43 ++++++++++++++++++++++++++++++++-----------
 4 files changed, 77 insertions(+), 22 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..5bdb9fe 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -166,6 +166,10 @@ static bool aio_dispatch(AioContext *ctx)
             g_free(tmp);
         }
     }
+
+    /* Run our timers */
+    progress |= qemu_run_timers(ctx->clock);
+
     return progress;
 }
 
@@ -232,9 +236,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     /* wait until next event */
-    ret = g_poll((GPollFD *)ctx->pollfds->data,
-                 ctx->pollfds->len,
-                 blocking ? -1 : 0);
+    ret = qemu_g_poll_ns((GPollFD *)ctx->pollfds->data,
+                         ctx->pollfds->len,
+                         blocking ? qemu_clock_deadline_all_ns() : 0);
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
@@ -245,11 +249,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
                 node->pfd.revents = pfd->revents;
             }
         }
-        if (aio_dispatch(ctx)) {
-            progress = true;
-        }
+    }
+
+    /* Run dispatch even if there were no readable fds to run timers */
+    if (aio_dispatch(ctx)) {
+        progress = true;
     }
 
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..68343ba 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -98,6 +98,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
     bool busy, progress;
     int count;
+    int timeout;
 
     progress = false;
 
@@ -111,6 +112,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress = true;
     }
 
+    /* Run timers */
+    progress |= qemu_run_timers(ctx->clock);
+
     /*
      * Then dispatch any pending callbacks from the GSource.
      *
@@ -174,8 +178,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* wait until next event */
     while (count > 0) {
-        int timeout = blocking ? INFINITE : 0;
-        int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+        int ret;
+
+        timeout = blocking ?
+            qemu_timeout_ns_to_ms(qemu_clock_deadline_all_ns()) : 0;
+        ret = WaitForMultipleObjects(count, events, FALSE, timeout);
 
         /* if we have any signaled events, dispatch event */
         if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
@@ -214,6 +221,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
+    if (blocking) {
+        /* Run the timers a second time. We do this because otherwise aio_wait
+         * will not note progress - and will stop a drain early - if we have
+         * a timer that was not ready to run entering g_poll but is ready
+         * after g_poll. This will only do anything if a timer has expired.
+         */
+        progress |= qemu_run_timers(ctx->clock);
+    }
+
     assert(progress || busy);
     return true;
 }
diff --git a/async.c b/async.c
index 0d41431..cb6b1d4 100644
--- a/async.c
+++ b/async.c
@@ -123,13 +123,16 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
 {
     AioContext *ctx = (AioContext *) source;
     QEMUBH *bh;
+    int deadline;
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
         if (!bh->deleted && bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
-                *timeout = 10;
+                if ((*timeout < 0) || (*timeout > 10)) {
+                    *timeout = 10;
+                }
             } else {
                 /* non-idle bottom halves will be executed
                  * immediately */
@@ -139,6 +142,15 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
         }
     }
 
+    deadline = qemu_timeout_ns_to_ms(qemu_clock_deadline_ns(ctx->clock));
+    if (deadline == 0) {
+        *timeout = 0;
+        return true;
+    } else if ((deadline > 0) &&
+               ((*timeout < 0) || (deadline < *timeout))) {
+        *timeout = deadline;
+    }
+
     return false;
 }
 
@@ -153,7 +165,7 @@ aio_ctx_check(GSource *source)
             return true;
 	}
     }
-    return aio_pending(ctx);
+    return aio_pending(ctx) || (qemu_clock_deadline_ns(ctx->clock) >= 0);
 }
 
 static gboolean
diff --git a/main-loop.c b/main-loop.c
index 8918dd1..1bd10e8 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -151,10 +151,11 @@ static int max_priority;
 static int glib_pollfds_idx;
 static int glib_n_poll_fds;
 
-static void glib_pollfds_fill(uint32_t *cur_timeout)
+static void glib_pollfds_fill(int64_t *cur_timeout)
 {
     GMainContext *context = g_main_context_default();
     int timeout = 0;
+    int64_t timeout_ns;
     int n;
 
     g_main_context_prepare(context, &max_priority);
@@ -170,9 +171,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
                                  glib_n_poll_fds);
     } while (n != glib_n_poll_fds);
 
-    if (timeout >= 0 && timeout < *cur_timeout) {
-        *cur_timeout = timeout;
+    if (timeout < 0) {
+        timeout_ns = -1;
+    } else {
+      timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
     }
+
+    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
 }
 
 static void glib_pollfds_poll(void)
@@ -187,7 +192,7 @@ static void glib_pollfds_poll(void)
 
 #define MAX_MAIN_LOOP_SPIN (1000)
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     int ret;
     static int spin_counter;
@@ -210,7 +215,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
             notified = true;
         }
 
-        timeout = 1;
+        timeout = SCALE_MS;
     }
 
     if (timeout > 0) {
@@ -220,7 +225,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
         spin_counter++;
     }
 
-    ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
+    ret = qemu_g_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);
 
     if (timeout > 0) {
         qemu_mutex_lock_iothread();
@@ -369,7 +374,7 @@ static void pollfds_poll(GArray *pollfds, int nfds, fd_set *rfds,
     }
 }
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     GMainContext *context = g_main_context_default();
     GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
@@ -378,6 +383,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
     PollingEntry *pe;
     WaitObjects *w = &wait_objects;
     gint poll_timeout;
+    int64_t poll_timeout_ns;
     static struct timeval tv0;
     fd_set rfds, wfds, xfds;
     int nfds;
@@ -415,12 +421,17 @@ static int os_host_main_loop_wait(uint32_t timeout)
         poll_fds[n_poll_fds + i].events = G_IO_IN;
     }
 
-    if (poll_timeout < 0 || timeout < poll_timeout) {
-        poll_timeout = timeout;
+    if (poll_timeout < 0) {
+        poll_timeout_ns = -1;
+    } else {
+        poll_timeout_ns = (int64_t)poll_timeout * (int64_t)SCALE_MS;
     }
 
+    poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);
+
     qemu_mutex_unlock_iothread();
-    g_poll_ret = g_poll(poll_fds, n_poll_fds + w->num, poll_timeout);
+    g_poll_ret = qemu_g_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);
+
     qemu_mutex_lock_iothread();
     if (g_poll_ret > 0) {
         for (i = 0; i < w->num; i++) {
@@ -445,6 +456,7 @@ int main_loop_wait(int nonblocking)
 {
     int ret;
     uint32_t timeout = UINT32_MAX;
+    int64_t timeout_ns;
 
     if (nonblocking) {
         timeout = 0;
@@ -458,7 +470,16 @@ int main_loop_wait(int nonblocking)
     slirp_pollfds_fill(gpollfds);
 #endif
     qemu_iohandler_fill(gpollfds);
-    ret = os_host_main_loop_wait(timeout);
+
+    if (timeout == UINT32_MAX) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
+    }
+
+    timeout_ns = qemu_soonest_timeout(timeout_ns, qemu_clock_deadline_all_ns());
+
+    ret = os_host_main_loop_wait(timeout_ns);
     qemu_iohandler_poll(gpollfds, ret);
 #ifdef CONFIG_SLIRP
     slirp_pollfds_poll(gpollfds, (ret < 0));
-- 
1.7.9.5


* [Qemu-devel] [PATCHv2] [RFC 7/7] aio / timers: Add test harness for AioContext timers
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                     ` (5 preceding siblings ...)
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch Alex Bligh
@ 2013-07-20 18:06   ` Alex Bligh
  2013-07-25  9:37   ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Stefan Hajnoczi
  7 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-20 18:06 UTC
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add a test harness for AioContext timers. The g_source equivalent is
unsatisfactory as it suffers from false wakeups.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 tests/test-aio.c |  124 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 123 insertions(+), 1 deletion(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index c173870..7460c40 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -1,5 +1,5 @@
 /*
- * AioContext tests
+ * Aiocontext tests
  *
  * Copyright Red Hat, Inc. 2012
  *
@@ -12,6 +12,7 @@
 
 #include <glib.h>
 #include "block/aio.h"
+#include "qemu/timer.h"
 
 AioContext *ctx;
 
@@ -31,6 +32,15 @@ typedef struct {
     int max;
 } BHTestData;
 
+typedef struct {
+    QEMUTimer *timer;
+    QEMUClock *clock;
+    int n;
+    int max;
+    int64_t ns;
+    AioContext *ctx;
+} TimerTestData;
+
 static void bh_test_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -39,6 +49,24 @@ static void bh_test_cb(void *opaque)
     }
 }
 
+static void timer_test_cb(void *opaque)
+{
+    TimerTestData *data = opaque;
+    if (++data->n < data->max) {
+        qemu_mod_timer(data->timer,
+                       qemu_get_clock_ns(data->clock) + data->ns);
+    }
+}
+
+static void dummy_io_handler_read(void *opaque)
+{
+}
+
+static int dummy_io_handler_flush(void *opaque)
+{
+    return 1;
+}
+
 static void bh_delete_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -340,6 +368,51 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS*750,
+                           .max = 2, .clock = ctx->clock };
+    int pipefd[2];
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    aio_poll(ctx, false);
+
+    data.timer = qemu_new_timer_ns(data.clock, timer_test_cb, &data);
+    qemu_mod_timer(data.timer, qemu_get_clock_ns(data.clock) + data.ns);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    g_assert(aio_poll(ctx, true));
+    g_assert_cmpint(data.n, ==, 2);
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
 /* Now the same tests, using the context as a GSource.  They are
  * very similar to the ones above, with g_main_context_iteration
  * replacing aio_poll.  However:
@@ -622,12 +695,59 @@ static void test_source_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_source_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS*750,
+                           .max = 2, .clock = ctx->clock };
+    int pipefd[2];
+    int64_t expiry;
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    while (g_main_context_iteration(NULL, false));
+
+    data.timer = qemu_new_timer_ns(data.clock, timer_test_cb, &data);
+    expiry = qemu_get_clock_ns(data.clock) + data.ns;
+    qemu_mod_timer(data.timer, expiry);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(g_main_context_iteration(NULL, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* The comment above was not kidding when it said this wakes up itself */
+    do {
+        g_assert(g_main_context_iteration(NULL, true));
+    } while (qemu_get_clock_ns(data.clock) <= expiry);
+    sleep(1);
+    g_main_context_iteration(NULL, false);
+
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
+
 /* End of tests.  */
 
 int main(int argc, char **argv)
 {
     GSource *src;
 
+    init_clocks();
+
     ctx = aio_context_new();
     src = aio_get_g_source(ctx);
     g_source_attach(src, NULL);
@@ -648,6 +768,7 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
+    g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio-gsource/notify",                  test_source_notify);
     g_test_add_func("/aio-gsource/flush",                   test_source_flush);
@@ -662,5 +783,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio-gsource/event/wait",              test_source_wait_event_notifier);
     g_test_add_func("/aio-gsource/event/wait/no-flush-cb",  test_source_wait_event_notifier_noflush);
     g_test_add_func("/aio-gsource/event/flush",             test_source_flush_event_notifier);
+    g_test_add_func("/aio-gsource/timer/schedule",          test_source_timer_schedule);
     return g_test_run();
 }
-- 
1.7.9.5


* Re: [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks Alex Bligh
@ 2013-07-23 21:09     ` Richard Henderson
  2013-07-23 21:34       ` Alex Bligh
  2013-07-25  9:19     ` Stefan Hajnoczi
  2013-07-25  9:21     ` Stefan Hajnoczi
  2 siblings, 1 reply; 128+ messages in thread
From: Richard Henderson @ 2013-07-23 21:09 UTC
  To: Alex Bligh
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel, Stefan Hajnoczi

On 07/20/2013 10:06 AM, Alex Bligh wrote:
> +int64_t qemu_clock_deadline_ns(QEMUClock *clock);
> +int64_t qemu_clock_deadline_all_ns(void);
> +int qemu_timeout_ns_to_ms(int64_t ns);
> +gint qemu_g_poll_ns(GPollFD *fds, guint nfds, int64_t timeout);

Why continue with the g_ prefix here?  Surely qemu_poll_ns is sufficient.


r~


* Re: [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-23 21:09     ` Richard Henderson
@ 2013-07-23 21:34       ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-23 21:34 UTC
  To: Richard Henderson
  Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, qemu-devel,
	Alex Bligh, Paolo Bonzini

Richard,

--On 23 July 2013 13:09:18 -0800 Richard Henderson <rth@twiddle.net> wrote:

> On 07/20/2013 10:06 AM, Alex Bligh wrote:
>> +int64_t qemu_clock_deadline_ns(QEMUClock *clock);
>> +int64_t qemu_clock_deadline_all_ns(void);
>> +int qemu_timeout_ns_to_ms(int64_t ns);
>> +gint qemu_g_poll_ns(GPollFD *fds, guint nfds, int64_t timeout);
>
> Why continue with the g_ prefix here?  Surely qemu_poll_ns is sufficient.

Only because it was a g_-style function (it takes GPollFD * and guint, and
returns gint). Quite happy to make it qemu_poll_ns. Perhaps I should make
it take uint and return int at the same time.

TBH I am confused as to why we use g_poll at all. I thought originally we
were using it for win32, but win32 doesn't use g_poll. Given that poll is
POSIX, what platforms do we target that have glib but are neither win32
nor POSIX? Using straight POSIX poll would be easier.

-- 
Alex Bligh


* Re: [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers Alex Bligh
@ 2013-07-25  9:10     ` Stefan Hajnoczi
  2013-07-25  9:11       ` Paolo Bonzini
  2013-07-25  9:37       ` Alex Bligh
  0 siblings, 2 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25  9:10 UTC
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi,
	Paolo Bonzini, rth

On Sat, Jul 20, 2013 at 07:06:37PM +0100, Alex Bligh wrote:
> Remove alarm timers from qemu-timer.c in anticipation of using
> timeouts for g_poll / ppoll instead.
> 
> Signed-off-by: Alex Bligh <alex@alex.org.uk>
> ---
>  include/qemu/timer.h |    2 -
>  main-loop.c          |    4 -
>  qemu-timer.c         |  501 +-------------------------------------------------
>  vl.c                 |    5 +-
>  4 files changed, 7 insertions(+), 505 deletions(-)

This should be one of the last patches so qemu.git remains bisectable.
Only remove the alarm timer once the event loops are already using the
timeout argument.

> @@ -245,11 +82,7 @@ static QEMUClock *qemu_new_clock(int type)
>  
>  void qemu_clock_enable(QEMUClock *clock, bool enabled)
>  {
> -    bool old = clock->enabled;
>      clock->enabled = enabled;
> -    if (enabled && !old) {
> -        qemu_rearm_alarm_timer(alarm_timer);
> -    }

If this function is supposed to work when called from another thread
(e.g. vcpu thread), then you need to call qemu_notify_event().  For
AioContext clocks that should be aio_notify() with the relevant
AioContext, but we don't need that yet.
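
A sketch of that suggestion (untested; assumes qemu_notify_event() is
enough to force the main loop to recompute its poll timeout):

    void qemu_clock_enable(QEMUClock *clock, bool enabled)
    {
        bool old = clock->enabled;
        clock->enabled = enabled;
        if (enabled && !old) {
            qemu_notify_event();    /* kick the event loop out of its poll */
        }
    }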

>  }
>  
>  int64_t qemu_clock_has_timers(QEMUClock *clock)
> @@ -340,10 +173,9 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
>  
>      /* Rearm if necessary  */
>      if (pt == &ts->clock->active_timers) {
> -        if (!alarm_timer->pending) {
> -            qemu_rearm_alarm_timer(alarm_timer);
> -        }
> -        /* Interrupt execution to force deadline recalculation.  */
> +        /* Interrupt execution to force deadline recalculation.
> +         * FIXME: Do we need to do this now?
> +         */
>          qemu_clock_warp(ts->clock);
>          if (use_icount) {
>              qemu_notify_event();

Same here.

> diff --git a/vl.c b/vl.c
> index 25b8f2f..612c609 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -3714,7 +3714,10 @@ int main(int argc, char **argv, char **envp)
>                  old_param = 1;
>                  break;
>              case QEMU_OPTION_clock:
> -                configure_alarms(optarg);
> +                /* Once upon a time we did:
> +                 *   configure_alarms(optarg);
> +                 * here. This is stubbed out for compatibility.
> +                 */
>                  break;

This could be made clearer to say outright that the options don't exist
anymore:

/* Clock options no longer exist.  Keep this option for backward
 * compatibility.
 */


* Re: [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers
  2013-07-25  9:10     ` Stefan Hajnoczi
@ 2013-07-25  9:11       ` Paolo Bonzini
  2013-07-25  9:38         ` Alex Bligh
  2013-07-25  9:37       ` Alex Bligh
  1 sibling, 1 reply; 128+ messages in thread
From: Paolo Bonzini @ 2013-07-25  9:11 UTC
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	Stefan Hajnoczi, rth

On 25/07/2013 11:10, Stefan Hajnoczi wrote:
> On Sat, Jul 20, 2013 at 07:06:37PM +0100, Alex Bligh wrote:
>> Remove alarm timers from qemu-timer.c in anticipation of using
>> timeouts for g_poll / ppoll instead.
>>
>> Signed-off-by: Alex Bligh <alex@alex.org.uk>
>> ---
>>  include/qemu/timer.h |    2 -
>>  main-loop.c          |    4 -
>>  qemu-timer.c         |  501 +-------------------------------------------------
>>  vl.c                 |    5 +-
>>  4 files changed, 7 insertions(+), 505 deletions(-)
> 
> This should be one of the last patches so qemu.git remains bisectable.
> Only remove the alarm timer once the event loops are already using the
> timeout argument.
> 
>> @@ -245,11 +82,7 @@ static QEMUClock *qemu_new_clock(int type)
>>  
>>  void qemu_clock_enable(QEMUClock *clock, bool enabled)
>>  {
>> -    bool old = clock->enabled;
>>      clock->enabled = enabled;
>> -    if (enabled && !old) {
>> -        qemu_rearm_alarm_timer(alarm_timer);
>> -    }
> 
> If this function is supposed to work when called from another thread
> (e.g. vcpu thread), then you need to call qemu_notify_event().  For
> AioContext clocks that should be aio_notify() with the relevant
> AioContext, but we don't need that yet.

In general QEMUClock should have an AioContext pointer so that it can
call aio_notify.

Paolo

>>  }
>>  
>>  int64_t qemu_clock_has_timers(QEMUClock *clock)
>> @@ -340,10 +173,9 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
>>  
>>      /* Rearm if necessary  */
>>      if (pt == &ts->clock->active_timers) {
>> -        if (!alarm_timer->pending) {
>> -            qemu_rearm_alarm_timer(alarm_timer);
>> -        }
>> -        /* Interrupt execution to force deadline recalculation.  */
>> +        /* Interrupt execution to force deadline recalculation.
>> +         * FIXME: Do we need to do this now?
>> +         */
>>          qemu_clock_warp(ts->clock);
>>          if (use_icount) {
>>              qemu_notify_event();
> 
> Same here.
> 
>> diff --git a/vl.c b/vl.c
>> index 25b8f2f..612c609 100644
>> --- a/vl.c
>> +++ b/vl.c
>> @@ -3714,7 +3714,10 @@ int main(int argc, char **argv, char **envp)
>>                  old_param = 1;
>>                  break;
>>              case QEMU_OPTION_clock:
>> -                configure_alarms(optarg);
>> +                /* Once upon a time we did:
>> +                 *   configure_alarms(optarg);
>> +                 * here. This is stubbed out for compatibility.
>> +                 */
>>                  break;
> 
> This could be made clearer to say outright that the options don't exist
> anymore:
> 
> /* Clock options no longer exist.  Keep this option for backward
>  * compatibility.
>  */
> 

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks Alex Bligh
  2013-07-23 21:09     ` Richard Henderson
@ 2013-07-25  9:19     ` Stefan Hajnoczi
  2013-07-25  9:46       ` Alex Bligh
  2013-07-25  9:21     ` Stefan Hajnoczi
  2 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25  9:19 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi,
	Paolo Bonzini, rth

On Sat, Jul 20, 2013 at 07:06:38PM +0100, Alex Bligh wrote:
> Add utility functions to qemu-timer.c for nanosecond timing.
> 
> Ensure we keep track of all QEMUClocks on a list.
> 
> Add qemu_clock_deadline_ns and qemu_clock_deadline_all_ns to
> calculate deadlines to nanosecond accuracy.
> 
> Add utility function qemu_soonest_timeout to calculate soonest deadline.
> 
> Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
> milliseconds for when ppoll is not used.

Please split this into smaller patches.  There are several logical
changes happening here.

> @@ -61,6 +71,15 @@ int64_t cpu_get_ticks(void);
>  void cpu_enable_ticks(void);
>  void cpu_disable_ticks(void);
>  
> +static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
> +{
> +    /* we can abuse the fact that -1 (which means infinite) is a maximal
> +     * value when cast to unsigned. As this is disgusting, it's kept in
> +     * one inline function.
> +     */
> +    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;

The straightforward version isn't much longer than the commented casting trick:

if (timeout1 == -1) {
    return timeout2;
} else if (timeout2 == -1) {
    return timeout1;
} else {
    return timeout1 < timeout2 ? timeout1 : timeout2;
}

> @@ -48,6 +44,8 @@ struct QEMUClock {
>  
>      int type;
>      bool enabled;
> +
> +    QLIST_ENTRY(QEMUClock) list;

I don't think global state is necessary.  The
run_timers()/clock_deadline() users have QEMUClock references, they can
just call per-QEMUClock functions instead of requiring qemu-timers.c to
keep a list.

This way AioContext clocks are safe to use from threads.
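That is, the callers already hold a reference to the clock; a rough sketch of
the per-clock pattern (using the AioContext's own clock):

/* e.g. in aio_poll / aio_ctx_prepare */
progress |= qemu_run_timers(ctx->clock);
timeout = qemu_clock_deadline_ns(ctx->clock);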

> +void qemu_free_clock(QEMUClock *clock)
> +{
> +    QLIST_REMOVE(clock, list);
> +    g_free(clock);

assert that there are no timers left?
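i.e. something like:

void qemu_free_clock(QEMUClock *clock)
{
    assert(clock->active_timers == NULL); /* no timers may outlive the clock */
    QLIST_REMOVE(clock, list);
    g_free(clock);
}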

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks Alex Bligh
  2013-07-23 21:09     ` Richard Henderson
  2013-07-25  9:19     ` Stefan Hajnoczi
@ 2013-07-25  9:21     ` Stefan Hajnoczi
  2013-07-25  9:46       ` Alex Bligh
  2 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25  9:21 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi,
	Paolo Bonzini, rth

On Sat, Jul 20, 2013 at 07:06:38PM +0100, Alex Bligh wrote:
> +gint qemu_g_poll_ns(GPollFD *fds, guint nfds, int64_t timeout);

You didn't define the function in this patch.

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch Alex Bligh
@ 2013-07-25  9:33     ` Stefan Hajnoczi
  2013-07-25 14:53       ` Alex Bligh
  2013-07-25 18:51       ` Alex Bligh
  0 siblings, 2 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25  9:33 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi,
	Paolo Bonzini, rth

On Sat, Jul 20, 2013 at 07:06:42PM +0100, Alex Bligh wrote:
> @@ -245,11 +249,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>                  node->pfd.revents = pfd->revents;
>              }
>          }
> -        if (aio_dispatch(ctx)) {
> -            progress = true;
> -        }
> +    }
> +
> +    /* Run dispatch even if there were no readable fds to run timers */
> +    if (aio_dispatch(ctx)) {
> +        progress = true;
>      }
>  
>      assert(progress || busy);
> -    return true;
> +    return progress;

Now aio_poll() can return false when it used to return true?

> @@ -214,6 +221,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
>          events[ret - WAIT_OBJECT_0] = events[--count];
>      }
>  
> +    if (blocking) {
> +        /* Run the timers a second time. We do this because otherwise aio_wait
> +         * will not note progress - and will stop a drain early - if we have
> +         * a timer that was not ready to run entering g_poll but is ready
> +         * after g_poll. This will only do anything if a timer has expired.
> +         */
> +        progress |= qemu_run_timers(ctx->clock);
> +    }
> +
>      assert(progress || busy);
>      return true;

You didn't update this to return just progress.

>  }
> diff --git a/async.c b/async.c
> index 0d41431..cb6b1d4 100644
> --- a/async.c
> +++ b/async.c
> @@ -123,13 +123,16 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
>  {
>      AioContext *ctx = (AioContext *) source;
>      QEMUBH *bh;
> +    int deadline;
>  
>      for (bh = ctx->first_bh; bh; bh = bh->next) {
>          if (!bh->deleted && bh->scheduled) {
>              if (bh->idle) {
>                  /* idle bottom halves will be polled at least
>                   * every 10ms */
> -                *timeout = 10;
> +                if ((*timeout < 0) || (*timeout > 10)) {
> +                    *timeout = 10;
> +                }

Use the function you introduced earlier to return the nearest timeout?
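i.e. presumably:

*timeout = qemu_soonest_timeout(*timeout, 10);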

>              } else {
>                  /* non-idle bottom halves will be executed
>                   * immediately */
> @@ -139,6 +142,15 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
>          }
>      }
>  
> +    deadline = qemu_timeout_ns_to_ms(qemu_clock_deadline_ns(ctx->clock));
> +    if (deadline == 0) {
> +        *timeout = 0;
> +        return true;
> +    } else if ((deadline > 0) &&
> +               ((*timeout < 0) || (deadline < *timeout))) {
> +        *timeout = deadline;

Same here.
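i.e. presumably:

*timeout = qemu_soonest_timeout(*timeout, deadline);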

> @@ -170,9 +171,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
>                                   glib_n_poll_fds);
>      } while (n != glib_n_poll_fds);
>  
> -    if (timeout >= 0 && timeout < *cur_timeout) {
> -        *cur_timeout = timeout;
> +    if (timeout < 0) {
> +        timeout_ns = -1;
> +    } else {
> +      timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;

Indentation.

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers
  2013-07-25  9:10     ` Stefan Hajnoczi
  2013-07-25  9:11       ` Paolo Bonzini
@ 2013-07-25  9:37       ` Alex Bligh
  2013-07-25  9:38         ` Paolo Bonzini
  1 sibling, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-25  9:37 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	Stefan Hajnoczi, Paolo Bonzini, rth

Stefan,

> This should be one of the last patches so qemu.git remains bisectable.
> Only remove the alarm timer once the event loops are already using the
> timeout argument.

OK

>> @@ -245,11 +82,7 @@ static QEMUClock *qemu_new_clock(int type)
>>
>>  void qemu_clock_enable(QEMUClock *clock, bool enabled)
>>  {
>> -    bool old = clock->enabled;
>>      clock->enabled = enabled;
>> -    if (enabled && !old) {
>> -        qemu_rearm_alarm_timer(alarm_timer);
>> -    }
>
> If this function is supposed to work when called from another thread
> (e.g. vcpu thread), then you need to call qemu_notify_event().  For
> AioContext clocks that should be aio_notify() with the relevant
> AioContext, but we don't need that yet.

Each AioContext knows which clock it has but each clock doesn't know if
it's part of an AioContext. I suggest this is infrequent enough that always
using qemu_notify_event() would be OK. That should interrupt any poll.

>>  int64_t qemu_clock_has_timers(QEMUClock *clock)
>> @@ -340,10 +173,9 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
>>
>>      /* Rearm if necessary  */
>>      if (pt == &ts->clock->active_timers) {
>> -        if (!alarm_timer->pending) {
>> -            qemu_rearm_alarm_timer(alarm_timer);
>> -        }
>> -        /* Interrupt execution to force deadline recalculation.  */
>> +        /* Interrupt execution to force deadline recalculation.
>> +         * FIXME: Do we need to do this now?
>> +         */
>>          qemu_clock_warp(ts->clock);
>>          if (use_icount) {
>>              qemu_notify_event();
>
> Same here.

I think I just need to delete the FIXME comment.

>> diff --git a/vl.c b/vl.c
>> index 25b8f2f..612c609 100644
>> --- a/vl.c
>> +++ b/vl.c
>> @@ -3714,7 +3714,10 @@ int main(int argc, char **argv, char **envp)
>>                  old_param = 1;
>>                  break;
>>              case QEMU_OPTION_clock:
>> -                configure_alarms(optarg);
>> +                /* Once upon a time we did:
>> +                 *   configure_alarms(optarg);
>> +                 * here. This is stubbed out for compatibility.
>> +                 */
>>                  break;
>
> This could be made clearer to say outright that the options don't exist
> anymore:
>
> /* Clock options no longer exist.  Keep this option for backward
>  * compatibility.
>  */

OK

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll
  2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                     ` (6 preceding siblings ...)
  2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 7/7] aio / timers: Add test harness for AioContext timers Alex Bligh
@ 2013-07-25  9:37   ` Stefan Hajnoczi
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
  7 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-07-25  9:37 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi,
	Paolo Bonzini, rth

On Sat, Jul 20, 2013 at 07:06:36PM +0100, Alex Bligh wrote:
> This patch series adds support for timers attached to an AioContext clock
> which get called within aio_poll.
> 
> In doing so it removes alarm timers and moves to use ppoll where possible.
> 
> This patch set 'sort of' passes make check (see below for caveat)
> including a new test harness for the aio timers, but has not been
> tested much beyond that. In particular, the win32 changes have not
> even been compile tested.
> 
> Caveat: make check fails one test only with:
> 
> ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))
> 
> As far as I can tell, this check is incorrect, in that it checks that aio_poll
> makes progress when in fact it should not make progress. I fixed an issue
> where aio_poll was (as far as I can tell) wrongly returning true on
> a timeout, and that generated this error.
> 
> Alex Bligh (7):
>   aio / timers: Remove alarm timers
>   aio / timers: qemu-timer.c utility functions and add list of clocks
>   aio / timers: add ppoll support with qemu_g_poll_ns
>   aio / timers: Make qemu_run_timers and qemu_run_all_timers return
>     progress
>   aio / timers: Add a clock to AioContext
>   aio / timers: Switch to ppoll, run AioContext timers in
>     aio_poll/aio_dispatch
>   aio / timers: Add test harness for AioContext timers
> 
>  aio-posix.c          |   20 +-
>  aio-win32.c          |   20 +-
>  async.c              |   18 +-
>  configure            |   19 ++
>  include/block/aio.h  |    5 +
>  include/qemu/timer.h |   25 +-
>  main-loop.c          |   47 ++--
>  qemu-timer.c         |  619 +++++++++-----------------------------------------
>  tests/test-aio.c     |  124 +++++++++-
>  vl.c                 |    5 +-
>  10 files changed, 363 insertions(+), 539 deletions(-)

This looks promising for QEMU 1.7.

Please split into logical patches - some of the patches you posted are
too big and do several independent things.

Also please order patches so qemu.git remains bisectable - remove alarm
timers after switch event loops to use timeouts.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers
  2013-07-25  9:37       ` Alex Bligh
@ 2013-07-25  9:38         ` Paolo Bonzini
  0 siblings, 0 replies; 128+ messages in thread
From: Paolo Bonzini @ 2013-07-25  9:38 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, qemu-devel,
	Stefan Hajnoczi, rth

On 25/07/2013 11:37, Alex Bligh wrote:
> Stefan,
> 
>> This should be one of the last patches so qemu.git remains bisectable.
>> Only remove the alarm timer once the event loops are already using the
>> timeout argument.
> 
> OK
> 
>>> @@ -245,11 +82,7 @@ static QEMUClock *qemu_new_clock(int type)
>>>
>>>  void qemu_clock_enable(QEMUClock *clock, bool enabled)
>>>  {
>>> -    bool old = clock->enabled;
>>>      clock->enabled = enabled;
>>> -    if (enabled && !old) {
>>> -        qemu_rearm_alarm_timer(alarm_timer);
>>> -    }
>>
>> If this function is supposed to work when called from another thread
>> (e.g. vcpu thread), then you need to call qemu_notify_event().  For
>> AioContext clocks that should be aio_notify() with the relevant
>> AioContext, but we don't need that yet.
> 
> Each AioContext knows which clock it has but each clock doesn't know if
> it's part of an AioContext. I suggest this is infrequent enough that always
> using qemu_notify_event() would be OK. That should interrupt any poll.

No, qemu_notify_event() only interrupts the main clock's poll.
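With an AioContext pointer in QEMUClock the notification can be targeted;
roughly (assuming the clock learns its context, as suggested earlier):

if (clock->ctx) {
    aio_notify(clock->ctx);   /* wake that AioContext's poll */
} else {
    qemu_notify_event();      /* main loop clocks */
}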

Paolo

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers
  2013-07-25  9:11       ` Paolo Bonzini
@ 2013-07-25  9:38         ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25  9:38 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, qemu-devel,
	Alex Bligh, rth



--On 25 July 2013 11:11:22 +0200 Paolo Bonzini <pbonzini@redhat.com> wrote:

>> If this function is supposed to work when called from another thread
>> (e.g. vcpu thread), then you need to call qemu_notify_event().  For
>> AioContext clocks that should be aio_notify() with the relevant
>> AioContext, but we don't need that yet.
>
> In general QEMUClock should have an AioContext pointer so that it can
> call aio_notify.

OK, if we do that, disregard my comment about always using
qemu_notify_event.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-25  9:19     ` Stefan Hajnoczi
@ 2013-07-25  9:46       ` Alex Bligh
  2013-07-26  8:26         ` Stefan Hajnoczi
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-25  9:46 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	Stefan Hajnoczi, Paolo Bonzini, rth

Stefan,

--On 25 July 2013 11:19:29 +0200 Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Sat, Jul 20, 2013 at 07:06:38PM +0100, Alex Bligh wrote:
>> Add utility functions to qemu-timer.c for nanosecond timing.
>>
>> Ensure we keep track of all QEMUClocks on a list.
>>
>> Add qemu_clock_deadline_ns and qemu_clock_deadline_all_ns to
>> calculate deadlines to nanosecond accuracy.
>>
>> Add utility function qemu_soonest_timeout to calculate soonest deadline.
>>
>> Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
>> milliseconds for when ppoll is not used.
>
> Please split this into smaller patches.  There are several logical
> changes happening here.

OK

>> @@ -61,6 +71,15 @@ int64_t cpu_get_ticks(void);
>>  void cpu_enable_ticks(void);
>>  void cpu_disable_ticks(void);
>>
>> +static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
>> +{
>> +    /* we can abuse the fact that -1 (which means infinite) is a maximal
>> +     * value when cast to unsigned. As this is disgusting, it's kept in
>> +     * one inline function.
>> +     */
>> +    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;
>
> The straightforward version isn't much longer than the commented casting
> trick:
>
> if (timeout1 == -1) {
>     return timeout2;
> } else if (timeout2 == -1) {
>     return timeout1;
> } else {
>     return timeout1 < timeout2 ? timeout1 : timeout2;
> }

Well, it should be (timeout1 < 0) rather than (timeout1 == -1), for
consistency. It may be a micro-optimisation, but I'm pretty sure the casting
trick will produce better code. With the comment, it's arguably more readable
too.

>> @@ -48,6 +44,8 @@ struct QEMUClock {
>>
>>      int type;
>>      bool enabled;
>> +
>> +    QLIST_ENTRY(QEMUClock) list;
>
> I don't think global state is necessary.  The
> run_timers()/clock_deadline() users have QEMUClock references, they can
> just call per-QEMUClock functions instead of requiring qemu-timers.c to
> keep a list.
>
> This way AioContext clocks are safe to use from threads.

I don't think that's true. The thing that uses the list is
qemu_run_all_timers, which otherwise has no way of finding all *clocks*
(as opposed to timers).

I had assumed (possibly wrongly) that AioContexts and clocks were
only created / deleted from the main thread. This is certainly true
currently as there is only one AioContext (as far as I can see - perhaps
I'm wrong here) and the other clocks are created on init.

If we do need to support that, then I think we probably must add locking here.

>> +void qemu_free_clock(QEMUClock *clock)
>> +{
>> +    QLIST_REMOVE(clock, list);
>> +    g_free(clock);
>
> assert that there are no timers left?

Yes I wasn't quite sure of the right semantics here as no clocks are
currently ever destroyed. I'm not quite sure how we know all timers
are destroyed when an AioContext is destroyed. Should I go and manually
free them or assert the right way?

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-25  9:21     ` Stefan Hajnoczi
@ 2013-07-25  9:46       ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25  9:46 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	Stefan Hajnoczi, Paolo Bonzini, rth



--On 25 July 2013 11:21:33 +0200 Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Sat, Jul 20, 2013 at 07:06:38PM +0100, Alex Bligh wrote:
>> +gint qemu_g_poll_ns(GPollFD *fds, guint nfds, int64_t timeout);
>
> You didn't define the function in this patch.

Will fix

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch
  2013-07-25  9:33     ` Stefan Hajnoczi
@ 2013-07-25 14:53       ` Alex Bligh
  2013-07-26  8:34         ` Stefan Hajnoczi
  2013-07-25 18:51       ` Alex Bligh
  1 sibling, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 14:53 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	Stefan Hajnoczi, Paolo Bonzini, rth

Stefan,

--On 25 July 2013 11:33:43 +0200 Stefan Hajnoczi <stefanha@gmail.com> wrote:

>>      assert(progress || busy);
>> -    return true;
>> +    return progress;
>
> Now aio_poll() can return false when it used to return true?

I don't think that's a bug.

Firstly, this is the same thing you fixed and we discussed on another
thread.

Secondly, aio_poll could always return false. With the post-patch line
numbering, here:

    233     /* No AIO operations?  Get us out of here */
    234     if (!busy) {
    235         return progress;
    236     }

The only circumstance where it now returns false, where previously it would
have exited at the bottom of aio_poll and returned true, is if g_poll returns
such that aio_dispatch does nothing. That requires there to be no dispatch to
the normal fd handlers (which would generally mean a timeout) AND no expired
timers to run. This might happen if there was a zero timeout.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch
  2013-07-25  9:33     ` Stefan Hajnoczi
  2013-07-25 14:53       ` Alex Bligh
@ 2013-07-25 18:51       ` Alex Bligh
  1 sibling, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 18:51 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	Stefan Hajnoczi, Paolo Bonzini, rth

Stefan,

I missed a couple of comments.

--On 25 July 2013 11:33:43 +0200 Stefan Hajnoczi <stefanha@gmail.com> wrote:

>> @@ -214,6 +221,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>          events[ret - WAIT_OBJECT_0] = events[--count];
>>      }
>>
>> +    if (blocking) {
>> +        /* Run the timers a second time. We do this because otherwise aio_wait
>> +         * will not note progress - and will stop a drain early - if we have
>> +         * a timer that was not ready to run entering g_poll but is ready
>> +         * after g_poll. This will only do anything if a timer has expired.
>> +         */
>> +        progress |= qemu_run_timers(ctx->clock);
>> +    }
>> +
>>      assert(progress || busy);
>>      return true;
>
> You didn't update this to return just progress.

Will fix

>>  }
>> diff --git a/async.c b/async.c
>> index 0d41431..cb6b1d4 100644
>> --- a/async.c
>> +++ b/async.c
>> @@ -123,13 +123,16 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
>>  {
>>      AioContext *ctx = (AioContext *) source;
>>      QEMUBH *bh;
>> +    int deadline;
>>
>>      for (bh = ctx->first_bh; bh; bh = bh->next) {
>>          if (!bh->deleted && bh->scheduled) {
>>              if (bh->idle) {
>>                  /* idle bottom halves will be polled at least
>>                   * every 10ms */
>> -                *timeout = 10;
>> +                if ((*timeout < 0) || (*timeout > 10)) {
>> +                    *timeout = 10;
>> +                }
>
> Use the function you introduced earlier to return the nearest timeout?

The function I introduced takes an int64_t, not an int. I think that's OK
though.

>
>>              } else {
>>                  /* non-idle bottom halves will be executed
>>                   * immediately */
>> @@ -139,6 +142,15 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
>>          }
>>      }
>>
>> +    deadline = qemu_timeout_ns_to_ms(qemu_clock_deadline_ns(ctx->clock));
>> +    if (deadline == 0) {
>> +        *timeout = 0;
>> +        return true;
>> +    } else if ((deadline > 0) &&
>> +               ((*timeout < 0) || (deadline < *timeout))) {
>> +        *timeout = deadline;
>
> Same here.

Ditto

>> @@ -170,9 +171,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
>>                                   glib_n_poll_fds);
>>      } while (n != glib_n_poll_fds);
>>
>> -    if (timeout >= 0 && timeout < *cur_timeout) {
>> -        *cur_timeout = timeout;
>> +    if (timeout < 0) {
>> +        timeout_ns = -1;
>> +    } else {
>> +      timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
>
> Indentation.

Will fix

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 00/12] aio / timers: Add AioContext timers and use ppoll
  2013-07-25  9:37   ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Stefan Hajnoczi
@ 2013-07-25 22:16     ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 01/12] aio / timers: add qemu-timer.c utility functions Alex Bligh
                         ` (12 more replies)
  0 siblings, 13 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

This patch series adds support for timers attached to an AioContext clock
which get called within aio_poll.

In doing so it removes alarm timers and moves to use ppoll where possible.

This patch set 'sort of' passes make check (see below for caveat)
including a new test harness for the aio timers, but has not been
tested much beyond that. In particular, the win32 changes have not
even been compile tested.

Caveat: make check fails one test only with:

ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))

As far as I can tell, this check is incorrect, in that it checks that aio_poll
makes progress when in fact it should not make progress. I fixed an issue
where aio_poll was (as far as I can tell) wrongly returning true on
a timeout, and that generated this error.

Changes since v2:
* Reordered to remove alarm timers last
* Added prctl(PR_SET_TIMERSLACK, 1, ...)
* Renamed qemu_g_poll_ns to qemu_poll_ns
* Moved declaration of above & drop glib types
* Do not use a global list of qemu clocks
* Add AioContext * to QEMUClock
* Split up conversion to use ppoll and timers
* Indentation fix
* Fix aio_win32.c aio_poll to return progress
* aio_notify / qemu_notify when timers are modified
* change comment in deprecation of clock options

Things NOT changed since v2: I have *NOT* disaggregated QEMUClock and
QEMUTimerList. This is very intrusive (horrendous git stats) and largely
orthogonal. I will however submit a separate patch set to do this.

Alex Bligh (12):
  aio / timers: add qemu-timer.c utility functions
  aio / timers: add ppoll support with qemu_poll_ns
  aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer
    slack
  aio / timers: Make qemu_run_timers and qemu_run_all_timers return
    progress
  aio / timers: Add a clock to AioContext
  aio / timers: Add an AioContext pointer to QEMUClock
  aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  aio / timers: Convert aio_poll to use AioContext timers' deadline
  aio / timers: convert mainloop to use timeout
  aio / timers: on timer modification, qemu_notify or aio_notify
  aio / timers: Remove alarm timers
  aio / timers: Add test harness for AioContext timers

 aio-posix.c          |   20 +-
 aio-win32.c          |   22 +-
 async.c              |   16 +-
 configure            |   37 +++
 include/block/aio.h  |    5 +
 include/qemu/timer.h |   27 ++-
 main-loop.c          |   52 +++--
 qemu-timer.c         |  629 ++++++++++----------------------------------------
 tests/test-aio.c     |  122 ++++++++++
 vl.c                 |    5 +-
 10 files changed, 396 insertions(+), 539 deletions(-)

-- 
1.7.9.5

^ permalink raw reply	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 01/12] aio / timers: add qemu-timer.c utility functions
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 02/12] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
                         ` (11 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add qemu_free_clock and expose qemu_new_clock and clock types.

Add utility functions to qemu-timer.c for nanosecond timing.

Add qemu_clock_deadline_ns to calculate deadlines to
nanosecond accuracy.

Add utility function qemu_soonest_timeout to calculate soonest deadline.

Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
milliseconds for when ppoll is not used.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |   17 ++++++++++++++
 qemu-timer.c         |   63 +++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 74 insertions(+), 6 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 9dd206c..6171db3 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -11,6 +11,10 @@
 #define SCALE_US 1000
 #define SCALE_NS 1
 
+#define QEMU_CLOCK_REALTIME 0
+#define QEMU_CLOCK_VIRTUAL  1
+#define QEMU_CLOCK_HOST     2
+
 typedef struct QEMUClock QEMUClock;
 typedef void QEMUTimerCB(void *opaque);
 
@@ -32,10 +36,14 @@ extern QEMUClock *vm_clock;
    the virtual clock. */
 extern QEMUClock *host_clock;
 
+QEMUClock *qemu_new_clock(int type);
+void qemu_free_clock(QEMUClock *clock);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
+int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int qemu_timeout_ns_to_ms(int64_t ns);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
@@ -63,6 +71,15 @@ int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
 void cpu_disable_ticks(void);
 
+static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
+{
+    /* we can abuse the fact that -1 (which means infinite) is a maximal
+     * value when cast to unsigned. As this is disgusting, it's kept in
+     * one inline function.
+     */
+    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;
+}
+
 static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
diff --git a/qemu-timer.c b/qemu-timer.c
index b2d95e2..3dfbdbf 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -40,10 +40,6 @@
 /***********************************************************/
 /* timers */
 
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST     2
-
 struct QEMUClock {
     QEMUTimer *active_timers;
 
@@ -231,7 +227,7 @@ QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
 
-static QEMUClock *qemu_new_clock(int type)
+QEMUClock *qemu_new_clock(int type)
 {
     QEMUClock *clock;
 
@@ -243,6 +239,11 @@ static QEMUClock *qemu_new_clock(int type)
     return clock;
 }
 
+void qemu_free_clock(QEMUClock *clock)
+{
+    g_free(clock);
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
@@ -268,7 +269,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->active_timers) {
+    if (clock->enabled && clock->active_timers) {
         delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
     }
     if (delta < 0) {
@@ -277,6 +278,56 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+/*
+ * As above, but return -1 for no deadline, and do not cap to 2^32
+ * as we know the result is always positive.
+ */
+
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    int64_t delta;
+
+    if (!clock->enabled || !clock->active_timers) {
+        return -1;
+    }
+
+    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+
+    if (delta <= 0) {
+        return 0;
+    }
+
+    return delta;
+}
+
+/* Transition function to convert a nanosecond timeout to ms
+ * This is used where a system does not support ppoll
+ */
+int qemu_timeout_ns_to_ms(int64_t ns)
+{
+    int64_t ms;
+    if (ns < 0) {
+        return -1;
+    }
+
+    if (!ns) {
+        return 0;
+    }
+
+    /* Always round up, because it's better to wait too long than to wait too
+     * little and effectively busy-wait
+     */
+    ms = (ns + SCALE_MS - 1) / SCALE_MS;
+
+    /* To avoid overflow problems, limit this to 2^31, i.e. approx 25 days */
+    if (ms > (int64_t) INT32_MAX) {
+        ms = INT32_MAX;
+    }
+
+    return (int) ms;
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 02/12] aio / timers: add ppoll support with qemu_poll_ns
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 01/12] aio / timers: add qemu-timer.c utility functions Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 03/12] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
                         ` (10 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add qemu_poll_ns which works like g_poll but takes a nanosecond
timeout.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure            |   19 +++++++++++++++++++
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/configure b/configure
index 9e1cd19..b491c00 100755
--- a/configure
+++ b/configure
@@ -2801,6 +2801,22 @@ if compile_prog "" "" ; then
   dup3=yes
 fi
 
+# check for ppoll support
+ppoll=no
+cat > $TMPC << EOF
+#include <poll.h>
+
+int main(void)
+{
+    struct pollfd pfd = { .fd = 0, .events = 0, .revents = 0 };
+    ppoll(&pfd, 1, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  ppoll=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3792,6 +3808,9 @@ fi
 if test "$dup3" = "yes" ; then
   echo "CONFIG_DUP3=y" >> $config_host_mak
 fi
+if test "$ppoll" = "yes" ; then
+  echo "CONFIG_PPOLL=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 6171db3..f434ecb 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -44,6 +44,7 @@ int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
 int qemu_timeout_ns_to_ms(int64_t ns);
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
diff --git a/qemu-timer.c b/qemu-timer.c
index 3dfbdbf..b57bd78 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -37,6 +37,10 @@
 #include <mmsystem.h>
 #endif
 
+#ifdef CONFIG_PPOLL
+#include <poll.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -328,6 +332,26 @@ int qemu_timeout_ns_to_ms(int64_t ns)
 }
 
 
+/* qemu implementation of g_poll which uses a nanosecond timeout but is
+ * otherwise identical to g_poll
+ */
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout)
+{
+#ifdef CONFIG_PPOLL
+    if (timeout < 0) {
+        return ppoll((struct pollfd *)fds, nfds, NULL, NULL);
+    } else {
+        struct timespec ts;
+        ts.tv_sec = timeout / 1000000000LL;
+        ts.tv_nsec = timeout % 1000000000LL;
+        return ppoll((struct pollfd *)fds, nfds, &ts, NULL);
+    }
+#else
+    return g_poll(fds, nfds, qemu_timeout_ns_to_ms(timeout));
+#endif
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 03/12] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 01/12] aio / timers: add qemu-timer.c utility functions Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 02/12] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 04/12] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
                         ` (9 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Where supported, call prctl(PR_SET_TIMERSLACK, 1, ...) to set a
one nanosecond timer slack, increasing the precision of timer
calls.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure    |   18 ++++++++++++++++++
 qemu-timer.c |    7 +++++++
 2 files changed, 25 insertions(+)

diff --git a/configure b/configure
index b491c00..c8e39bc 100755
--- a/configure
+++ b/configure
@@ -2817,6 +2817,21 @@ if compile_prog "" "" ; then
   ppoll=yes
 fi
 
+# check for prctl(PR_SET_TIMERSLACK , ... ) support
+prctl_pr_set_timerslack=no
+cat > $TMPC << EOF
+#include <sys/prctl.h>
+
+int main(void)
+{
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  prctl_pr_set_timerslack=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3811,6 +3826,9 @@ fi
 if test "$ppoll" = "yes" ; then
   echo "CONFIG_PPOLL=y" >> $config_host_mak
 fi
+if test "$prctl_pr_set_timerslack" = "yes" ; then
+  echo "CONFIG_PRCTL_PR_SET_TIMERSLACK=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/qemu-timer.c b/qemu-timer.c
index b57bd78..a8b270f 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -41,6 +41,10 @@
 #include <poll.h>
 #endif
 
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+#include <sys/prctl.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -512,6 +516,9 @@ void init_clocks(void)
         vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
         host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
     }
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+#endif
 }
 
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 04/12] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (2 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 03/12] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 05/12] aio / timers: Add a clock to AioContext Alex Bligh
                         ` (8 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Make qemu_run_timers and qemu_run_all_timers return progress
so that aio_poll etc. can determine whether a timer has been
run.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    4 ++--
 qemu-timer.c         |   18 ++++++++++++------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index f434ecb..a1f2ac8 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -62,8 +62,8 @@ bool qemu_timer_pending(QEMUTimer *ts);
 bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
-void qemu_run_timers(QEMUClock *clock);
-void qemu_run_all_timers(void);
+bool qemu_run_timers(QEMUClock *clock);
+bool qemu_run_all_timers(void);
 void configure_alarms(char const *opt);
 void init_clocks(void);
 int init_timer_alarm(void);
diff --git a/qemu-timer.c b/qemu-timer.c
index a8b270f..714bc92 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -451,13 +451,14 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-void qemu_run_timers(QEMUClock *clock)
+bool qemu_run_timers(QEMUClock *clock)
 {
     QEMUTimer *ts;
     int64_t current_time;
+    bool progress = false;
    
     if (!clock->enabled)
-        return;
+        return progress;
 
     current_time = qemu_get_clock_ns(clock);
     for(;;) {
@@ -471,7 +472,9 @@ void qemu_run_timers(QEMUClock *clock)
 
         /* run the callback (the timer list can be modified) */
         ts->cb(ts->opaque);
+        progress = true;
     }
+    return progress;
 }
 
 int64_t qemu_get_clock_ns(QEMUClock *clock)
@@ -526,20 +529,23 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
     return qemu_timer_pending(ts) ? ts->expire_time : -1;
 }
 
-void qemu_run_all_timers(void)
+bool qemu_run_all_timers(void)
 {
+    bool progress = false;
     alarm_timer->pending = false;
 
     /* vm time timers */
-    qemu_run_timers(vm_clock);
-    qemu_run_timers(rt_clock);
-    qemu_run_timers(host_clock);
+    progress |= qemu_run_timers(vm_clock);
+    progress |= qemu_run_timers(rt_clock);
+    progress |= qemu_run_timers(host_clock);
 
     /* rearm timer, if not periodic */
     if (alarm_timer->expired) {
         alarm_timer->expired = false;
         qemu_rearm_alarm_timer(alarm_timer);
     }
+
+    return progress;
 }
 
 #ifdef _WIN32
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 05/12] aio / timers: Add a clock to AioContext
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (3 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 04/12] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 06/12] aio / timers: Add an AioContext pointer to QEMUClock Alex Bligh
                         ` (7 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add a clock to each AioContext and delete it when the AioContext is freed.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c             |    2 ++
 include/block/aio.h |    5 +++++
 2 files changed, 7 insertions(+)

diff --git a/async.c b/async.c
index 90fe906..0d41431 100644
--- a/async.c
+++ b/async.c
@@ -177,6 +177,7 @@ aio_ctx_finalize(GSource     *source)
     aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     g_array_free(ctx->pollfds, TRUE);
+    qemu_free_clock(ctx->clock);
 }
 
 static GSourceFuncs aio_source_funcs = {
@@ -215,6 +216,7 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
+    ctx->clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
 
     return ctx;
 }
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..0835a4d 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -41,6 +41,8 @@ typedef struct AioHandler AioHandler;
 typedef void QEMUBHFunc(void *opaque);
 typedef void IOHandler(void *opaque);
 
+typedef struct QEMUClock QEMUClock;
+
 typedef struct AioContext {
     GSource source;
 
@@ -69,6 +71,9 @@ typedef struct AioContext {
 
     /* Thread pool for performing work and receiving completion callbacks */
     struct ThreadPool *thread_pool;
+
+    /* Clock for calling timers */
+    QEMUClock *clock;
 } AioContext;
 
 /* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 06/12] aio / timers: Add an AioContext pointer to QEMUClock
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (4 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 05/12] aio / timers: Add a clock to AioContext Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 07/12] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
                         ` (6 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add an AioContext pointer to QEMUClock so it knows what to notify
on a timer change.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c              |    1 +
 include/qemu/timer.h |    3 +++
 qemu-timer.c         |   12 ++++++++++++
 3 files changed, 16 insertions(+)

diff --git a/async.c b/async.c
index 0d41431..2968c68 100644
--- a/async.c
+++ b/async.c
@@ -217,6 +217,7 @@ AioContext *aio_context_new(void)
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
     ctx->clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
+    qemu_clock_set_ctx(ctx->clock, ctx);
 
     return ctx;
 }
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index a1f2ac8..29817ab 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -16,6 +16,7 @@
 #define QEMU_CLOCK_HOST     2
 
 typedef struct QEMUClock QEMUClock;
+typedef struct AioContext AioContext;
 typedef void QEMUTimerCB(void *opaque);
 
 /* The real time clock should be used only for stuff which does not
@@ -38,6 +39,8 @@ extern QEMUClock *host_clock;
 
 QEMUClock *qemu_new_clock(int type);
 void qemu_free_clock(QEMUClock *clock);
+AioContext *qemu_clock_get_ctx(QEMUClock *clock);
+void qemu_clock_set_ctx(QEMUClock *clock, AioContext * ctx);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
diff --git a/qemu-timer.c b/qemu-timer.c
index 714bc92..6efd1b4 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -29,6 +29,7 @@
 #include "hw/hw.h"
 
 #include "qemu/timer.h"
+#include "block/aio.h"
 #ifdef CONFIG_POSIX
 #include <pthread.h>
 #endif
@@ -50,6 +51,7 @@
 
 struct QEMUClock {
     QEMUTimer *active_timers;
+    AioContext *ctx;
 
     NotifierList reset_notifiers;
     int64_t last;
@@ -252,6 +254,16 @@ void qemu_free_clock(QEMUClock *clock)
     g_free(clock);
 }
 
+AioContext *qemu_clock_get_ctx(QEMUClock *clock)
+{
+    return clock->ctx;
+}
+
+void qemu_clock_set_ctx(QEMUClock *clock, AioContext * ctx)
+{
+    clock->ctx = ctx;
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 07/12] aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (5 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 06/12] aio / timers: Add an AioContext pointer to QEMUClock Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 08/12] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
                         ` (5 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Calculate the timeout in aio_ctx_prepare taking into account
the timers attached to the AioContext.

Alter aio_ctx_check similarly.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index 2968c68..a62c463 100644
--- a/async.c
+++ b/async.c
@@ -123,13 +123,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
 {
     AioContext *ctx = (AioContext *) source;
     QEMUBH *bh;
+    int deadline;
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
         if (!bh->deleted && bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
-                *timeout = 10;
+                *timeout = qemu_soonest_timeout(*timeout, 10);
             } else {
                 /* non-idle bottom halves will be executed
                  * immediately */
@@ -139,6 +140,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
         }
     }
 
+    deadline = qemu_timeout_ns_to_ms(qemu_clock_deadline_ns(ctx->clock));
+    if (deadline == 0) {
+        *timeout = 0;
+        return true;
+    } else {
+        *timeout = qemu_soonest_timeout(*timeout, deadline);
+    }
+
     return false;
 }
 
@@ -153,7 +162,7 @@ aio_ctx_check(GSource *source)
             return true;
 	}
     }
-    return aio_pending(ctx);
+    return aio_pending(ctx) || (qemu_clock_deadline_ns(ctx->clock) >= 0);
 }
 
 static gboolean
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 08/12] aio / timers: Convert aio_poll to use AioContext timers' deadline
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (6 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 07/12] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 09/12] aio / timers: convert mainloop to use timeout Alex Bligh
                         ` (4 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Convert aio_poll to use deadline based on AioContext's timers.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 aio-posix.c |   20 +++++++++++++-------
 aio-win32.c |   22 +++++++++++++++++++---
 2 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..fe12022 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -166,6 +166,10 @@ static bool aio_dispatch(AioContext *ctx)
             g_free(tmp);
         }
     }
+
+    /* Run our timers */
+    progress |= qemu_run_timers(ctx->clock);
+
     return progress;
 }
 
@@ -232,9 +236,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     /* wait until next event */
-    ret = g_poll((GPollFD *)ctx->pollfds->data,
-                 ctx->pollfds->len,
-                 blocking ? -1 : 0);
+    ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
+                         ctx->pollfds->len,
+                         blocking ? qemu_clock_deadline_ns(ctx->clock) : 0);
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
@@ -245,11 +249,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
                 node->pfd.revents = pfd->revents;
             }
         }
-        if (aio_dispatch(ctx)) {
-            progress = true;
-        }
+    }
+
+    /* Run dispatch even if there were no readable fds to run timers */
+    if (aio_dispatch(ctx)) {
+        progress = true;
     }
 
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..c4f8cbf 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -98,6 +98,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
     bool busy, progress;
     int count;
+    int timeout;
 
     progress = false;
 
@@ -111,6 +112,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress = true;
     }
 
+    /* Run timers */
+    progress |= qemu_run_timers(ctx->clock);
+
     /*
      * Then dispatch any pending callbacks from the GSource.
      *
@@ -174,8 +178,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* wait until next event */
     while (count > 0) {
-        int timeout = blocking ? INFINITE : 0;
-        int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+        int ret;
+
+        timeout = blocking ?
+            qemu_timeout_ns_to_ms(qemu_clock_deadline_ns(ctx->clock)) : 0;
+        ret = WaitForMultipleObjects(count, events, FALSE, timeout);
 
         /* if we have any signaled events, dispatch event */
         if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
@@ -214,6 +221,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
+    if (blocking) {
+        /* Run the timers a second time. We do this because otherwise aio_wait
+         * will not note progress - and will stop a drain early - if we have
+         * a timer that was not ready to run entering g_poll but is ready
+         * after g_poll. This will only do anything if a timer has expired.
+         */
+        progress |= qemu_run_timers(ctx->clock);
+    }
+
     assert(progress || busy);
-    return true;
+    return progress;
 }
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 09/12] aio / timers: convert mainloop to use timeout
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (7 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 08/12] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 10/12] aio / timers: on timer modification, qemu_notify or aio_notify Alex Bligh
                         ` (3 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Convert the mainloop to use a timeout from the 3 static timers.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
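Not for application; an illustrative distillation of the conversion
pattern this patch applies. A negative glib timeout means "block
forever", which becomes -1 ns, and qemu_soonest_timeout() then folds
in any other pending deadline:

    int64_t timeout_ns;

    if (timeout < 0) {
        timeout_ns = -1;                        /* infinite */
    } else {
        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
    }
    timeout_ns = qemu_soonest_timeout(timeout_ns, *cur_timeout);
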
 main-loop.c |   48 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 11 deletions(-)

diff --git a/main-loop.c b/main-loop.c
index a44fff6..c30978b 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -155,10 +155,11 @@ static int max_priority;
 static int glib_pollfds_idx;
 static int glib_n_poll_fds;
 
-static void glib_pollfds_fill(uint32_t *cur_timeout)
+static void glib_pollfds_fill(int64_t *cur_timeout)
 {
     GMainContext *context = g_main_context_default();
     int timeout = 0;
+    int64_t timeout_ns;
     int n;
 
     g_main_context_prepare(context, &max_priority);
@@ -174,9 +175,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
                                  glib_n_poll_fds);
     } while (n != glib_n_poll_fds);
 
-    if (timeout >= 0 && timeout < *cur_timeout) {
-        *cur_timeout = timeout;
+    if (timeout < 0) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
     }
+
+    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
 }
 
 static void glib_pollfds_poll(void)
@@ -191,7 +196,7 @@ static void glib_pollfds_poll(void)
 
 #define MAX_MAIN_LOOP_SPIN (1000)
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     int ret;
     static int spin_counter;
@@ -214,7 +219,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
             notified = true;
         }
 
-        timeout = 1;
+        timeout = SCALE_MS;
     }
 
     if (timeout > 0) {
@@ -224,7 +229,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
         spin_counter++;
     }
 
-    ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
+    ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);
 
     if (timeout > 0) {
         qemu_mutex_lock_iothread();
@@ -373,7 +378,7 @@ static void pollfds_poll(GArray *pollfds, int nfds, fd_set *rfds,
     }
 }
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     GMainContext *context = g_main_context_default();
     GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
@@ -382,6 +387,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
     PollingEntry *pe;
     WaitObjects *w = &wait_objects;
     gint poll_timeout;
+    int64_t poll_timeout_ns;
     static struct timeval tv0;
     fd_set rfds, wfds, xfds;
     int nfds;
@@ -419,12 +425,17 @@ static int os_host_main_loop_wait(uint32_t timeout)
         poll_fds[n_poll_fds + i].events = G_IO_IN;
     }
 
-    if (poll_timeout < 0 || timeout < poll_timeout) {
-        poll_timeout = timeout;
+    if (poll_timeout < 0) {
+        poll_timeout_ns = -1;
+    } else {
+        poll_timeout_ns = (int64_t)poll_timeout * (int64_t)SCALE_MS;
     }
 
+    poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);
+
     qemu_mutex_unlock_iothread();
-    g_poll_ret = g_poll(poll_fds, n_poll_fds + w->num, poll_timeout);
+    g_poll_ret = qemu_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);
+
     qemu_mutex_lock_iothread();
     if (g_poll_ret > 0) {
         for (i = 0; i < w->num; i++) {
@@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
 {
     int ret;
     uint32_t timeout = UINT32_MAX;
+    int64_t timeout_ns;
 
     if (nonblocking) {
         timeout = 0;
@@ -462,7 +474,21 @@ int main_loop_wait(int nonblocking)
     slirp_pollfds_fill(gpollfds);
 #endif
     qemu_iohandler_fill(gpollfds);
-    ret = os_host_main_loop_wait(timeout);
+
+    if (timeout == UINT32_MAX) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
+    }
+
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      qemu_clock_deadline_ns(rt_clock));
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      qemu_clock_deadline_ns(vm_clock));
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      qemu_clock_deadline_ns(host_clock));
+
+    ret = os_host_main_loop_wait(timeout_ns);
     qemu_iohandler_poll(gpollfds, ret);
 #ifdef CONFIG_SLIRP
     slirp_pollfds_poll(gpollfds, (ret < 0));
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 10/12] aio / timers: on timer modification, qemu_notify or aio_notify
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (8 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 09/12] aio / timers: convert mainloop to use timeout Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 11/12] aio / timers: Remove alarm timers Alex Bligh
                         ` (2 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

On qemu_clock_enable or qemu_mod_timer_ns, ensure qemu_notify
or aio_notify is called to end the appropriate poll().

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
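Not for application; a sketch of the race this closes, assuming
clock->ctx has been set via qemu_clock_set_ctx. Thread A is blocked in
aio_poll(ctx, true) against a deadline far in the future; thread B then
arms an earlier timer:

    qemu_mod_timer_ns(ts, qemu_get_clock_ns(ts->clock) + SCALE_MS);
    /* qemu_mod_timer_ns -> qemu_clock_notify -> aio_notify(clock->ctx)
     * wakes thread A's ppoll so the deadline is recalculated; without
     * the notify, thread A would sleep out the old, later deadline.
     */
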
 qemu-timer.c |   14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/qemu-timer.c b/qemu-timer.c
index 6efd1b4..a0cbeaa 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -264,11 +264,21 @@ void qemu_clock_set_ctx(QEMUClock *clock, AioContext * ctx)
     clock->ctx = ctx;
 }
 
+static void qemu_clock_notify(QEMUClock *clock)
+{
+    if (clock->ctx) {
+        aio_notify(clock->ctx);
+    } else {
+        qemu_notify_event();
+    }
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
     clock->enabled = enabled;
     if (enabled && !old) {
+        qemu_clock_notify(clock);
         qemu_rearm_alarm_timer(alarm_timer);
     }
 }
@@ -436,9 +446,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
         }
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->clock);
-        if (use_icount) {
-            qemu_notify_event();
-        }
+        qemu_clock_notify(ts->clock);
     }
 }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 11/12] aio / timers: Remove alarm timers
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (9 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 10/12] aio / timers: on timer modification, qemu_notify or aio_notify Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 12/12] aio / timers: Add test harness for AioContext timers Alex Bligh
  2013-07-25 22:22       ` [Qemu-devel] [RFC] [PATCHv3 00/12] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Remove alarm timers from qemu-timer.c now that we use g_poll / ppoll
instead.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
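Not for application: the -clock option remains parsed but is now a
no-op, so an existing command line such as

    qemu-system-x86_64 -clock dynticks

is accepted and simply ignored.
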
 include/qemu/timer.h |    2 -
 main-loop.c          |    4 -
 qemu-timer.c         |  501 +-------------------------------------------------
 vl.c                 |    5 +-
 4 files changed, 8 insertions(+), 504 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 29817ab..6f3654a 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -67,9 +67,7 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
 bool qemu_run_timers(QEMUClock *clock);
 bool qemu_run_all_timers(void);
-void configure_alarms(char const *opt);
 void init_clocks(void);
-int init_timer_alarm(void);
 
 int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
diff --git a/main-loop.c b/main-loop.c
index c30978b..d8ec7d5 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -131,10 +131,6 @@ int qemu_init_main_loop(void)
     GSource *src;
 
     init_clocks();
-    if (init_timer_alarm() < 0) {
-        fprintf(stderr, "could not initialize alarm timer\n");
-        exit(1);
-    }
 
     ret = qemu_signal_init();
     if (ret) {
diff --git a/qemu-timer.c b/qemu-timer.c
index a0cbeaa..6ab1bf4 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -34,10 +34,6 @@
 #include <pthread.h>
 #endif
 
-#ifdef _WIN32
-#include <mmsystem.h>
-#endif
-
 #ifdef CONFIG_PPOLL
 #include <poll.h>
 #endif
@@ -69,170 +65,11 @@ struct QEMUTimer {
     int scale;
 };
 
-struct qemu_alarm_timer {
-    char const *name;
-    int (*start)(struct qemu_alarm_timer *t);
-    void (*stop)(struct qemu_alarm_timer *t);
-    void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
-#if defined(__linux__)
-    timer_t timer;
-    int fd;
-#elif defined(_WIN32)
-    HANDLE timer;
-#endif
-    bool expired;
-    bool pending;
-};
-
-static struct qemu_alarm_timer *alarm_timer;
-
 static bool qemu_timer_expired_ns(QEMUTimer *timer_head, int64_t current_time)
 {
     return timer_head && (timer_head->expire_time <= current_time);
 }
 
-static int64_t qemu_next_alarm_deadline(void)
-{
-    int64_t delta = INT64_MAX;
-    int64_t rtdelta;
-
-    if (!use_icount && vm_clock->enabled && vm_clock->active_timers) {
-        delta = vm_clock->active_timers->expire_time -
-                     qemu_get_clock_ns(vm_clock);
-    }
-    if (host_clock->enabled && host_clock->active_timers) {
-        int64_t hdelta = host_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(host_clock);
-        if (hdelta < delta) {
-            delta = hdelta;
-        }
-    }
-    if (rt_clock->enabled && rt_clock->active_timers) {
-        rtdelta = (rt_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(rt_clock));
-        if (rtdelta < delta) {
-            delta = rtdelta;
-        }
-    }
-
-    return delta;
-}
-
-static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
-{
-    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
-    if (nearest_delta_ns < INT64_MAX) {
-        t->rearm(t, nearest_delta_ns);
-    }
-}
-
-/* TODO: MIN_TIMER_REARM_NS should be optimized */
-#define MIN_TIMER_REARM_NS 250000
-
-#ifdef _WIN32
-
-static int mm_start_timer(struct qemu_alarm_timer *t);
-static void mm_stop_timer(struct qemu_alarm_timer *t);
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-static int win32_start_timer(struct qemu_alarm_timer *t);
-static void win32_stop_timer(struct qemu_alarm_timer *t);
-static void win32_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#else
-
-static int unix_start_timer(struct qemu_alarm_timer *t);
-static void unix_stop_timer(struct qemu_alarm_timer *t);
-static void unix_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#ifdef __linux__
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t);
-static void dynticks_stop_timer(struct qemu_alarm_timer *t);
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#endif /* __linux__ */
-
-#endif /* _WIN32 */
-
-static struct qemu_alarm_timer alarm_timers[] = {
-#ifndef _WIN32
-#ifdef __linux__
-    {"dynticks", dynticks_start_timer,
-     dynticks_stop_timer, dynticks_rearm_timer},
-#endif
-    {"unix", unix_start_timer, unix_stop_timer, unix_rearm_timer},
-#else
-    {"mmtimer", mm_start_timer, mm_stop_timer, mm_rearm_timer},
-    {"dynticks", win32_start_timer, win32_stop_timer, win32_rearm_timer},
-#endif
-    {NULL, }
-};
-
-static void show_available_alarms(void)
-{
-    int i;
-
-    printf("Available alarm timers, in order of precedence:\n");
-    for (i = 0; alarm_timers[i].name; i++)
-        printf("%s\n", alarm_timers[i].name);
-}
-
-void configure_alarms(char const *opt)
-{
-    int i;
-    int cur = 0;
-    int count = ARRAY_SIZE(alarm_timers) - 1;
-    char *arg;
-    char *name;
-    struct qemu_alarm_timer tmp;
-
-    if (is_help_option(opt)) {
-        show_available_alarms();
-        exit(0);
-    }
-
-    arg = g_strdup(opt);
-
-    /* Reorder the array */
-    name = strtok(arg, ",");
-    while (name) {
-        for (i = 0; i < count && alarm_timers[i].name; i++) {
-            if (!strcmp(alarm_timers[i].name, name))
-                break;
-        }
-
-        if (i == count) {
-            fprintf(stderr, "Unknown clock %s\n", name);
-            goto next;
-        }
-
-        if (i < cur)
-            /* Ignore */
-            goto next;
-
-	/* Swap */
-        tmp = alarm_timers[i];
-        alarm_timers[i] = alarm_timers[cur];
-        alarm_timers[cur] = tmp;
-
-        cur++;
-next:
-        name = strtok(NULL, ",");
-    }
-
-    g_free(arg);
-
-    if (cur) {
-        /* Disable remaining timers */
-        for (i = cur; i < count; i++)
-            alarm_timers[i].name = NULL;
-    } else {
-        show_available_alarms();
-        exit(1);
-    }
-}
-
 QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
@@ -279,7 +116,6 @@ void qemu_clock_enable(QEMUClock *clock, bool enabled)
     clock->enabled = enabled;
     if (enabled && !old) {
         qemu_clock_notify(clock);
-        qemu_rearm_alarm_timer(alarm_timer);
     }
 }
 
@@ -441,10 +277,9 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
 
     /* Rearm if necessary  */
     if (pt == &ts->clock->active_timers) {
-        if (!alarm_timer->pending) {
-            qemu_rearm_alarm_timer(alarm_timer);
-        }
-        /* Interrupt execution to force deadline recalculation.  */
+        /* Interrupt execution to force deadline recalculation.
+         * FIXME: Do we need to do this now?
+         */
         qemu_clock_warp(ts->clock);
         qemu_clock_notify(ts->clock);
     }
@@ -551,338 +386,10 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
 
 bool qemu_run_all_timers(void)
 {
-    bool progress = false;
-    alarm_timer->pending = false;
-
     /* vm time timers */
+    bool progress = false;
     progress |= qemu_run_timers(vm_clock);
     progress |= qemu_run_timers(rt_clock);
     progress |= qemu_run_timers(host_clock);
-
-    /* rearm timer, if not periodic */
-    if (alarm_timer->expired) {
-        alarm_timer->expired = false;
-        qemu_rearm_alarm_timer(alarm_timer);
-    }
-
     return progress;
 }
-
-#ifdef _WIN32
-static void CALLBACK host_alarm_handler(PVOID lpParam, BOOLEAN unused)
-#else
-static void host_alarm_handler(int host_signum)
-#endif
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t)
-	return;
-
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-#if defined(__linux__)
-
-#include "qemu/compatfd.h"
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigevent ev;
-    timer_t host_timer;
-    struct sigaction act;
-
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-
-    /* 
-     * Initialize ev struct to 0 to avoid valgrind complaining
-     * about uninitialized data in timer_create call
-     */
-    memset(&ev, 0, sizeof(ev));
-    ev.sigev_value.sival_int = 0;
-    ev.sigev_notify = SIGEV_SIGNAL;
-#ifdef CONFIG_SIGEV_THREAD_ID
-    if (qemu_signalfd_available()) {
-        ev.sigev_notify = SIGEV_THREAD_ID;
-        ev._sigev_un._tid = qemu_get_thread_id();
-    }
-#endif /* CONFIG_SIGEV_THREAD_ID */
-    ev.sigev_signo = SIGALRM;
-
-    if (timer_create(CLOCK_REALTIME, &ev, &host_timer)) {
-        perror("timer_create");
-        return -1;
-    }
-
-    t->timer = host_timer;
-
-    return 0;
-}
-
-static void dynticks_stop_timer(struct qemu_alarm_timer *t)
-{
-    timer_t host_timer = t->timer;
-
-    timer_delete(host_timer);
-}
-
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t,
-                                 int64_t nearest_delta_ns)
-{
-    timer_t host_timer = t->timer;
-    struct itimerspec timeout;
-    int64_t current_ns;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    /* check whether a timer is already running */
-    if (timer_gettime(host_timer, &timeout)) {
-        perror("gettime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    current_ns = timeout.it_value.tv_sec * 1000000000LL + timeout.it_value.tv_nsec;
-    if (current_ns && current_ns <= nearest_delta_ns)
-        return;
-
-    timeout.it_interval.tv_sec = 0;
-    timeout.it_interval.tv_nsec = 0; /* 0 for one-shot timer */
-    timeout.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    timeout.it_value.tv_nsec = nearest_delta_ns % 1000000000;
-    if (timer_settime(host_timer, 0 /* RELATIVE */, &timeout, NULL)) {
-        perror("settime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-#endif /* defined(__linux__) */
-
-#if !defined(_WIN32)
-
-static int unix_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigaction act;
-
-    /* timer signal */
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-    return 0;
-}
-
-static void unix_rearm_timer(struct qemu_alarm_timer *t,
-                             int64_t nearest_delta_ns)
-{
-    struct itimerval itv;
-    int err;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    itv.it_interval.tv_sec = 0;
-    itv.it_interval.tv_usec = 0; /* 0 for one-shot timer */
-    itv.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    itv.it_value.tv_usec = (nearest_delta_ns % 1000000000) / 1000;
-    err = setitimer(ITIMER_REAL, &itv, NULL);
-    if (err) {
-        perror("setitimer");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-static void unix_stop_timer(struct qemu_alarm_timer *t)
-{
-    struct itimerval itv;
-
-    memset(&itv, 0, sizeof(itv));
-    setitimer(ITIMER_REAL, &itv, NULL);
-}
-
-#endif /* !defined(_WIN32) */
-
-
-#ifdef _WIN32
-
-static MMRESULT mm_timer;
-static TIMECAPS mm_tc;
-
-static void CALLBACK mm_alarm_handler(UINT uTimerID, UINT uMsg,
-                                      DWORD_PTR dwUser, DWORD_PTR dw1,
-                                      DWORD_PTR dw2)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t) {
-        return;
-    }
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-static int mm_start_timer(struct qemu_alarm_timer *t)
-{
-    timeGetDevCaps(&mm_tc, sizeof(mm_tc));
-    return 0;
-}
-
-static void mm_stop_timer(struct qemu_alarm_timer *t)
-{
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-}
-
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta)
-{
-    int64_t nearest_delta_ms = delta / 1000000;
-    if (nearest_delta_ms < mm_tc.wPeriodMin) {
-        nearest_delta_ms = mm_tc.wPeriodMin;
-    } else if (nearest_delta_ms > mm_tc.wPeriodMax) {
-        nearest_delta_ms = mm_tc.wPeriodMax;
-    }
-
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-    mm_timer = timeSetEvent((UINT)nearest_delta_ms,
-                            mm_tc.wPeriodMin,
-                            mm_alarm_handler,
-                            (DWORD_PTR)t,
-                            TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
-
-    if (!mm_timer) {
-        fprintf(stderr, "Failed to re-arm win32 alarm timer\n");
-        timeEndPeriod(mm_tc.wPeriodMin);
-        exit(1);
-    }
-}
-
-static int win32_start_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer;
-    BOOLEAN success;
-
-    /* If you call ChangeTimerQueueTimer on a one-shot timer (its period
-       is zero) that has already expired, the timer is not updated.  Since
-       creating a new timer is relatively expensive, set a bogus one-hour
-       interval in the dynticks case.  */
-    success = CreateTimerQueueTimer(&hTimer,
-                          NULL,
-                          host_alarm_handler,
-                          t,
-                          1,
-                          3600000,
-                          WT_EXECUTEINTIMERTHREAD);
-
-    if (!success) {
-        fprintf(stderr, "Failed to initialize win32 alarm timer: %ld\n",
-                GetLastError());
-        return -1;
-    }
-
-    t->timer = hTimer;
-    return 0;
-}
-
-static void win32_stop_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer = t->timer;
-
-    if (hTimer) {
-        DeleteTimerQueueTimer(NULL, hTimer, NULL);
-    }
-}
-
-static void win32_rearm_timer(struct qemu_alarm_timer *t,
-                              int64_t nearest_delta_ns)
-{
-    HANDLE hTimer = t->timer;
-    int64_t nearest_delta_ms;
-    BOOLEAN success;
-
-    nearest_delta_ms = nearest_delta_ns / 1000000;
-    if (nearest_delta_ms < 1) {
-        nearest_delta_ms = 1;
-    }
-    /* ULONG_MAX can be 32 bit */
-    if (nearest_delta_ms > ULONG_MAX) {
-        nearest_delta_ms = ULONG_MAX;
-    }
-    success = ChangeTimerQueueTimer(NULL,
-                                    hTimer,
-                                    (unsigned long) nearest_delta_ms,
-                                    3600000);
-
-    if (!success) {
-        fprintf(stderr, "Failed to rearm win32 alarm timer: %ld\n",
-                GetLastError());
-        exit(-1);
-    }
-
-}
-
-#endif /* _WIN32 */
-
-static void quit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    alarm_timer = NULL;
-    t->stop(t);
-}
-
-#ifdef CONFIG_POSIX
-static void reinit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    t->stop(t);
-    if (t->start(t)) {
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    qemu_rearm_alarm_timer(t);
-}
-#endif /* CONFIG_POSIX */
-
-int init_timer_alarm(void)
-{
-    struct qemu_alarm_timer *t = NULL;
-    int i, err = -1;
-
-    if (alarm_timer) {
-        return 0;
-    }
-
-    for (i = 0; alarm_timers[i].name; i++) {
-        t = &alarm_timers[i];
-
-        err = t->start(t);
-        if (!err)
-            break;
-    }
-
-    if (err) {
-        err = -ENOENT;
-        goto fail;
-    }
-
-    atexit(quit_timers);
-#ifdef CONFIG_POSIX
-    pthread_atfork(NULL, NULL, reinit_timers);
-#endif
-    alarm_timer = t;
-    return 0;
-
-fail:
-    return err;
-}
-
diff --git a/vl.c b/vl.c
index 25b8f2f..612c609 100644
--- a/vl.c
+++ b/vl.c
@@ -3714,7 +3714,10 @@ int main(int argc, char **argv, char **envp)
                 old_param = 1;
                 break;
             case QEMU_OPTION_clock:
-                configure_alarms(optarg);
+                /* Once upon a time we did:
+                 *   configure_alarms(optarg);
+                 * here. This is stubbed out for compatibility.
+                 */
                 break;
             case QEMU_OPTION_startdate:
                 configure_rtc_date_offset(optarg, 1);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv3 12/12] aio / timers: Add test harness for AioContext timers
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (10 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 11/12] aio / timers: Remove alarm timers Alex Bligh
@ 2013-07-25 22:16       ` Alex Bligh
  2013-07-25 22:22       ` [Qemu-devel] [RFC] [PATCHv3 00/12] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add a test harness for AioContext timers. The g_source equivalent is
unsatisfactory as it suffers from false wakeups.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
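Not for application: the new cases register as /aio/timer/schedule and
/aio-gsource/timer/schedule, so once built each can be run in isolation
with the usual gtest path filter:

    tests/test-aio -p /aio/timer/schedule
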
 tests/test-aio.c |  122 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 122 insertions(+)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index c173870..71841c0 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -12,6 +12,7 @@
 
 #include <glib.h>
 #include "block/aio.h"
+#include "qemu/timer.h"
 
 AioContext *ctx;
 
@@ -31,6 +32,15 @@ typedef struct {
     int max;
 } BHTestData;
 
+typedef struct {
+    QEMUTimer *timer;
+    QEMUClock *clock;
+    int n;
+    int max;
+    int64_t ns;
+    AioContext *ctx;
+} TimerTestData;
+
 static void bh_test_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -39,6 +49,24 @@ static void bh_test_cb(void *opaque)
     }
 }
 
+static void timer_test_cb(void *opaque)
+{
+    TimerTestData *data = opaque;
+    if (++data->n < data->max) {
+        qemu_mod_timer(data->timer,
+                       qemu_get_clock_ns(data->clock) + data->ns);
+    }
+}
+
+static void dummy_io_handler_read(void *opaque)
+{
+}
+
+static int dummy_io_handler_flush(void *opaque)
+{
+    return 1;
+}
+
 static void bh_delete_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -340,6 +368,51 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS*750,
+                           .max = 2, .clock = ctx->clock };
+    int pipefd[2];
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    aio_poll(ctx, false);
+
+    data.timer = qemu_new_timer_ns(data.clock, timer_test_cb, &data);
+    qemu_mod_timer(data.timer, qemu_get_clock_ns(data.clock) + data.ns);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    g_assert(aio_poll(ctx, true));
+    g_assert_cmpint(data.n, ==, 2);
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
 /* Now the same tests, using the context as a GSource.  They are
  * very similar to the ones above, with g_main_context_iteration
  * replacing aio_poll.  However:
@@ -622,12 +695,59 @@ static void test_source_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_source_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS*750,
+                           .max = 2, .clock = ctx->clock };
+    int pipefd[2];
+    int64_t expiry;
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    while (g_main_context_iteration(NULL, false));
+
+    data.timer = qemu_new_timer_ns(data.clock, timer_test_cb, &data);
+    expiry = qemu_get_clock_ns(data.clock) + data.ns;
+    qemu_mod_timer(data.timer, expiry);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(g_main_context_iteration(NULL, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* The comment above was not kidding when it said this wakes up itself */
+    do {
+        g_assert(g_main_context_iteration(NULL, true));
+    } while (qemu_get_clock_ns(data.clock) <= expiry);
+    sleep(1);
+    g_main_context_iteration(NULL, false);
+
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
+
 /* End of tests.  */
 
 int main(int argc, char **argv)
 {
     GSource *src;
 
+    init_clocks();
+
     ctx = aio_context_new();
     src = aio_get_g_source(ctx);
     g_source_attach(src, NULL);
@@ -648,6 +768,7 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
+    g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio-gsource/notify",                  test_source_notify);
     g_test_add_func("/aio-gsource/flush",                   test_source_flush);
@@ -662,5 +783,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio-gsource/event/wait",              test_source_wait_event_notifier);
     g_test_add_func("/aio-gsource/event/wait/no-flush-cb",  test_source_wait_event_notifier_noflush);
     g_test_add_func("/aio-gsource/event/flush",             test_source_flush_event_notifier);
+    g_test_add_func("/aio-gsource/timer/schedule",          test_source_timer_schedule);
     return g_test_run();
 }
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv3 00/12] aio / timers: Add AioContext timers and use ppoll
  2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
                         ` (11 preceding siblings ...)
  2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 12/12] aio / timers: Add test harness for AioContext timers Alex Bligh
@ 2013-07-25 22:22       ` Alex Bligh
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
  12 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-25 22:22 UTC (permalink / raw)
  To: Alex Bligh, qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, Paolo Bonzini, rth



--On 25 July 2013 23:16:36 +0100 Alex Bligh <alex@alex.org.uk> wrote:

> * change comment in deprecation of clock options

This got missed due to my own inadequacy. Delta is below. It will be in
the next series.

-- 
Alex Bligh

commit 699c55b421822ad2d14b5520b04db8fa9f77c4e0
Author: Alex Bligh <alex@alex.org.uk>
Date:   Thu Jul 25 23:19:52 2013 +0100

    aio / timers: fix comment in vl.c

diff --git a/vl.c b/vl.c
index 612c609..af04644 100644
--- a/vl.c
+++ b/vl.c
@@ -3714,9 +3714,8 @@ int main(int argc, char **argv, char **envp)
                 old_param = 1;
                 break;
             case QEMU_OPTION_clock:
-                /* Once upon a time we did:
-                 *   configure_alarms(optarg);
-                 * here. This is stubbed out for compatibility.
+                /* Clock options no longer exist.  Keep this option for
+                 * backward compatibility.
                  */
                 break;
             case QEMU_OPTION_startdate:

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks
  2013-07-25  9:46       ` Alex Bligh
@ 2013-07-26  8:26         ` Stefan Hajnoczi
  0 siblings, 0 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-07-26  8:26 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi,
	Paolo Bonzini, rth

On Thu, Jul 25, 2013 at 10:46:18AM +0100, Alex Bligh wrote:
> >>@@ -61,6 +71,15 @@ int64_t cpu_get_ticks(void);
> >> void cpu_enable_ticks(void);
> >> void cpu_disable_ticks(void);
> >>
> >>+static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t
> >>timeout2) +{
> >>+    /* we can abuse the fact that -1 (which means infinite) is a maximal
> >>+     * value when cast to unsigned. As this is disgusting, it's kept in
> >>+     * one inline function.
> >>+     */
> >>+    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 :
> >>timeout2;
> >
> >The straightforward version isn't much longer than the commented casting
> >trick:
> >
> >if (timeout1 == -1) {
> >    return timeout2;
> >} else if (timeout2 == -1) {
> >    return timeout1;
> >} else {
> >    return timeout1 < timeout2 ? timeout1 : timeout2;
> >}
> 
> Well, it should be (timeout1 < 0) for consistency. It may be a micro
> optimisation but I'm pretty sure the casting trick will produce better
> code. With the comment, it's arguably more readable too.

Seems like a compiler could be smart enough to use unsigned
instructions.  Seems like a ">> 9" vs "/ 512" micro-optimization to me.
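
For reference, a minimal standalone check of what the cast trick
actually computes:

    #include <assert.h>
    #include <stdint.h>

    static int64_t soonest(int64_t t1, int64_t t2)
    {
        /* -1 (infinite) becomes UINT64_MAX when cast, so it always loses */
        return ((uint64_t)t1 < (uint64_t)t2) ? t1 : t2;
    }

    int main(void)
    {
        assert(soonest(-1, 250000) == 250000);
        assert(soonest(250000, -1) == 250000);
        assert(soonest(-1, -1) == -1);
        assert(soonest(100, 200) == 100);
        return 0;
    }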

> >>+void qemu_free_clock(QEMUClock *clock)
> >>+{
> >>+    QLIST_REMOVE(clock, list);
> >>+    g_free(clock);
> >
> >assert that there are no timers left?
> 
> Yes I wasn't quite sure of the right semantics here as no clocks are
> currently ever destroyed. I'm not quite sure how we know all timers
> are destroyed when an AioContext is destroyed. Should I go and manually
> free them or assert the right way?

It is not possible to free them since their owner still holds a pointer.
That is why I'd use an assert.  The code needs to be written so that
timers are destroyed before the clock is destroyed.
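
Concretely, something like this (a sketch against the current layout,
where QEMUClock still owns active_timers):

    void qemu_free_clock(QEMUClock *clock)
    {
        /* owners must have destroyed all their timers by now */
        assert(!clock->active_timers);
        QLIST_REMOVE(clock, list);
        g_free(clock);
    }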

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch
  2013-07-25 14:53       ` Alex Bligh
@ 2013-07-26  8:34         ` Stefan Hajnoczi
  0 siblings, 0 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-07-26  8:34 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi,
	Paolo Bonzini, rth

On Thu, Jul 25, 2013 at 03:53:55PM +0100, Alex Bligh wrote:
> Stefan,
> 
> --On 25 July 2013 11:33:43 +0200 Stefan Hajnoczi <stefanha@gmail.com> wrote:
> 
> >>     assert(progress || busy);
> >>-    return true;
> >>+    return progress;
> >
> >Now aio_poll() can return false when it used to return true?
> 
> I don't think that's a bug.
> 
> Firstly, this is the same thing you fixed and we discussed on another
> thread.

I'm fine with the change itself but it needs to be explained in the
commit message or a comment.

In the patch where I changed the semantics of aio_poll() the change is
explained in detail in the commit message:

http://patchwork.ozlabs.org/patch/261786/

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 00/13] aio / timers: Add AioContext timers and use ppoll
  2013-07-25 22:22       ` [Qemu-devel] [RFC] [PATCHv3 00/12] aio / timers: Add AioContext timers and use ppoll Alex Bligh
@ 2013-07-26 18:37         ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 01/13] aio / timers: add qemu-timer.c utility functions Alex Bligh
                             ` (12 more replies)
  0 siblings, 13 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

This patch series adds support for timers attached to an AioContext clock
which get called within aio_poll.

In doing so it removes alarm timers and moves to use ppoll where possible.

This patch set 'sort of' passes make check (see below for caveat)
including a new test harness for the aio timers, but has not been
tested much beyond that. In particular, the win32 changes have not
even been compile tested.

Caveat: I have had to alter tests/test-aio.c so the following error
no longer occurs.

ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))

As far as I can tell, this check was incorrect, in that it asserted that
aio_poll makes progress when in fact it should not. I fixed an issue
where aio_poll was (as far as I can tell) wrongly returning true on
a timeout, and that generated this error.
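
For illustration, the adjusted expectation has this shape (a sketch;
see the test patch for the actual change):

    /* old: asserted that a poll with nothing to do reports progress */
    g_assert(aio_poll(ctx, false));
    /* new: a call that merely times out must report no progress */
    g_assert(!aio_poll(ctx, false));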

Changes since v3
* Split up QEMUClock and QEMUClock list
* Improve commenting
* Fix comment in vl.c
* Change test/test-aio.c to reflect correct behaviour in aio_poll.

Changes since v2:
* Reordered to remove alarm timers last
* Added prctl(PR_SET_TIMERSLACK, 1, ...)
* Renamed qemu_g_poll_ns to qemu_poll_ns
* Moved declaration of above & drop glib types
* Do not use a global list of qemu clocks
* Add AioContext * to QEMUClock
* Split up conversion to use ppoll and timers
* Indentation fix
* Fix aio_win32.c aio_poll to return progress
* aio_notify / qemu_notify when timers are modified
* change comment in deprecation of clock options

Alex Bligh (13):
  aio / timers: add qemu-timer.c utility functions
  aio / timers: add ppoll support with qemu_poll_ns
  aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer
    slack
  aio / timers: Make qemu_run_timers and qemu_run_all_timers return
    progress
  aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  aio / timers: Add a QEMUTimerList to AioContext
  aio / timers: Add an AioContext pointer to QEMUTimerList
  aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  aio / timers: Convert aio_poll to use AioContext timers' deadline
  aio / timers: Convert mainloop to use timeout
  aio / timers: on timer modification, qemu_notify or aio_notify
  aio / timers: Remove alarm timers
  aio / timers: Add test harness for AioContext timers

 aio-posix.c              |   20 +-
 aio-win32.c              |   22 +-
 async.c                  |   21 +-
 configure                |   37 +++
 include/block/aio.h      |    5 +
 include/qemu/timer.h     |   53 +++-
 main-loop.c              |   52 +++-
 qemu-timer.c             |  735 ++++++++++++++--------------------------------
 tests/test-aio.c         |  143 ++++++++-
 tests/test-thread-pool.c |    3 +
 vl.c                     |    4 +-
 11 files changed, 544 insertions(+), 551 deletions(-)

-- 
1.7.9.5

^ permalink raw reply	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 01/13] aio / timers: add qemu-timer.c utility functions
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-08-01 12:07             ` Paolo Bonzini
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 02/13] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
                             ` (11 subsequent siblings)
  12 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add qemu_free_clock and expose qemu_new_clock and clock types.

Add utility functions to qemu-timer.c for nanosecond timing.

Add qemu_clock_deadline_ns to calculate deadlines to
nanosecond accuracy.

Add utility function qemu_soonest_timeout to calculate the soonest deadline.

Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
milliseconds for when ppoll is not used.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
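Not for application; a sketch of how these helpers compose, as used
later in the series:

    /* pick the soonest of two deadlines; -1 means "no deadline" and
     * always loses the comparison */
    int64_t ns = qemu_soonest_timeout(qemu_clock_deadline_ns(rt_clock),
                                      qemu_clock_deadline_ns(vm_clock));

    /* fall back to milliseconds where ppoll is unavailable; this rounds
     * up so we never busy-wait, and -1 passes through unchanged */
    int ms = qemu_timeout_ns_to_ms(ns);
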
 include/qemu/timer.h |   17 ++++++++++++++
 qemu-timer.c         |   63 +++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 74 insertions(+), 6 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 9dd206c..6171db3 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -11,6 +11,10 @@
 #define SCALE_US 1000
 #define SCALE_NS 1
 
+#define QEMU_CLOCK_REALTIME 0
+#define QEMU_CLOCK_VIRTUAL  1
+#define QEMU_CLOCK_HOST     2
+
 typedef struct QEMUClock QEMUClock;
 typedef void QEMUTimerCB(void *opaque);
 
@@ -32,10 +36,14 @@ extern QEMUClock *vm_clock;
    the virtual clock. */
 extern QEMUClock *host_clock;
 
+QEMUClock *qemu_new_clock(int type);
+void qemu_free_clock(QEMUClock *clock);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
+int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int qemu_timeout_ns_to_ms(int64_t ns);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
@@ -63,6 +71,15 @@ int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
 void cpu_disable_ticks(void);
 
+static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
+{
+    /* we can abuse the fact that -1 (which means infinite) is a maximal
+     * value when cast to unsigned. As this is disgusting, it's kept in
+     * one inline function.
+     */
+    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;
+}
+
 static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
diff --git a/qemu-timer.c b/qemu-timer.c
index b2d95e2..3dfbdbf 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -40,10 +40,6 @@
 /***********************************************************/
 /* timers */
 
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST     2
-
 struct QEMUClock {
     QEMUTimer *active_timers;
 
@@ -231,7 +227,7 @@ QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
 
-static QEMUClock *qemu_new_clock(int type)
+QEMUClock *qemu_new_clock(int type)
 {
     QEMUClock *clock;
 
@@ -243,6 +239,11 @@ static QEMUClock *qemu_new_clock(int type)
     return clock;
 }
 
+void qemu_free_clock(QEMUClock *clock)
+{
+    g_free(clock);
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
@@ -268,7 +269,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->active_timers) {
+    if (clock->enabled && clock->active_timers) {
         delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
     }
     if (delta < 0) {
@@ -277,6 +278,56 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+/*
+ * As above, but return -1 for no deadline, and do not cap to 2^32
+ * as we know the result is always positive.
+ */
+
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    int64_t delta;
+
+    if (!clock->enabled || !clock->active_timers) {
+        return -1;
+    }
+
+    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+
+    if (delta <= 0) {
+        return 0;
+    }
+
+    return delta;
+}
+
+/* Transition function to convert a nanosecond timeout to ms
+ * This is used where a system does not support ppoll
+ */
+int qemu_timeout_ns_to_ms(int64_t ns)
+{
+    int64_t ms;
+    if (ns < 0) {
+        return -1;
+    }
+
+    if (!ns) {
+        return 0;
+    }
+
+    /* Always round up, because it's better to wait too long than to wait too
+     * little and effectively busy-wait
+     */
+    ms = (ns + SCALE_MS - 1) / SCALE_MS;
+
+    /* To avoid overflow problems, limit this to 2^31, i.e. approx 25 days */
+    if (ms > (int64_t) INT32_MAX) {
+        ms = INT32_MAX;
+    }
+
+    return (int) ms;
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 02/13] aio / timers: add ppoll support with qemu_poll_ns
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 01/13] aio / timers: add qemu-timer.c utility functions Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 03/13] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
                             ` (10 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add qemu_poll_ns, which works like g_poll but takes a nanosecond
timeout.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure            |   19 +++++++++++++++++++
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/configure b/configure
index 9e1cd19..b491c00 100755
--- a/configure
+++ b/configure
@@ -2801,6 +2801,22 @@ if compile_prog "" "" ; then
   dup3=yes
 fi
 
+# check for ppoll support
+ppoll=no
+cat > $TMPC << EOF
+#include <poll.h>
+
+int main(void)
+{
+    struct pollfd pfd = { .fd = 0, .events = 0, .revents = 0 };
+    ppoll(&pfd, 1, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  ppoll=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3792,6 +3808,9 @@ fi
 if test "$dup3" = "yes" ; then
   echo "CONFIG_DUP3=y" >> $config_host_mak
 fi
+if test "$ppoll" = "yes" ; then
+  echo "CONFIG_PPOLL=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 6171db3..f434ecb 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -44,6 +44,7 @@ int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
 int qemu_timeout_ns_to_ms(int64_t ns);
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
diff --git a/qemu-timer.c b/qemu-timer.c
index 3dfbdbf..b57bd78 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -37,6 +37,10 @@
 #include <mmsystem.h>
 #endif
 
+#ifdef CONFIG_PPOLL
+#include <poll.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -328,6 +332,26 @@ int qemu_timeout_ns_to_ms(int64_t ns)
 }
 
 
+/* qemu implementation of g_poll which uses a nanosecond timeout but is
+ * otherwise identical to g_poll
+ */
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout)
+{
+#ifdef CONFIG_PPOLL
+    if (timeout < 0) {
+        return ppoll((struct pollfd *)fds, nfds, NULL, NULL);
+    } else {
+        struct timespec ts;
+        ts.tv_sec = timeout / 1000000000LL;
+        ts.tv_nsec = timeout % 1000000000LL;
+        return ppoll((struct pollfd *)fds, nfds, &ts, NULL);
+    }
+#else
+    return g_poll(fds, nfds, qemu_timeout_ns_to_ms(timeout));
+#endif
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 03/13] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 01/13] aio / timers: add qemu-timer.c utility functions Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 02/13] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 04/13] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
                             ` (9 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Where supported, call prctl(PR_SET_TIMERSLACK, 1, ...) to set the
timer slack to one nanosecond, increasing the precision of timer
calls.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure    |   18 ++++++++++++++++++
 qemu-timer.c |    7 +++++++
 2 files changed, 25 insertions(+)

diff --git a/configure b/configure
index b491c00..c8e39bc 100755
--- a/configure
+++ b/configure
@@ -2817,6 +2817,21 @@ if compile_prog "" "" ; then
   ppoll=yes
 fi
 
+# check for prctl(PR_SET_TIMERSLACK , ... ) support
+prctl_pr_set_timerslack=no
+cat > $TMPC << EOF
+#include <sys/prctl.h>
+
+int main(void)
+{
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  prctl_pr_set_timerslack=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3811,6 +3826,9 @@ fi
 if test "$ppoll" = "yes" ; then
   echo "CONFIG_PPOLL=y" >> $config_host_mak
 fi
+if test "$prctl_pr_set_timerslack" = "yes" ; then
+  echo "CONFIG_PRCTL_PR_SET_TIMERSLACK=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/qemu-timer.c b/qemu-timer.c
index b57bd78..a8b270f 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -41,6 +41,10 @@
 #include <poll.h>
 #endif
 
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+#include <sys/prctl.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -512,6 +516,9 @@ void init_clocks(void)
         vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
         host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
     }
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+#endif
 }
 
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 04/13] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (2 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 03/13] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 05/13] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
                             ` (8 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Make qemu_run_timers and qemu_run_all_timers return progress
so that aio_poll etc. can determine whether a timer has been
run.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    4 ++--
 qemu-timer.c         |   18 ++++++++++++------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index f434ecb..a1f2ac8 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -62,8 +62,8 @@ bool qemu_timer_pending(QEMUTimer *ts);
 bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
-void qemu_run_timers(QEMUClock *clock);
-void qemu_run_all_timers(void);
+bool qemu_run_timers(QEMUClock *clock);
+bool qemu_run_all_timers(void);
 void configure_alarms(char const *opt);
 void init_clocks(void);
 int init_timer_alarm(void);
diff --git a/qemu-timer.c b/qemu-timer.c
index a8b270f..714bc92 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -451,13 +451,14 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-void qemu_run_timers(QEMUClock *clock)
+bool qemu_run_timers(QEMUClock *clock)
 {
     QEMUTimer *ts;
     int64_t current_time;
+    bool progress = false;
    
     if (!clock->enabled)
-        return;
+        return progress;
 
     current_time = qemu_get_clock_ns(clock);
     for(;;) {
@@ -471,7 +472,9 @@ void qemu_run_timers(QEMUClock *clock)
 
         /* run the callback (the timer list can be modified) */
         ts->cb(ts->opaque);
+        progress = true;
     }
+    return progress;
 }
 
 int64_t qemu_get_clock_ns(QEMUClock *clock)
@@ -526,20 +529,23 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
     return qemu_timer_pending(ts) ? ts->expire_time : -1;
 }
 
-void qemu_run_all_timers(void)
+bool qemu_run_all_timers(void)
 {
+    bool progress = false;
     alarm_timer->pending = false;
 
     /* vm time timers */
-    qemu_run_timers(vm_clock);
-    qemu_run_timers(rt_clock);
-    qemu_run_timers(host_clock);
+    progress |= qemu_run_timers(vm_clock);
+    progress |= qemu_run_timers(rt_clock);
+    progress |= qemu_run_timers(host_clock);
 
     /* rearm timer, if not periodic */
     if (alarm_timer->expired) {
         alarm_timer->expired = false;
         qemu_rearm_alarm_timer(alarm_timer);
     }
+
+    return progress;
 }
 
 #ifdef _WIN32
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 05/13] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (3 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 04/13] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 06/13] aio / timers: Add a QEMUTimerList to AioContext Alex Bligh
                             ` (7 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Split QEMUClock into QEMUClock and QEMUTimerList so that we can
have more than one QEMUTimerList associated with the same clock.

Introduce a default_timerlist concept and make existing
qemu_clock_* calls that actually should operate on a QEMUTimerList
call the relevant QEMUTimerList implementations, using the clock's
default timerlist. This vastly reduces the invasiveness of this
change and means the API stays constant for existing users.

Introduce a list of QEMUTimerLists associated with each clock
so that reenabling the clock can cause all the notifiers
to be called. Note the code to do the notifications is added
in a later patch.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
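Not for application; intended usage of the split, sketched with the
names this patch introduces (cb and opaque stand in for a real
handler):

    /* one timer list per user (e.g. per AioContext), sharing vm_clock */
    QEMUTimerList *tl = qemu_new_timerlist(vm_clock);
    QEMUTimer *t = qemu_new_timer_timerlist_ns(tl, cb, opaque);

    qemu_mod_timer_ns(t, qemu_get_clock_ns(vm_clock) + 100 * SCALE_US);
    /* ... later, from whoever polls this list ... */
    qemu_run_timerlist(tl);

    qemu_free_timer(t);
    qemu_free_timerlist(tl);
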
 include/qemu/timer.h |   26 +++++++++
 qemu-timer.c         |  150 +++++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 143 insertions(+), 33 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index a1f2ac8..e627033 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -16,6 +16,7 @@
 #define QEMU_CLOCK_HOST     2
 
 typedef struct QEMUClock QEMUClock;
+typedef struct QEMUTimerList QEMUTimerList;
 typedef void QEMUTimerCB(void *opaque);
 
 /* The real time clock should be used only for stuff which does not
@@ -38,11 +39,19 @@ extern QEMUClock *host_clock;
 
 QEMUClock *qemu_new_clock(int type);
 void qemu_free_clock(QEMUClock *clock);
+QEMUTimerList *qemu_new_timerlist(QEMUClock *clock);
+void qemu_free_timerlist(QEMUTimerList *tl);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int64_t qemu_timerlist_has_timers(QEMUTimerList *tl);
+int64_t qemu_timerlist_expired(QEMUTimerList *tl);
+int64_t qemu_timerlist_deadline(QEMUTimerList *tl);
+int64_t qemu_timerlist_deadline_ns(QEMUTimerList *tl);
+QEMUClock *qemu_timerlist_get_clock(QEMUTimerList *tl);
+QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
 int qemu_timeout_ns_to_ms(int64_t ns);
 int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
@@ -54,6 +63,8 @@ void qemu_unregister_clock_reset_notifier(QEMUClock *clock,
 
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque);
+QEMUTimer *qemu_new_timer_timerlist(QEMUTimerList *tl, int scale,
+                                    QEMUTimerCB *cb, void *opaque);
 void qemu_free_timer(QEMUTimer *ts);
 void qemu_del_timer(QEMUTimer *ts);
 void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time);
@@ -63,6 +74,7 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
 bool qemu_run_timers(QEMUClock *clock);
+bool qemu_run_timerlist(QEMUTimerList *tl);
 bool qemu_run_all_timers(void);
 void configure_alarms(char const *opt);
 void init_clocks(void);
@@ -87,12 +99,26 @@ static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
     return qemu_new_timer(clock, SCALE_NS, cb, opaque);
 }
 
+static inline QEMUTimer *qemu_new_timer_timerlist_ns(QEMUTimerList *tl,
+                                                     QEMUTimerCB *cb,
+                                                     void *opaque)
+{
+    return qemu_new_timer_timerlist(tl, SCALE_NS, cb, opaque);
+}
+
 static inline QEMUTimer *qemu_new_timer_ms(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
     return qemu_new_timer(clock, SCALE_MS, cb, opaque);
 }
 
+static inline QEMUTimer *qemu_new_timer_timerlist_ms(QEMUTimerList *tl,
+                                                     QEMUTimerCB *cb,
+                                                     void *opaque)
+{
+    return qemu_new_timer_timerlist(tl, SCALE_MS, cb, opaque);
+}
+
 static inline int64_t qemu_get_clock_ms(QEMUClock *clock)
 {
     return qemu_get_clock_ns(clock) / SCALE_MS;
diff --git a/qemu-timer.c b/qemu-timer.c
index 714bc92..211c379 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -49,7 +49,8 @@
 /* timers */
 
 struct QEMUClock {
-    QEMUTimer *active_timers;
+    QEMUTimerList *default_timerlist;
+    QLIST_HEAD(, QEMUTimerList) timerlists;
 
     NotifierList reset_notifiers;
     int64_t last;
@@ -58,9 +59,22 @@ struct QEMUClock {
     bool enabled;
 };
 
+/* A QEMUTimerList is a list of timers attached to a clock. More
+ * than one QEMUTimerList can be attached to each clock, for instance
+ * used by different AioContexts / threads. Each clock also has
+ * a list of the QEMUTimerLists associated with it, in order that
+ * reenabling the clock can call all the notifiers.
+ */
+
+struct QEMUTimerList {
+    QEMUClock *clock;
+    QEMUTimer *active_timers;
+    QLIST_ENTRY(QEMUTimerList) list;
+};
+
 struct QEMUTimer {
     int64_t expire_time;	/* in nanoseconds */
-    QEMUClock *clock;
+    QEMUTimerList *tl;
     QEMUTimerCB *cb;
     void *opaque;
     QEMUTimer *next;
@@ -93,21 +107,25 @@ static int64_t qemu_next_alarm_deadline(void)
 {
     int64_t delta = INT64_MAX;
     int64_t rtdelta;
+    int64_t hdelta;
 
-    if (!use_icount && vm_clock->enabled && vm_clock->active_timers) {
-        delta = vm_clock->active_timers->expire_time -
-                     qemu_get_clock_ns(vm_clock);
+    if (!use_icount && vm_clock->enabled &&
+        vm_clock->default_timerlist->active_timers) {
+        delta = vm_clock->default_timerlist->active_timers->expire_time -
+            qemu_get_clock_ns(vm_clock);
     }
-    if (host_clock->enabled && host_clock->active_timers) {
-        int64_t hdelta = host_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(host_clock);
+    if (host_clock->enabled &&
+        host_clock->default_timerlist->active_timers) {
+        hdelta = host_clock->default_timerlist->active_timers->expire_time -
+            qemu_get_clock_ns(host_clock);
         if (hdelta < delta) {
             delta = hdelta;
         }
     }
-    if (rt_clock->enabled && rt_clock->active_timers) {
-        rtdelta = (rt_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(rt_clock));
+    if (rt_clock->enabled &&
+        rt_clock->default_timerlist->active_timers) {
+        rtdelta = (rt_clock->default_timerlist->active_timers->expire_time -
+                   qemu_get_clock_ns(rt_clock));
         if (rtdelta < delta) {
             delta = rtdelta;
         }
@@ -244,14 +262,37 @@ QEMUClock *qemu_new_clock(int type)
     clock->enabled = true;
     clock->last = INT64_MIN;
     notifier_list_init(&clock->reset_notifiers);
+    clock->default_timerlist = qemu_new_timerlist(clock);
     return clock;
 }
 
 void qemu_free_clock(QEMUClock *clock)
 {
+    qemu_free_timerlist(clock->default_timerlist);
     g_free(clock);
 }
 
+QEMUTimerList *qemu_new_timerlist(QEMUClock *clock)
+{
+    QEMUTimerList *tl;
+
+    tl = g_malloc0(sizeof(QEMUTimerList));
+    tl->clock = clock;
+    QLIST_INSERT_HEAD(&clock->timerlists, tl, list);
+    return tl;
+}
+
+void qemu_free_timerlist(QEMUTimerList *tl)
+{
+    if (tl->clock) {
+        QLIST_REMOVE(tl, list);
+        if (tl->clock->default_timerlist == tl) {
+            tl->clock->default_timerlist = NULL;
+        }
+    }
+    g_free(tl);
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
@@ -261,24 +302,34 @@ void qemu_clock_enable(QEMUClock *clock, bool enabled)
     }
 }
 
+int64_t qemu_timerlist_has_timers(QEMUTimerList *tl)
+{
+    return !!tl->active_timers;
+}
+
 int64_t qemu_clock_has_timers(QEMUClock *clock)
 {
-    return !!clock->active_timers;
+    return qemu_timerlist_has_timers(clock->default_timerlist);
+}
+
+int64_t qemu_timerlist_expired(QEMUTimerList *tl)
+{
+    return (tl->active_timers &&
+            tl->active_timers->expire_time < qemu_get_clock_ns(tl->clock));
 }
 
 int64_t qemu_clock_expired(QEMUClock *clock)
 {
-    return (clock->active_timers &&
-            clock->active_timers->expire_time < qemu_get_clock_ns(clock));
+    return qemu_timerlist_expired(clock->default_timerlist);
 }
 
-int64_t qemu_clock_deadline(QEMUClock *clock)
+int64_t qemu_timerlist_deadline(QEMUTimerList *tl)
 {
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->enabled && clock->active_timers) {
-        delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+    if (tl->clock->enabled && tl->active_timers) {
+        delta = tl->active_timers->expire_time - qemu_get_clock_ns(tl->clock);
     }
     if (delta < 0) {
         delta = 0;
@@ -286,20 +337,25 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+int64_t qemu_clock_deadline(QEMUClock *clock)
+{
+    return qemu_timerlist_deadline(clock->default_timerlist);
+}
+
 /*
  * As above, but return -1 for no deadline, and do not cap to 2^32
  * as we know the result is always positive.
  */
 
-int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+int64_t qemu_timerlist_deadline_ns(QEMUTimerList *tl)
 {
     int64_t delta;
 
-    if (!clock->enabled || !clock->active_timers) {
+    if (!tl->clock->enabled || !tl->active_timers) {
         return -1;
     }
 
-    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+    delta = tl->active_timers->expire_time - qemu_get_clock_ns(tl->clock);
 
     if (delta <= 0) {
         return 0;
@@ -308,6 +364,21 @@ int64_t qemu_clock_deadline_ns(QEMUClock *clock)
     return delta;
 }
 
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    return qemu_timerlist_deadline_ns(clock->default_timerlist);
+}
+
+QEMUClock *qemu_timerlist_get_clock(QEMUTimerList *tl)
+{
+    return tl->clock;
+}
+
+QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock)
+{
+    return clock->default_timerlist;
+}
+
 /* Transition function to convert a nanosecond timeout to ms
  * This is used where a system does not support ppoll
  */
@@ -356,19 +427,26 @@ int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout)
 }
 
 
-QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
-                          QEMUTimerCB *cb, void *opaque)
+QEMUTimer *qemu_new_timer_timerlist(QEMUTimerList *tl, int scale,
+                                    QEMUTimerCB *cb, void *opaque)
 {
     QEMUTimer *ts;
 
     ts = g_malloc0(sizeof(QEMUTimer));
-    ts->clock = clock;
+    ts->tl = tl;
     ts->cb = cb;
     ts->opaque = opaque;
     ts->scale = scale;
     return ts;
 }
 
+QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
+                          QEMUTimerCB *cb, void *opaque)
+{
+    return qemu_new_timer_timerlist(clock->default_timerlist,
+                                    scale, cb, opaque);
+}
+
 void qemu_free_timer(QEMUTimer *ts)
 {
     g_free(ts);
@@ -381,7 +459,7 @@ void qemu_del_timer(QEMUTimer *ts)
 
     /* NOTE: this code must be signal safe because
        qemu_timer_expired() can be called from a signal. */
-    pt = &ts->clock->active_timers;
+    pt = &ts->tl->active_timers;
     for(;;) {
         t = *pt;
         if (!t)
@@ -405,7 +483,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
     /* add the timer in the sorted list */
     /* NOTE: this code must be signal safe because
        qemu_timer_expired() can be called from a signal. */
-    pt = &ts->clock->active_timers;
+    pt = &ts->tl->active_timers;
     for(;;) {
         t = *pt;
         if (!qemu_timer_expired_ns(t, expire_time)) {
@@ -418,12 +496,12 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
     *pt = ts;
 
     /* Rearm if necessary  */
-    if (pt == &ts->clock->active_timers) {
+    if (pt == &ts->tl->active_timers) {
         if (!alarm_timer->pending) {
             qemu_rearm_alarm_timer(alarm_timer);
         }
         /* Interrupt execution to force deadline recalculation.  */
-        qemu_clock_warp(ts->clock);
+        qemu_clock_warp(ts->tl->clock);
         if (use_icount) {
             qemu_notify_event();
         }
@@ -438,7 +516,7 @@ void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
 bool qemu_timer_pending(QEMUTimer *ts)
 {
     QEMUTimer *t;
-    for (t = ts->clock->active_timers; t != NULL; t = t->next) {
+    for (t = ts->tl->active_timers; t != NULL; t = t->next) {
         if (t == ts) {
             return true;
         }
@@ -451,23 +529,24 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-bool qemu_run_timers(QEMUClock *clock)
+bool qemu_run_timerlist(QEMUTimerList *tl)
 {
     QEMUTimer *ts;
     int64_t current_time;
     bool progress = false;
    
-    if (!clock->enabled)
+    if (!tl->clock->enabled) {
         return progress;
+    }
 
-    current_time = qemu_get_clock_ns(clock);
+    current_time = qemu_get_clock_ns(tl->clock);
     for(;;) {
-        ts = clock->active_timers;
+        ts = tl->active_timers;
         if (!qemu_timer_expired_ns(ts, current_time)) {
             break;
         }
         /* remove timer from the list before calling the callback */
-        clock->active_timers = ts->next;
+        tl->active_timers = ts->next;
         ts->next = NULL;
 
         /* run the callback (the timer list can be modified) */
@@ -477,6 +556,11 @@ bool qemu_run_timers(QEMUClock *clock)
     return progress;
 }
 
+bool qemu_run_timers(QEMUClock *clock)
+{
+    return qemu_run_timerlist(clock->default_timerlist);
+}
+
 int64_t qemu_get_clock_ns(QEMUClock *clock)
 {
     int64_t now, last;
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 06/13] aio / timers: Add a QEMUTimerList to AioContext
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (4 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 05/13] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 07/13] aio / timers: Add an AioContext pointer to QEMUTimerList Alex Bligh
                             ` (6 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add a QEMUTimerList to each AioContext and delete it when the
AioContext is freed.
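
With this in place a timer can be attached directly to an AioContext's
timer list using the API from the previous patch, e.g. (a sketch; cb
and opaque are placeholders):

    QEMUTimer *t = qemu_new_timer_timerlist_ns(ctx->tl, cb, opaque);
    qemu_mod_timer_ns(t, qemu_get_clock_ns(rt_clock) + 500 * SCALE_MS);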

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c                  |    7 +++++++
 include/block/aio.h      |    5 +++++
 tests/test-aio.c         |    3 +++
 tests/test-thread-pool.c |    3 +++
 4 files changed, 18 insertions(+)

diff --git a/async.c b/async.c
index 90fe906..8acbd4c 100644
--- a/async.c
+++ b/async.c
@@ -177,6 +177,7 @@ aio_ctx_finalize(GSource     *source)
     aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     g_array_free(ctx->pollfds, TRUE);
+    qemu_free_timerlist(ctx->tl);
 }
 
 static GSourceFuncs aio_source_funcs = {
@@ -215,6 +216,12 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
+    /* Assert if we don't have rt_clock yet. If you see this assertion
+     * it means you are using AioContext without having first called
+     * init_clocks() in main().
+     */
+    assert(rt_clock);
+    ctx->tl = qemu_new_timerlist(rt_clock);
 
     return ctx;
 }
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..1595167 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -41,6 +41,8 @@ typedef struct AioHandler AioHandler;
 typedef void QEMUBHFunc(void *opaque);
 typedef void IOHandler(void *opaque);
 
+typedef struct QEMUTimerList QEMUTimerList;
+
 typedef struct AioContext {
     GSource source;
 
@@ -69,6 +71,9 @@ typedef struct AioContext {
 
     /* Thread pool for performing work and receiving completion callbacks */
     struct ThreadPool *thread_pool;
+
+    /* TimerList for calling timers */
+    QEMUTimerList *tl;
 } AioContext;
 
 /* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
diff --git a/tests/test-aio.c b/tests/test-aio.c
index c173870..2d7ec4c 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -12,6 +12,7 @@
 
 #include <glib.h>
 #include "block/aio.h"
+#include "qemu/timer.h"
 
 AioContext *ctx;
 
@@ -628,6 +629,8 @@ int main(int argc, char **argv)
 {
     GSource *src;
 
+    init_clocks();
+
     ctx = aio_context_new();
     src = aio_get_g_source(ctx);
     g_source_attach(src, NULL);
diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index b62338f..27d6190 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -3,6 +3,7 @@
 #include "block/aio.h"
 #include "block/thread-pool.h"
 #include "block/block.h"
+#include "qemu/timer.h"
 
 static AioContext *ctx;
 static ThreadPool *pool;
@@ -205,6 +206,8 @@ int main(int argc, char **argv)
 {
     int ret;
 
+    init_clocks();
+
     ctx = aio_context_new();
     pool = aio_get_thread_pool(ctx);
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 07/13] aio / timers: Add an AioContext pointer to QEMUTimerList
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (5 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 06/13] aio / timers: Add a QEMUTimerList to AioContext Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 08/13] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
                             ` (5 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add an AioContext pointer to QEMUTimerList so it knows what to notify
on a timer change.
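
A later patch in this series uses this pointer to pick the right
notification primitive, roughly:

    if (tl->ctx) {
        aio_notify(tl->ctx);    /* wake the owning AioContext's poll */
    } else {
        qemu_notify_event();    /* no AioContext: wake the main loop */
    }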

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c              |    1 +
 include/qemu/timer.h |    3 +++
 qemu-timer.c         |   12 ++++++++++++
 3 files changed, 16 insertions(+)

diff --git a/async.c b/async.c
index 8acbd4c..d3ed868 100644
--- a/async.c
+++ b/async.c
@@ -222,6 +222,7 @@ AioContext *aio_context_new(void)
      */
     assert(rt_clock);
     ctx->tl = qemu_new_timerlist(rt_clock);
+    qemu_timerlist_set_ctx(ctx->tl, ctx);
 
     return ctx;
 }
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index e627033..3eb9fb7 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -17,6 +17,7 @@
 
 typedef struct QEMUClock QEMUClock;
 typedef struct QEMUTimerList QEMUTimerList;
+typedef struct AioContext AioContext;
 typedef void QEMUTimerCB(void *opaque);
 
 /* The real time clock should be used only for stuff which does not
@@ -52,6 +53,8 @@ int64_t qemu_timerlist_deadline(QEMUTimerList *tl);
 int64_t qemu_timerlist_deadline_ns(QEMUTimerList *tl);
 QEMUClock *qemu_timerlist_get_clock(QEMUTimerList *tl);
 QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
+AioContext *qemu_timerlist_get_ctx(QEMUTimerList *tl);
+void qemu_timerlist_set_ctx(QEMUTimerList *tl, AioContext * ctx);
 int qemu_timeout_ns_to_ms(int64_t ns);
 int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
diff --git a/qemu-timer.c b/qemu-timer.c
index 211c379..9d992d6 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -29,6 +29,7 @@
 #include "hw/hw.h"
 
 #include "qemu/timer.h"
+#include "block/aio.h"
 #ifdef CONFIG_POSIX
 #include <pthread.h>
 #endif
@@ -70,6 +71,7 @@ struct QEMUTimerList {
     QEMUClock *clock;
     QEMUTimer *active_timers;
     QLIST_ENTRY(QEMUTimerList) list;
+    AioContext *ctx;
 };
 
 struct QEMUTimer {
@@ -379,6 +381,16 @@ QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock)
     return clock->default_timerlist;
 }
 
+AioContext *qemu_timerlist_get_ctx(QEMUTimerList *tl)
+{
+    return tl->ctx;
+}
+
+void qemu_timerlist_set_ctx(QEMUTimerList *tl, AioContext * ctx)
+{
+    tl->ctx = ctx;
+}
+
 /* Transition function to convert a nanosecond timeout to ms
  * This is used where a system does not support ppoll
  */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 08/13] aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (6 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 07/13] aio / timers: Add an AioContext pointer to QEMUTimerList Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 09/13] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
                             ` (4 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Calculate the timeout in aio_ctx_prepare taking into account
the timers attached to the AioContext.

Alter aio_ctx_check similarly.
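
The merging relies on qemu_soonest_timeout treating -1 (meaning an
infinite timeout) as the maximal value, for example:

    qemu_soonest_timeout(-1, 10);   /* == 10: any finite timeout wins */
    qemu_soonest_timeout(10, 3);    /* == 3:  the sooner timeout wins */
    qemu_soonest_timeout(-1, -1);   /* == -1: still infinite          */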

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index d3ed868..2a907e1 100644
--- a/async.c
+++ b/async.c
@@ -123,13 +123,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
 {
     AioContext *ctx = (AioContext *) source;
     QEMUBH *bh;
+    int deadline;
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
         if (!bh->deleted && bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
-                *timeout = 10;
+                *timeout = qemu_soonest_timeout(*timeout, 10);
             } else {
                 /* non-idle bottom halves will be executed
                  * immediately */
@@ -139,6 +140,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
         }
     }
 
+    deadline = qemu_timeout_ns_to_ms(qemu_timerlist_deadline_ns(ctx->tl));
+    if (deadline == 0) {
+        *timeout = 0;
+        return true;
+    } else {
+        *timeout = qemu_soonest_timeout(*timeout, deadline);
+    }
+
     return false;
 }
 
@@ -153,7 +162,7 @@ aio_ctx_check(GSource *source)
             return true;
 	}
     }
-    return aio_pending(ctx);
+    return aio_pending(ctx) || (qemu_timerlist_deadline_ns(ctx->tl) >= 0);
 }
 
 static gboolean
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 09/13] aio / timers: Convert aio_poll to use AioContext timers' deadline
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (7 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 08/13] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout Alex Bligh
                             ` (3 subsequent siblings)
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Convert aio_poll to use a deadline based on the AioContext's timers.

aio_poll has been changed to accurately return whether progress
occurred. Prior to this commit, aio_poll always returned true if
g_poll was entered, whether or not any progress was made. This
required fixing an assert in tests/test-aio.c that was backwards.
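
With the accurate return value a caller can drain all ready work with
a simple loop, as the test updates below do:

    /* keep dispatching until aio_poll makes no further progress */
    do {
    } while (aio_poll(ctx, false));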

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 aio-posix.c      |   20 +++++++++++++-------
 aio-win32.c      |   22 +++++++++++++++++++---
 tests/test-aio.c |    4 ++--
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..4331308 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -166,6 +166,10 @@ static bool aio_dispatch(AioContext *ctx)
             g_free(tmp);
         }
     }
+
+    /* Run our timers */
+    progress |= qemu_run_timerlist(ctx->tl);
+
     return progress;
 }
 
@@ -232,9 +236,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     /* wait until next event */
-    ret = g_poll((GPollFD *)ctx->pollfds->data,
-                 ctx->pollfds->len,
-                 blocking ? -1 : 0);
+    ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
+                         ctx->pollfds->len,
+                         blocking ? qemu_timerlist_deadline_ns(ctx->tl) : 0);
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
@@ -245,11 +249,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
                 node->pfd.revents = pfd->revents;
             }
         }
-        if (aio_dispatch(ctx)) {
-            progress = true;
-        }
+    }
+
+    /* Run dispatch even if there were no readable fds to run timers */
+    if (aio_dispatch(ctx)) {
+        progress = true;
     }
 
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..9492d6c 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -98,6 +98,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
     bool busy, progress;
     int count;
+    int timeout;
 
     progress = false;
 
@@ -111,6 +112,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress = true;
     }
 
+    /* Run timers */
+    progress |= qemu_run_timerlist(ctx->tl);
+
     /*
      * Then dispatch any pending callbacks from the GSource.
      *
@@ -174,8 +178,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* wait until next event */
     while (count > 0) {
-        int timeout = blocking ? INFINITE : 0;
-        int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+        int ret;
+
+        timeout = blocking ?
+            qemu_timeout_ns_to_ms(qemu_timerlist_deadline_ns(ctx->tl)) : 0;
+        ret = WaitForMultipleObjects(count, events, FALSE, timeout);
 
         /* if we have any signaled events, dispatch event */
         if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
@@ -214,6 +221,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
+    if (blocking) {
+        /* Run the timers a second time. We do this because otherwise aio_wait
+         * will not note progress - and will stop a drain early - if a timer
+         * that was not yet ready to run when we entered the wait becomes
+         * ready only afterwards. This will only do anything if a timer has
+         * expired.
+         */
+        progress |= qemu_run_timerlist(ctx->tl);
+    }
+
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/tests/test-aio.c b/tests/test-aio.c
index 2d7ec4c..eedf7f8 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -316,13 +316,13 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
     event_notifier_set(&dummy.e);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (8 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 09/13] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-08-01 12:41             ` Paolo Bonzini
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 11/13] aio / timers: on timer modification, qemu_notify or aio_notify Alex Bligh
                             ` (2 subsequent siblings)
  12 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Convert the main loop to compute its poll timeout from the deadlines
of the three static clocks (rt_clock, vm_clock and host_clock).
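
As a worked example of the merging (the numbers are made up): if glib
asks to poll within 10ms, vm_clock has no pending timer and rt_clock's
nearest deadline is 2ms away, then

    timeout_ns = 10 * SCALE_MS;                                  /* glib    */
    timeout_ns = qemu_soonest_timeout(timeout_ns, -1);           /* vm none */
    timeout_ns = qemu_soonest_timeout(timeout_ns, 2 * SCALE_MS); /* rt 2ms  */

leaves timeout_ns at 2ms worth of nanoseconds, which is what
qemu_poll_ns is then called with.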

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 main-loop.c |   48 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 11 deletions(-)

diff --git a/main-loop.c b/main-loop.c
index a44fff6..c30978b 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -155,10 +155,11 @@ static int max_priority;
 static int glib_pollfds_idx;
 static int glib_n_poll_fds;
 
-static void glib_pollfds_fill(uint32_t *cur_timeout)
+static void glib_pollfds_fill(int64_t *cur_timeout)
 {
     GMainContext *context = g_main_context_default();
     int timeout = 0;
+    int64_t timeout_ns;
     int n;
 
     g_main_context_prepare(context, &max_priority);
@@ -174,9 +175,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
                                  glib_n_poll_fds);
     } while (n != glib_n_poll_fds);
 
-    if (timeout >= 0 && timeout < *cur_timeout) {
-        *cur_timeout = timeout;
+    if (timeout < 0) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
     }
+
+    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
 }
 
 static void glib_pollfds_poll(void)
@@ -191,7 +196,7 @@ static void glib_pollfds_poll(void)
 
 #define MAX_MAIN_LOOP_SPIN (1000)
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     int ret;
     static int spin_counter;
@@ -214,7 +219,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
             notified = true;
         }
 
-        timeout = 1;
+        timeout = SCALE_MS;
     }
 
     if (timeout > 0) {
@@ -224,7 +229,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
         spin_counter++;
     }
 
-    ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
+    ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);
 
     if (timeout > 0) {
         qemu_mutex_lock_iothread();
@@ -373,7 +378,7 @@ static void pollfds_poll(GArray *pollfds, int nfds, fd_set *rfds,
     }
 }
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     GMainContext *context = g_main_context_default();
     GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
@@ -382,6 +387,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
     PollingEntry *pe;
     WaitObjects *w = &wait_objects;
     gint poll_timeout;
+    int64_t poll_timeout_ns;
     static struct timeval tv0;
     fd_set rfds, wfds, xfds;
     int nfds;
@@ -419,12 +425,17 @@ static int os_host_main_loop_wait(uint32_t timeout)
         poll_fds[n_poll_fds + i].events = G_IO_IN;
     }
 
-    if (poll_timeout < 0 || timeout < poll_timeout) {
-        poll_timeout = timeout;
+    if (poll_timeout < 0) {
+        poll_timeout_ns = -1;
+    } else {
+        poll_timeout_ns = (int64_t)poll_timeout * (int64_t)SCALE_MS;
     }
 
+    poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);
+
     qemu_mutex_unlock_iothread();
-    g_poll_ret = g_poll(poll_fds, n_poll_fds + w->num, poll_timeout);
+    g_poll_ret = qemu_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);
+
     qemu_mutex_lock_iothread();
     if (g_poll_ret > 0) {
         for (i = 0; i < w->num; i++) {
@@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
 {
     int ret;
     uint32_t timeout = UINT32_MAX;
+    int64_t timeout_ns;
 
     if (nonblocking) {
         timeout = 0;
@@ -462,7 +474,21 @@ int main_loop_wait(int nonblocking)
     slirp_pollfds_fill(gpollfds);
 #endif
     qemu_iohandler_fill(gpollfds);
-    ret = os_host_main_loop_wait(timeout);
+
+    if (timeout == UINT32_MAX) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
+    }
+
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      qemu_clock_deadline_ns(rt_clock));
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      qemu_clock_deadline_ns(vm_clock));
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      qemu_clock_deadline_ns(host_clock));
+
+    ret = os_host_main_loop_wait(timeout_ns);
     qemu_iohandler_poll(gpollfds, ret);
 #ifdef CONFIG_SLIRP
     slirp_pollfds_poll(gpollfds, (ret < 0));
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 11/13] aio / timers: on timer modification, qemu_notify or aio_notify
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (9 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 12/13] aio / timers: Remove alarm timers Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 13/13] aio / timers: Add test harness for AioContext timers Alex Bligh
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

On qemu_mod_timer_ns, ensure qemu_notify or aio_notify is called to
end the appropriate poll().

On qemu_clock_enable, ensure qemu_notify or aio_notify is called for
all QEMUTimerLists attached to the QEMUClock.
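
The point of the notification is to bound the sleep of a poll that is
already in progress. For instance (a sketch, with t owned by another
thread's AioContext):

    /* make t fire 1ms from now, sooner than the poller's deadline */
    qemu_mod_timer_ns(t, qemu_get_clock_ns(rt_clock) + SCALE_MS);
    /* if t is now the head of its list, qemu_timerlist_notify() runs
     * aio_notify() on the owning AioContext, ending its ppoll so the
     * new 1ms deadline is picked up */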

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 qemu-timer.c |   22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/qemu-timer.c b/qemu-timer.c
index 9d992d6..ade1449 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -295,11 +295,29 @@ void qemu_free_timerlist(QEMUTimerList *tl)
     g_free(tl);
 }
 
+static void qemu_timerlist_notify(QEMUTimerList *tl)
+{
+    if (tl->ctx) {
+        aio_notify(tl->ctx);
+    } else {
+        qemu_notify_event();
+    }
+}
+
+static void qemu_clock_notify(QEMUClock *clock)
+{
+    QEMUTimerList *tl;
+    QLIST_FOREACH(tl, &clock->timerlists, list) {
+        qemu_timerlist_notify(tl);
+    }
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
     clock->enabled = enabled;
     if (enabled && !old) {
+        qemu_clock_notify(clock);
         qemu_rearm_alarm_timer(alarm_timer);
     }
 }
@@ -514,9 +532,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
         }
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
-        if (use_icount) {
-            qemu_notify_event();
-        }
+        qemu_timerlist_notify(ts->tl);
     }
 }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 12/13] aio / timers: Remove alarm timers
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (10 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 11/13] aio / timers: on timer modification, qemu_notify or aio_notify Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 13/13] aio / timers: Add test harness for AioContext timers Alex Bligh
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Remove alarm timers from qemu-timer.c now that we use g_poll / ppoll
instead. The -clock command line option is kept as a no-op for
backward compatibility.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    2 -
 main-loop.c          |    4 -
 qemu-timer.c         |  501 +-------------------------------------------------
 vl.c                 |    4 +-
 4 files changed, 4 insertions(+), 507 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 3eb9fb7..c822c81 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -79,9 +79,7 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 bool qemu_run_timers(QEMUClock *clock);
 bool qemu_run_timerlist(QEMUTimerList *tl);
 bool qemu_run_all_timers(void);
-void configure_alarms(char const *opt);
 void init_clocks(void);
-int init_timer_alarm(void);
 
 int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
diff --git a/main-loop.c b/main-loop.c
index c30978b..d8ec7d5 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -131,10 +131,6 @@ int qemu_init_main_loop(void)
     GSource *src;
 
     init_clocks();
-    if (init_timer_alarm() < 0) {
-        fprintf(stderr, "could not initialize alarm timer\n");
-        exit(1);
-    }
 
     ret = qemu_signal_init();
     if (ret) {
diff --git a/qemu-timer.c b/qemu-timer.c
index ade1449..0f42bb2 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -34,10 +34,6 @@
 #include <pthread.h>
 #endif
 
-#ifdef _WIN32
-#include <mmsystem.h>
-#endif
-
 #ifdef CONFIG_PPOLL
 #include <poll.h>
 #endif
@@ -83,174 +79,11 @@ struct QEMUTimer {
     int scale;
 };
 
-struct qemu_alarm_timer {
-    char const *name;
-    int (*start)(struct qemu_alarm_timer *t);
-    void (*stop)(struct qemu_alarm_timer *t);
-    void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
-#if defined(__linux__)
-    timer_t timer;
-    int fd;
-#elif defined(_WIN32)
-    HANDLE timer;
-#endif
-    bool expired;
-    bool pending;
-};
-
-static struct qemu_alarm_timer *alarm_timer;
-
 static bool qemu_timer_expired_ns(QEMUTimer *timer_head, int64_t current_time)
 {
     return timer_head && (timer_head->expire_time <= current_time);
 }
 
-static int64_t qemu_next_alarm_deadline(void)
-{
-    int64_t delta = INT64_MAX;
-    int64_t rtdelta;
-    int64_t hdelta;
-
-    if (!use_icount && vm_clock->enabled &&
-        vm_clock->default_timerlist->active_timers) {
-        delta = vm_clock->default_timerlist->active_timers->expire_time -
-            qemu_get_clock_ns(vm_clock);
-    }
-    if (host_clock->enabled &&
-        host_clock->default_timerlist->active_timers) {
-        hdelta = host_clock->default_timerlist->active_timers->expire_time -
-            qemu_get_clock_ns(host_clock);
-        if (hdelta < delta) {
-            delta = hdelta;
-        }
-    }
-    if (rt_clock->enabled &&
-        rt_clock->default_timerlist->active_timers) {
-        rtdelta = (rt_clock->default_timerlist->active_timers->expire_time -
-                   qemu_get_clock_ns(rt_clock));
-        if (rtdelta < delta) {
-            delta = rtdelta;
-        }
-    }
-
-    return delta;
-}
-
-static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
-{
-    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
-    if (nearest_delta_ns < INT64_MAX) {
-        t->rearm(t, nearest_delta_ns);
-    }
-}
-
-/* TODO: MIN_TIMER_REARM_NS should be optimized */
-#define MIN_TIMER_REARM_NS 250000
-
-#ifdef _WIN32
-
-static int mm_start_timer(struct qemu_alarm_timer *t);
-static void mm_stop_timer(struct qemu_alarm_timer *t);
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-static int win32_start_timer(struct qemu_alarm_timer *t);
-static void win32_stop_timer(struct qemu_alarm_timer *t);
-static void win32_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#else
-
-static int unix_start_timer(struct qemu_alarm_timer *t);
-static void unix_stop_timer(struct qemu_alarm_timer *t);
-static void unix_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#ifdef __linux__
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t);
-static void dynticks_stop_timer(struct qemu_alarm_timer *t);
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#endif /* __linux__ */
-
-#endif /* _WIN32 */
-
-static struct qemu_alarm_timer alarm_timers[] = {
-#ifndef _WIN32
-#ifdef __linux__
-    {"dynticks", dynticks_start_timer,
-     dynticks_stop_timer, dynticks_rearm_timer},
-#endif
-    {"unix", unix_start_timer, unix_stop_timer, unix_rearm_timer},
-#else
-    {"mmtimer", mm_start_timer, mm_stop_timer, mm_rearm_timer},
-    {"dynticks", win32_start_timer, win32_stop_timer, win32_rearm_timer},
-#endif
-    {NULL, }
-};
-
-static void show_available_alarms(void)
-{
-    int i;
-
-    printf("Available alarm timers, in order of precedence:\n");
-    for (i = 0; alarm_timers[i].name; i++)
-        printf("%s\n", alarm_timers[i].name);
-}
-
-void configure_alarms(char const *opt)
-{
-    int i;
-    int cur = 0;
-    int count = ARRAY_SIZE(alarm_timers) - 1;
-    char *arg;
-    char *name;
-    struct qemu_alarm_timer tmp;
-
-    if (is_help_option(opt)) {
-        show_available_alarms();
-        exit(0);
-    }
-
-    arg = g_strdup(opt);
-
-    /* Reorder the array */
-    name = strtok(arg, ",");
-    while (name) {
-        for (i = 0; i < count && alarm_timers[i].name; i++) {
-            if (!strcmp(alarm_timers[i].name, name))
-                break;
-        }
-
-        if (i == count) {
-            fprintf(stderr, "Unknown clock %s\n", name);
-            goto next;
-        }
-
-        if (i < cur)
-            /* Ignore */
-            goto next;
-
-	/* Swap */
-        tmp = alarm_timers[i];
-        alarm_timers[i] = alarm_timers[cur];
-        alarm_timers[cur] = tmp;
-
-        cur++;
-next:
-        name = strtok(NULL, ",");
-    }
-
-    g_free(arg);
-
-    if (cur) {
-        /* Disable remaining timers */
-        for (i = cur; i < count; i++)
-            alarm_timers[i].name = NULL;
-    } else {
-        show_available_alarms();
-        exit(1);
-    }
-}
-
 QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
@@ -318,7 +151,6 @@ void qemu_clock_enable(QEMUClock *clock, bool enabled)
     clock->enabled = enabled;
     if (enabled && !old) {
         qemu_clock_notify(clock);
-        qemu_rearm_alarm_timer(alarm_timer);
     }
 }
 
@@ -527,9 +359,6 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
 
     /* Rearm if necessary  */
     if (pt == &ts->tl->active_timers) {
-        if (!alarm_timer->pending) {
-            qemu_rearm_alarm_timer(alarm_timer);
-        }
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
         qemu_timerlist_notify(ts->tl);
@@ -643,338 +472,10 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
 
 bool qemu_run_all_timers(void)
 {
-    bool progress = false;
-    alarm_timer->pending = false;
-
     /* vm time timers */
+    bool progress = false;
     progress |= qemu_run_timers(vm_clock);
     progress |= qemu_run_timers(rt_clock);
     progress |= qemu_run_timers(host_clock);
-
-    /* rearm timer, if not periodic */
-    if (alarm_timer->expired) {
-        alarm_timer->expired = false;
-        qemu_rearm_alarm_timer(alarm_timer);
-    }
-
     return progress;
 }
-
-#ifdef _WIN32
-static void CALLBACK host_alarm_handler(PVOID lpParam, BOOLEAN unused)
-#else
-static void host_alarm_handler(int host_signum)
-#endif
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t)
-	return;
-
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-#if defined(__linux__)
-
-#include "qemu/compatfd.h"
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigevent ev;
-    timer_t host_timer;
-    struct sigaction act;
-
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-
-    /* 
-     * Initialize ev struct to 0 to avoid valgrind complaining
-     * about uninitialized data in timer_create call
-     */
-    memset(&ev, 0, sizeof(ev));
-    ev.sigev_value.sival_int = 0;
-    ev.sigev_notify = SIGEV_SIGNAL;
-#ifdef CONFIG_SIGEV_THREAD_ID
-    if (qemu_signalfd_available()) {
-        ev.sigev_notify = SIGEV_THREAD_ID;
-        ev._sigev_un._tid = qemu_get_thread_id();
-    }
-#endif /* CONFIG_SIGEV_THREAD_ID */
-    ev.sigev_signo = SIGALRM;
-
-    if (timer_create(CLOCK_REALTIME, &ev, &host_timer)) {
-        perror("timer_create");
-        return -1;
-    }
-
-    t->timer = host_timer;
-
-    return 0;
-}
-
-static void dynticks_stop_timer(struct qemu_alarm_timer *t)
-{
-    timer_t host_timer = t->timer;
-
-    timer_delete(host_timer);
-}
-
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t,
-                                 int64_t nearest_delta_ns)
-{
-    timer_t host_timer = t->timer;
-    struct itimerspec timeout;
-    int64_t current_ns;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    /* check whether a timer is already running */
-    if (timer_gettime(host_timer, &timeout)) {
-        perror("gettime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    current_ns = timeout.it_value.tv_sec * 1000000000LL + timeout.it_value.tv_nsec;
-    if (current_ns && current_ns <= nearest_delta_ns)
-        return;
-
-    timeout.it_interval.tv_sec = 0;
-    timeout.it_interval.tv_nsec = 0; /* 0 for one-shot timer */
-    timeout.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    timeout.it_value.tv_nsec = nearest_delta_ns % 1000000000;
-    if (timer_settime(host_timer, 0 /* RELATIVE */, &timeout, NULL)) {
-        perror("settime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-#endif /* defined(__linux__) */
-
-#if !defined(_WIN32)
-
-static int unix_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigaction act;
-
-    /* timer signal */
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-    return 0;
-}
-
-static void unix_rearm_timer(struct qemu_alarm_timer *t,
-                             int64_t nearest_delta_ns)
-{
-    struct itimerval itv;
-    int err;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    itv.it_interval.tv_sec = 0;
-    itv.it_interval.tv_usec = 0; /* 0 for one-shot timer */
-    itv.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    itv.it_value.tv_usec = (nearest_delta_ns % 1000000000) / 1000;
-    err = setitimer(ITIMER_REAL, &itv, NULL);
-    if (err) {
-        perror("setitimer");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-static void unix_stop_timer(struct qemu_alarm_timer *t)
-{
-    struct itimerval itv;
-
-    memset(&itv, 0, sizeof(itv));
-    setitimer(ITIMER_REAL, &itv, NULL);
-}
-
-#endif /* !defined(_WIN32) */
-
-
-#ifdef _WIN32
-
-static MMRESULT mm_timer;
-static TIMECAPS mm_tc;
-
-static void CALLBACK mm_alarm_handler(UINT uTimerID, UINT uMsg,
-                                      DWORD_PTR dwUser, DWORD_PTR dw1,
-                                      DWORD_PTR dw2)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t) {
-        return;
-    }
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-static int mm_start_timer(struct qemu_alarm_timer *t)
-{
-    timeGetDevCaps(&mm_tc, sizeof(mm_tc));
-    return 0;
-}
-
-static void mm_stop_timer(struct qemu_alarm_timer *t)
-{
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-}
-
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta)
-{
-    int64_t nearest_delta_ms = delta / 1000000;
-    if (nearest_delta_ms < mm_tc.wPeriodMin) {
-        nearest_delta_ms = mm_tc.wPeriodMin;
-    } else if (nearest_delta_ms > mm_tc.wPeriodMax) {
-        nearest_delta_ms = mm_tc.wPeriodMax;
-    }
-
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-    mm_timer = timeSetEvent((UINT)nearest_delta_ms,
-                            mm_tc.wPeriodMin,
-                            mm_alarm_handler,
-                            (DWORD_PTR)t,
-                            TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
-
-    if (!mm_timer) {
-        fprintf(stderr, "Failed to re-arm win32 alarm timer\n");
-        timeEndPeriod(mm_tc.wPeriodMin);
-        exit(1);
-    }
-}
-
-static int win32_start_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer;
-    BOOLEAN success;
-
-    /* If you call ChangeTimerQueueTimer on a one-shot timer (its period
-       is zero) that has already expired, the timer is not updated.  Since
-       creating a new timer is relatively expensive, set a bogus one-hour
-       interval in the dynticks case.  */
-    success = CreateTimerQueueTimer(&hTimer,
-                          NULL,
-                          host_alarm_handler,
-                          t,
-                          1,
-                          3600000,
-                          WT_EXECUTEINTIMERTHREAD);
-
-    if (!success) {
-        fprintf(stderr, "Failed to initialize win32 alarm timer: %ld\n",
-                GetLastError());
-        return -1;
-    }
-
-    t->timer = hTimer;
-    return 0;
-}
-
-static void win32_stop_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer = t->timer;
-
-    if (hTimer) {
-        DeleteTimerQueueTimer(NULL, hTimer, NULL);
-    }
-}
-
-static void win32_rearm_timer(struct qemu_alarm_timer *t,
-                              int64_t nearest_delta_ns)
-{
-    HANDLE hTimer = t->timer;
-    int64_t nearest_delta_ms;
-    BOOLEAN success;
-
-    nearest_delta_ms = nearest_delta_ns / 1000000;
-    if (nearest_delta_ms < 1) {
-        nearest_delta_ms = 1;
-    }
-    /* ULONG_MAX can be 32 bit */
-    if (nearest_delta_ms > ULONG_MAX) {
-        nearest_delta_ms = ULONG_MAX;
-    }
-    success = ChangeTimerQueueTimer(NULL,
-                                    hTimer,
-                                    (unsigned long) nearest_delta_ms,
-                                    3600000);
-
-    if (!success) {
-        fprintf(stderr, "Failed to rearm win32 alarm timer: %ld\n",
-                GetLastError());
-        exit(-1);
-    }
-
-}
-
-#endif /* _WIN32 */
-
-static void quit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    alarm_timer = NULL;
-    t->stop(t);
-}
-
-#ifdef CONFIG_POSIX
-static void reinit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    t->stop(t);
-    if (t->start(t)) {
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    qemu_rearm_alarm_timer(t);
-}
-#endif /* CONFIG_POSIX */
-
-int init_timer_alarm(void)
-{
-    struct qemu_alarm_timer *t = NULL;
-    int i, err = -1;
-
-    if (alarm_timer) {
-        return 0;
-    }
-
-    for (i = 0; alarm_timers[i].name; i++) {
-        t = &alarm_timers[i];
-
-        err = t->start(t);
-        if (!err)
-            break;
-    }
-
-    if (err) {
-        err = -ENOENT;
-        goto fail;
-    }
-
-    atexit(quit_timers);
-#ifdef CONFIG_POSIX
-    pthread_atfork(NULL, NULL, reinit_timers);
-#endif
-    alarm_timer = t;
-    return 0;
-
-fail:
-    return err;
-}
-
diff --git a/vl.c b/vl.c
index 25b8f2f..83047fc 100644
--- a/vl.c
+++ b/vl.c
@@ -3714,7 +3714,9 @@ int main(int argc, char **argv, char **envp)
                 old_param = 1;
                 break;
             case QEMU_OPTION_clock:
-                configure_alarms(optarg);
+                /* Clock options no longer exist.  Keep this option for
+                 * backward compatibility.
+                 */
                 break;
             case QEMU_OPTION_startdate:
                 configure_rtc_date_offset(optarg, 1);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv4 13/13] aio / timers: Add test harness for AioContext timers
  2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
                             ` (11 preceding siblings ...)
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 12/13] aio / timers: Remove alarm timers Alex Bligh
@ 2013-07-26 18:37           ` Alex Bligh
  12 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-07-26 18:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, Stefan Hajnoczi,
	Paolo Bonzini, rth

Add a test harness for AioContext timers. The g_source equivalent is
unsatisfactory as it suffers from false wakeups.
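
The false wakeups can be seen in the GSource variant of the test,
which has to spin until the expiry time has actually passed:

    do {
        g_assert(g_main_context_iteration(NULL, true));
    } while (qemu_get_clock_ns(qemu_timerlist_get_clock(data.tl)) <= expiry);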

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 tests/test-aio.c |  136 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 136 insertions(+)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index eedf7f8..f86c972 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -32,6 +32,15 @@ typedef struct {
     int max;
 } BHTestData;
 
+typedef struct {
+    QEMUTimer *timer;
+    QEMUTimerList *tl;
+    int n;
+    int max;
+    int64_t ns;
+    AioContext *ctx;
+} TimerTestData;
+
 static void bh_test_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -40,6 +49,25 @@ static void bh_test_cb(void *opaque)
     }
 }
 
+static void timer_test_cb(void *opaque)
+{
+    TimerTestData *data = opaque;
+    if (++data->n < data->max) {
+        qemu_mod_timer(data->timer,
+                       qemu_get_clock_ns(qemu_timerlist_get_clock(data->tl)) +
+                       data->ns);
+    }
+}
+
+static void dummy_io_handler_read(void *opaque)
+{
+}
+
+static int dummy_io_handler_flush(void *opaque)
+{
+    return 1;
+}
+
 static void bh_delete_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -341,6 +369,64 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
+                           .max = 2, .tl = ctx->tl };
+    int pipefd[2];
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    aio_poll(ctx, false);
+
+    data.timer = qemu_new_timer_timerlist(data.tl, SCALE_NS,
+                                          timer_test_cb, &data);
+    qemu_mod_timer(data.timer,
+                   qemu_get_clock_ns(qemu_timerlist_get_clock(data.tl)) +
+                   data.ns);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    /* qemu_mod_timer may well cause an event notifier to have gone off,
+     * so clear it
+     */
+    do {} while (aio_poll(ctx, false));
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* qemu_mod_timer called by our callback */
+    do {} while (aio_poll(ctx, false));
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    g_assert(aio_poll(ctx, true));
+    g_assert_cmpint(data.n, ==, 2);
+
+    /* As max is now 2, an event notifier should not have gone off */
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
 /* Now the same tests, using the context as a GSource.  They are
  * very similar to the ones above, with g_main_context_iteration
  * replacing aio_poll.  However:
@@ -623,6 +709,54 @@ static void test_source_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_source_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
+                           .max = 2, .tl = ctx->tl };
+    int pipefd[2];
+    int64_t expiry;
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    do {} while (g_main_context_iteration(NULL, false));
+
+    data.timer = qemu_new_timer_timerlist(data.tl, SCALE_NS,
+                                          timer_test_cb, &data);
+    expiry = qemu_get_clock_ns(qemu_timerlist_get_clock(data.tl)) +
+        data.ns;
+    qemu_mod_timer(data.timer, expiry);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(g_main_context_iteration(NULL, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* The comment above was not kidding when it said this wakes itself up */
+    do {
+        g_assert(g_main_context_iteration(NULL, true));
+    } while (qemu_get_clock_ns(
+                 qemu_timerlist_get_clock(data.tl)) <= expiry);
+    sleep(1);
+    g_main_context_iteration(NULL, false);
+
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
+
 /* End of tests.  */
 
 int main(int argc, char **argv)
@@ -651,6 +785,7 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
+    g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio-gsource/notify",                  test_source_notify);
     g_test_add_func("/aio-gsource/flush",                   test_source_flush);
@@ -665,5 +800,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio-gsource/event/wait",              test_source_wait_event_notifier);
     g_test_add_func("/aio-gsource/event/wait/no-flush-cb",  test_source_wait_event_notifier_noflush);
     g_test_add_func("/aio-gsource/event/flush",             test_source_flush_event_notifier);
+    g_test_add_func("/aio-gsource/timer/schedule",          test_source_timer_schedule);
     return g_test_run();
 }
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv4 01/13] aio / timers: add qemu-timer.c utility functions
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 01/13] aio / timers: add qemu-timer.c utility functions Alex Bligh
@ 2013-08-01 12:07             ` Paolo Bonzini
  0 siblings, 0 replies; 128+ messages in thread
From: Paolo Bonzini @ 2013-08-01 12:07 UTC (permalink / raw)
  To: Alex Bligh; +Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi, rth

 On Jul 26 2013, Alex Bligh wrote:
> Add qemu_free_clock and expose qemu_new_clock and clock types.
> 
> Add utility functions to qemu-timer.c for nanosecond timing.
> 
> Add qemu_clock_deadline_ns to calculate deadlines to
> nanosecond accuracy.
> 
> Add utility function qemu_soonest_timeout to calculate soonest deadline.
> 
> Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
> milliseconds for when ppoll is not used.
> 
> Signed-off-by: Alex Bligh <alex@alex.org.uk>
> ---
>  include/qemu/timer.h |   17 ++++++++++++++
>  qemu-timer.c         |   63 +++++++++++++++++++++++++++++++++++++++++++++-----
>  2 files changed, 74 insertions(+), 6 deletions(-)
> 
> diff --git a/include/qemu/timer.h b/include/qemu/timer.h
> index 9dd206c..6171db3 100644
> --- a/include/qemu/timer.h
> +++ b/include/qemu/timer.h
> @@ -11,6 +11,10 @@
>  #define SCALE_US 1000
>  #define SCALE_NS 1
>  
> +#define QEMU_CLOCK_REALTIME 0
> +#define QEMU_CLOCK_VIRTUAL  1
> +#define QEMU_CLOCK_HOST     2
> +
>  typedef struct QEMUClock QEMUClock;
>  typedef void QEMUTimerCB(void *opaque);
>  
> @@ -32,10 +36,14 @@ extern QEMUClock *vm_clock;
>     the virtual clock. */
>  extern QEMUClock *host_clock;
>  
> +QEMUClock *qemu_new_clock(int type);
> +void qemu_free_clock(QEMUClock *clock);
>  int64_t qemu_get_clock_ns(QEMUClock *clock);
>  int64_t qemu_clock_has_timers(QEMUClock *clock);
>  int64_t qemu_clock_expired(QEMUClock *clock);
>  int64_t qemu_clock_deadline(QEMUClock *clock);
> +int64_t qemu_clock_deadline_ns(QEMUClock *clock);
> +int qemu_timeout_ns_to_ms(int64_t ns);
>  void qemu_clock_enable(QEMUClock *clock, bool enabled);
>  void qemu_clock_warp(QEMUClock *clock);
>  
> @@ -63,6 +71,15 @@ int64_t cpu_get_ticks(void);
>  void cpu_enable_ticks(void);
>  void cpu_disable_ticks(void);
>  
> +static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
> +{
> +    /* we can abuse the fact that -1 (which means infinite) is a maximal
> +     * value when cast to unsigned. As this is disgusting, it's kept in
> +     * one inline function.
> +     */
> +    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;
> +}
> +

It becomes much less disgusting if timeouts are made unsigned.  I agree
we can do it later.
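
To make the trick concrete (a few example calls, not part of the
patch): the cast maps -1 to UINT64_MAX, so an infinite timeout loses
to any finite one:

    qemu_soonest_timeout(-1, 500);   /* 500: finite beats infinite */
    qemu_soonest_timeout(250, 500);  /* 250: smaller deadline wins */
    qemu_soonest_timeout(-1, -1);    /* -1: still infinite */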

>  static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
>                                             void *opaque)
>  {
> diff --git a/qemu-timer.c b/qemu-timer.c
> index b2d95e2..3dfbdbf 100644
> --- a/qemu-timer.c
> +++ b/qemu-timer.c
> @@ -40,10 +40,6 @@
>  /***********************************************************/
>  /* timers */
>  
> -#define QEMU_CLOCK_REALTIME 0
> -#define QEMU_CLOCK_VIRTUAL  1
> -#define QEMU_CLOCK_HOST     2
> -
>  struct QEMUClock {
>      QEMUTimer *active_timers;
>  
> @@ -231,7 +227,7 @@ QEMUClock *rt_clock;
>  QEMUClock *vm_clock;
>  QEMUClock *host_clock;
>  
> -static QEMUClock *qemu_new_clock(int type)
> +QEMUClock *qemu_new_clock(int type)
>  {
>      QEMUClock *clock;
>  
> @@ -243,6 +239,11 @@ static QEMUClock *qemu_new_clock(int type)
>      return clock;
>  }
>  
> +void qemu_free_clock(QEMUClock *clock)
> +{
> +    g_free(clock);
> +}
> +
>  void qemu_clock_enable(QEMUClock *clock, bool enabled)
>  {
>      bool old = clock->enabled;
> @@ -268,7 +269,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
>      /* To avoid problems with overflow limit this to 2^32.  */
>      int64_t delta = INT32_MAX;
>  
> -    if (clock->active_timers) {
> +    if (clock->enabled && clock->active_timers) {
>          delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
>      }
>      if (delta < 0) {
> @@ -277,6 +278,56 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
>      return delta;
>  }
>  
> +/*
> + * As above, but return -1 for no deadline, and do not cap to 2^32
> + * as we know the result is always positive.
> + */
> +
> +int64_t qemu_clock_deadline_ns(QEMUClock *clock)
> +{
> +    int64_t delta;
> +
> +    if (!clock->enabled || !clock->active_timers) {
> +        return -1;
> +    }
> +
> +    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
> +
> +    if (delta <= 0) {
> +        return 0;
> +    }
> +
> +    return delta;
> +}
> +
> +/* Transition function to convert a nanosecond timeout to ms
> + * This is used where a system does not support ppoll
> + */
> +int qemu_timeout_ns_to_ms(int64_t ns)
> +{
> +    int64_t ms;
> +    if (ns < 0) {
> +        return -1;
> +    }
> +
> +    if (!ns) {
> +        return 0;
> +    }
> +
> +    /* Always round up, because it's better to wait too long than to wait too
> +     * little and effectively busy-wait
> +     */
> +    ms = (ns + SCALE_MS - 1) / SCALE_MS;
> +
> +    /* To avoid overflow problems, limit this to 2^31, i.e. approx 25 days */
> +    if (ms > (int64_t) INT32_MAX) {
> +        ms = INT32_MAX;
> +    }
> +
> +    return (int) ms;
> +}
> +
> +
>  QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
>                            QEMUTimerCB *cb, void *opaque)
>  {
> -- 
> 1.7.9.5
> 

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout
  2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout Alex Bligh
@ 2013-08-01 12:41             ` Paolo Bonzini
  2013-08-01 13:43               ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Paolo Bonzini @ 2013-08-01 12:41 UTC (permalink / raw)
  To: Alex Bligh; +Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi, rth

 On Jul 26 2013, Alex Bligh wrote:
> Convert mainloop to use timeout from 3 static timers.
> 
> Signed-off-by: Alex Bligh <alex@alex.org.uk>
> ---
>  main-loop.c |   48 +++++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 37 insertions(+), 11 deletions(-)
> 
> diff --git a/main-loop.c b/main-loop.c
> index a44fff6..c30978b 100644
> --- a/main-loop.c
> +++ b/main-loop.c
> @@ -155,10 +155,11 @@ static int max_priority;
>  static int glib_pollfds_idx;
>  static int glib_n_poll_fds;
>  
> -static void glib_pollfds_fill(uint32_t *cur_timeout)
> +static void glib_pollfds_fill(int64_t *cur_timeout)
>  {
>      GMainContext *context = g_main_context_default();
>      int timeout = 0;
> +    int64_t timeout_ns;
>      int n;
>  
>      g_main_context_prepare(context, &max_priority);
> @@ -174,9 +175,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
>                                   glib_n_poll_fds);
>      } while (n != glib_n_poll_fds);
>  
> -    if (timeout >= 0 && timeout < *cur_timeout) {
> -        *cur_timeout = timeout;
> +    if (timeout < 0) {
> +        timeout_ns = -1;
> +    } else {
> +        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
>      }
> +
> +    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
>  }
>  
>  static void glib_pollfds_poll(void)
> @@ -191,7 +196,7 @@ static void glib_pollfds_poll(void)
>  
>  #define MAX_MAIN_LOOP_SPIN (1000)
>  
> -static int os_host_main_loop_wait(uint32_t timeout)
> +static int os_host_main_loop_wait(int64_t timeout)
>  {
>      int ret;
>      static int spin_counter;
> @@ -214,7 +219,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
>              notified = true;
>          }
>  
> -        timeout = 1;
> +        timeout = SCALE_MS;
>      }
>  
>      if (timeout > 0) {
> @@ -224,7 +229,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
>          spin_counter++;
>      }
>  
> -    ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
> +    ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);
>  
>      if (timeout > 0) {
>          qemu_mutex_lock_iothread();
> @@ -373,7 +378,7 @@ static void pollfds_poll(GArray *pollfds, int nfds, fd_set *rfds,
>      }
>  }
>  
> -static int os_host_main_loop_wait(uint32_t timeout)
> +static int os_host_main_loop_wait(int64_t timeout)
>  {
>      GMainContext *context = g_main_context_default();
>      GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
> @@ -382,6 +387,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
>      PollingEntry *pe;
>      WaitObjects *w = &wait_objects;
>      gint poll_timeout;
> +    int64_t poll_timeout_ns;
>      static struct timeval tv0;
>      fd_set rfds, wfds, xfds;
>      int nfds;
> @@ -419,12 +425,17 @@ static int os_host_main_loop_wait(uint32_t timeout)
>          poll_fds[n_poll_fds + i].events = G_IO_IN;
>      }
>  
> -    if (poll_timeout < 0 || timeout < poll_timeout) {
> -        poll_timeout = timeout;
> +    if (poll_timeout < 0) {
> +        poll_timeout_ns = -1;
> +    } else {
> +        poll_timeout_ns = (int64_t)poll_timeout * (int64_t)SCALE_MS;
>      }
>  
> +    poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);
> +
>      qemu_mutex_unlock_iothread();
> -    g_poll_ret = g_poll(poll_fds, n_poll_fds + w->num, poll_timeout);
> +    g_poll_ret = qemu_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);
> +
>      qemu_mutex_lock_iothread();
>      if (g_poll_ret > 0) {
>          for (i = 0; i < w->num; i++) {
> @@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
>  {
>      int ret;
>      uint32_t timeout = UINT32_MAX;
> +    int64_t timeout_ns;
>  
>      if (nonblocking) {
>          timeout = 0;
> @@ -462,7 +474,21 @@ int main_loop_wait(int nonblocking)
>      slirp_pollfds_fill(gpollfds);
>  #endif
>      qemu_iohandler_fill(gpollfds);
> -    ret = os_host_main_loop_wait(timeout);
> +
> +    if (timeout == UINT32_MAX) {
> +        timeout_ns = -1;
> +    } else {
> +        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
> +    }
> +
> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
> +                                      qemu_clock_deadline_ns(rt_clock));
> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
> +                                      qemu_clock_deadline_ns(vm_clock));

This must not be included if use_icount.

Allowing only one rt_clock clock for each AioContext is a simplification,
but I'm worried that it will be a problem later.  For example, the block
layer wants to use vm_clock.  Perhaps QEMUTimerList should really have
three lists, one for each clock type?

Once you do this, you get some complications due to more data structures,
but other code is simplified noticeably.  For example, you lose the concept
of a default timerlist (it's just the timerlist of the default AioContext).
And because all timerlists have an AioContext, you do not need to special
case aio_notify() vs. qemu_notify_event().

There are a couple of places to be careful about, of course.  For example,

        if (use_icount && qemu_clock_deadline(vm_clock) <= 0) {
            qemu_notify_event();
        }

in cpus.c must be changed to iterate over all timerlists.

Paolo

> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
> +                                      qemu_clock_deadline_ns(host_clock));
> +
> +    ret = os_host_main_loop_wait(timeout_ns);
>      qemu_iohandler_poll(gpollfds, ret);
>  #ifdef CONFIG_SLIRP
>      slirp_pollfds_poll(gpollfds, (ret < 0));
> -- 
> 1.7.9.5
> 

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout
  2013-08-01 12:41             ` Paolo Bonzini
@ 2013-08-01 13:43               ` Alex Bligh
  2013-08-01 14:14                 ` Paolo Bonzini
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-01 13:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, qemu-devel,
	Alex Bligh, rth

Paolo,

>> @@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
>>  {
>>      int ret;
>>      uint32_t timeout = UINT32_MAX;
>> +    int64_t timeout_ns;
>>
>>      if (nonblocking) {
>>          timeout = 0;
>> @@ -462,7 +474,21 @@ int main_loop_wait(int nonblocking)
>>      slirp_pollfds_fill(gpollfds);
>>  # endif
>>      qemu_iohandler_fill(gpollfds);
>> -    ret = os_host_main_loop_wait(timeout);
>> +
>> +    if (timeout == UINT32_MAX) {
>> +        timeout_ns = -1;
>> +    } else {
>> +        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
>> +    }
>> +
>> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
>> +                                      qemu_clock_deadline_ns(rt_clock));
>> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
>> +                                      qemu_clock_deadline_ns(vm_clock));
>
> This must not be included if use_icount.

Really? qemu_run_all_timers was running all 3 timers irrespective of
use_icount, and doing a qemu_notify if anything expired, so I'm merely
mimicking the existing behaviour here.

I'm not quite sure what use_icount does. Does vm_clock get disabled
if it is set? (in which case it won't matter).

> Allowing only one rt_clock clock for each AioContext is a simplification,
> but I'm worried that it will be a problem later.  For example, the block
> layer wants to use vm_clock.  Perhaps QEMUTimerList should really have
> three lists, one for each clock type?

Well currently each QEMUClock has a default QEMUTimerList, so that
wouldn't work well - see below.

The way it's done at the moment is that the QEMUTimerList user can
create as many QEMUTimerLists as he wants. So AioContext asks for one
of one type. It could equally ask for three - one of each type.

I think that's probably adequate.

> Once you do this, you get some complications due to more data structures,
> but other code is simplified noticeably.  For example, you lose the
> concept of a default timerlist (it's just the timerlist of the default
> AioContext).

Yep - per the above that's really intrusive (I think I touched well
over a hundred files). The problem is that lots of things refer to
vm_clock to set a timer (when it's a clock, so should use a timer
list) and also to vm_clock to read the time (which is a clock
function). Splitting those uses into vm_timerlist and vm_clock was
truly horrible. Hence the default timer list concept, which I admit
is not great but saved a horribly intrusive patch, not all of which I
could test. I did that patch, and scrapped it.

> And because all timerlists have an AioContext,

Well old callers, particularly those not using an AioContext, would
not have an AioContext would they?

> you do not
> need to special case aio_notify() vs. qemu_notify_event().

Well, if I do a v5, I was going to make the constructor for
creating a timerlist specify a callback function to say what should
happen if the clock is enabled etc., and if none was specified
call qemu_notify_event(). The AioContext user would specify a callback
that called aio_notify(). This would be a bit nicer.
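
Something like this, perhaps (signature hypothetical at this point):

    typedef void QEMUTimerListNotifyCB(void *opaque);

    QEMUTimerList *timerlist_new(QEMUClockType type,
                                 QEMUTimerListNotifyCB *cb,
                                 void *opaque);

    /* AioContext would pass a callback that does aio_notify(ctx);
     * a NULL callback would fall back to qemu_notify_event().
     */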

> There are a couple of places to be careful about, of course.  For example,
>
>         if (use_icount && qemu_clock_deadline(vm_clock) <= 0) {
>             qemu_notify_event();
>         }
>
> in cpus.c must be changed to iterate over all timerlists.

I was trying hard to avoid anything having to iterate over all
timerlists, and leave the timerlist to be per-thread where possible.
This may well fail for the clock warp stuff. I probably need to do
exactly the same as on qemu_clock_enable() here if use_icount is
true. WDYT?

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout
  2013-08-01 13:43               ` Alex Bligh
@ 2013-08-01 14:14                 ` Paolo Bonzini
  2013-08-02 13:09                   ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Paolo Bonzini @ 2013-08-01 14:14 UTC (permalink / raw)
  To: Alex Bligh; +Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Stefan Hajnoczi, rth

 On Aug 01 2013, Alex Bligh wrote:
> Paolo,
> 
> >>@@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
> >> {
> >>     int ret;
> >>     uint32_t timeout = UINT32_MAX;
> >>+    int64_t timeout_ns;
> >>
> >>     if (nonblocking) {
> >>         timeout = 0;
> >>@@ -462,7 +474,21 @@ int main_loop_wait(int nonblocking)
> >>     slirp_pollfds_fill(gpollfds);
> >> # endif
> >>     qemu_iohandler_fill(gpollfds);
> >>-    ret = os_host_main_loop_wait(timeout);
> >>+
> >>+    if (timeout == UINT32_MAX) {
> >>+        timeout_ns = -1;
> >>+    } else {
> >>+        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
> >>+    }
> >>+
> >>+    timeout_ns = qemu_soonest_timeout(timeout_ns,
> >>+                                      qemu_clock_deadline_ns(rt_clock));
> >>+    timeout_ns = qemu_soonest_timeout(timeout_ns,
> >>+                                      qemu_clock_deadline_ns(vm_clock));
> >
> >This must not be included if use_icount.
> 
> Really? qemu_run_all_timers was running all 3 timers irrespective of
> use_icount, and doing a qemu_notify if anything expired, so I'm merely
> mimicking the existing behaviour here.

Maybe I'm misreading the code.  If it is a replacement of
qemu_next_alarm_deadline, then it is indeed omitting vm_clock.

> I'm not quite sure what use_icount does. Does vm_clock get disabled
> if it is set? (in which case it won't matter).

It doesn't count nanoseconds anymore.  The clock is updated by the
VCPU thread.  When the VCPU thread notices that the clock is past the
earliest timers, it does a qemu_notify_event().  That exits the g_poll
and qemu_run_all_timers then can process the callbacks.
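
In code terms this is the existing cpus.c fragment (simplified):

    if (use_icount && qemu_clock_deadline(vm_clock) <= 0) {
        qemu_notify_event();    /* wake g_poll so timers get run */
    }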

> The way it's done at the moment is that the QEMUTimerList user can
> create as many QEMUTimerLists as he wants. So AioContext asks for one
> of one type. It could equally ask for three - one of each type.
> 
> I think that's probably adequate.
> 
> >Once you do this, you get some complications due to more data structures,
> >but other code is simplified noticeably.  For example, you lose the
> >concept of a default timerlist (it's just the timerlist of the default
> >AioContext).
> 
> Yep - per the above that's really intrusive (I think I touched well over
> a hundred files). The problem is that lots of things refer to vm_clock
> to set a timer (when it's a clock so should use a timer list) and
> also to vm_clock to read the timer (which is a clock function).

Yes, that's fine.  You can still keep the shorter invocation,
but instead of using clock->timerlist it would use
qemu_aio_context->clocks[clock->type].

Related to this, a better name for the "full" functions taking
a timerlist could be simply timer_new_ns etc.  And I would remove
the allocation step for these functions.  It is shameful how little
we use qemu_free_timer, and removing allocation would "fix" the
problem completely for users of the QEMUTimerList-based functions.
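
For instance (hypothetical, not in any posted patch):

    /* The caller embeds the QEMUTimer in its own structure, so
     * there is no separate allocation and nothing to leak.
     */
    void timer_init(QEMUTimer *ts, QEMUTimerList *tl, int scale,
                    QEMUTimerCB *cb, void *opaque);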

It's already a convention to use qemu_* only for functions that use some
global state, for example qemu_notify_event() vs. aio_notify().

> >And because all timerlists have an AioContext,
> 
> Well old callers, particularly those not using an AioContext, would
> not have an AioContext would they?

It would be qemu_aio_context.

> I was trying hard to avoid anything having to iterate over all
> timerlists, and leave the timerlist to be per-thread where possible.
> This may well fail for the clock warp stuff. I probably need to do
> exactly the same as on qemu_clock_enable() here if use_icount is
> true. WDYT?

Yes.  This:

        qemu_mod_timer(icount_warp_timer, vm_clock_warp_start + deadline);

would have to use the earliest deadline of all vm_clock timerlists.

And this:

        if (qemu_clock_expired(vm_clock)) {
            qemu_notify_event();
        }

would also have to walk all timerlists for vm_clock, and notify
those that have expired.  But you would not need one warp timer
per timerlist.
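
i.e. something along these lines (a sketch, assuming each clock
keeps a list of its timerlists):

    QEMUTimerList *tl;
    QLIST_FOREACH(tl, &vm_clock->timerlists, list) {
        if (timerlist_expired(tl)) {
            qemu_notify_event();   /* or a per-timerlist notify */
        }
    }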

Paolo

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout
  2013-08-01 14:14                 ` Paolo Bonzini
@ 2013-08-02 13:09                   ` Alex Bligh
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-02 13:09 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, qemu-devel,
	Alex Bligh, rth

Paolo,

(apologies for taking a little time to reply to this one)

--On 1 August 2013 16:14:11 +0200 Paolo Bonzini <pbonzini@redhat.com> wrote:

>> >> @@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
>> >> {
>> >>     int ret;
>> >>     uint32_t timeout = UINT32_MAX;
>> >> +    int64_t timeout_ns;
>> >>
>> >>     if (nonblocking) {
>> >>         timeout = 0;
>> >> @@ -462,7 +474,21 @@ int main_loop_wait(int nonblocking)
>> >>     slirp_pollfds_fill(gpollfds);
>> >> # endif
>> >>     qemu_iohandler_fill(gpollfds);
>> >> -    ret = os_host_main_loop_wait(timeout);
>> >> +
>> >> +    if (timeout == UINT32_MAX) {
>> >> +        timeout_ns = -1;
>> >> +    } else {
>> >> +        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
>> >> +    }
>> >> +
>> >> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
>> >> +
>> >> qemu_clock_deadline_ns(rt_clock)); +    timeout_ns =
>> >> qemu_soonest_timeout(timeout_ns,
>> >> +
>> >> qemu_clock_deadline_ns(vm_clock));
>> >
>> > This must not be included if use_icount.
>>
>> Really? qemu_run_all_timers was running all 3 timers irrespective of
>> use_icount, and doing a qemu_notify if anything expired, so I'm merely
>> mimicking the existing behaviour here.
>
> Maybe I'm misreading the code.  If it is a replacement of
> qemu_next_alarm_deadline, then it is indeed omitting vm_clock.

Well, qemu_next_alarm_deadline calculated the time at which the
next alarm signal would be sent. What I'm calculating is the
wait timeout for poll(), either in the mainloop or in AioContext.

In the mainloop, what this appears to do is to ignore the vm_clock
(whether or not it is enabled) in calculating the timeout if
use_icount is set. However qemu_run_timers is running the timers
attached to the vm_clock whether or not use_icount is set.

If use_icount is set, should we be assuming vm_timers have an infinite
timeout for the purpose of calculating timeouts? What if the timer
has already expired (i.e. qemu_run_timers would run it immediately)?

In AioContext, there were previously no timers, so I'm not sure
whether or not this should be doing the same thing.

>> I'm not quite sure what use_icount does. Does vm_clock get disabled
>> if it is set? (in which case it won't matter).
>
> It doesn't count nanoseconds anymore.

"it" being the vm_timer clock source I presume.

> The clock is updated by the
> VCPU thread.  When the VCPU thread notices that the clock is past the
> earliest timers, it does a qemu_notify_event().  That exits the g_poll
> and qemu_run_all_timers then can process the callbacks.

So the first problem is that this will not cause the other threads
to have aio_notify called, which they should do.

The second question is whether this approach is consistent with using
a timeout for poll at all. I think the answer to that is "yes"
PROVIDED THAT the VCPU thread is never the one doing the poll,
or we'll go in with an infinite timeout.

I am presuming use_icount never changes at runtime? If so, I
think the way to address this is to add a 'lineartime' member
to QEMUClock, and set that false on the vm_clock QEMUClock if
use_icount is true. I can then ignore it when looping through
clock sources based on that rather than special casing vm_clock.
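
A sketch of what I mean ('lineartime' is hypothetical, nothing
implements it yet):

    /* set once at init, since use_icount is fixed at startup */
    vm_clock->lineartime = !use_icount;

    /* when computing a poll timeout, skip non-linear clocks */
    if (clock->lineartime) {
        timeout_ns = qemu_soonest_timeout(timeout_ns,
                                          qemu_clock_deadline_ns(clock));
    }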

>> Yep - per the above that's really intrusive (I think I touched well over
>> a hundred files). The problem is that lots of things refer to vm_clock
>> to set a timer (when it's a clock so should use a timer list) and
>> also to vm_clock to read the timer (which is a clock function).
>
> Yes, that's fine.  You can still keep the shorter invocation,
> but instead of using clock->timerlist it would use
> qemu_aio_context->clocks[clock->type].

Note that the aio context contains a QEMUTimerList and not
a clock. So it would have 3 QEMUTimerLists, and not 3 QEMUClocks.

I might see if there is a neat way to encapsulate these
(QEMUTimerListGroup or similar?) but yes I get the idea.
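
Perhaps just (a sketch; the v5 series below ends up with something
similar):

    typedef struct QEMUTimerListGroup {
        QEMUTimerList *tl[QEMU_CLOCK_MAX];   /* one per clock type */
    } QEMUTimerListGroup;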

Do you want this in a v5 or are we polishing the ball a bit
much here?

> Related to this, a better name for the "full" functions taking
> a timerlist could be simply timer_new_ns etc.  And I would remove
> the allocation step for these functions.  It is shameful how little
> we use qemu_free_timer, and removing allocation would "fix" the
> problem completely for users of the QEMUTimerList-based functions.
>
> It's already a convention to use qemu_* only for functions that use some
> global state, for example qemu_notify_event() vs. aio_notify().

Agree

>> > And because all timerlists have an AioContext,
>>
>> Well old callers, particularly those not using an AioContext, would
>> not have an AioContext would they?
>
> It would be qemu_aio_context.

If this were the case, then the timeout for the mainloop would be
the same as the timeout for the qemu_aio_context. Is that an improvement?

What about a (putative) thread that wants a timer but has no AioContext?

>> I was trying hard to avoid anything having to iterate over all
>> timerlists, and leave the timerlist to be per-thread where possible.
>> This may well fail for the clock warp stuff. I probably need to
>> exactly the same as on qemu_clock_enable() here if use_icount is
>> true. WDYT?
>
> Yes.  This:
>
>         qemu_mod_timer(icount_warp_timer, vm_clock_warp_start + deadline);
>
> would have to use the earliest deadline of all vm_clock timerlists.
>
> And this:
>
>         if (qemu_clock_expired(vm_clock)) {
>             qemu_notify_event();
>         }
>
> would also have to walk all timerlists for vm_clock, and notify
> those that have expired.  But you would not need one warp timer
> per timerlist.

OK looks like I need to review the patch looking for use_icount.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll
  2013-08-02 13:09                   ` Alex Bligh
@ 2013-08-04 18:09                     ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
                                         ` (16 more replies)
  0 siblings, 17 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

This patch series adds support for timers attached to an AioContext clock
which get called within aio_poll.

In doing so it removes alarm timers and moves to use ppoll where possible.

This patch set 'sort of' passes make check (see below for caveat)
including a new test harness for the aio timers, but has not been
tested much beyond that. In particular, the win32 changes have not
even been compile tested. Equally, alterations to use_icount
are untested.

Caveat: I have had to alter tests/test-aio.c so the following error
no longer occurs.

ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))

As far as I can tell, this check was incorrect, in that it checked
that aio_poll makes progress when in fact it should not make progress.
I fixed an issue where aio_poll was (as far as I can tell) wrongly
returning true on a timeout, and that generated this error.

Note also the comment on patch 15 in relation to a possible bug
in cpus.c.

Changes since v4:
* Rename qemu_timerlist_ functions to timerlist_ (per Paolo Bonzini)
* Rename qemu_timer_.*timerlist.* to timer_ (per Paolo Bonzini)
* Use enum for QEMUClockType
* Put clocks into an array; remove global variables
* Introduce QEMUTimerListGroup - a timerlist of each type
* Add a QEMUTimerListGroup to AioContext
* Use a callback on timer modification, rather than binding
  AioContext into the timerlist
* Make cpus.c iterate over all timerlists when it does a notify
* Make cpus.c icount timeout use soonest timeout
  across all timerlists

Not changed since v4:
* Paolo Bonzini suggested getting rid of the default timerlist
  from the clock. The replacement would be the default AioContext's
  timer lists. I am concerned about the lifetime of the default
  AioContext compared to existing timers that might use the default
  clocks. This could be changed at a later date after code audit.

Changes since v3:
* Split up QEMUClock and QEMUClock list
* Improve commenting
* Fix comment in vl.c
* Change test/test-aio.c to reflect correct behaviour in aio_poll.

Changes since v2:
* Reordered to remove alarm timers last
* Added prctl(PR_SET_TIMERSLACK, 1, ...)
* Renamed qemu_g_poll_ns to qemu_poll_ns
* Moved declaration of above & drop glib types
* Do not use a global list of qemu clocks
* Add AioContext * to QEMUClock
* Split up conversion to use ppoll and timers
* Indentation fix
* Fix aio_win32.c aio_poll to return progress
* aio_notify / qemu_notify when timers are modified
* change comment in deprecation of clock options

Alex Bligh (16):
  aio / timers: add qemu-timer.c utility functions
  aio / timers: add ppoll support with qemu_poll_ns
  aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer
    slack
  aio / timers: Make qemu_run_timers and qemu_run_all_timers return
    progress
  aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  aio / timers: Untangle include files
  aio / timers: Add QEMUTimerListGroup and helper functions
  aio / timers: Add QEMUTimerListGroup to AioContext
  aio / timers: Add a notify callback to QEMUTimerList
  aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  aio / timers: Convert aio_poll to use AioContext timers' deadline
  aio / timers: Convert mainloop to use timeout
  aio / timers: On timer modification, qemu_notify or aio_notify
  aio / timers: Use all timerlists in icount warp calculations
  aio / timers: Remove alarm timers
  aio / timers: Add test harness for AioContext timers

 aio-posix.c               |   20 +-
 aio-win32.c               |   22 +-
 async.c                   |   20 +-
 configure                 |   37 +++
 cpus.c                    |   44 ++-
 dma-helpers.c             |    1 +
 hw/dma/xilinx_axidma.c    |    1 +
 hw/timer/arm_timer.c      |    1 +
 hw/timer/grlib_gptimer.c  |    2 +
 hw/timer/imx_epit.c       |    1 +
 hw/timer/imx_gpt.c        |    1 +
 hw/timer/lm32_timer.c     |    1 +
 hw/timer/puv3_ost.c       |    1 +
 hw/timer/slavio_timer.c   |    1 +
 hw/timer/xilinx_timer.c   |    1 +
 hw/usb/hcd-uhci.c         |    1 +
 include/block/aio.h       |    4 +
 include/block/block_int.h |    1 +
 include/block/coroutine.h |    2 +
 include/qemu/timer.h      |  122 ++++++-
 main-loop.c               |   49 ++-
 migration-exec.c          |    1 +
 migration-fd.c            |    1 +
 migration-tcp.c           |    1 +
 migration-unix.c          |    1 +
 migration.c               |    1 +
 nbd.c                     |    1 +
 net/net.c                 |    1 +
 net/socket.c              |    1 +
 qemu-coroutine-io.c       |    1 +
 qemu-io-cmds.c            |    1 +
 qemu-nbd.c                |    1 +
 qemu-timer.c              |  803 ++++++++++++++++-----------------------------
 slirp/misc.c              |    1 +
 tests/test-aio.c          |  141 +++++++-
 tests/test-thread-pool.c  |    3 +
 thread-pool.c             |    1 +
 ui/vnc-auth-vencrypt.c    |    2 +-
 ui/vnc-ws.c               |    1 +
 vl.c                      |    4 +-
 40 files changed, 736 insertions(+), 564 deletions(-)

-- 
1.7.9.5

^ permalink raw reply	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 02/16] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
                                         ` (15 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add qemu_free_clock and expose qemu_new_clock and clock types.

Add utility functions to qemu-timer.c for nanosecond timing.

Add qemu_clock_deadline_ns to calculate deadlines to
nanosecond accuracy.

Add utility function qemu_soonest_timeout to calculate soonest deadline.

Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
milliseconds for when ppoll is not used.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |   17 ++++++++++++++
 qemu-timer.c         |   63 +++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 74 insertions(+), 6 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 9dd206c..6171db3 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -11,6 +11,10 @@
 #define SCALE_US 1000
 #define SCALE_NS 1
 
+#define QEMU_CLOCK_REALTIME 0
+#define QEMU_CLOCK_VIRTUAL  1
+#define QEMU_CLOCK_HOST     2
+
 typedef struct QEMUClock QEMUClock;
 typedef void QEMUTimerCB(void *opaque);
 
@@ -32,10 +36,14 @@ extern QEMUClock *vm_clock;
    the virtual clock. */
 extern QEMUClock *host_clock;
 
+QEMUClock *qemu_new_clock(int type);
+void qemu_free_clock(QEMUClock *clock);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
+int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int qemu_timeout_ns_to_ms(int64_t ns);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
@@ -63,6 +71,15 @@ int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
 void cpu_disable_ticks(void);
 
+static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
+{
+    /* we can abuse the fact that -1 (which means infinite) is a maximal
+     * value when cast to unsigned. As this is disgusting, it's kept in
+     * one inline function.
+     */
+    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;
+}
+
 static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
diff --git a/qemu-timer.c b/qemu-timer.c
index b2d95e2..3dfbdbf 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -40,10 +40,6 @@
 /***********************************************************/
 /* timers */
 
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST     2
-
 struct QEMUClock {
     QEMUTimer *active_timers;
 
@@ -231,7 +227,7 @@ QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
 
-static QEMUClock *qemu_new_clock(int type)
+QEMUClock *qemu_new_clock(int type)
 {
     QEMUClock *clock;
 
@@ -243,6 +239,11 @@ static QEMUClock *qemu_new_clock(int type)
     return clock;
 }
 
+void qemu_free_clock(QEMUClock *clock)
+{
+    g_free(clock);
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
@@ -268,7 +269,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->active_timers) {
+    if (clock->enabled && clock->active_timers) {
         delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
     }
     if (delta < 0) {
@@ -277,6 +278,56 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+/*
+ * As above, but return -1 for no deadline, and do not cap to 2^32
+ * as we know the result is always positive.
+ */
+
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    int64_t delta;
+
+    if (!clock->enabled || !clock->active_timers) {
+        return -1;
+    }
+
+    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+
+    if (delta <= 0) {
+        return 0;
+    }
+
+    return delta;
+}
+
+/* Transition function to convert a nanosecond timeout to ms
+ * This is used where a system does not support ppoll
+ */
+int qemu_timeout_ns_to_ms(int64_t ns)
+{
+    int64_t ms;
+    if (ns < 0) {
+        return -1;
+    }
+
+    if (!ns) {
+        return 0;
+    }
+
+    /* Always round up, because it's better to wait too long than to wait too
+     * little and effectively busy-wait
+     */
+    ms = (ns + SCALE_MS - 1) / SCALE_MS;
+
+    /* To avoid overflow problems, limit this to 2^31, i.e. approx 25 days */
+    if (ms > (int64_t) INT32_MAX) {
+        ms = INT32_MAX;
+    }
+
+    return (int) ms;
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 02/16] aio / timers: add ppoll support with qemu_poll_ns
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
                                         ` (14 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add qemu_poll_ns which works like g_poll but takes a nanosecond
timeout.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure            |   19 +++++++++++++++++++
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/configure b/configure
index 9e1cd19..b491c00 100755
--- a/configure
+++ b/configure
@@ -2801,6 +2801,22 @@ if compile_prog "" "" ; then
   dup3=yes
 fi
 
+# check for ppoll support
+ppoll=no
+cat > $TMPC << EOF
+#include <poll.h>
+
+int main(void)
+{
+    struct pollfd pfd = { .fd = 0, .events = 0, .revents = 0 };
+    ppoll(&pfd, 1, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  ppoll=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3792,6 +3808,9 @@ fi
 if test "$dup3" = "yes" ; then
   echo "CONFIG_DUP3=y" >> $config_host_mak
 fi
+if test "$ppoll" = "yes" ; then
+  echo "CONFIG_PPOLL=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 6171db3..f434ecb 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -44,6 +44,7 @@ int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
 int qemu_timeout_ns_to_ms(int64_t ns);
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
diff --git a/qemu-timer.c b/qemu-timer.c
index 3dfbdbf..b57bd78 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -37,6 +37,10 @@
 #include <mmsystem.h>
 #endif
 
+#ifdef CONFIG_PPOLL
+#include <poll.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -328,6 +332,26 @@ int qemu_timeout_ns_to_ms(int64_t ns)
 }
 
 
+/* qemu implementation of g_poll which uses a nanosecond timeout but is
+ * otherwise identical to g_poll
+ */
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout)
+{
+#ifdef CONFIG_PPOLL
+    if (timeout < 0) {
+        return ppoll((struct pollfd *)fds, nfds, NULL, NULL);
+    } else {
+        struct timespec ts;
+        ts.tv_sec = timeout / 1000000000LL;
+        ts.tv_nsec = timeout % 1000000000LL;
+        return ppoll((struct pollfd *)fds, nfds, &ts, NULL);
+    }
+#else
+    return g_poll(fds, nfds, qemu_timeout_ns_to_ms(timeout));
+#endif
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 02/16] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
                                         ` (13 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Where supported, called prctl(PR_SET_TIMERSLACK, 1, ...) to
set one nanosecond timer slack to increase precision of timer
calls.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure    |   18 ++++++++++++++++++
 qemu-timer.c |    7 +++++++
 2 files changed, 25 insertions(+)

diff --git a/configure b/configure
index b491c00..c8e39bc 100755
--- a/configure
+++ b/configure
@@ -2817,6 +2817,21 @@ if compile_prog "" "" ; then
   ppoll=yes
 fi
 
+# check for prctl(PR_SET_TIMERSLACK , ... ) support
+prctl_pr_set_timerslack=no
+cat > $TMPC << EOF
+#include <sys/prctl.h>
+
+int main(void)
+{
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  prctl_pr_set_timerslack=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3811,6 +3826,9 @@ fi
 if test "$ppoll" = "yes" ; then
   echo "CONFIG_PPOLL=y" >> $config_host_mak
 fi
+if test "$prctl_pr_set_timerslack" = "yes" ; then
+  echo "CONFIG_PRCTL_PR_SET_TIMERSLACK=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/qemu-timer.c b/qemu-timer.c
index b57bd78..a8b270f 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -41,6 +41,10 @@
 #include <poll.h>
 #endif
 
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+#include <sys/prctl.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -512,6 +516,9 @@ void init_clocks(void)
         vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
         host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
     }
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+#endif
 }
 
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (2 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
                                         ` (12 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Make qemu_run_timers and qemu_run_all_timers return progress
so that aio_poll etc. can determine whether a timer has been
run.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    4 ++--
 qemu-timer.c         |   18 ++++++++++++------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index f434ecb..a1f2ac8 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -62,8 +62,8 @@ bool qemu_timer_pending(QEMUTimer *ts);
 bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
-void qemu_run_timers(QEMUClock *clock);
-void qemu_run_all_timers(void);
+bool qemu_run_timers(QEMUClock *clock);
+bool qemu_run_all_timers(void);
 void configure_alarms(char const *opt);
 void init_clocks(void);
 int init_timer_alarm(void);
diff --git a/qemu-timer.c b/qemu-timer.c
index a8b270f..714bc92 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -451,13 +451,14 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-void qemu_run_timers(QEMUClock *clock)
+bool qemu_run_timers(QEMUClock *clock)
 {
     QEMUTimer *ts;
     int64_t current_time;
+    bool progress = false;
    
     if (!clock->enabled)
-        return;
+        return progress;
 
     current_time = qemu_get_clock_ns(clock);
     for(;;) {
@@ -471,7 +472,9 @@ void qemu_run_timers(QEMUClock *clock)
 
         /* run the callback (the timer list can be modified) */
         ts->cb(ts->opaque);
+        progress = true;
     }
+    return progress;
 }
 
 int64_t qemu_get_clock_ns(QEMUClock *clock)
@@ -526,20 +529,23 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
     return qemu_timer_pending(ts) ? ts->expire_time : -1;
 }
 
-void qemu_run_all_timers(void)
+bool qemu_run_all_timers(void)
 {
+    bool progress = false;
     alarm_timer->pending = false;
 
     /* vm time timers */
-    qemu_run_timers(vm_clock);
-    qemu_run_timers(rt_clock);
-    qemu_run_timers(host_clock);
+    progress |= qemu_run_timers(vm_clock);
+    progress |= qemu_run_timers(rt_clock);
+    progress |= qemu_run_timers(host_clock);
 
     /* rearm timer, if not periodic */
     if (alarm_timer->expired) {
         alarm_timer->expired = false;
         qemu_rearm_alarm_timer(alarm_timer);
     }
+
+    return progress;
 }
 
 #ifdef _WIN32
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (3 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files Alex Bligh
                                         ` (11 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Split QEMUClock into QEMUClock and QEMUTimerList so that we can
have more than one QEMUTimerList associated with the same clock.

Introduce a default_timerlist concept and make existing
qemu_clock_* calls that actually should operate on a QEMUTimerList
call the relevant QEMUTimerList implementations, using the clock's
default timerlist. This vastly reduces the invasiveness of this
change and means the API stays constant for existing users.

Introduce a list of QEMUTimerLists associated with each clock
so that reenabling the clock can cause all the notifiers
to be called. Note the code to do the notifications is added
in a later patch.

Switch QEMUClockType to an enum. Remove global variables vm_clock,
host_clock and rt_clock and add compatibility defines. Do not
fix qemu_next_alarm_deadline as it's going to be deleted.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |   91 +++++++++++++++++++++++--
 qemu-timer.c         |  185 ++++++++++++++++++++++++++++++++++++++------------
 2 files changed, 224 insertions(+), 52 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index a1f2ac8..465eee7 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -11,38 +11,65 @@
 #define SCALE_US 1000
 #define SCALE_NS 1
 
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST     2
+typedef enum {
+    QEMU_CLOCK_REALTIME = 0,
+    QEMU_CLOCK_VIRTUAL = 1,
+    QEMU_CLOCK_HOST = 2,
+    QEMU_CLOCK_MAX
+} QEMUClockType;
 
 typedef struct QEMUClock QEMUClock;
+typedef struct QEMUTimerList QEMUTimerList;
 typedef void QEMUTimerCB(void *opaque);
 
+extern QEMUClock *QEMUClocks[QEMU_CLOCK_MAX];
+
+static inline QEMUClock *qemu_get_clock(QEMUClockType type)
+{
+    return QEMUClocks[type];
+}
+
+/* These three clocks are maintained here with separate variable
+   names for compatibility only.
+*/
+
 /* The real time clock should be used only for stuff which does not
    change the virtual machine state, as it is run even if the virtual
    machine is stopped. The real time clock has a frequency of 1000
    Hz. */
-extern QEMUClock *rt_clock;
+#define rt_clock (qemu_get_clock(QEMU_CLOCK_REALTIME))
 
 /* The virtual clock is only run during the emulation. It is stopped
    when the virtual machine is stopped. Virtual timers use a high
    precision clock, usually cpu cycles (use ticks_per_sec). */
-extern QEMUClock *vm_clock;
+#define vm_clock (qemu_get_clock(QEMU_CLOCK_VIRTUAL))
 
 /* The host clock should be use for device models that emulate accurate
    real time sources. It will continue to run when the virtual machine
    is suspended, and it will reflect system time changes the host may
    undergo (e.g. due to NTP). The host clock has the same precision as
    the virtual clock. */
-extern QEMUClock *host_clock;
+#define host_clock (qemu_get_clock(QEMU_CLOCK_HOST))
 
-QEMUClock *qemu_new_clock(int type);
+QEMUClock *qemu_new_clock(QEMUClockType type);
 void qemu_free_clock(QEMUClock *clock);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+bool qemu_clock_use_for_deadline(QEMUClock *clock);
+QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
+
+QEMUTimerList *timerlist_new(QEMUClockType type);
+void timerlist_free(QEMUTimerList *tl);
+int64_t timerlist_has_timers(QEMUTimerList *tl);
+int64_t timerlist_expired(QEMUTimerList *tl);
+int64_t timerlist_deadline(QEMUTimerList *tl);
+int64_t timerlist_deadline_ns(QEMUTimerList *tl);
+QEMUClock *timerlist_get_clock(QEMUTimerList *tl);
+bool timerlist_run_timers(QEMUTimerList *tl);
+
 int qemu_timeout_ns_to_ms(int64_t ns);
 int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
@@ -54,6 +81,8 @@ void qemu_unregister_clock_reset_notifier(QEMUClock *clock,
 
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque);
+QEMUTimer *timer_new(QEMUTimerList *tl, int scale,
+                     QEMUTimerCB *cb, void *opaque);
 void qemu_free_timer(QEMUTimer *ts);
 void qemu_del_timer(QEMUTimer *ts);
 void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time);
@@ -62,6 +91,42 @@ bool qemu_timer_pending(QEMUTimer *ts);
 bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
+/* New format calling conventions for timers */
+static inline void timer_free(QEMUTimer *ts)
+{
+    qemu_free_timer(ts);
+}
+
+static inline void timer_del(QEMUTimer *ts)
+{
+    qemu_del_timer(ts);
+}
+
+static inline void timer_mod_ns(QEMUTimer *ts, int64_t expire_time)
+{
+    qemu_mod_timer_ns(ts, expire_time);
+}
+
+static inline void timer_mod(QEMUTimer *ts, int64_t expire_timer)
+{
+    qemu_mod_timer(ts, expire_timer);
+}
+
+static inline bool timer_pending(QEMUTimer *ts)
+{
+    return qemu_timer_pending(ts);
+}
+
+static inline bool timer_expired(QEMUTimer *timer_head, int64_t current_time)
+{
+    return qemu_timer_expired(timer_head, current_time);
+}
+
+static inline uint64_t timer_expire_time_ns(QEMUTimer *ts)
+{
+    return qemu_timer_expire_time_ns(ts);
+}
+
 bool qemu_run_timers(QEMUClock *clock);
 bool qemu_run_all_timers(void);
 void configure_alarms(char const *opt);
@@ -87,12 +152,24 @@ static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
     return qemu_new_timer(clock, SCALE_NS, cb, opaque);
 }
 
+static inline QEMUTimer *timer_new_ns(QEMUTimerList *tl, QEMUTimerCB *cb,
+                                      void *opaque)
+{
+    return timer_new(tl, SCALE_NS, cb, opaque);
+}
+
 static inline QEMUTimer *qemu_new_timer_ms(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
     return qemu_new_timer(clock, SCALE_MS, cb, opaque);
 }
 
+static inline QEMUTimer *timer_new_ms(QEMUTimerList *tl, QEMUTimerCB *cb,
+                                      void *opaque)
+{
+    return timer_new(tl, SCALE_MS, cb, opaque);
+}
+
 static inline int64_t qemu_get_clock_ms(QEMUClock *clock)
 {
     return qemu_get_clock_ns(clock) / SCALE_MS;
diff --git a/qemu-timer.c b/qemu-timer.c
index 714bc92..231953a 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -49,18 +49,34 @@
 /* timers */
 
 struct QEMUClock {
-    QEMUTimer *active_timers;
+    QEMUTimerList *default_timerlist;
+    QLIST_HEAD(, QEMUTimerList) timerlists;
 
     NotifierList reset_notifiers;
     int64_t last;
 
-    int type;
+    QEMUClockType type;
     bool enabled;
 };
 
+QEMUClock *QEMUClocks[QEMU_CLOCK_MAX];
+
+/* A QEMUTimerList is a list of timers attached to a clock. More
+ * than one QEMUTimerList can be attached to each clock, for instance
+ * used by different AioContexts / threads. Each clock also has
+ * a list of the QEMUTimerLists associated with it, in order that
+ * reenabling the clock can call all the notifiers.
+ */
+
+struct QEMUTimerList {
+    QEMUClock *clock;
+    QEMUTimer *active_timers;
+    QLIST_ENTRY(QEMUTimerList) list;
+};
+
 struct QEMUTimer {
     int64_t expire_time;	/* in nanoseconds */
-    QEMUClock *clock;
+    QEMUTimerList *tl;
     QEMUTimerCB *cb;
     void *opaque;
     QEMUTimer *next;
@@ -93,21 +109,25 @@ static int64_t qemu_next_alarm_deadline(void)
 {
     int64_t delta = INT64_MAX;
     int64_t rtdelta;
+    int64_t hdelta;
 
-    if (!use_icount && vm_clock->enabled && vm_clock->active_timers) {
-        delta = vm_clock->active_timers->expire_time -
-                     qemu_get_clock_ns(vm_clock);
+    if (!use_icount && vm_clock->enabled &&
+        vm_clock->default_timerlist->active_timers) {
+        delta = vm_clock->default_timerlist->active_timers->expire_time -
+            qemu_get_clock_ns(vm_clock);
     }
-    if (host_clock->enabled && host_clock->active_timers) {
-        int64_t hdelta = host_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(host_clock);
+    if (host_clock->enabled &&
+        host_clock->default_timerlist->active_timers) {
+        hdelta = host_clock->default_timerlist->active_timers->expire_time -
+            qemu_get_clock_ns(host_clock);
         if (hdelta < delta) {
             delta = hdelta;
         }
     }
-    if (rt_clock->enabled && rt_clock->active_timers) {
-        rtdelta = (rt_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(rt_clock));
+    if (rt_clock->enabled &&
+        rt_clock->default_timerlist->active_timers) {
+        rtdelta = (rt_clock->default_timerlist->active_timers->expire_time -
+                   qemu_get_clock_ns(rt_clock));
         if (rtdelta < delta) {
             delta = rtdelta;
         }
@@ -231,11 +251,33 @@ next:
     }
 }
 
-QEMUClock *rt_clock;
-QEMUClock *vm_clock;
-QEMUClock *host_clock;
+static QEMUTimerList *timerlist_new_from_clock(QEMUClock *clock)
+{
+    QEMUTimerList *tl;
+
+    tl = g_malloc0(sizeof(QEMUTimerList));
+    tl->clock = clock;
+    QLIST_INSERT_HEAD(&clock->timerlists, tl, list);
+    return tl;
+}
+
+QEMUTimerList *timerlist_new(QEMUClockType type)
+{
+    return timerlist_new_from_clock(qemu_get_clock(type));
+}
 
-QEMUClock *qemu_new_clock(int type)
+void timerlist_free(QEMUTimerList *tl)
+{
+    if (tl->clock) {
+        QLIST_REMOVE(tl, list);
+        if (tl->clock->default_timerlist == tl) {
+            tl->clock->default_timerlist = NULL;
+        }
+    }
+    g_free(tl);
+}
+
+QEMUClock *qemu_new_clock(QEMUClockType type)
 {
     QEMUClock *clock;
 
@@ -244,14 +286,21 @@ QEMUClock *qemu_new_clock(int type)
     clock->enabled = true;
     clock->last = INT64_MIN;
     notifier_list_init(&clock->reset_notifiers);
+    clock->default_timerlist = timerlist_new_from_clock(clock);
     return clock;
 }
 
 void qemu_free_clock(QEMUClock *clock)
 {
+    timerlist_free(clock->default_timerlist);
     g_free(clock);
 }
 
+bool qemu_clock_use_for_deadline(QEMUClock *clock)
+{
+    return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
@@ -261,24 +310,34 @@ void qemu_clock_enable(QEMUClock *clock, bool enabled)
     }
 }
 
+int64_t timerlist_has_timers(QEMUTimerList *tl)
+{
+    return !!tl->active_timers;
+}
+
 int64_t qemu_clock_has_timers(QEMUClock *clock)
 {
-    return !!clock->active_timers;
+    return timerlist_has_timers(clock->default_timerlist);
+}
+
+int64_t timerlist_expired(QEMUTimerList *tl)
+{
+    return (tl->active_timers &&
+            tl->active_timers->expire_time < qemu_get_clock_ns(tl->clock));
 }
 
 int64_t qemu_clock_expired(QEMUClock *clock)
 {
-    return (clock->active_timers &&
-            clock->active_timers->expire_time < qemu_get_clock_ns(clock));
+    return timerlist_expired(clock->default_timerlist);
 }
 
-int64_t qemu_clock_deadline(QEMUClock *clock)
+int64_t timerlist_deadline(QEMUTimerList *tl)
 {
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->enabled && clock->active_timers) {
-        delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+    if (tl->clock->enabled && tl->active_timers) {
+        delta = tl->active_timers->expire_time - qemu_get_clock_ns(tl->clock);
     }
     if (delta < 0) {
         delta = 0;
@@ -286,20 +345,25 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+int64_t qemu_clock_deadline(QEMUClock *clock)
+{
+    return timerlist_deadline(clock->default_timerlist);
+}
+
 /*
  * As above, but return -1 for no deadline, and do not cap to 2^32
  * as we know the result is always positive.
  */
 
-int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+int64_t timerlist_deadline_ns(QEMUTimerList *tl)
 {
     int64_t delta;
 
-    if (!clock->enabled || !clock->active_timers) {
+    if (!tl->clock->enabled || !tl->active_timers) {
         return -1;
     }
 
-    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+    delta = tl->active_timers->expire_time - qemu_get_clock_ns(tl->clock);
 
     if (delta <= 0) {
         return 0;
@@ -308,6 +372,21 @@ int64_t qemu_clock_deadline_ns(QEMUClock *clock)
     return delta;
 }
 
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    return timerlist_deadline_ns(clock->default_timerlist);
+}
+
+QEMUClock *timerlist_get_clock(QEMUTimerList *tl)
+{
+    return tl->clock;
+}
+
+QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock)
+{
+    return clock->default_timerlist;
+}
+
 /* Transition function to convert a nanosecond timeout to ms
  * This is used where a system does not support ppoll
  */
@@ -356,19 +435,26 @@ int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout)
 }
 
 
-QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
-                          QEMUTimerCB *cb, void *opaque)
+QEMUTimer *timer_new(QEMUTimerList *tl, int scale,
+                     QEMUTimerCB *cb, void *opaque)
 {
     QEMUTimer *ts;
 
     ts = g_malloc0(sizeof(QEMUTimer));
-    ts->clock = clock;
+    ts->tl = tl;
     ts->cb = cb;
     ts->opaque = opaque;
     ts->scale = scale;
     return ts;
 }
 
+QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
+                          QEMUTimerCB *cb, void *opaque)
+{
+    return timer_new(clock->default_timerlist,
+                     scale, cb, opaque);
+}
+
 void qemu_free_timer(QEMUTimer *ts)
 {
     g_free(ts);
@@ -381,7 +467,7 @@ void qemu_del_timer(QEMUTimer *ts)
 
     /* NOTE: this code must be signal safe because
        qemu_timer_expired() can be called from a signal. */
-    pt = &ts->clock->active_timers;
+    pt = &ts->tl->active_timers;
     for(;;) {
         t = *pt;
         if (!t)
@@ -405,7 +491,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
     /* add the timer in the sorted list */
     /* NOTE: this code must be signal safe because
        qemu_timer_expired() can be called from a signal. */
-    pt = &ts->clock->active_timers;
+    pt = &ts->tl->active_timers;
     for(;;) {
         t = *pt;
         if (!qemu_timer_expired_ns(t, expire_time)) {
@@ -418,12 +504,12 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
     *pt = ts;
 
     /* Rearm if necessary  */
-    if (pt == &ts->clock->active_timers) {
+    if (pt == &ts->tl->active_timers) {
         if (!alarm_timer->pending) {
             qemu_rearm_alarm_timer(alarm_timer);
         }
         /* Interrupt execution to force deadline recalculation.  */
-        qemu_clock_warp(ts->clock);
+        qemu_clock_warp(ts->tl->clock);
         if (use_icount) {
             qemu_notify_event();
         }
@@ -438,7 +524,7 @@ void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
 bool qemu_timer_pending(QEMUTimer *ts)
 {
     QEMUTimer *t;
-    for (t = ts->clock->active_timers; t != NULL; t = t->next) {
+    for (t = ts->tl->active_timers; t != NULL; t = t->next) {
         if (t == ts) {
             return true;
         }
@@ -451,23 +537,24 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-bool qemu_run_timers(QEMUClock *clock)
+bool timerlist_run_timers(QEMUTimerList *tl)
 {
     QEMUTimer *ts;
     int64_t current_time;
     bool progress = false;
    
-    if (!clock->enabled)
+    if (!tl->clock->enabled) {
         return progress;
+    }
 
-    current_time = qemu_get_clock_ns(clock);
+    current_time = qemu_get_clock_ns(tl->clock);
     for(;;) {
-        ts = clock->active_timers;
+        ts = tl->active_timers;
         if (!qemu_timer_expired_ns(ts, current_time)) {
             break;
         }
         /* remove timer from the list before calling the callback */
-        clock->active_timers = ts->next;
+        tl->active_timers = ts->next;
         ts->next = NULL;
 
         /* run the callback (the timer list can be modified) */
@@ -477,6 +564,11 @@ bool qemu_run_timers(QEMUClock *clock)
     return progress;
 }
 
+bool qemu_run_timers(QEMUClock *clock)
+{
+    return timerlist_run_timers(clock->default_timerlist);
+}
+
 int64_t qemu_get_clock_ns(QEMUClock *clock)
 {
     int64_t now, last;
@@ -514,11 +606,13 @@ void qemu_unregister_clock_reset_notifier(QEMUClock *clock, Notifier *notifier)
 
 void init_clocks(void)
 {
-    if (!rt_clock) {
-        rt_clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
-        vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
-        host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
+    QEMUClockType type;
+    for (type = 0; type < QEMU_CLOCK_MAX; type++) {
+        if (!QEMUClocks[type]) {
+            QEMUClocks[type] = qemu_new_clock(type);
+        }
     }
+
 #ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
     prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
 #endif
@@ -535,9 +629,10 @@ bool qemu_run_all_timers(void)
     alarm_timer->pending = false;
 
     /* vm time timers */
-    progress |= qemu_run_timers(vm_clock);
-    progress |= qemu_run_timers(rt_clock);
-    progress |= qemu_run_timers(host_clock);
+    QEMUClockType type;
+    for (type = 0; type < QEMU_CLOCK_MAX; type++) {
+        progress |= qemu_run_timers(qemu_get_clock(type));
+    }
 
     /* rearm timer, if not periodic */
     if (alarm_timer->expired) {
-- 
1.7.9.5

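A minimal usage sketch of the QEMUClock / QEMUTimerList split above
(illustrative only; it uses just the helpers this patch adds, and the
10ms expiry is arbitrary):

    #include "qemu/timer.h"

    static void expired_cb(void *opaque)
    {
        /* called from timerlist_run_timers() once the deadline passes */
    }

    static void timerlist_sketch(void)
    {
        /* timers now hang off a QEMUTimerList rather than the clock */
        QEMUTimerList *tl = timerlist_new(QEMU_CLOCK_REALTIME);
        QEMUTimer *t = timer_new_ns(tl, expired_cb, NULL);
        int64_t now = qemu_get_clock_ns(timerlist_get_clock(tl));

        qemu_mod_timer_ns(t, now + 10 * SCALE_MS);  /* fire in ~10ms */

        /* ... later, from whatever loop owns this list ... */
        timerlist_run_timers(tl);

        qemu_del_timer(t);
        qemu_free_timer(t);
        timerlist_free(tl);
    }
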
^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (4 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-06 10:10                         ` Stefan Hajnoczi
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 07/16] aio / timers: Add QEMUTimerListGroup and helper functions Alex Bligh
                                         ` (10 subsequent siblings)
  16 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

include/qemu/timer.h has no need to include main-loop.h, and
doing so causes an issue for the next patch. Unfortunately,
various files assume that including timer.h will pull in
main-loop.h. Untangle this mess.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
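The only non-mechanical change is in include/block/coroutine.h, which
gains a forward declaration instead of a new include. A minimal sketch
of that idiom (illustrative; the prototype is a hypothetical example):

    /* A pointer to an incomplete struct type is sufficient for
     * declarations, so includers of this header never need the full
     * AioContext definition or the header that provides it. */
    typedef struct AioContext AioContext;

    void example_takes_ctx(AioContext *ctx);    /* hypothetical */
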
 dma-helpers.c             |    1 +
 hw/dma/xilinx_axidma.c    |    1 +
 hw/timer/arm_timer.c      |    1 +
 hw/timer/grlib_gptimer.c  |    1 +
 hw/timer/imx_epit.c       |    1 +
 hw/timer/imx_gpt.c        |    1 +
 hw/timer/lm32_timer.c     |    1 +
 hw/timer/puv3_ost.c       |    1 +
 hw/timer/slavio_timer.c   |    1 +
 hw/timer/xilinx_timer.c   |    1 +
 hw/usb/hcd-uhci.c         |    1 +
 include/block/block_int.h |    1 +
 include/block/coroutine.h |    2 ++
 include/qemu/timer.h      |    1 -
 migration-exec.c          |    1 +
 migration-fd.c            |    1 +
 migration-tcp.c           |    1 +
 migration-unix.c          |    1 +
 migration.c               |    1 +
 nbd.c                     |    1 +
 net/net.c                 |    1 +
 net/socket.c              |    1 +
 qemu-coroutine-io.c       |    1 +
 qemu-io-cmds.c            |    1 +
 qemu-nbd.c                |    1 +
 slirp/misc.c              |    1 +
 thread-pool.c             |    1 +
 ui/vnc-auth-vencrypt.c    |    2 +-
 ui/vnc-ws.c               |    1 +
 29 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 499550f..c9620a5 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -11,6 +11,7 @@
 #include "trace.h"
 #include "qemu/range.h"
 #include "qemu/thread.h"
+#include "qemu/main-loop.h"
 
 /* #define DEBUG_IOMMU */
 
diff --git a/hw/dma/xilinx_axidma.c b/hw/dma/xilinx_axidma.c
index a48e3ba..59e8e35 100644
--- a/hw/dma/xilinx_axidma.c
+++ b/hw/dma/xilinx_axidma.c
@@ -27,6 +27,7 @@
 #include "hw/ptimer.h"
 #include "qemu/log.h"
 #include "qapi/qmp/qerror.h"
+#include "qemu/main-loop.h"
 
 #include "hw/stream.h"
 
diff --git a/hw/timer/arm_timer.c b/hw/timer/arm_timer.c
index 798a8da..eac625a 100644
--- a/hw/timer/arm_timer.c
+++ b/hw/timer/arm_timer.c
@@ -12,6 +12,7 @@
 #include "qemu-common.h"
 #include "hw/qdev.h"
 #include "hw/ptimer.h"
+#include "qemu/main-loop.h"
 
 /* Common timer implementation.  */
 
diff --git a/hw/timer/grlib_gptimer.c b/hw/timer/grlib_gptimer.c
index 37ba47d..9aee78f 100644
--- a/hw/timer/grlib_gptimer.c
+++ b/hw/timer/grlib_gptimer.c
@@ -25,6 +25,7 @@
 #include "hw/sysbus.h"
 #include "qemu/timer.h"
 #include "hw/ptimer.h"
+#include "qemu/main-loop.h"
 
 #include "trace.h"
 
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
index e24e0c4..3bba937 100644
--- a/hw/timer/imx_epit.c
+++ b/hw/timer/imx_epit.c
@@ -18,6 +18,7 @@
 #include "hw/ptimer.h"
 #include "hw/sysbus.h"
 #include "hw/arm/imx.h"
+#include "qemu/main-loop.h"
 
 #define TYPE_IMX_EPIT "imx.epit"
 
diff --git a/hw/timer/imx_gpt.c b/hw/timer/imx_gpt.c
index 97fbebb..4063a62 100644
--- a/hw/timer/imx_gpt.c
+++ b/hw/timer/imx_gpt.c
@@ -18,6 +18,7 @@
 #include "hw/ptimer.h"
 #include "hw/sysbus.h"
 #include "hw/arm/imx.h"
+#include "qemu/main-loop.h"
 
 #define TYPE_IMX_GPT "imx.gpt"
 
diff --git a/hw/timer/lm32_timer.c b/hw/timer/lm32_timer.c
index 016dade..4ee22f0 100644
--- a/hw/timer/lm32_timer.c
+++ b/hw/timer/lm32_timer.c
@@ -27,6 +27,7 @@
 #include "qemu/timer.h"
 #include "hw/ptimer.h"
 #include "qemu/error-report.h"
+#include "qemu/main-loop.h"
 
 #define DEFAULT_FREQUENCY (50*1000000)
 
diff --git a/hw/timer/puv3_ost.c b/hw/timer/puv3_ost.c
index 63f2c9f..12a2346 100644
--- a/hw/timer/puv3_ost.c
+++ b/hw/timer/puv3_ost.c
@@ -10,6 +10,7 @@
  */
 #include "hw/sysbus.h"
 #include "hw/ptimer.h"
+#include "qemu/main-loop.h"
 
 #undef DEBUG_PUV3
 #include "hw/unicore32/puv3.h"
diff --git a/hw/timer/slavio_timer.c b/hw/timer/slavio_timer.c
index 7f844d7..f9e4a0c 100644
--- a/hw/timer/slavio_timer.c
+++ b/hw/timer/slavio_timer.c
@@ -27,6 +27,7 @@
 #include "hw/ptimer.h"
 #include "hw/sysbus.h"
 #include "trace.h"
+#include "qemu/main-loop.h"
 
 /*
  * Registers of hardware timer in sun4m.
diff --git a/hw/timer/xilinx_timer.c b/hw/timer/xilinx_timer.c
index ee53834..b3f25e6 100644
--- a/hw/timer/xilinx_timer.c
+++ b/hw/timer/xilinx_timer.c
@@ -25,6 +25,7 @@
 #include "hw/sysbus.h"
 #include "hw/ptimer.h"
 #include "qemu/log.h"
+#include "qemu/main-loop.h"
 
 #define D(x)
 
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 066072e..76a3cd9 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -32,6 +32,7 @@
 #include "qemu/iov.h"
 #include "sysemu/dma.h"
 #include "trace.h"
+#include "qemu/main-loop.h"
 
 //#define DEBUG
 //#define DEBUG_DUMP_DATA
diff --git a/include/block/block_int.h b/include/block/block_int.h
index c6ac871..c003196 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -34,6 +34,7 @@
 #include "monitor/monitor.h"
 #include "qemu/hbitmap.h"
 #include "block/snapshot.h"
+#include "qemu/main-loop.h"
 
 #define BLOCK_FLAG_ENCRYPT          1
 #define BLOCK_FLAG_COMPAT6          4
diff --git a/include/block/coroutine.h b/include/block/coroutine.h
index 377805a..a1594a1 100644
--- a/include/block/coroutine.h
+++ b/include/block/coroutine.h
@@ -19,6 +19,8 @@
 #include "qemu/queue.h"
 #include "qemu/timer.h"
 
+typedef struct AioContext AioContext;
+
 /**
  * Coroutines are a mechanism for stack switching and can be used for
  * cooperative userspace threading.  These functions provide a simple but
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 465eee7..9f1ff95 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -2,7 +2,6 @@
 #define QEMU_TIMER_H
 
 #include "qemu-common.h"
-#include "qemu/main-loop.h"
 #include "qemu/notify.h"
 
 /* timers */
diff --git a/migration-exec.c b/migration-exec.c
index deab4e3..4790247 100644
--- a/migration-exec.c
+++ b/migration-exec.c
@@ -17,6 +17,7 @@
 
 #include "qemu-common.h"
 #include "qemu/sockets.h"
+#include "qemu/main-loop.h"
 #include "migration/migration.h"
 #include "migration/qemu-file.h"
 #include "block/block.h"
diff --git a/migration-fd.c b/migration-fd.c
index 3d4613c..d2e523a 100644
--- a/migration-fd.c
+++ b/migration-fd.c
@@ -14,6 +14,7 @@
  */
 
 #include "qemu-common.h"
+#include "qemu/main-loop.h"
 #include "qemu/sockets.h"
 #include "migration/migration.h"
 #include "monitor/monitor.h"
diff --git a/migration-tcp.c b/migration-tcp.c
index b20ee58..782572d 100644
--- a/migration-tcp.c
+++ b/migration-tcp.c
@@ -18,6 +18,7 @@
 #include "migration/migration.h"
 #include "migration/qemu-file.h"
 #include "block/block.h"
+#include "qemu/main-loop.h"
 
 //#define DEBUG_MIGRATION_TCP
 
diff --git a/migration-unix.c b/migration-unix.c
index 94b7022..651fc5b 100644
--- a/migration-unix.c
+++ b/migration-unix.c
@@ -15,6 +15,7 @@
 
 #include "qemu-common.h"
 #include "qemu/sockets.h"
+#include "qemu/main-loop.h"
 #include "migration/migration.h"
 #include "migration/qemu-file.h"
 #include "block/block.h"
diff --git a/migration.c b/migration.c
index 9f5a423..99a9869 100644
--- a/migration.c
+++ b/migration.c
@@ -14,6 +14,7 @@
  */
 
 #include "qemu-common.h"
+#include "qemu/main-loop.h"
 #include "migration/migration.h"
 #include "monitor/monitor.h"
 #include "migration/qemu-file.h"
diff --git a/nbd.c b/nbd.c
index 2606403..0fd0583 100644
--- a/nbd.c
+++ b/nbd.c
@@ -38,6 +38,7 @@
 
 #include "qemu/sockets.h"
 #include "qemu/queue.h"
+#include "qemu/main-loop.h"
 
 //#define DEBUG_NBD
 
diff --git a/net/net.c b/net/net.c
index c0d61bf..1148592 100644
--- a/net/net.c
+++ b/net/net.c
@@ -36,6 +36,7 @@
 #include "qmp-commands.h"
 #include "hw/qdev.h"
 #include "qemu/iov.h"
+#include "qemu/main-loop.h"
 #include "qapi-visit.h"
 #include "qapi/opts-visitor.h"
 #include "qapi/dealloc-visitor.h"
diff --git a/net/socket.c b/net/socket.c
index 87af1d3..e61309d 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -31,6 +31,7 @@
 #include "qemu/option.h"
 #include "qemu/sockets.h"
 #include "qemu/iov.h"
+#include "qemu/main-loop.h"
 
 typedef struct NetSocketState {
     NetClientState nc;
diff --git a/qemu-coroutine-io.c b/qemu-coroutine-io.c
index c4df35a..054ca70 100644
--- a/qemu-coroutine-io.c
+++ b/qemu-coroutine-io.c
@@ -26,6 +26,7 @@
 #include "qemu/sockets.h"
 #include "block/coroutine.h"
 #include "qemu/iov.h"
+#include "qemu/main-loop.h"
 
 ssize_t coroutine_fn
 qemu_co_sendv_recvv(int sockfd, struct iovec *iov, unsigned iov_cnt,
diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
index ffbcf31..f91b6c4 100644
--- a/qemu-io-cmds.c
+++ b/qemu-io-cmds.c
@@ -10,6 +10,7 @@
 
 #include "qemu-io.h"
 #include "block/block_int.h"
+#include "qemu/main-loop.h"
 
 #define CMD_NOFILE_OK   0x01
 
diff --git a/qemu-nbd.c b/qemu-nbd.c
index 9c31d45..f044546 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -19,6 +19,7 @@
 #include "qemu-common.h"
 #include "block/block.h"
 #include "block/nbd.h"
+#include "qemu/main-loop.h"
 
 #include <stdarg.h>
 #include <stdio.h>
diff --git a/slirp/misc.c b/slirp/misc.c
index 0bcc481..c0d4899 100644
--- a/slirp/misc.c
+++ b/slirp/misc.c
@@ -9,6 +9,7 @@
 #include <libslirp.h>
 
 #include "monitor/monitor.h"
+#include "qemu/main-loop.h"
 
 #ifdef DEBUG
 int slirp_debug = DBG_CALL|DBG_MISC|DBG_ERROR;
diff --git a/thread-pool.c b/thread-pool.c
index 0ebd4c2..25bfa41 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -23,6 +23,7 @@
 #include "block/block_int.h"
 #include "qemu/event_notifier.h"
 #include "block/thread-pool.h"
+#include "qemu/main-loop.h"
 
 static void do_spawn_thread(ThreadPool *pool);
 
diff --git a/ui/vnc-auth-vencrypt.c b/ui/vnc-auth-vencrypt.c
index c59b188..bc7032e 100644
--- a/ui/vnc-auth-vencrypt.c
+++ b/ui/vnc-auth-vencrypt.c
@@ -25,7 +25,7 @@
  */
 
 #include "vnc.h"
-
+#include "qemu/main-loop.h"
 
 static void start_auth_vencrypt_subauth(VncState *vs)
 {
diff --git a/ui/vnc-ws.c b/ui/vnc-ws.c
index df89315..e304baf 100644
--- a/ui/vnc-ws.c
+++ b/ui/vnc-ws.c
@@ -19,6 +19,7 @@
  */
 
 #include "vnc.h"
+#include "qemu/main-loop.h"
 
 #ifdef CONFIG_VNC_TLS
 #include "qemu/sockets.h"
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 07/16] aio / timers: Add QEMUTimerListGroup and helper functions
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (5 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 08/16] aio / timers: Add QEMUTimerListGroup to AioContext Alex Bligh
                                         ` (9 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add QEMUTimerListGroup and helper functions. A QEMUTimerListGroup
holds one QEMUTimerList per clock. Add a default QEMUTimerListGroup
representing the default timer lists, which are not associated with
any other object (e.g. an AioContext, as added by future patches).

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
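A minimal sketch of how a QEMUTimerListGroup is meant to be driven
(illustrative only; it uses just the helpers added here, and the poll
or wait step between the two calls is elided):

    #include "qemu/timer.h"

    static void tlg_sketch(void)
    {
        QEMUTimerListGroup tlg;       /* one QEMUTimerList per clock type */

        timerlistgroup_init(tlg);     /* allocates QEMU_CLOCK_MAX lists */

        /* nearest deadline across the group, in ns; -1 means no timer */
        int64_t deadline_ns = timerlistgroup_deadline_ns(tlg);
        (void)deadline_ns;            /* ... block for at most this ... */

        /* after waking, run whatever expired on any list in the group */
        bool progress = timerlistgroup_run_timers(tlg);
        (void)progress;

        timerlistgroup_deinit(tlg);
    }
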
 include/qemu/timer.h |    7 +++++++
 qemu-timer.c         |   41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 9f1ff95..87e7b6e 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -19,8 +19,10 @@ typedef enum {
 
 typedef struct QEMUClock QEMUClock;
 typedef struct QEMUTimerList QEMUTimerList;
+typedef QEMUTimerList *QEMUTimerListGroup[QEMU_CLOCK_MAX];
 typedef void QEMUTimerCB(void *opaque);
 
+extern QEMUTimerListGroup qemu_default_tlg;
 extern QEMUClock *QEMUClocks[QEMU_CLOCK_MAX];
 
 static inline QEMUClock *qemu_get_clock(QEMUClockType type)
@@ -69,6 +71,11 @@ int64_t timerlist_deadline_ns(QEMUTimerList *tl);
 QEMUClock *timerlist_get_clock(QEMUTimerList *tl);
 bool timerlist_run_timers(QEMUTimerList *tl);
 
+void timerlistgroup_init(QEMUTimerListGroup tlg);
+void timerlistgroup_deinit(QEMUTimerListGroup tlg);
+bool timerlistgroup_run_timers(QEMUTimerListGroup tlg);
+int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg);
+
 int qemu_timeout_ns_to_ms(int64_t ns);
 int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
diff --git a/qemu-timer.c b/qemu-timer.c
index 231953a..1af3b65 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -59,6 +59,7 @@ struct QEMUClock {
     bool enabled;
 };
 
+QEMUTimerListGroup qemu_default_tlg;
 QEMUClock *QEMUClocks[QEMU_CLOCK_MAX];
 
 /* A QEMUTimerList is a list of timers attached to a clock. More
@@ -569,6 +570,45 @@ bool qemu_run_timers(QEMUClock *clock)
     return timerlist_run_timers(clock->default_timerlist);
 }
 
+void timerlistgroup_init(QEMUTimerListGroup tlg)
+{
+    QEMUClockType clock;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        tlg[clock] = timerlist_new(clock);
+    }
+}
+
+void timerlistgroup_deinit(QEMUTimerListGroup tlg)
+{
+    QEMUClockType clock;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        timerlist_free(tlg[clock]);
+    }
+}
+
+bool timerlistgroup_run_timers(QEMUTimerListGroup tlg)
+{
+    QEMUClockType clock;
+    bool progress = false;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        progress |= timerlist_run_timers(tlg[clock]);
+    }
+    return progress;
+}
+
+int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg)
+{
+    int64_t deadline = -1;
+    QEMUClockType clock;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        if (qemu_clock_use_for_deadline(tlg[clock]->clock)) {
+            deadline = qemu_soonest_timeout(deadline,
+                                            timerlist_deadline_ns(tlg[clock]));
+        }
+    }
+    return deadline;
+}
+
 int64_t qemu_get_clock_ns(QEMUClock *clock)
 {
     int64_t now, last;
@@ -610,6 +650,7 @@ void init_clocks(void)
     for (type = 0; type < QEMU_CLOCK_MAX; type++) {
         if (!QEMUClocks[type]) {
             QEMUClocks[type] = qemu_new_clock(type);
+            qemu_default_tlg[type] = QEMUClocks[type]->default_timerlist;
         }
     }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 08/16] aio / timers: Add QEMUTimerListGroup to AioContext
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (6 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 07/16] aio / timers: Add QEMUTimerListGroup and helper functions Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 09/16] aio / timers: Add a notify callback to QEMUTimerList Alex Bligh
                                         ` (8 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add a QEMUTimerListGroup to each AioContext (meaning a QEMUTimerList
associated with each clock is added) and delete it when the
AioContext is freed.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
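A sketch of what this enables (illustrative; assumes timer_new_ns()
and qemu_mod_timer_ns() from earlier in the series, and the 10ms
expiry is arbitrary):

    #include "block/aio.h"

    static void ctx_timer_cb(void *opaque)
    {
        /* runs from this AioContext's timer lists, not the default ones */
    }

    static void aio_timer_sketch(void)
    {
        AioContext *ctx = aio_context_new();
        QEMUClock *rt = qemu_get_clock(QEMU_CLOCK_REALTIME);

        /* attach a timer to the context's realtime timer list */
        QEMUTimer *t = timer_new_ns(ctx->tlg[QEMU_CLOCK_REALTIME],
                                    ctx_timer_cb, NULL);
        qemu_mod_timer_ns(t, qemu_get_clock_ns(rt) + 10 * SCALE_MS);
    }
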
 async.c                  |    2 ++
 include/block/aio.h      |    4 ++++
 tests/test-aio.c         |    3 +++
 tests/test-thread-pool.c |    3 +++
 4 files changed, 12 insertions(+)

diff --git a/async.c b/async.c
index 90fe906..b3512ca 100644
--- a/async.c
+++ b/async.c
@@ -177,6 +177,7 @@ aio_ctx_finalize(GSource     *source)
     aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     g_array_free(ctx->pollfds, TRUE);
+    timerlistgroup_deinit(ctx->tlg);
 }
 
 static GSourceFuncs aio_source_funcs = {
@@ -215,6 +216,7 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
+    timerlistgroup_init(ctx->tlg);
 
     return ctx;
 }
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..aea3a97 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #include "qemu-common.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
+#include "qemu/timer.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -69,6 +70,9 @@ typedef struct AioContext {
 
     /* Thread pool for performing work and receiving completion callbacks */
     struct ThreadPool *thread_pool;
+
+    /* TimerLists for calling timers - one per clock type */
+    QEMUTimerListGroup tlg;
 } AioContext;
 
 /* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
diff --git a/tests/test-aio.c b/tests/test-aio.c
index c173870..2d7ec4c 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -12,6 +12,7 @@
 
 #include <glib.h>
 #include "block/aio.h"
+#include "qemu/timer.h"
 
 AioContext *ctx;
 
@@ -628,6 +629,8 @@ int main(int argc, char **argv)
 {
     GSource *src;
 
+    init_clocks();
+
     ctx = aio_context_new();
     src = aio_get_g_source(ctx);
     g_source_attach(src, NULL);
diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index b62338f..27d6190 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -3,6 +3,7 @@
 #include "block/aio.h"
 #include "block/thread-pool.h"
 #include "block/block.h"
+#include "qemu/timer.h"
 
 static AioContext *ctx;
 static ThreadPool *pool;
@@ -205,6 +206,8 @@ int main(int argc, char **argv)
 {
     int ret;
 
+    init_clocks();
+
     ctx = aio_context_new();
     pool = aio_get_thread_pool(ctx);
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 09/16] aio / timers: Add a notify callback to QEMUTimerList
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (7 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 08/16] aio / timers: Add QEMUTimerListGroup to AioContext Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
                                         ` (7 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add a notify callback pointer to QEMUTimerList so it knows what to
notify on a timer change.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
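A sketch of the intended use (illustrative; wake_my_poll_loop and
my_ctx are hypothetical stand-ins for whatever owns the list - an
AioContext plugs aio_notify() in here):

    static void wake_my_poll_loop(void *opaque)
    {
        /* e.g. write to an eventfd so the thread polling this list
         * wakes up and recomputes its timeout */
    }

    static void notify_sketch(void *my_ctx)
    {
        QEMUTimerList *tl = timerlist_new(QEMU_CLOCK_REALTIME);

        timerlist_set_notify_cb(tl, wake_my_poll_loop, my_ctx);

        /* modifying a timer on tl now calls wake_my_poll_loop(my_ctx)
         * instead of qemu_notify_event(), ending the right poll() */
    }
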
 async.c              |    7 ++++++-
 include/qemu/timer.h |    7 ++++++-
 qemu-timer.c         |   24 ++++++++++++++++++++++--
 3 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/async.c b/async.c
index b3512ca..c80da2f 100644
--- a/async.c
+++ b/async.c
@@ -206,6 +206,11 @@ void aio_notify(AioContext *ctx)
     event_notifier_set(&ctx->notifier);
 }
 
+static void aio_timerlist_notify(void *opaque)
+{
+    aio_notify((AioContext *)opaque);
+}
+
 AioContext *aio_context_new(void)
 {
     AioContext *ctx;
@@ -216,7 +221,7 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
-    timerlistgroup_init(ctx->tlg);
+    timerlistgroup_init(ctx->tlg, aio_timerlist_notify, ctx);
 
     return ctx;
 }
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 87e7b6e..970042d 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -21,6 +21,7 @@ typedef struct QEMUClock QEMUClock;
 typedef struct QEMUTimerList QEMUTimerList;
 typedef QEMUTimerList * QEMUTimerListGroup[QEMU_CLOCK_MAX];
 typedef void QEMUTimerCB(void *opaque);
+typedef void QEMUTimerListNotifyCB(void *opaque);
 
 extern QEMUTimerListGroup qemu_default_tlg;
 extern QEMUClock *QEMUClocks[QEMU_CLOCK_MAX];
@@ -70,8 +71,12 @@ int64_t timerlist_deadline(QEMUTimerList *tl);
 int64_t timerlist_deadline_ns(QEMUTimerList *tl);
 QEMUClock *timerlist_get_clock(QEMUTimerList *tl);
 bool timerlist_run_timers(QEMUTimerList *tl);
+void timerlist_set_notify_cb(QEMUTimerList *tl,
+                             QEMUTimerListNotifyCB *cb, void *opaque);
+void timerlist_notify(QEMUTimerList *tl);
 
-void timerlistgroup_init(QEMUTimerListGroup tlg);
+void timerlistgroup_init(QEMUTimerListGroup tlg,
+                         QEMUTimerListNotifyCB *cb, void *opaque);
 void timerlistgroup_deinit(QEMUTimerListGroup tlg);
 bool timerlistgroup_run_timers(QEMUTimerListGroup tlg);
 int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg);
diff --git a/qemu-timer.c b/qemu-timer.c
index 1af3b65..4562c70 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -73,6 +73,8 @@ struct QEMUTimerList {
     QEMUClock *clock;
     QEMUTimer *active_timers;
     QLIST_ENTRY(QEMUTimerList) list;
+    QEMUTimerListNotifyCB *notify_cb;
+    void *notify_opaque;
 };
 
 struct QEMUTimer {
@@ -388,6 +390,22 @@ QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock)
     return clock->default_timerlist;
 }
 
+void timerlist_set_notify_cb(QEMUTimerList *tl,
+                             QEMUTimerListNotifyCB *cb, void *opaque)
+{
+    tl->notify_cb = cb;
+    tl->notify_opaque = opaque;
+}
+
+void timerlist_notify(QEMUTimerList *tl)
+{
+    if (tl->notify_cb) {
+        tl->notify_cb(tl->notify_opaque);
+    } else {
+        qemu_notify_event();
+    }
+}
+
 /* Transition function to convert a nanosecond timeout to ms
  * This is used where a system does not support ppoll
  */
@@ -512,7 +530,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
         if (use_icount) {
-            qemu_notify_event();
+            timerlist_notify(ts->tl);
         }
     }
 }
@@ -570,11 +588,13 @@ bool qemu_run_timers(QEMUClock *clock)
     return timerlist_run_timers(clock->default_timerlist);
 }
 
-void timerlistgroup_init(QEMUTimerListGroup tlg)
+void timerlistgroup_init(QEMUTimerListGroup tlg,
+                         QEMUTimerListNotifyCB *cb, void *opaque)
 {
     QEMUClockType clock;
     for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
         tlg[clock] = timerlist_new(clock);
+        timerlist_set_notify_cb(tlg[clock], cb, opaque);
     }
 }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (8 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 09/16] aio / timers: Add a notify callback to QEMUTimerList Alex Bligh
@ 2013-08-04 18:09                       ` Alex Bligh
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
                                         ` (6 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Calculate the timeout in aio_ctx_prepare taking into account
the timers attached to the AioContext.

Alter aio_ctx_check similarly.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
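This leans on two sentinel conventions from earlier in the series: a
deadline of -1 means "no timer" and 0 means "already expired". A
self-contained model of the qemu_soonest_timeout() semantics used
above (a paraphrase, not the series' exact implementation):

    #include <stdint.h>

    /* pick the earlier of two timeouts, where -1 stands for infinite */
    static int64_t soonest_timeout_model(int64_t t1, int64_t t2)
    {
        if (t1 < 0) {
            return t2;              /* t1 infinite: t2 wins (may be -1) */
        }
        if (t2 < 0) {
            return t1;              /* t2 infinite: t1 wins */
        }
        return t1 < t2 ? t1 : t2;   /* both finite: smaller wins */
    }
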
 async.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index c80da2f..8df3e99 100644
--- a/async.c
+++ b/async.c
@@ -123,13 +123,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
 {
     AioContext *ctx = (AioContext *) source;
     QEMUBH *bh;
+    int deadline;
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
         if (!bh->deleted && bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
-                *timeout = 10;
+                *timeout = qemu_soonest_timeout(*timeout, 10);
             } else {
                 /* non-idle bottom halves will be executed
                  * immediately */
@@ -139,6 +140,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
         }
     }
 
+    deadline = qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(ctx->tlg));
+    if (deadline == 0) {
+        *timeout = 0;
+        return true;
+    } else {
+        *timeout = qemu_soonest_timeout(*timeout, deadline);
+    }
+
     return false;
 }
 
@@ -153,7 +162,7 @@ aio_ctx_check(GSource *source)
             return true;
 	}
     }
-    return aio_pending(ctx);
+    return aio_pending(ctx) || (timerlistgroup_deadline_ns(ctx->tlg) == 0);
 }
 
 static gboolean
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (9 preceding siblings ...)
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
@ 2013-08-04 18:10                       ` Alex Bligh
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 12/16] aio / timers: Convert mainloop to use timeout Alex Bligh
                                         ` (5 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Convert aio_poll to use a deadline based on the AioContext's timers.

aio_poll has been changed to accurately return whether progress
has occurred. Prior to this commit, aio_poll always returned
true if g_poll was entered, whether or not any progress was
made. This required a change to tests/test-aio.c where an
assert was backwards.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
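Illustrative sketch of the contract change for callers (not part of
the patch): a drain loop now terminates as soon as an iteration does
no real work, rather than spinning on an unconditional true:

    /* drain: keep polling while each iteration achieves something */
    while (aio_poll(ctx, true)) {
        /* a true return means a handler was dispatched, a bottom
         * half ran, or a timer fired */
    }
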
 aio-posix.c      |   20 +++++++++++++-------
 aio-win32.c      |   22 +++++++++++++++++++---
 tests/test-aio.c |    4 ++--
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..2ec419d 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -166,6 +166,10 @@ static bool aio_dispatch(AioContext *ctx)
             g_free(tmp);
         }
     }
+
+    /* Run our timers */
+    progress |= timerlistgroup_run_timers(ctx->tlg);
+
     return progress;
 }
 
@@ -232,9 +236,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     /* wait until next event */
-    ret = g_poll((GPollFD *)ctx->pollfds->data,
-                 ctx->pollfds->len,
-                 blocking ? -1 : 0);
+    ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
+                         ctx->pollfds->len,
+                         blocking ? timerlistgroup_deadline_ns(ctx->tlg) : 0);
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
@@ -245,11 +249,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
                 node->pfd.revents = pfd->revents;
             }
         }
-        if (aio_dispatch(ctx)) {
-            progress = true;
-        }
+    }
+
+    /* Run dispatch even if there were no readable fds to run timers */
+    if (aio_dispatch(ctx)) {
+        progress = true;
     }
 
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..acdc48a 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -98,6 +98,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
     bool busy, progress;
     int count;
+    int timeout;
 
     progress = false;
 
@@ -111,6 +112,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress = true;
     }
 
+    /* Run timers */
+    progress |= timerlistgroup_run_timers(ctx->tlg);
+
     /*
      * Then dispatch any pending callbacks from the GSource.
      *
@@ -174,8 +178,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* wait until next event */
     while (count > 0) {
-        int timeout = blocking ? INFINITE : 0;
-        int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+        int ret;
+
+        timeout = blocking ?
+            qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(ctx->tlg)) : 0;
+        ret = WaitForMultipleObjects(count, events, FALSE, timeout);
 
         /* if we have any signaled events, dispatch event */
         if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
@@ -214,6 +221,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
+    if (blocking) {
+        /* Run the timers a second time. We do this because otherwise aio_wait
+         * will not note progress - and will stop a drain early - if we have
+         * a timer that was not ready to run before the wait but is ready
+         * after it. This will only do anything if a timer has expired.
+         */
+        progress |= timerlistgroup_run_timers(ctx->tlg);
+    }
+
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/tests/test-aio.c b/tests/test-aio.c
index 2d7ec4c..eedf7f8 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -316,13 +316,13 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
     event_notifier_set(&dummy.e);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 12/16] aio / timers: Convert mainloop to use timeout
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (10 preceding siblings ...)
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
@ 2013-08-04 18:10                       ` Alex Bligh
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 13/16] aio / timers: On timer modification, qemu_notify or aio_notify Alex Bligh
                                         ` (4 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Convert the main loop to use a timeout from the default timerlist
group (i.e. the current 3 static timers).

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
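The conversions above all follow one pattern: glib reports a
millisecond timeout where -1 means "block forever", while
qemu_poll_ns() takes nanoseconds with the same sentinel. A
self-contained model of that widening (SCALE_MS as in qemu/timer.h):

    #include <stdint.h>

    #define SCALE_MS 1000000LL      /* 1ms in ns, as in qemu/timer.h */

    /* widen a glib ms timeout to ns, preserving -1 as infinite */
    static int64_t glib_timeout_to_ns(int timeout_ms)
    {
        return timeout_ms < 0 ? -1 : (int64_t)timeout_ms * SCALE_MS;
    }
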
 main-loop.c |   45 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 34 insertions(+), 11 deletions(-)

diff --git a/main-loop.c b/main-loop.c
index a44fff6..43dfcd7 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -155,10 +155,11 @@ static int max_priority;
 static int glib_pollfds_idx;
 static int glib_n_poll_fds;
 
-static void glib_pollfds_fill(uint32_t *cur_timeout)
+static void glib_pollfds_fill(int64_t *cur_timeout)
 {
     GMainContext *context = g_main_context_default();
     int timeout = 0;
+    int64_t timeout_ns;
     int n;
 
     g_main_context_prepare(context, &max_priority);
@@ -174,9 +175,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
                                  glib_n_poll_fds);
     } while (n != glib_n_poll_fds);
 
-    if (timeout >= 0 && timeout < *cur_timeout) {
-        *cur_timeout = timeout;
+    if (timeout < 0) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
     }
+
+    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
 }
 
 static void glib_pollfds_poll(void)
@@ -191,7 +196,7 @@ static void glib_pollfds_poll(void)
 
 #define MAX_MAIN_LOOP_SPIN (1000)
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     int ret;
     static int spin_counter;
@@ -214,7 +219,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
             notified = true;
         }
 
-        timeout = 1;
+        timeout = SCALE_MS;
     }
 
     if (timeout > 0) {
@@ -224,7 +229,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
         spin_counter++;
     }
 
-    ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
+    ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);
 
     if (timeout > 0) {
         qemu_mutex_lock_iothread();
@@ -373,7 +378,7 @@ static void pollfds_poll(GArray *pollfds, int nfds, fd_set *rfds,
     }
 }
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     GMainContext *context = g_main_context_default();
     GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
@@ -382,6 +387,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
     PollingEntry *pe;
     WaitObjects *w = &wait_objects;
     gint poll_timeout;
+    int64_t poll_timeout_ns;
     static struct timeval tv0;
     fd_set rfds, wfds, xfds;
     int nfds;
@@ -419,12 +425,17 @@ static int os_host_main_loop_wait(uint32_t timeout)
         poll_fds[n_poll_fds + i].events = G_IO_IN;
     }
 
-    if (poll_timeout < 0 || timeout < poll_timeout) {
-        poll_timeout = timeout;
+    if (poll_timeout < 0) {
+        poll_timeout_ns = -1;
+    } else {
+        poll_timeout_ns = (int64_t)poll_timeout * (int64_t)SCALE_MS;
     }
 
+    poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);
+
     qemu_mutex_unlock_iothread();
-    g_poll_ret = g_poll(poll_fds, n_poll_fds + w->num, poll_timeout);
+    g_poll_ret = qemu_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);
+
     qemu_mutex_lock_iothread();
     if (g_poll_ret > 0) {
         for (i = 0; i < w->num; i++) {
@@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
 {
     int ret;
     uint32_t timeout = UINT32_MAX;
+    int64_t timeout_ns;
 
     if (nonblocking) {
         timeout = 0;
@@ -462,7 +474,18 @@ int main_loop_wait(int nonblocking)
     slirp_pollfds_fill(gpollfds);
 #endif
     qemu_iohandler_fill(gpollfds);
-    ret = os_host_main_loop_wait(timeout);
+
+    if (timeout == UINT32_MAX) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
+    }
+
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      timerlistgroup_deadline_ns(
+                                          qemu_default_tlg));
+
+    ret = os_host_main_loop_wait(timeout_ns);
     qemu_iohandler_poll(gpollfds, ret);
 #ifdef CONFIG_SLIRP
     slirp_pollfds_poll(gpollfds, (ret < 0));
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 13/16] aio / timers: On timer modification, qemu_notify or aio_notify
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (11 preceding siblings ...)
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 12/16] aio / timers: Convert mainloop to use timeout Alex Bligh
@ 2013-08-04 18:10                       ` Alex Bligh
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 14/16] aio / timers: Use all timerlists in icount warp calculations Alex Bligh
                                         ` (3 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On qemu_mod_timer_ns, ensure qemu_notify or aio_notify is called to
end the appropriate poll(), irrespective of the use_icount value.

On qemu_clock_enable, ensure qemu_notify or aio_notify is called for
all QEMUTimerLists attached to the QEMUClock.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
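Sketch of the resulting behaviour (illustrative only): one call now
fans out to every poll loop holding a list on that clock.

    static void reenable_vm_clock_sketch(void)
    {
        /* qemu_clock_enable() -> qemu_clock_notify(), which calls
         * timerlist_notify() on each attached QEMUTimerList:
         * aio_notify() for lists owned by an AioContext and
         * qemu_notify_event() for the default list */
        qemu_clock_enable(vm_clock, true);
    }
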
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   13 ++++++++++---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 970042d..0a02a17 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -62,6 +62,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
 bool qemu_clock_use_for_deadline(QEMUClock *clock);
 QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
+void qemu_clock_notify(QEMUClock *clock);
 
 QEMUTimerList *timerlist_new(QEMUClockType type);
 void timerlist_free(QEMUTimerList *tl);
diff --git a/qemu-timer.c b/qemu-timer.c
index 4562c70..789bb77 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -304,11 +304,20 @@ bool qemu_clock_use_for_deadline(QEMUClock *clock)
     return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
 }
 
+void qemu_clock_notify(QEMUClock *clock)
+{
+    QEMUTimerList *tl;
+    QLIST_FOREACH(tl, &clock->timerlists, list) {
+        timerlist_notify(tl);
+    }
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
     clock->enabled = enabled;
     if (enabled && !old) {
+        qemu_clock_notify(clock);
         qemu_rearm_alarm_timer(alarm_timer);
     }
 }
@@ -529,9 +538,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
         }
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
-        if (use_icount) {
-            timerlist_notify(ts->tl);
-        }
+        timerlist_notify(ts->tl);
     }
 }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 14/16] aio / timers: Use all timerlists in icount warp calculations
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (12 preceding siblings ...)
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 13/16] aio / timers: On timer modification, qemu_notify or aio_notify Alex Bligh
@ 2013-08-04 18:10                       ` Alex Bligh
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 15/16] aio / timers: Remove alarm timers Alex Bligh
                                         ` (2 subsequent siblings)
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Notify all timerlists derived from vm_clock in icount warp
calculations.

When calculating timer delay based on vm_clock deadline, use
all timerlists.

For compatibility, maintain an apparent bug where, when using
icount, if no vm_clock timer was set, qemu_clock_deadline
would return INT32_MAX and always set an icount clock expiry
about 2 seconds ahead.

NB: thread safety - when different timerlists sit on different
threads, this will need some locking.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
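A self-contained model of the compatibility clamp applied twice above
(INT32_MAX nanoseconds is roughly 2.1 seconds, hence "about 2 seconds"
in the old behaviour):

    #include <stdint.h>

    /* no deadline (-1), or one more than INT32_MAX ns away, behaves
     * as if a vm_clock timer were due in ~2.1s, as it did before */
    static int64_t icount_deadline_clamp(int64_t deadline_ns)
    {
        if (deadline_ns < 0 || deadline_ns > INT32_MAX) {
            return INT32_MAX;
        }
        return deadline_ns;
    }
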
 cpus.c               |   44 ++++++++++++++++++++++++++++++++++++--------
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   16 ++++++++++++++++
 3 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/cpus.c b/cpus.c
index 8062cdd..85e743d 100644
--- a/cpus.c
+++ b/cpus.c
@@ -262,7 +262,7 @@ static void icount_warp_rt(void *opaque)
             qemu_icount_bias += MIN(warp_delta, delta);
         }
         if (qemu_clock_expired(vm_clock)) {
-            qemu_notify_event();
+            qemu_clock_notify(vm_clock);
         }
     }
     vm_clock_warp_start = -1;
@@ -279,7 +279,7 @@ void qtest_clock_warp(int64_t dest)
         qemu_run_timers(vm_clock);
         clock = qemu_get_clock_ns(vm_clock);
     }
-    qemu_notify_event();
+    qemu_clock_notify(vm_clock);
 }
 
 void qemu_clock_warp(QEMUClock *clock)
@@ -314,7 +314,18 @@ void qemu_clock_warp(QEMUClock *clock)
     }
 
     vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
-    deadline = qemu_clock_deadline(vm_clock);
+    /* We want to use the earliest deadline across all vm_clock timerlists */
+    deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+    /* Maintain prior (possibly buggy) behaviour where if no deadline
+     * was set (as there is no vm_clock timer) or it is more than
+     * INT32_MAX nanoseconds ahead, we still use INT32_MAX
+     * nanoseconds.
+     */
+    if ((deadline < 0) || (deadline > INT32_MAX)) {
+        deadline = INT32_MAX;
+    }
+
     if (deadline > 0) {
         /*
          * Ensure the vm_clock proceeds even when the virtual CPU goes to
@@ -333,8 +344,8 @@ void qemu_clock_warp(QEMUClock *clock)
          * packets continuously instead of every 100ms.
          */
         qemu_mod_timer(icount_warp_timer, vm_clock_warp_start + deadline);
-    } else {
-        qemu_notify_event();
+    } else if (deadline == 0) {
+        qemu_clock_notify(vm_clock);
     }
 }
 
@@ -865,8 +876,13 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     while (1) {
         tcg_exec_all();
-        if (use_icount && qemu_clock_deadline(vm_clock) <= 0) {
-            qemu_notify_event();
+
+        if (use_icount) {
+            int64_t deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+            if (deadline == 0) {
+                qemu_clock_notify(vm_clock);
+            }
         }
         qemu_tcg_wait_io_event();
     }
@@ -1142,11 +1158,23 @@ static int tcg_cpu_exec(CPUArchState *env)
 #endif
     if (use_icount) {
         int64_t count;
+        int64_t deadline;
         int decr;
         qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
         env->icount_decr.u16.low = 0;
         env->icount_extra = 0;
-        count = qemu_icount_round(qemu_clock_deadline(vm_clock));
+        deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+        /* Maintain prior (possibly buggy) behaviour where if no deadline
+         * was set (as there is no vm_clock timer) or it is more than
+         * INT32_MAX nanoseconds ahead, we still use INT32_MAX
+         * nanoseconds.
+         */
+        if ((deadline < 0) || (deadline > INT32_MAX)) {
+            deadline = INT32_MAX;
+        }
+
+        count = qemu_icount_round(deadline);
         qemu_icount += count;
         decr = (count > 0xffff) ? 0xffff : count;
         count -= decr;
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 0a02a17..306187f 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -60,6 +60,7 @@ int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int64_t qemu_clock_deadline_ns_all(QEMUClock *clock);
 bool qemu_clock_use_for_deadline(QEMUClock *clock);
 QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
 void qemu_clock_notify(QEMUClock *clock);
diff --git a/qemu-timer.c b/qemu-timer.c
index 789bb77..7ff0416 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -389,6 +389,22 @@ int64_t qemu_clock_deadline_ns(QEMUClock *clock)
     return timerlist_deadline_ns(clock->default_timerlist);
 }
 
+/* Calculate the soonest deadline across all timerlists attached
+ * to the clock. This is used for the icount timeout so we
+ * ignore whether or not the clock should be used in deadline
+ * calculations.
+ */
+int64_t qemu_clock_deadline_ns_all(QEMUClock *clock)
+{
+    int64_t deadline = -1;
+    QEMUTimerList *tl;
+    QLIST_FOREACH(tl, &clock->timerlists, list) {
+        deadline = qemu_soonest_timeout(deadline,
+                                        timerlist_deadline_ns(tl));
+    }
+    return deadline;
+}
+
 QEMUClock *timerlist_get_clock(QEMUTimerList *tl)
 {
     return tl->clock;
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 15/16] aio / timers: Remove alarm timers
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (13 preceding siblings ...)
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 14/16] aio / timers: Use all timerlists in icount warp calculations Alex Bligh
@ 2013-08-04 18:10                       ` Alex Bligh
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 16/16] aio / timers: Add test harness for AioContext timers Alex Bligh
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Remove alarm timers from qemu-timer.c now that we use g_poll /
ppoll instead.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
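For illustration (not part of the patch), the shape of what replaces
the alarm machinery: the poll timeout itself is now the timer
mechanism, so no SIGALRM / timer_create / mmsystem plumbing remains.
A sketch in main-loop.c terms, assuming the helpers from earlier in
the series:

    /* block until the next fd event or the nearest timer deadline */
    int64_t timeout_ns = timerlistgroup_deadline_ns(qemu_default_tlg);
    int ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len,
                           timeout_ns);  /* ppoll() where available */
    (void)ret;
    qemu_run_all_timers();               /* fire whatever is now due */
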
 include/qemu/timer.h |    2 -
 main-loop.c          |    4 -
 qemu-timer.c         |  500 +-------------------------------------------------
 vl.c                 |    4 +-
 4 files changed, 4 insertions(+), 506 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 306187f..1363316 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -142,9 +142,7 @@ static inline uint64_t timer_expire_time_ns(QEMUTimer *ts)
 
 bool qemu_run_timers(QEMUClock *clock);
 bool qemu_run_all_timers(void);
-void configure_alarms(char const *opt);
 void init_clocks(void);
-int init_timer_alarm(void);
 
 int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
diff --git a/main-loop.c b/main-loop.c
index 43dfcd7..754d276 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -131,10 +131,6 @@ int qemu_init_main_loop(void)
     GSource *src;
 
     init_clocks();
-    if (init_timer_alarm() < 0) {
-        fprintf(stderr, "could not initialize alarm timer\n");
-        exit(1);
-    }
 
     ret = qemu_signal_init();
     if (ret) {
diff --git a/qemu-timer.c b/qemu-timer.c
index 7ff0416..ebe7597 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -33,10 +33,6 @@
 #include <pthread.h>
 #endif
 
-#ifdef _WIN32
-#include <mmsystem.h>
-#endif
-
 #ifdef CONFIG_PPOLL
 #include <poll.h>
 #endif
@@ -86,174 +82,11 @@ struct QEMUTimer {
     int scale;
 };
 
-struct qemu_alarm_timer {
-    char const *name;
-    int (*start)(struct qemu_alarm_timer *t);
-    void (*stop)(struct qemu_alarm_timer *t);
-    void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
-#if defined(__linux__)
-    timer_t timer;
-    int fd;
-#elif defined(_WIN32)
-    HANDLE timer;
-#endif
-    bool expired;
-    bool pending;
-};
-
-static struct qemu_alarm_timer *alarm_timer;
-
 static bool qemu_timer_expired_ns(QEMUTimer *timer_head, int64_t current_time)
 {
     return timer_head && (timer_head->expire_time <= current_time);
 }
 
-static int64_t qemu_next_alarm_deadline(void)
-{
-    int64_t delta = INT64_MAX;
-    int64_t rtdelta;
-    int64_t hdelta;
-
-    if (!use_icount && vm_clock->enabled &&
-        vm_clock->default_timerlist->active_timers) {
-        delta = vm_clock->default_timerlist->active_timers->expire_time -
-            qemu_get_clock_ns(vm_clock);
-    }
-    if (host_clock->enabled &&
-        host_clock->default_timerlist->active_timers) {
-        hdelta = host_clock->default_timerlist->active_timers->expire_time -
-            qemu_get_clock_ns(host_clock);
-        if (hdelta < delta) {
-            delta = hdelta;
-        }
-    }
-    if (rt_clock->enabled &&
-        rt_clock->default_timerlist->active_timers) {
-        rtdelta = (rt_clock->default_timerlist->active_timers->expire_time -
-                   qemu_get_clock_ns(rt_clock));
-        if (rtdelta < delta) {
-            delta = rtdelta;
-        }
-    }
-
-    return delta;
-}
-
-static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
-{
-    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
-    if (nearest_delta_ns < INT64_MAX) {
-        t->rearm(t, nearest_delta_ns);
-    }
-}
-
-/* TODO: MIN_TIMER_REARM_NS should be optimized */
-#define MIN_TIMER_REARM_NS 250000
-
-#ifdef _WIN32
-
-static int mm_start_timer(struct qemu_alarm_timer *t);
-static void mm_stop_timer(struct qemu_alarm_timer *t);
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-static int win32_start_timer(struct qemu_alarm_timer *t);
-static void win32_stop_timer(struct qemu_alarm_timer *t);
-static void win32_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#else
-
-static int unix_start_timer(struct qemu_alarm_timer *t);
-static void unix_stop_timer(struct qemu_alarm_timer *t);
-static void unix_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#ifdef __linux__
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t);
-static void dynticks_stop_timer(struct qemu_alarm_timer *t);
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#endif /* __linux__ */
-
-#endif /* _WIN32 */
-
-static struct qemu_alarm_timer alarm_timers[] = {
-#ifndef _WIN32
-#ifdef __linux__
-    {"dynticks", dynticks_start_timer,
-     dynticks_stop_timer, dynticks_rearm_timer},
-#endif
-    {"unix", unix_start_timer, unix_stop_timer, unix_rearm_timer},
-#else
-    {"mmtimer", mm_start_timer, mm_stop_timer, mm_rearm_timer},
-    {"dynticks", win32_start_timer, win32_stop_timer, win32_rearm_timer},
-#endif
-    {NULL, }
-};
-
-static void show_available_alarms(void)
-{
-    int i;
-
-    printf("Available alarm timers, in order of precedence:\n");
-    for (i = 0; alarm_timers[i].name; i++)
-        printf("%s\n", alarm_timers[i].name);
-}
-
-void configure_alarms(char const *opt)
-{
-    int i;
-    int cur = 0;
-    int count = ARRAY_SIZE(alarm_timers) - 1;
-    char *arg;
-    char *name;
-    struct qemu_alarm_timer tmp;
-
-    if (is_help_option(opt)) {
-        show_available_alarms();
-        exit(0);
-    }
-
-    arg = g_strdup(opt);
-
-    /* Reorder the array */
-    name = strtok(arg, ",");
-    while (name) {
-        for (i = 0; i < count && alarm_timers[i].name; i++) {
-            if (!strcmp(alarm_timers[i].name, name))
-                break;
-        }
-
-        if (i == count) {
-            fprintf(stderr, "Unknown clock %s\n", name);
-            goto next;
-        }
-
-        if (i < cur)
-            /* Ignore */
-            goto next;
-
-	/* Swap */
-        tmp = alarm_timers[i];
-        alarm_timers[i] = alarm_timers[cur];
-        alarm_timers[cur] = tmp;
-
-        cur++;
-next:
-        name = strtok(NULL, ",");
-    }
-
-    g_free(arg);
-
-    if (cur) {
-        /* Disable remaining timers */
-        for (i = cur; i < count; i++)
-            alarm_timers[i].name = NULL;
-    } else {
-        show_available_alarms();
-        exit(1);
-    }
-}
-
 static QEMUTimerList *timerlist_new_from_clock(QEMUClock *clock)
 {
     QEMUTimerList *tl;
@@ -318,7 +151,6 @@ void qemu_clock_enable(QEMUClock *clock, bool enabled)
     clock->enabled = enabled;
     if (enabled && !old) {
         qemu_clock_notify(clock);
-        qemu_rearm_alarm_timer(alarm_timer);
     }
 }
 
@@ -549,9 +381,6 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
 
     /* Rearm if necessary  */
     if (pt == &ts->tl->active_timers) {
-        if (!alarm_timer->pending) {
-            qemu_rearm_alarm_timer(alarm_timer);
-        }
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
         timerlist_notify(ts->tl);
@@ -710,338 +539,11 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
 bool qemu_run_all_timers(void)
 {
     bool progress = false;
-    alarm_timer->pending = false;
-
-    /* vm time timers */
     QEMUClockType type;
+
     for (type = 0; type < QEMU_CLOCK_MAX; type++) {
         progress |= qemu_run_timers(qemu_get_clock(type));
     }
 
-    /* rearm timer, if not periodic */
-    if (alarm_timer->expired) {
-        alarm_timer->expired = false;
-        qemu_rearm_alarm_timer(alarm_timer);
-    }
-
     return progress;
 }
-
-#ifdef _WIN32
-static void CALLBACK host_alarm_handler(PVOID lpParam, BOOLEAN unused)
-#else
-static void host_alarm_handler(int host_signum)
-#endif
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t)
-	return;
-
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-#if defined(__linux__)
-
-#include "qemu/compatfd.h"
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigevent ev;
-    timer_t host_timer;
-    struct sigaction act;
-
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-
-    /* 
-     * Initialize ev struct to 0 to avoid valgrind complaining
-     * about uninitialized data in timer_create call
-     */
-    memset(&ev, 0, sizeof(ev));
-    ev.sigev_value.sival_int = 0;
-    ev.sigev_notify = SIGEV_SIGNAL;
-#ifdef CONFIG_SIGEV_THREAD_ID
-    if (qemu_signalfd_available()) {
-        ev.sigev_notify = SIGEV_THREAD_ID;
-        ev._sigev_un._tid = qemu_get_thread_id();
-    }
-#endif /* CONFIG_SIGEV_THREAD_ID */
-    ev.sigev_signo = SIGALRM;
-
-    if (timer_create(CLOCK_REALTIME, &ev, &host_timer)) {
-        perror("timer_create");
-        return -1;
-    }
-
-    t->timer = host_timer;
-
-    return 0;
-}
-
-static void dynticks_stop_timer(struct qemu_alarm_timer *t)
-{
-    timer_t host_timer = t->timer;
-
-    timer_delete(host_timer);
-}
-
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t,
-                                 int64_t nearest_delta_ns)
-{
-    timer_t host_timer = t->timer;
-    struct itimerspec timeout;
-    int64_t current_ns;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    /* check whether a timer is already running */
-    if (timer_gettime(host_timer, &timeout)) {
-        perror("gettime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    current_ns = timeout.it_value.tv_sec * 1000000000LL + timeout.it_value.tv_nsec;
-    if (current_ns && current_ns <= nearest_delta_ns)
-        return;
-
-    timeout.it_interval.tv_sec = 0;
-    timeout.it_interval.tv_nsec = 0; /* 0 for one-shot timer */
-    timeout.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    timeout.it_value.tv_nsec = nearest_delta_ns % 1000000000;
-    if (timer_settime(host_timer, 0 /* RELATIVE */, &timeout, NULL)) {
-        perror("settime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-#endif /* defined(__linux__) */
-
-#if !defined(_WIN32)
-
-static int unix_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigaction act;
-
-    /* timer signal */
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-    return 0;
-}
-
-static void unix_rearm_timer(struct qemu_alarm_timer *t,
-                             int64_t nearest_delta_ns)
-{
-    struct itimerval itv;
-    int err;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    itv.it_interval.tv_sec = 0;
-    itv.it_interval.tv_usec = 0; /* 0 for one-shot timer */
-    itv.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    itv.it_value.tv_usec = (nearest_delta_ns % 1000000000) / 1000;
-    err = setitimer(ITIMER_REAL, &itv, NULL);
-    if (err) {
-        perror("setitimer");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-static void unix_stop_timer(struct qemu_alarm_timer *t)
-{
-    struct itimerval itv;
-
-    memset(&itv, 0, sizeof(itv));
-    setitimer(ITIMER_REAL, &itv, NULL);
-}
-
-#endif /* !defined(_WIN32) */
-
-
-#ifdef _WIN32
-
-static MMRESULT mm_timer;
-static TIMECAPS mm_tc;
-
-static void CALLBACK mm_alarm_handler(UINT uTimerID, UINT uMsg,
-                                      DWORD_PTR dwUser, DWORD_PTR dw1,
-                                      DWORD_PTR dw2)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t) {
-        return;
-    }
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-static int mm_start_timer(struct qemu_alarm_timer *t)
-{
-    timeGetDevCaps(&mm_tc, sizeof(mm_tc));
-    return 0;
-}
-
-static void mm_stop_timer(struct qemu_alarm_timer *t)
-{
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-}
-
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta)
-{
-    int64_t nearest_delta_ms = delta / 1000000;
-    if (nearest_delta_ms < mm_tc.wPeriodMin) {
-        nearest_delta_ms = mm_tc.wPeriodMin;
-    } else if (nearest_delta_ms > mm_tc.wPeriodMax) {
-        nearest_delta_ms = mm_tc.wPeriodMax;
-    }
-
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-    mm_timer = timeSetEvent((UINT)nearest_delta_ms,
-                            mm_tc.wPeriodMin,
-                            mm_alarm_handler,
-                            (DWORD_PTR)t,
-                            TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
-
-    if (!mm_timer) {
-        fprintf(stderr, "Failed to re-arm win32 alarm timer\n");
-        timeEndPeriod(mm_tc.wPeriodMin);
-        exit(1);
-    }
-}
-
-static int win32_start_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer;
-    BOOLEAN success;
-
-    /* If you call ChangeTimerQueueTimer on a one-shot timer (its period
-       is zero) that has already expired, the timer is not updated.  Since
-       creating a new timer is relatively expensive, set a bogus one-hour
-       interval in the dynticks case.  */
-    success = CreateTimerQueueTimer(&hTimer,
-                          NULL,
-                          host_alarm_handler,
-                          t,
-                          1,
-                          3600000,
-                          WT_EXECUTEINTIMERTHREAD);
-
-    if (!success) {
-        fprintf(stderr, "Failed to initialize win32 alarm timer: %ld\n",
-                GetLastError());
-        return -1;
-    }
-
-    t->timer = hTimer;
-    return 0;
-}
-
-static void win32_stop_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer = t->timer;
-
-    if (hTimer) {
-        DeleteTimerQueueTimer(NULL, hTimer, NULL);
-    }
-}
-
-static void win32_rearm_timer(struct qemu_alarm_timer *t,
-                              int64_t nearest_delta_ns)
-{
-    HANDLE hTimer = t->timer;
-    int64_t nearest_delta_ms;
-    BOOLEAN success;
-
-    nearest_delta_ms = nearest_delta_ns / 1000000;
-    if (nearest_delta_ms < 1) {
-        nearest_delta_ms = 1;
-    }
-    /* ULONG_MAX can be 32 bit */
-    if (nearest_delta_ms > ULONG_MAX) {
-        nearest_delta_ms = ULONG_MAX;
-    }
-    success = ChangeTimerQueueTimer(NULL,
-                                    hTimer,
-                                    (unsigned long) nearest_delta_ms,
-                                    3600000);
-
-    if (!success) {
-        fprintf(stderr, "Failed to rearm win32 alarm timer: %ld\n",
-                GetLastError());
-        exit(-1);
-    }
-
-}
-
-#endif /* _WIN32 */
-
-static void quit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    alarm_timer = NULL;
-    t->stop(t);
-}
-
-#ifdef CONFIG_POSIX
-static void reinit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    t->stop(t);
-    if (t->start(t)) {
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    qemu_rearm_alarm_timer(t);
-}
-#endif /* CONFIG_POSIX */
-
-int init_timer_alarm(void)
-{
-    struct qemu_alarm_timer *t = NULL;
-    int i, err = -1;
-
-    if (alarm_timer) {
-        return 0;
-    }
-
-    for (i = 0; alarm_timers[i].name; i++) {
-        t = &alarm_timers[i];
-
-        err = t->start(t);
-        if (!err)
-            break;
-    }
-
-    if (err) {
-        err = -ENOENT;
-        goto fail;
-    }
-
-    atexit(quit_timers);
-#ifdef CONFIG_POSIX
-    pthread_atfork(NULL, NULL, reinit_timers);
-#endif
-    alarm_timer = t;
-    return 0;
-
-fail:
-    return err;
-}
-
diff --git a/vl.c b/vl.c
index 25b8f2f..83047fc 100644
--- a/vl.c
+++ b/vl.c
@@ -3714,7 +3714,9 @@ int main(int argc, char **argv, char **envp)
                 old_param = 1;
                 break;
             case QEMU_OPTION_clock:
-                configure_alarms(optarg);
+                /* Clock options no longer exist.  Keep this option for
+                 * backward compatibility.
+                 */
                 break;
             case QEMU_OPTION_startdate:
                 configure_rtc_date_offset(optarg, 1);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv5 16/16] aio / timers: Add test harness for AioContext timers
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (14 preceding siblings ...)
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 15/16] aio / timers: Remove alarm timers Alex Bligh
@ 2013-08-04 18:10                       ` Alex Bligh
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  16 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-04 18:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add a test harness for AioContext timers. The g_source equivalent is
unsatisfactory as it suffers from false wakeups.
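
For reference, the core pattern the tests use to drain spurious
wakeups before asserting that no further progress is made:

    /* Clear any pending event-notifier wakeups first, then check
     * that aio_poll reports no progress.
     */
    do {
    } while (aio_poll(ctx, false));

    g_assert(!aio_poll(ctx, false));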

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 tests/test-aio.c |  134 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 134 insertions(+)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index eedf7f8..8e2fa97 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -32,6 +32,15 @@ typedef struct {
     int max;
 } BHTestData;
 
+typedef struct {
+    QEMUTimer *timer;
+    QEMUTimerList *tl;
+    int n;
+    int max;
+    int64_t ns;
+    AioContext *ctx;
+} TimerTestData;
+
 static void bh_test_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -40,6 +49,25 @@ static void bh_test_cb(void *opaque)
     }
 }
 
+static void timer_test_cb(void *opaque)
+{
+    TimerTestData *data = opaque;
+    if (++data->n < data->max) {
+        qemu_mod_timer(data->timer,
+                       qemu_get_clock_ns(timerlist_get_clock(data->tl)) +
+                       data->ns);
+    }
+}
+
+static void dummy_io_handler_read(void *opaque)
+{
+}
+
+static int dummy_io_handler_flush(void *opaque)
+{
+    return 1;
+}
+
 static void bh_delete_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -341,6 +369,63 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
+                           .max = 2, .tl = ctx->tlg[QEMU_CLOCK_VIRTUAL] };
+    int pipefd[2];
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    aio_poll(ctx, false);
+
+    data.timer = timer_new(data.tl, SCALE_NS, timer_test_cb, &data);
+    qemu_mod_timer(data.timer,
+                   qemu_get_clock_ns(timerlist_get_clock(data.tl)) +
+                   data.ns);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    /* qemu_mod_timer may well cause an event notifier to have gone off,
+     * so clear that
+     */
+    do {} while (aio_poll(ctx, false));
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* qemu_mod_timer called by our callback */
+    do {} while (aio_poll(ctx, false));
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    g_assert(aio_poll(ctx, true));
+    g_assert_cmpint(data.n, ==, 2);
+
+    /* As max is now 2, an event notifier should not have gone off */
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
 /* Now the same tests, using the context as a GSource.  They are
  * very similar to the ones above, with g_main_context_iteration
  * replacing aio_poll.  However:
@@ -623,6 +708,53 @@ static void test_source_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_source_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
+                           .max = 2, .tl = ctx->tlg[QEMU_CLOCK_VIRTUAL] };
+    int pipefd[2];
+    int64_t expiry;
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    do {} while (g_main_context_iteration(NULL, false));
+
+    data.timer = timer_new(data.tl, SCALE_NS, timer_test_cb, &data);
+    expiry = qemu_get_clock_ns(timerlist_get_clock(data.tl)) +
+        data.ns;
+    qemu_mod_timer(data.timer, expiry);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(g_main_context_iteration(NULL, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* The comment above was not kidding when it said this wakes up itself */
+    do {
+        g_assert(g_main_context_iteration(NULL, true));
+    } while (qemu_get_clock_ns(
+                 timerlist_get_clock(data.tl)) <= expiry);
+    sleep(1);
+    g_main_context_iteration(NULL, false);
+
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
+
 /* End of tests.  */
 
 int main(int argc, char **argv)
@@ -651,6 +783,7 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
+    g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio-gsource/notify",                  test_source_notify);
     g_test_add_func("/aio-gsource/flush",                   test_source_flush);
@@ -665,5 +798,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio-gsource/event/wait",              test_source_wait_event_notifier);
     g_test_add_func("/aio-gsource/event/wait/no-flush-cb",  test_source_wait_event_notifier_noflush);
     g_test_add_func("/aio-gsource/event/flush",             test_source_flush_event_notifier);
+    g_test_add_func("/aio-gsource/timer/schedule",          test_source_timer_schedule);
     return g_test_run();
 }
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll
  2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                         ` (15 preceding siblings ...)
  2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 16/16] aio / timers: Add test harness for AioContext timers Alex Bligh
@ 2013-08-06  9:16                       ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
                                           ` (18 more replies)
  16 siblings, 19 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

This patch series adds support for timers attached to an AioContext clock
which get called within aio_poll.

In doing so it removes alarm timers and moves to use ppoll where possible.

This patch set 'sort of' passes make check (see below for caveat)
including a new test harness for the aio timers, but has not been
tested much beyond that. In particular, the win32 changes have not
even been compile tested. Equally, alterations to use_icount
are untested.

Caveat: I have had to alter tests/test-aio.c so the following error
no longer occurs.

ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))

As far as I can tell, this check was incorrect, in that it asserted
that aio_poll makes progress when in fact it should not make progress.
I fixed an issue where aio_poll was (as far as I can tell) wrongly
returning true on a timeout, and that fix triggered this error.

Note also the comment on patch 15 in relation to a possible bug
in cpus.c.

Changes since v5:
* Rebase onto master (b9ac5d9)
* Fix spacing in typedef QEMUTimerList
* Rename 'QEMUClocks' extern to 'qemu_clocks'

Changes since v4:
* Rename qemu_timerlist_ functions to timer_list (per Paolo Bonzini)
* Rename qemu_timer_.*timerlist.* to timer_ (per Paolo Bonzini)
* Use enum for QEMUClockType
* Put clocks into an array; remove global variables
* Introduce QEMUTimerListGroup - a timerlist of each clock type
* Add a QEMUTimerListGroup to AioContext
* Use a callback on timer modification, rather than binding an
  AioContext into the timerlist
* Make cpus.c iterate over all timerlists when it does a notify
* Make cpus.c icount timeout use soonest timeout
  across all timerlists

Changes since v3:
* Split up QEMUClock and QEMUClock list
* Improve commenting
* Fix comment in vl.c
* Change test/test-aio.c to reflect correct behaviour in aio_poll.

Changes since v2:
* Reordered to remove alarm timers last
* Added prctl(PR_SET_TIMERSLACK, 1, ...)
* Renamed qemu_g_poll_ns to qemu_poll_ns
* Moved declaration of above & drop glib types
* Do not use a global list of qemu clocks
* Add AioContext * to QEMUClock
* Split up conversion to use ppoll and timers
* Indentation fix
* Fix aio_win32.c aio_poll to return progress
* aio_notify / qemu_notify when timers are modified
* change comment in deprecation of clock options

Alex Bligh (16):
  aio / timers: add qemu-timer.c utility functions
  aio / timers: add ppoll support with qemu_poll_ns
  aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer
    slack
  aio / timers: Make qemu_run_timers and qemu_run_all_timers return
    progress
  aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  aio / timers: Untangle include files
  aio / timers: Add QEMUTimerListGroup and helper functions
  aio / timers: Add QEMUTimerListGroup to AioContext
  aio / timers: Add a notify callback to QEMUTimerList
  aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  aio / timers: Convert aio_poll to use AioContext timers' deadline
  aio / timers: Convert mainloop to use timeout
  aio / timers: On timer modification, qemu_notify or aio_notify
  aio / timers: Use all timerlists in icount warp calculations
  aio / timers: Remove alarm timers
  aio / timers: Add test harness for AioContext timers

 aio-posix.c               |   20 +-
 aio-win32.c               |   22 +-
 async.c                   |   20 +-
 configure                 |   37 +++
 cpus.c                    |   44 ++-
 dma-helpers.c             |    1 +
 hw/dma/xilinx_axidma.c    |    1 +
 hw/timer/arm_timer.c      |    1 +
 hw/timer/grlib_gptimer.c  |    2 +
 hw/timer/imx_epit.c       |    1 +
 hw/timer/imx_gpt.c        |    1 +
 hw/timer/lm32_timer.c     |    1 +
 hw/timer/puv3_ost.c       |    1 +
 hw/timer/slavio_timer.c   |    1 +
 hw/timer/xilinx_timer.c   |    1 +
 hw/usb/hcd-uhci.c         |    1 +
 include/block/aio.h       |    4 +
 include/block/block_int.h |    1 +
 include/block/coroutine.h |    2 +
 include/qemu/timer.h      |  122 ++++++-
 main-loop.c               |   49 ++-
 migration-exec.c          |    1 +
 migration-fd.c            |    1 +
 migration-tcp.c           |    1 +
 migration-unix.c          |    1 +
 migration.c               |    1 +
 nbd.c                     |    1 +
 net/net.c                 |    1 +
 net/socket.c              |    1 +
 qemu-coroutine-io.c       |    1 +
 qemu-io-cmds.c            |    1 +
 qemu-nbd.c                |    1 +
 qemu-timer.c              |  803 ++++++++++++++++-----------------------------
 slirp/misc.c              |    1 +
 tests/test-aio.c          |  141 +++++++-
 tests/test-thread-pool.c  |    3 +
 thread-pool.c             |    1 +
 ui/vnc-auth-vencrypt.c    |    2 +-
 ui/vnc-ws.c               |    1 +
 vl.c                      |    4 +-
 40 files changed, 736 insertions(+), 564 deletions(-)

-- 
1.7.9.5

^ permalink raw reply	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06 12:02                           ` Stefan Hajnoczi
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 02/16] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
                                           ` (17 subsequent siblings)
  18 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add qemu_free_clock and expose qemu_new_clock and clock types.

Add utility functions to qemu-timer.c for nanosecond timing.

Add qemu_clock_deadline_ns to calculate deadlines to
nanosecond accuracy.

Add utility function qemu_soonest_timeout to calculate soonest deadline.

Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
milliseconds for when ppoll is not used.
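
For illustration, the intended calling pattern is roughly as follows
(a sketch, not part of the patch; the pair of clocks picked here is
arbitrary):

    /* Soonest of two deadlines, where -1 means "no deadline", then
     * a millisecond fallback for hosts without ppoll.
     */
    int64_t timeout_ns = qemu_soonest_timeout(
        qemu_clock_deadline_ns(vm_clock),
        qemu_clock_deadline_ns(host_clock));
    int timeout_ms = qemu_timeout_ns_to_ms(timeout_ns);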

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |   17 ++++++++++++++
 qemu-timer.c         |   63 +++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 74 insertions(+), 6 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 9dd206c..6171db3 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -11,6 +11,10 @@
 #define SCALE_US 1000
 #define SCALE_NS 1
 
+#define QEMU_CLOCK_REALTIME 0
+#define QEMU_CLOCK_VIRTUAL  1
+#define QEMU_CLOCK_HOST     2
+
 typedef struct QEMUClock QEMUClock;
 typedef void QEMUTimerCB(void *opaque);
 
@@ -32,10 +36,14 @@ extern QEMUClock *vm_clock;
    the virtual clock. */
 extern QEMUClock *host_clock;
 
+QEMUClock *qemu_new_clock(int type);
+void qemu_free_clock(QEMUClock *clock);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
+int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int qemu_timeout_ns_to_ms(int64_t ns);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
@@ -63,6 +71,15 @@ int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
 void cpu_disable_ticks(void);
 
+static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
+{
+    /* we can abuse the fact that -1 (which means infinite) is a maximal
+     * value when cast to unsigned. As this is disgusting, it's kept in
+     * one inline function.
+     */
+    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;
+}
+
 static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
diff --git a/qemu-timer.c b/qemu-timer.c
index b2d95e2..3dfbdbf 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -40,10 +40,6 @@
 /***********************************************************/
 /* timers */
 
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST     2
-
 struct QEMUClock {
     QEMUTimer *active_timers;
 
@@ -231,7 +227,7 @@ QEMUClock *rt_clock;
 QEMUClock *vm_clock;
 QEMUClock *host_clock;
 
-static QEMUClock *qemu_new_clock(int type)
+QEMUClock *qemu_new_clock(int type)
 {
     QEMUClock *clock;
 
@@ -243,6 +239,11 @@ static QEMUClock *qemu_new_clock(int type)
     return clock;
 }
 
+void qemu_free_clock(QEMUClock *clock)
+{
+    g_free(clock);
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
@@ -268,7 +269,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->active_timers) {
+    if (clock->enabled && clock->active_timers) {
         delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
     }
     if (delta < 0) {
@@ -277,6 +278,56 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+/*
+ * As above, but return -1 for no deadline, and do not cap to 2^32
+ * as we know the result is always positive.
+ */
+
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    int64_t delta;
+
+    if (!clock->enabled || !clock->active_timers) {
+        return -1;
+    }
+
+    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+
+    if (delta <= 0) {
+        return 0;
+    }
+
+    return delta;
+}
+
+/* Transition function to convert a nanosecond timeout to ms
+ * This is used where a system does not support ppoll
+ */
+int qemu_timeout_ns_to_ms(int64_t ns)
+{
+    int64_t ms;
+    if (ns < 0) {
+        return -1;
+    }
+
+    if (!ns) {
+        return 0;
+    }
+
+    /* Always round up, because it's better to wait too long than to wait too
+     * little and effectively busy-wait
+     */
+    ms = (ns + SCALE_MS - 1) / SCALE_MS;
+
+    /* To avoid overflow problems, limit this to 2^31, i.e. approx 25 days */
+    if (ms > (int64_t) INT32_MAX) {
+        ms = INT32_MAX;
+    }
+
+    return (int) ms;
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 02/16] aio / timers: add ppoll support with qemu_poll_ns
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
                                           ` (16 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add qemu_poll_ns which works like g_poll but takes a nanosecond
timeout.
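
A sketch of the intended use (fds/nfds stand for whatever the caller
already passes to g_poll):

    /* Block until an fd is ready or the soonest timer deadline
     * passes; a negative timeout blocks indefinitely, as with
     * g_poll's -1.
     */
    int64_t timeout_ns = qemu_clock_deadline_ns(rt_clock); /* -1: none */
    int ret = qemu_poll_ns(fds, nfds, timeout_ns);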

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure            |   19 +++++++++++++++++++
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/configure b/configure
index f0761ea..4e54d1b 100755
--- a/configure
+++ b/configure
@@ -2818,6 +2818,22 @@ if compile_prog "" "" ; then
   dup3=yes
 fi
 
+# check for ppoll support
+ppoll=no
+cat > $TMPC << EOF
+#include <poll.h>
+
+int main(void)
+{
+    struct pollfd pfd = { .fd = 0, .events = 0, .revents = 0 };
+    ppoll(&pfd, 1, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  ppoll=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3809,6 +3825,9 @@ fi
 if test "$dup3" = "yes" ; then
   echo "CONFIG_DUP3=y" >> $config_host_mak
 fi
+if test "$ppoll" = "yes" ; then
+  echo "CONFIG_PPOLL=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 6171db3..f434ecb 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -44,6 +44,7 @@ int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
 int qemu_timeout_ns_to_ms(int64_t ns);
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
 void qemu_clock_warp(QEMUClock *clock);
 
diff --git a/qemu-timer.c b/qemu-timer.c
index 3dfbdbf..b57bd78 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -37,6 +37,10 @@
 #include <mmsystem.h>
 #endif
 
+#ifdef CONFIG_PPOLL
+#include <poll.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -328,6 +332,26 @@ int qemu_timeout_ns_to_ms(int64_t ns)
 }
 
 
+/* qemu implementation of g_poll which uses a nanosecond timeout but is
+ * otherwise identical to g_poll
+ */
+int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout)
+{
+#ifdef CONFIG_PPOLL
+    if (timeout < 0) {
+        return ppoll((struct pollfd *)fds, nfds, NULL, NULL);
+    } else {
+        struct timespec ts;
+        ts.tv_sec = timeout / 1000000000LL;
+        ts.tv_nsec = timeout % 1000000000LL;
+        return ppoll((struct pollfd *)fds, nfds, &ts, NULL);
+    }
+#else
+    return g_poll(fds, nfds, qemu_timeout_ns_to_ms(timeout));
+#endif
+}
+
+
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 02/16] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
                                           ` (15 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Where supported, call prctl(PR_SET_TIMERSLACK, 1, ...) to set a
one nanosecond timer slack, increasing the precision of timer
calls.
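
The new value can be checked at runtime with the matching read-out
prctl; an illustration only, not part of this patch:

    #include <stdio.h>
    #include <sys/prctl.h>

    static void report_timer_slack(void)
    {
        /* PR_GET_TIMERSLACK returns the current slack in ns */
        int slack = prctl(PR_GET_TIMERSLACK, 0, 0, 0, 0);
        printf("timer slack: %d ns\n", slack); /* 1 after init_clocks() */
    }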

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 configure    |   18 ++++++++++++++++++
 qemu-timer.c |    7 +++++++
 2 files changed, 25 insertions(+)

diff --git a/configure b/configure
index 4e54d1b..331139e 100755
--- a/configure
+++ b/configure
@@ -2834,6 +2834,21 @@ if compile_prog "" "" ; then
   ppoll=yes
 fi
 
+# check for prctl(PR_SET_TIMERSLACK , ... ) support
+prctl_pr_set_timerslack=no
+cat > $TMPC << EOF
+#include <sys/prctl.h>
+
+int main(void)
+{
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+    return 0;
+}
+EOF
+if compile_prog "" "" ; then
+  prctl_pr_set_timerslack=yes
+fi
+
 # check for epoll support
 epoll=no
 cat > $TMPC << EOF
@@ -3828,6 +3843,9 @@ fi
 if test "$ppoll" = "yes" ; then
   echo "CONFIG_PPOLL=y" >> $config_host_mak
 fi
+if test "$prctl_pr_set_timerslack" = "yes" ; then
+  echo "CONFIG_PRCTL_PR_SET_TIMERSLACK=y" >> $config_host_mak
+fi
 if test "$epoll" = "yes" ; then
   echo "CONFIG_EPOLL=y" >> $config_host_mak
 fi
diff --git a/qemu-timer.c b/qemu-timer.c
index b57bd78..a8b270f 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -41,6 +41,10 @@
 #include <poll.h>
 #endif
 
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+#include <sys/prctl.h>
+#endif
+
 /***********************************************************/
 /* timers */
 
@@ -512,6 +516,9 @@ void init_clocks(void)
         vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
         host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
     }
+#ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
+    prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
+#endif
 }
 
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (2 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
                                           ` (14 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Make qemu_run_timers and qemu_run_all_timers return progress
so that aio_poll etc. can determine whether a timer has been
run.
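
A sketch of the sort of caller this enables (the function shown is
hypothetical, not part of this patch):

    static bool run_one_iteration(void)
    {
        bool progress = false;

        /* true if any timer callback was invoked */
        progress |= qemu_run_all_timers();

        /* fd handlers would be dispatched here, OR-ing in their
         * progress as well */

        return progress;
    }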

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    4 ++--
 qemu-timer.c         |   18 ++++++++++++------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index f434ecb..a1f2ac8 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -62,8 +62,8 @@ bool qemu_timer_pending(QEMUTimer *ts);
 bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
-void qemu_run_timers(QEMUClock *clock);
-void qemu_run_all_timers(void);
+bool qemu_run_timers(QEMUClock *clock);
+bool qemu_run_all_timers(void);
 void configure_alarms(char const *opt);
 void init_clocks(void);
 int init_timer_alarm(void);
diff --git a/qemu-timer.c b/qemu-timer.c
index a8b270f..714bc92 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -451,13 +451,14 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-void qemu_run_timers(QEMUClock *clock)
+bool qemu_run_timers(QEMUClock *clock)
 {
     QEMUTimer *ts;
     int64_t current_time;
+    bool progress = false;
    
     if (!clock->enabled)
-        return;
+        return progress;
 
     current_time = qemu_get_clock_ns(clock);
     for(;;) {
@@ -471,7 +472,9 @@ void qemu_run_timers(QEMUClock *clock)
 
         /* run the callback (the timer list can be modified) */
         ts->cb(ts->opaque);
+        progress = true;
     }
+    return progress;
 }
 
 int64_t qemu_get_clock_ns(QEMUClock *clock)
@@ -526,20 +529,23 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
     return qemu_timer_pending(ts) ? ts->expire_time : -1;
 }
 
-void qemu_run_all_timers(void)
+bool qemu_run_all_timers(void)
 {
+    bool progress = false;
     alarm_timer->pending = false;
 
     /* vm time timers */
-    qemu_run_timers(vm_clock);
-    qemu_run_timers(rt_clock);
-    qemu_run_timers(host_clock);
+    progress |= qemu_run_timers(vm_clock);
+    progress |= qemu_run_timers(rt_clock);
+    progress |= qemu_run_timers(host_clock);
 
     /* rearm timer, if not periodic */
     if (alarm_timer->expired) {
         alarm_timer->expired = false;
         qemu_rearm_alarm_timer(alarm_timer);
     }
+
+    return progress;
 }
 
 #ifdef _WIN32
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (3 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06 12:26                           ` Stefan Hajnoczi
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 06/16] aio / timers: Untangle include files Alex Bligh
                                           ` (13 subsequent siblings)
  18 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Split QEMUClock into QEMUClock and QEMUTimerList so that we can
have more than one QEMUTimerList associated with the same clock.

Introduce a default_timerlist concept and make existing
qemu_clock_* calls that actually should operate on a QEMUTimerList
call the relevant QEMUTimerList implementations, using the clock's
default timerlist. This vastly reduces the invasiveness of this
change and means the API stays constant for existing users.

Introduce a list of QEMUTimerLists associated with each clock
so that reenabling the clock can cause all the notifiers
to be called. Note the code to do the notifications is added
in a later patch.

Switch QEMUClockType to an enum. Remove global variables vm_clock,
host_clock and rt_clock and add compatibility defines. Do not
fix qemu_next_alarm_deadline as it's going to be deleted.
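
With the split, a thread can manage its own timer list; a minimal
sketch using the new calling conventions (my_cb/my_opaque are
placeholders):

    /* Private timer list on the virtual clock: arm a timer 1ms out,
     * then tear everything down.
     */
    QEMUTimerList *tl = timerlist_new(QEMU_CLOCK_VIRTUAL);
    QEMUTimer *t = timer_new_ns(tl, my_cb, my_opaque);
    timer_mod(t, qemu_get_clock_ns(timerlist_get_clock(tl)) + SCALE_MS);
    timer_del(t);
    timer_free(t);
    timerlist_free(tl);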

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |   91 +++++++++++++++++++++++--
 qemu-timer.c         |  185 ++++++++++++++++++++++++++++++++++++++------------
 2 files changed, 224 insertions(+), 52 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index a1f2ac8..4bf9a9c 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -11,38 +11,65 @@
 #define SCALE_US 1000
 #define SCALE_NS 1
 
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST     2
+typedef enum {
+    QEMU_CLOCK_REALTIME = 0,
+    QEMU_CLOCK_VIRTUAL = 1,
+    QEMU_CLOCK_HOST = 2,
+    QEMU_CLOCK_MAX
+} QEMUClockType;
 
 typedef struct QEMUClock QEMUClock;
+typedef struct QEMUTimerList QEMUTimerList;
 typedef void QEMUTimerCB(void *opaque);
 
+extern QEMUClock *qemu_clocks[QEMU_CLOCK_MAX];
+
+static inline QEMUClock *qemu_get_clock(QEMUClockType type)
+{
+    return qemu_clocks[type];
+}
+
+/* These three clocks are maintained here with separate variable
+   names for compatibility only.
+*/
+
 /* The real time clock should be used only for stuff which does not
    change the virtual machine state, as it is run even if the virtual
    machine is stopped. The real time clock has a frequency of 1000
    Hz. */
-extern QEMUClock *rt_clock;
+#define rt_clock (qemu_get_clock(QEMU_CLOCK_REALTIME))
 
 /* The virtual clock is only run during the emulation. It is stopped
    when the virtual machine is stopped. Virtual timers use a high
    precision clock, usually cpu cycles (use ticks_per_sec). */
-extern QEMUClock *vm_clock;
+#define vm_clock (qemu_get_clock(QEMU_CLOCK_VIRTUAL))
 
 /* The host clock should be use for device models that emulate accurate
    real time sources. It will continue to run when the virtual machine
    is suspended, and it will reflect system time changes the host may
    undergo (e.g. due to NTP). The host clock has the same precision as
    the virtual clock. */
-extern QEMUClock *host_clock;
+#define host_clock (qemu_get_clock(QEMU_CLOCK_HOST))
 
-QEMUClock *qemu_new_clock(int type);
+QEMUClock *qemu_new_clock(QEMUClockType type);
 void qemu_free_clock(QEMUClock *clock);
 int64_t qemu_get_clock_ns(QEMUClock *clock);
 int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+bool qemu_clock_use_for_deadline(QEMUClock *clock);
+QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
+
+QEMUTimerList *timerlist_new(QEMUClockType type);
+void timerlist_free(QEMUTimerList *tl);
+int64_t timerlist_has_timers(QEMUTimerList *tl);
+int64_t timerlist_expired(QEMUTimerList *tl);
+int64_t timerlist_deadline(QEMUTimerList *tl);
+int64_t timerlist_deadline_ns(QEMUTimerList *tl);
+QEMUClock *timerlist_get_clock(QEMUTimerList *tl);
+bool timerlist_run_timers(QEMUTimerList *tl);
+
 int qemu_timeout_ns_to_ms(int64_t ns);
 int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
@@ -54,6 +81,8 @@ void qemu_unregister_clock_reset_notifier(QEMUClock *clock,
 
 QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
                           QEMUTimerCB *cb, void *opaque);
+QEMUTimer *timer_new(QEMUTimerList *tl, int scale,
+                     QEMUTimerCB *cb, void *opaque);
 void qemu_free_timer(QEMUTimer *ts);
 void qemu_del_timer(QEMUTimer *ts);
 void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time);
@@ -62,6 +91,42 @@ bool qemu_timer_pending(QEMUTimer *ts);
 bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
 
+/* New format calling conventions for timers */
+static inline void timer_free(QEMUTimer *ts)
+{
+    qemu_free_timer(ts);
+}
+
+static inline void timer_del(QEMUTimer *ts)
+{
+    qemu_del_timer(ts);
+}
+
+static inline void timer_mod_ns(QEMUTimer *ts, int64_t expire_time)
+{
+    qemu_mod_timer_ns(ts, expire_time);
+}
+
+static inline void timer_mod(QEMUTimer *ts, int64_t expire_timer)
+{
+    qemu_mod_timer(ts, expire_timer);
+}
+
+static inline bool timer_pending(QEMUTimer *ts)
+{
+    return qemu_timer_pending(ts);
+}
+
+static inline bool timer_expired(QEMUTimer *timer_head, int64_t current_time)
+{
+    return qemu_timer_expired(timer_head, current_time);
+}
+
+static inline uint64_t timer_expire_time_ns(QEMUTimer *ts)
+{
+    return qemu_timer_expire_time_ns(ts);
+}
+
 bool qemu_run_timers(QEMUClock *clock);
 bool qemu_run_all_timers(void);
 void configure_alarms(char const *opt);
@@ -87,12 +152,24 @@ static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
     return qemu_new_timer(clock, SCALE_NS, cb, opaque);
 }
 
+static inline QEMUTimer *timer_new_ns(QEMUTimerList *tl, QEMUTimerCB *cb,
+                                      void *opaque)
+{
+    return timer_new(tl, SCALE_NS, cb, opaque);
+}
+
 static inline QEMUTimer *qemu_new_timer_ms(QEMUClock *clock, QEMUTimerCB *cb,
                                            void *opaque)
 {
     return qemu_new_timer(clock, SCALE_MS, cb, opaque);
 }
 
+static inline QEMUTimer *timer_new_ms(QEMUTimerList *tl, QEMUTimerCB *cb,
+                                      void *opaque)
+{
+    return timer_new(tl, SCALE_MS, cb, opaque);
+}
+
 static inline int64_t qemu_get_clock_ms(QEMUClock *clock)
 {
     return qemu_get_clock_ns(clock) / SCALE_MS;
diff --git a/qemu-timer.c b/qemu-timer.c
index 714bc92..b2d18e4 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -49,18 +49,34 @@
 /* timers */
 
 struct QEMUClock {
-    QEMUTimer *active_timers;
+    QEMUTimerList *default_timerlist;
+    QLIST_HEAD(, QEMUTimerList) timerlists;
 
     NotifierList reset_notifiers;
     int64_t last;
 
-    int type;
+    QEMUClockType type;
     bool enabled;
 };
 
+QEMUClock *qemu_clocks[QEMU_CLOCK_MAX];
+
+/* A QEMUTimerList is a list of timers attached to a clock. More
+ * than one QEMUTimerList can be attached to each clock, for instance
+ * used by different AioContexts / threads. Each clock also has
+ * a list of the QEMUTimerLists associated with it, in order that
+ * reenabling the clock can call all the notifiers.
+ */
+
+struct QEMUTimerList {
+    QEMUClock *clock;
+    QEMUTimer *active_timers;
+    QLIST_ENTRY(QEMUTimerList) list;
+};
+
 struct QEMUTimer {
     int64_t expire_time;	/* in nanoseconds */
-    QEMUClock *clock;
+    QEMUTimerList *tl;
     QEMUTimerCB *cb;
     void *opaque;
     QEMUTimer *next;
@@ -93,21 +109,25 @@ static int64_t qemu_next_alarm_deadline(void)
 {
     int64_t delta = INT64_MAX;
     int64_t rtdelta;
+    int64_t hdelta;
 
-    if (!use_icount && vm_clock->enabled && vm_clock->active_timers) {
-        delta = vm_clock->active_timers->expire_time -
-                     qemu_get_clock_ns(vm_clock);
+    if (!use_icount && vm_clock->enabled &&
+        vm_clock->default_timerlist->active_timers) {
+        delta = vm_clock->default_timerlist->active_timers->expire_time -
+            qemu_get_clock_ns(vm_clock);
     }
-    if (host_clock->enabled && host_clock->active_timers) {
-        int64_t hdelta = host_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(host_clock);
+    if (host_clock->enabled &&
+        host_clock->default_timerlist->active_timers) {
+        hdelta = host_clock->default_timerlist->active_timers->expire_time -
+            qemu_get_clock_ns(host_clock);
         if (hdelta < delta) {
             delta = hdelta;
         }
     }
-    if (rt_clock->enabled && rt_clock->active_timers) {
-        rtdelta = (rt_clock->active_timers->expire_time -
-                 qemu_get_clock_ns(rt_clock));
+    if (rt_clock->enabled &&
+        rt_clock->default_timerlist->active_timers) {
+        rtdelta = (rt_clock->default_timerlist->active_timers->expire_time -
+                   qemu_get_clock_ns(rt_clock));
         if (rtdelta < delta) {
             delta = rtdelta;
         }
@@ -231,11 +251,33 @@ next:
     }
 }
 
-QEMUClock *rt_clock;
-QEMUClock *vm_clock;
-QEMUClock *host_clock;
+static QEMUTimerList *timerlist_new_from_clock(QEMUClock *clock)
+{
+    QEMUTimerList *tl;
+
+    tl = g_malloc0(sizeof(QEMUTimerList));
+    tl->clock = clock;
+    QLIST_INSERT_HEAD(&clock->timerlists, tl, list);
+    return tl;
+}
+
+QEMUTimerList *timerlist_new(QEMUClockType type)
+{
+    return timerlist_new_from_clock(qemu_get_clock(type));
+}
 
-QEMUClock *qemu_new_clock(int type)
+void timerlist_free(QEMUTimerList *tl)
+{
+    if (tl->clock) {
+        QLIST_REMOVE(tl, list);
+        if (tl->clock->default_timerlist == tl) {
+            tl->clock->default_timerlist = NULL;
+        }
+    }
+    g_free(tl);
+}
+
+QEMUClock *qemu_new_clock(QEMUClockType type)
 {
     QEMUClock *clock;
 
@@ -244,14 +286,21 @@ QEMUClock *qemu_new_clock(int type)
     clock->enabled = true;
     clock->last = INT64_MIN;
     notifier_list_init(&clock->reset_notifiers);
+    clock->default_timerlist = timerlist_new_from_clock(clock);
     return clock;
 }
 
 void qemu_free_clock(QEMUClock *clock)
 {
+    timerlist_free(clock->default_timerlist);
     g_free(clock);
 }
 
+bool qemu_clock_use_for_deadline(QEMUClock *clock)
+{
+    return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
@@ -261,24 +310,34 @@ void qemu_clock_enable(QEMUClock *clock, bool enabled)
     }
 }
 
+int64_t timerlist_has_timers(QEMUTimerList *tl)
+{
+    return !!tl->active_timers;
+}
+
 int64_t qemu_clock_has_timers(QEMUClock *clock)
 {
-    return !!clock->active_timers;
+    return timerlist_has_timers(clock->default_timerlist);
+}
+
+int64_t timerlist_expired(QEMUTimerList *tl)
+{
+    return (tl->active_timers &&
+            tl->active_timers->expire_time < qemu_get_clock_ns(tl->clock));
 }
 
 int64_t qemu_clock_expired(QEMUClock *clock)
 {
-    return (clock->active_timers &&
-            clock->active_timers->expire_time < qemu_get_clock_ns(clock));
+    return timerlist_expired(clock->default_timerlist);
 }
 
-int64_t qemu_clock_deadline(QEMUClock *clock)
+int64_t timerlist_deadline(QEMUTimerList *tl)
 {
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
-    if (clock->enabled && clock->active_timers) {
-        delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+    if (tl->clock->enabled && tl->active_timers) {
+        delta = tl->active_timers->expire_time - qemu_get_clock_ns(tl->clock);
     }
     if (delta < 0) {
         delta = 0;
@@ -286,20 +345,25 @@ int64_t qemu_clock_deadline(QEMUClock *clock)
     return delta;
 }
 
+int64_t qemu_clock_deadline(QEMUClock *clock)
+{
+    return timerlist_deadline(clock->default_timerlist);
+}
+
 /*
  * As above, but return -1 for no deadline, and do not cap to 2^32
  * as we know the result is always positive.
  */
 
-int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+int64_t timerlist_deadline_ns(QEMUTimerList *tl)
 {
     int64_t delta;
 
-    if (!clock->enabled || !clock->active_timers) {
+    if (!tl->clock->enabled || !tl->active_timers) {
         return -1;
     }
 
-    delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+    delta = tl->active_timers->expire_time - qemu_get_clock_ns(tl->clock);
 
     if (delta <= 0) {
         return 0;
@@ -308,6 +372,21 @@ int64_t qemu_clock_deadline_ns(QEMUClock *clock)
     return delta;
 }
 
+int64_t qemu_clock_deadline_ns(QEMUClock *clock)
+{
+    return timerlist_deadline_ns(clock->default_timerlist);
+}
+
+QEMUClock *timerlist_get_clock(QEMUTimerList *tl)
+{
+    return tl->clock;
+}
+
+QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock)
+{
+    return clock->default_timerlist;
+}
+
 /* Transition function to convert a nanosecond timeout to ms
  * This is used where a system does not support ppoll
  */
@@ -356,19 +435,26 @@ int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout)
 }
 
 
-QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
-                          QEMUTimerCB *cb, void *opaque)
+QEMUTimer *timer_new(QEMUTimerList *tl, int scale,
+                     QEMUTimerCB *cb, void *opaque)
 {
     QEMUTimer *ts;
 
     ts = g_malloc0(sizeof(QEMUTimer));
-    ts->clock = clock;
+    ts->tl = tl;
     ts->cb = cb;
     ts->opaque = opaque;
     ts->scale = scale;
     return ts;
 }
 
+QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
+                          QEMUTimerCB *cb, void *opaque)
+{
+    return timer_new(clock->default_timerlist,
+                     scale, cb, opaque);
+}
+
 void qemu_free_timer(QEMUTimer *ts)
 {
     g_free(ts);
@@ -381,7 +467,7 @@ void qemu_del_timer(QEMUTimer *ts)
 
     /* NOTE: this code must be signal safe because
        qemu_timer_expired() can be called from a signal. */
-    pt = &ts->clock->active_timers;
+    pt = &ts->tl->active_timers;
     for(;;) {
         t = *pt;
         if (!t)
@@ -405,7 +491,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
     /* add the timer in the sorted list */
     /* NOTE: this code must be signal safe because
        qemu_timer_expired() can be called from a signal. */
-    pt = &ts->clock->active_timers;
+    pt = &ts->tl->active_timers;
     for(;;) {
         t = *pt;
         if (!qemu_timer_expired_ns(t, expire_time)) {
@@ -418,12 +504,12 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
     *pt = ts;
 
     /* Rearm if necessary  */
-    if (pt == &ts->clock->active_timers) {
+    if (pt == &ts->tl->active_timers) {
         if (!alarm_timer->pending) {
             qemu_rearm_alarm_timer(alarm_timer);
         }
         /* Interrupt execution to force deadline recalculation.  */
-        qemu_clock_warp(ts->clock);
+        qemu_clock_warp(ts->tl->clock);
         if (use_icount) {
             qemu_notify_event();
         }
@@ -438,7 +524,7 @@ void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
 bool qemu_timer_pending(QEMUTimer *ts)
 {
     QEMUTimer *t;
-    for (t = ts->clock->active_timers; t != NULL; t = t->next) {
+    for (t = ts->tl->active_timers; t != NULL; t = t->next) {
         if (t == ts) {
             return true;
         }
@@ -451,23 +537,24 @@ bool qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
     return qemu_timer_expired_ns(timer_head, current_time * timer_head->scale);
 }
 
-bool qemu_run_timers(QEMUClock *clock)
+bool timerlist_run_timers(QEMUTimerList *tl)
 {
     QEMUTimer *ts;
     int64_t current_time;
     bool progress = false;
    
-    if (!clock->enabled)
+    if (!tl->clock->enabled) {
         return progress;
+    }
 
-    current_time = qemu_get_clock_ns(clock);
+    current_time = qemu_get_clock_ns(tl->clock);
     for(;;) {
-        ts = clock->active_timers;
+        ts = tl->active_timers;
         if (!qemu_timer_expired_ns(ts, current_time)) {
             break;
         }
         /* remove timer from the list before calling the callback */
-        clock->active_timers = ts->next;
+        tl->active_timers = ts->next;
         ts->next = NULL;
 
         /* run the callback (the timer list can be modified) */
@@ -477,6 +564,11 @@ bool qemu_run_timers(QEMUClock *clock)
     return progress;
 }
 
+bool qemu_run_timers(QEMUClock *clock)
+{
+    return timerlist_run_timers(clock->default_timerlist);
+}
+
 int64_t qemu_get_clock_ns(QEMUClock *clock)
 {
     int64_t now, last;
@@ -514,11 +606,13 @@ void qemu_unregister_clock_reset_notifier(QEMUClock *clock, Notifier *notifier)
 
 void init_clocks(void)
 {
-    if (!rt_clock) {
-        rt_clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
-        vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
-        host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
+    QEMUClockType type;
+    for (type = 0; type < QEMU_CLOCK_MAX; type++) {
+        if (!qemu_clocks[type]) {
+            qemu_clocks[type] = qemu_new_clock(type);
+        }
     }
+
 #ifdef CONFIG_PRCTL_PR_SET_TIMERSLACK
     prctl(PR_SET_TIMERSLACK, 1, 0, 0, 0);
 #endif
@@ -535,9 +629,10 @@ bool qemu_run_all_timers(void)
     alarm_timer->pending = false;
 
     /* vm time timers */
-    progress |= qemu_run_timers(vm_clock);
-    progress |= qemu_run_timers(rt_clock);
-    progress |= qemu_run_timers(host_clock);
+    QEMUClockType type;
+    for (type = 0; type < QEMU_CLOCK_MAX; type++) {
+        progress |= qemu_run_timers(qemu_get_clock(type));
+    }
 
     /* rearm timer, if not periodic */
     if (alarm_timer->expired) {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 06/16] aio / timers: Untangle include files
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (4 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 07/16] aio / timers: Add QEMUTimerListGroup and helper functions Alex Bligh
                                           ` (12 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

include/qemu/timer.h has no need to include main-loop.h, and doing
so causes an issue for the next patch. Unfortunately various files
assume that including qemu/timer.h will pull in main-loop.h.
Untangle this mess.
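
Most of the churn below is .c files gaining an explicit
qemu/main-loop.h include; the one structural trick is in
include/block/coroutine.h, which forward-declares AioContext rather
than pulling in a header for it. A minimal sketch of that idiom,
with hypothetical names:

    /* Forward declaration: no definition of AioContext needed here */
    typedef struct AioContext AioContext;

    typedef struct Waiter {          /* hypothetical consumer */
        AioContext *ctx;             /* pointer use only */
    } Waiter;

    void waiter_attach(Waiter *w, AioContext *ctx);  /* prototypes too */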

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 dma-helpers.c             |    1 +
 hw/dma/xilinx_axidma.c    |    1 +
 hw/timer/arm_timer.c      |    1 +
 hw/timer/grlib_gptimer.c  |    1 +
 hw/timer/imx_epit.c       |    1 +
 hw/timer/imx_gpt.c        |    1 +
 hw/timer/lm32_timer.c     |    1 +
 hw/timer/puv3_ost.c       |    1 +
 hw/timer/slavio_timer.c   |    1 +
 hw/timer/xilinx_timer.c   |    1 +
 hw/usb/hcd-uhci.c         |    1 +
 include/block/block_int.h |    1 +
 include/block/coroutine.h |    2 ++
 include/qemu/timer.h      |    1 -
 migration-exec.c          |    1 +
 migration-fd.c            |    1 +
 migration-tcp.c           |    1 +
 migration-unix.c          |    1 +
 migration.c               |    1 +
 nbd.c                     |    1 +
 net/net.c                 |    1 +
 net/socket.c              |    1 +
 qemu-coroutine-io.c       |    1 +
 qemu-io-cmds.c            |    1 +
 qemu-nbd.c                |    1 +
 slirp/misc.c              |    1 +
 thread-pool.c             |    1 +
 ui/vnc-auth-vencrypt.c    |    2 +-
 ui/vnc-ws.c               |    1 +
 29 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 499550f..c9620a5 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -11,6 +11,7 @@
 #include "trace.h"
 #include "qemu/range.h"
 #include "qemu/thread.h"
+#include "qemu/main-loop.h"
 
 /* #define DEBUG_IOMMU */
 
diff --git a/hw/dma/xilinx_axidma.c b/hw/dma/xilinx_axidma.c
index a48e3ba..59e8e35 100644
--- a/hw/dma/xilinx_axidma.c
+++ b/hw/dma/xilinx_axidma.c
@@ -27,6 +27,7 @@
 #include "hw/ptimer.h"
 #include "qemu/log.h"
 #include "qapi/qmp/qerror.h"
+#include "qemu/main-loop.h"
 
 #include "hw/stream.h"
 
diff --git a/hw/timer/arm_timer.c b/hw/timer/arm_timer.c
index acfea59..a47afde 100644
--- a/hw/timer/arm_timer.c
+++ b/hw/timer/arm_timer.c
@@ -12,6 +12,7 @@
 #include "qemu-common.h"
 #include "hw/qdev.h"
 #include "hw/ptimer.h"
+#include "qemu/main-loop.h"
 
 /* Common timer implementation.  */
 
diff --git a/hw/timer/grlib_gptimer.c b/hw/timer/grlib_gptimer.c
index 7c1055a..74c16d6 100644
--- a/hw/timer/grlib_gptimer.c
+++ b/hw/timer/grlib_gptimer.c
@@ -25,6 +25,7 @@
 #include "hw/sysbus.h"
 #include "qemu/timer.h"
 #include "hw/ptimer.h"
+#include "qemu/main-loop.h"
 
 #include "trace.h"
 
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
index 117dc7b..efe2ff9 100644
--- a/hw/timer/imx_epit.c
+++ b/hw/timer/imx_epit.c
@@ -18,6 +18,7 @@
 #include "hw/ptimer.h"
 #include "hw/sysbus.h"
 #include "hw/arm/imx.h"
+#include "qemu/main-loop.h"
 
 #define TYPE_IMX_EPIT "imx.epit"
 
diff --git a/hw/timer/imx_gpt.c b/hw/timer/imx_gpt.c
index 87db0e1..f2d1975 100644
--- a/hw/timer/imx_gpt.c
+++ b/hw/timer/imx_gpt.c
@@ -18,6 +18,7 @@
 #include "hw/ptimer.h"
 #include "hw/sysbus.h"
 #include "hw/arm/imx.h"
+#include "qemu/main-loop.h"
 
 #define TYPE_IMX_GPT "imx.gpt"
 
diff --git a/hw/timer/lm32_timer.c b/hw/timer/lm32_timer.c
index 986e6a1..8ed138c 100644
--- a/hw/timer/lm32_timer.c
+++ b/hw/timer/lm32_timer.c
@@ -27,6 +27,7 @@
 #include "qemu/timer.h"
 #include "hw/ptimer.h"
 #include "qemu/error-report.h"
+#include "qemu/main-loop.h"
 
 #define DEFAULT_FREQUENCY (50*1000000)
 
diff --git a/hw/timer/puv3_ost.c b/hw/timer/puv3_ost.c
index 4bd2b76..fa9eefd 100644
--- a/hw/timer/puv3_ost.c
+++ b/hw/timer/puv3_ost.c
@@ -10,6 +10,7 @@
  */
 #include "hw/sysbus.h"
 #include "hw/ptimer.h"
+#include "qemu/main-loop.h"
 
 #undef DEBUG_PUV3
 #include "hw/unicore32/puv3.h"
diff --git a/hw/timer/slavio_timer.c b/hw/timer/slavio_timer.c
index 33e8f6c..f75b914 100644
--- a/hw/timer/slavio_timer.c
+++ b/hw/timer/slavio_timer.c
@@ -27,6 +27,7 @@
 #include "hw/ptimer.h"
 #include "hw/sysbus.h"
 #include "trace.h"
+#include "qemu/main-loop.h"
 
 /*
  * Registers of hardware timer in sun4m.
diff --git a/hw/timer/xilinx_timer.c b/hw/timer/xilinx_timer.c
index 5f2c902..6113b97 100644
--- a/hw/timer/xilinx_timer.c
+++ b/hw/timer/xilinx_timer.c
@@ -25,6 +25,7 @@
 #include "hw/sysbus.h"
 #include "hw/ptimer.h"
 #include "qemu/log.h"
+#include "qemu/main-loop.h"
 
 #define D(x)
 
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index ac82833..ec51883 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -32,6 +32,7 @@
 #include "qemu/iov.h"
 #include "sysemu/dma.h"
 #include "trace.h"
+#include "qemu/main-loop.h"
 
 //#define DEBUG
 //#define DEBUG_DUMP_DATA
diff --git a/include/block/block_int.h b/include/block/block_int.h
index e45f2a0..3d798ce 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -34,6 +34,7 @@
 #include "monitor/monitor.h"
 #include "qemu/hbitmap.h"
 #include "block/snapshot.h"
+#include "qemu/main-loop.h"
 
 #define BLOCK_FLAG_ENCRYPT          1
 #define BLOCK_FLAG_COMPAT6          4
diff --git a/include/block/coroutine.h b/include/block/coroutine.h
index 1f2db3e..9197bfb 100644
--- a/include/block/coroutine.h
+++ b/include/block/coroutine.h
@@ -19,6 +19,8 @@
 #include "qemu/queue.h"
 #include "qemu/timer.h"
 
+typedef struct AioContext AioContext;
+
 /**
  * Coroutines are a mechanism for stack switching and can be used for
  * cooperative userspace threading.  These functions provide a simple but
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 4bf9a9c..93c4a91 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -2,7 +2,6 @@
 #define QEMU_TIMER_H
 
 #include "qemu-common.h"
-#include "qemu/main-loop.h"
 #include "qemu/notify.h"
 
 /* timers */
diff --git a/migration-exec.c b/migration-exec.c
index deab4e3..4790247 100644
--- a/migration-exec.c
+++ b/migration-exec.c
@@ -17,6 +17,7 @@
 
 #include "qemu-common.h"
 #include "qemu/sockets.h"
+#include "qemu/main-loop.h"
 #include "migration/migration.h"
 #include "migration/qemu-file.h"
 #include "block/block.h"
diff --git a/migration-fd.c b/migration-fd.c
index 3d4613c..d2e523a 100644
--- a/migration-fd.c
+++ b/migration-fd.c
@@ -14,6 +14,7 @@
  */
 
 #include "qemu-common.h"
+#include "qemu/main-loop.h"
 #include "qemu/sockets.h"
 #include "migration/migration.h"
 #include "monitor/monitor.h"
diff --git a/migration-tcp.c b/migration-tcp.c
index b20ee58..782572d 100644
--- a/migration-tcp.c
+++ b/migration-tcp.c
@@ -18,6 +18,7 @@
 #include "migration/migration.h"
 #include "migration/qemu-file.h"
 #include "block/block.h"
+#include "qemu/main-loop.h"
 
 //#define DEBUG_MIGRATION_TCP
 
diff --git a/migration-unix.c b/migration-unix.c
index 94b7022..651fc5b 100644
--- a/migration-unix.c
+++ b/migration-unix.c
@@ -15,6 +15,7 @@
 
 #include "qemu-common.h"
 #include "qemu/sockets.h"
+#include "qemu/main-loop.h"
 #include "migration/migration.h"
 #include "migration/qemu-file.h"
 #include "block/block.h"
diff --git a/migration.c b/migration.c
index 1402fa7..ac200ed 100644
--- a/migration.c
+++ b/migration.c
@@ -14,6 +14,7 @@
  */
 
 #include "qemu-common.h"
+#include "qemu/main-loop.h"
 #include "migration/migration.h"
 #include "monitor/monitor.h"
 #include "migration/qemu-file.h"
diff --git a/nbd.c b/nbd.c
index 2606403..0fd0583 100644
--- a/nbd.c
+++ b/nbd.c
@@ -38,6 +38,7 @@
 
 #include "qemu/sockets.h"
 #include "qemu/queue.h"
+#include "qemu/main-loop.h"
 
 //#define DEBUG_NBD
 
diff --git a/net/net.c b/net/net.c
index c0d61bf..1148592 100644
--- a/net/net.c
+++ b/net/net.c
@@ -36,6 +36,7 @@
 #include "qmp-commands.h"
 #include "hw/qdev.h"
 #include "qemu/iov.h"
+#include "qemu/main-loop.h"
 #include "qapi-visit.h"
 #include "qapi/opts-visitor.h"
 #include "qapi/dealloc-visitor.h"
diff --git a/net/socket.c b/net/socket.c
index 87af1d3..e61309d 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -31,6 +31,7 @@
 #include "qemu/option.h"
 #include "qemu/sockets.h"
 #include "qemu/iov.h"
+#include "qemu/main-loop.h"
 
 typedef struct NetSocketState {
     NetClientState nc;
diff --git a/qemu-coroutine-io.c b/qemu-coroutine-io.c
index c4df35a..054ca70 100644
--- a/qemu-coroutine-io.c
+++ b/qemu-coroutine-io.c
@@ -26,6 +26,7 @@
 #include "qemu/sockets.h"
 #include "block/coroutine.h"
 #include "qemu/iov.h"
+#include "qemu/main-loop.h"
 
 ssize_t coroutine_fn
 qemu_co_sendv_recvv(int sockfd, struct iovec *iov, unsigned iov_cnt,
diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
index ffbcf31..f91b6c4 100644
--- a/qemu-io-cmds.c
+++ b/qemu-io-cmds.c
@@ -10,6 +10,7 @@
 
 #include "qemu-io.h"
 #include "block/block_int.h"
+#include "qemu/main-loop.h"
 
 #define CMD_NOFILE_OK   0x01
 
diff --git a/qemu-nbd.c b/qemu-nbd.c
index 9c31d45..f044546 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -19,6 +19,7 @@
 #include "qemu-common.h"
 #include "block/block.h"
 #include "block/nbd.h"
+#include "qemu/main-loop.h"
 
 #include <stdarg.h>
 #include <stdio.h>
diff --git a/slirp/misc.c b/slirp/misc.c
index 0bcc481..c0d4899 100644
--- a/slirp/misc.c
+++ b/slirp/misc.c
@@ -9,6 +9,7 @@
 #include <libslirp.h>
 
 #include "monitor/monitor.h"
+#include "qemu/main-loop.h"
 
 #ifdef DEBUG
 int slirp_debug = DBG_CALL|DBG_MISC|DBG_ERROR;
diff --git a/thread-pool.c b/thread-pool.c
index 0ebd4c2..25bfa41 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -23,6 +23,7 @@
 #include "block/block_int.h"
 #include "qemu/event_notifier.h"
 #include "block/thread-pool.h"
+#include "qemu/main-loop.h"
 
 static void do_spawn_thread(ThreadPool *pool);
 
diff --git a/ui/vnc-auth-vencrypt.c b/ui/vnc-auth-vencrypt.c
index c59b188..bc7032e 100644
--- a/ui/vnc-auth-vencrypt.c
+++ b/ui/vnc-auth-vencrypt.c
@@ -25,7 +25,7 @@
  */
 
 #include "vnc.h"
-
+#include "qemu/main-loop.h"
 
 static void start_auth_vencrypt_subauth(VncState *vs)
 {
diff --git a/ui/vnc-ws.c b/ui/vnc-ws.c
index df89315..e304baf 100644
--- a/ui/vnc-ws.c
+++ b/ui/vnc-ws.c
@@ -19,6 +19,7 @@
  */
 
 #include "vnc.h"
+#include "qemu/main-loop.h"
 
 #ifdef CONFIG_VNC_TLS
 #include "qemu/sockets.h"
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 07/16] aio / timers: Add QEMUTimerListGroup and helper functions
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (5 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 06/16] aio / timers: Untangle include files Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext Alex Bligh
                                           ` (11 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add QEMUTimerListGroup and helper functions, to represent a group
of QEMUTimerLists, one per clock type. Add a default
QEMUTimerListGroup representing the default timer lists, which are
not associated with any other object (e.g. an AioContext as added
by future patches).
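
timerlistgroup_deadline_ns below leans on qemu_soonest_timeout(),
added earlier in the series but not visible in this excerpt. A
sketch of its expected behaviour - combining two nanosecond
timeouts where -1 means "no deadline" - might look like:

    #include <stdint.h>

    /* Cast to uint64_t so that -1 compares greatest: any real
     * deadline wins over "no deadline". */
    static inline int64_t qemu_soonest_timeout(int64_t t1, int64_t t2)
    {
        return ((uint64_t)t1 < (uint64_t)t2) ? t1 : t2;
    }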

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    7 +++++++
 qemu-timer.c         |   41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 93c4a91..466ae98 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -19,8 +19,10 @@ typedef enum {
 
 typedef struct QEMUClock QEMUClock;
 typedef struct QEMUTimerList QEMUTimerList;
+typedef QEMUTimerList *QEMUTimerListGroup[QEMU_CLOCK_MAX];
 typedef void QEMUTimerCB(void *opaque);
 
+extern QEMUTimerListGroup qemu_default_tlg;
 extern QEMUClock *qemu_clocks[QEMU_CLOCK_MAX];
 
 static inline QEMUClock *qemu_get_clock(QEMUClockType type)
@@ -69,6 +71,11 @@ int64_t timerlist_deadline_ns(QEMUTimerList *tl);
 QEMUClock *timerlist_get_clock(QEMUTimerList *tl);
 bool timerlist_run_timers(QEMUTimerList *tl);
 
+void timerlistgroup_init(QEMUTimerListGroup tlg);
+void timerlistgroup_deinit(QEMUTimerListGroup tlg);
+bool timerlistgroup_run_timers(QEMUTimerListGroup tlg);
+int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg);
+
 int qemu_timeout_ns_to_ms(int64_t ns);
 int qemu_poll_ns(GPollFD *fds, uint nfds, int64_t timeout);
 void qemu_clock_enable(QEMUClock *clock, bool enabled);
diff --git a/qemu-timer.c b/qemu-timer.c
index b2d18e4..83d23e3 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -59,6 +59,7 @@ struct QEMUClock {
     bool enabled;
 };
 
+QEMUTimerListGroup qemu_default_tlg;
 QEMUClock *qemu_clocks[QEMU_CLOCK_MAX];
 
 /* A QEMUTimerList is a list of timers attached to a clock. More
@@ -569,6 +570,45 @@ bool qemu_run_timers(QEMUClock *clock)
     return timerlist_run_timers(clock->default_timerlist);
 }
 
+void timerlistgroup_init(QEMUTimerListGroup tlg)
+{
+    QEMUClockType clock;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        tlg[clock] = timerlist_new(clock);
+    }
+}
+
+void timerlistgroup_deinit(QEMUTimerListGroup tlg)
+{
+    QEMUClockType clock;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        timerlist_free(tlg[clock]);
+    }
+}
+
+bool timerlistgroup_run_timers(QEMUTimerListGroup tlg)
+{
+    QEMUClockType clock;
+    bool progress = false;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        progress |= timerlist_run_timers(tlg[clock]);
+    }
+    return progress;
+}
+
+int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg)
+{
+    int64_t deadline = -1;
+    QEMUClockType clock;
+    for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
+        if (qemu_clock_use_for_deadline(tlg[clock]->clock)) {
+            deadline = qemu_soonest_timeout(deadline,
+                                            timerlist_deadline_ns(tlg[clock]));
+        }
+    }
+    return deadline;
+}
+
 int64_t qemu_get_clock_ns(QEMUClock *clock)
 {
     int64_t now, last;
@@ -610,6 +650,7 @@ void init_clocks(void)
     for (type = 0; type < QEMU_CLOCK_MAX; type++) {
         if (!qemu_clocks[type]) {
             qemu_clocks[type] = qemu_new_clock(type);
+            qemu_default_tlg[type] = qemu_clocks[type]->default_timerlist;
         }
     }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (6 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 07/16] aio / timers: Add QEMUTimerListGroup and helper functions Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06 12:30                           ` Stefan Hajnoczi
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 09/16] aio / timers: Add a notify callback to QEMUTimerList Alex Bligh
                                           ` (10 subsequent siblings)
  18 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add a QEMUTimerListGroup to each AioContext (meaning a QEMUTimerList
associated with each clock type is added) and delete it when the
AioContext is freed.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c                  |    2 ++
 include/block/aio.h      |    4 ++++
 tests/test-aio.c         |    3 +++
 tests/test-thread-pool.c |    3 +++
 4 files changed, 12 insertions(+)

diff --git a/async.c b/async.c
index 5ce3633..99fb5a8 100644
--- a/async.c
+++ b/async.c
@@ -205,6 +205,7 @@ aio_ctx_finalize(GSource     *source)
     event_notifier_cleanup(&ctx->notifier);
     qemu_mutex_destroy(&ctx->bh_lock);
     g_array_free(ctx->pollfds, TRUE);
+    timerlistgroup_deinit(ctx->tlg);
 }
 
 static GSourceFuncs aio_source_funcs = {
@@ -244,6 +245,7 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
+    timerlistgroup_init(ctx->tlg);
 
     return ctx;
 }
diff --git a/include/block/aio.h b/include/block/aio.h
index cc77771..a13f6e8 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -18,6 +18,7 @@
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
 #include "qemu/thread.h"
+#include "qemu/timer.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -72,6 +73,9 @@ typedef struct AioContext {
 
     /* Thread pool for performing work and receiving completion callbacks */
     struct ThreadPool *thread_pool;
+
+    /* TimerLists for calling timers - one per clock type */
+    QEMUTimerListGroup tlg;
 } AioContext;
 
 /* Returns 1 if there are still outstanding AIO requests; 0 otherwise */
diff --git a/tests/test-aio.c b/tests/test-aio.c
index c173870..2d7ec4c 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -12,6 +12,7 @@
 
 #include <glib.h>
 #include "block/aio.h"
+#include "qemu/timer.h"
 
 AioContext *ctx;
 
@@ -628,6 +629,8 @@ int main(int argc, char **argv)
 {
     GSource *src;
 
+    init_clocks();
+
     ctx = aio_context_new();
     src = aio_get_g_source(ctx);
     g_source_attach(src, NULL);
diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index b62338f..27d6190 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -3,6 +3,7 @@
 #include "block/aio.h"
 #include "block/thread-pool.h"
 #include "block/block.h"
+#include "qemu/timer.h"
 
 static AioContext *ctx;
 static ThreadPool *pool;
@@ -205,6 +206,8 @@ int main(int argc, char **argv)
 {
     int ret;
 
+    init_clocks();
+
     ctx = aio_context_new();
     pool = aio_get_thread_pool(ctx);
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 09/16] aio / timers: Add a notify callback to QEMUTimerList
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (7 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06 12:34                           ` Stefan Hajnoczi
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
                                           ` (9 subsequent siblings)
  18 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add a notify pointer to QEMUTimerList so it knows what to notify
on a timer change.
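
Stripped of the QEMU specifics, what is being added is the usual
(callback, opaque) idiom with a default action when no callback is
registered. A standalone sketch (all names here are illustrative,
not the series' API):

    #include <stdio.h>

    typedef void NotifyCB(void *opaque);

    struct list_sketch {
        NotifyCB *notify_cb;     /* set by the owner, e.g. an AioContext */
        void *notify_opaque;
    };

    static void notify(struct list_sketch *l)
    {
        if (l->notify_cb) {
            l->notify_cb(l->notify_opaque);  /* owner-specific wakeup */
        } else {
            puts("default wakeup");  /* stands in for qemu_notify_event() */
        }
    }

    static void owner_cb(void *opaque) { printf("wake %p\n", opaque); }

    int main(void)
    {
        struct list_sketch l = { owner_cb, &l };
        notify(&l);           /* calls owner_cb */
        l.notify_cb = NULL;
        notify(&l);           /* falls back to the default */
        return 0;
    }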

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c              |    7 ++++++-
 include/qemu/timer.h |    7 ++++++-
 qemu-timer.c         |   24 ++++++++++++++++++++++--
 3 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/async.c b/async.c
index 99fb5a8..8daa232 100644
--- a/async.c
+++ b/async.c
@@ -234,6 +234,11 @@ void aio_notify(AioContext *ctx)
     event_notifier_set(&ctx->notifier);
 }
 
+static void aio_timerlist_notify(void *opaque)
+{
+    aio_notify((AioContext *)opaque);
+}
+
 AioContext *aio_context_new(void)
 {
     AioContext *ctx;
@@ -245,7 +250,7 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
-    timerlistgroup_init(ctx->tlg);
+    timerlistgroup_init(ctx->tlg, aio_timerlist_notify, ctx);
 
     return ctx;
 }
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 466ae98..d23173c 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -21,6 +21,7 @@ typedef struct QEMUClock QEMUClock;
 typedef struct QEMUTimerList QEMUTimerList;
 typedef QEMUTimerList *QEMUTimerListGroup[QEMU_CLOCK_MAX];
 typedef void QEMUTimerCB(void *opaque);
+typedef void QEMUTimerListNotifyCB(void *opaque);
 
 extern QEMUTimerListGroup qemu_default_tlg;
 extern QEMUClock *qemu_clocks[QEMU_CLOCK_MAX];
@@ -70,8 +71,12 @@ int64_t timerlist_deadline(QEMUTimerList *tl);
 int64_t timerlist_deadline_ns(QEMUTimerList *tl);
 QEMUClock *timerlist_get_clock(QEMUTimerList *tl);
 bool timerlist_run_timers(QEMUTimerList *tl);
+void timerlist_set_notify_cb(QEMUTimerList *tl,
+                             QEMUTimerListNotifyCB *cb, void *opaque);
+void timerlist_notify(QEMUTimerList *tl);
 
-void timerlistgroup_init(QEMUTimerListGroup tlg);
+void timerlistgroup_init(QEMUTimerListGroup tlg,
+                         QEMUTimerListNotifyCB *cb, void *opaque);
 void timerlistgroup_deinit(QEMUTimerListGroup tlg);
 bool timerlistgroup_run_timers(QEMUTimerListGroup tlg);
 int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg);
diff --git a/qemu-timer.c b/qemu-timer.c
index 83d23e3..dd5ed3f 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -73,6 +73,8 @@ struct QEMUTimerList {
     QEMUClock *clock;
     QEMUTimer *active_timers;
     QLIST_ENTRY(QEMUTimerList) list;
+    QEMUTimerListNotifyCB *notify_cb;
+    void *notify_opaque;
 };
 
 struct QEMUTimer {
@@ -388,6 +390,22 @@ QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock)
     return clock->default_timerlist;
 }
 
+void timerlist_set_notify_cb(QEMUTimerList *tl,
+                             QEMUTimerListNotifyCB *cb, void *opaque)
+{
+    tl->notify_cb = cb;
+    tl->notify_opaque = opaque;
+}
+
+void timerlist_notify(QEMUTimerList *tl)
+{
+    if (tl->notify_cb) {
+        tl->notify_cb(tl->notify_opaque);
+    } else {
+        qemu_notify_event();
+    }
+}
+
 /* Transition function to convert a nanosecond timeout to ms
  * This is used where a system does not support ppoll
  */
@@ -512,7 +530,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
         if (use_icount) {
-            qemu_notify_event();
+            timerlist_notify(ts->tl);
         }
     }
 }
@@ -570,11 +588,13 @@ bool qemu_run_timers(QEMUClock *clock)
     return timerlist_run_timers(clock->default_timerlist);
 }
 
-void timerlistgroup_init(QEMUTimerListGroup tlg)
+void timerlistgroup_init(QEMUTimerListGroup tlg,
+                         QEMUTimerListNotifyCB *cb, void *opaque)
 {
     QEMUClockType clock;
     for (clock = 0; clock < QEMU_CLOCK_MAX; clock++) {
         tlg[clock] = timerlist_new(clock);
+        timerlist_set_notify_cb(tlg[clock], cb, opaque);
     }
 }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (8 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 09/16] aio / timers: Add a notify callback to QEMUTimerList Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06 12:40                           ` Stefan Hajnoczi
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
                                           ` (8 subsequent siblings)
  18 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Calculate the timeout in aio_ctx_prepare taking into account
the timers attached to the AioContext.

Alter aio_ctx_check similarly.
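
The conversion relies on qemu_timeout_ns_to_ms(), introduced
earlier in the series. Its expected behaviour, sketched here
assuming SCALE_MS is 1000000 (ns per ms) as in qemu/timer.h:

    #include <stdint.h>

    #define SCALE_MS 1000000LL

    /* -1 (no deadline) passes through; otherwise round up, since
     * waking slightly late beats busy-waiting before the deadline. */
    static int qemu_timeout_ns_to_ms(int64_t ns)
    {
        int64_t ms;
        if (ns < 0) {
            return -1;
        }
        ms = (ns + SCALE_MS - 1) / SCALE_MS;
        return ms > INT32_MAX ? INT32_MAX : (int)ms;
    }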

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 async.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index 8daa232..0a8a85b 100644
--- a/async.c
+++ b/async.c
@@ -150,13 +150,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
 {
     AioContext *ctx = (AioContext *) source;
     QEMUBH *bh;
+    int deadline;
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
         if (!bh->deleted && bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
-                *timeout = 10;
+                *timeout = qemu_soonest_timeout(*timeout, 10);
             } else {
                 /* non-idle bottom halves will be executed
                  * immediately */
@@ -166,6 +167,14 @@ aio_ctx_prepare(GSource *source, gint    *timeout)
         }
     }
 
+    deadline = qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(ctx->tlg));
+    if (deadline == 0) {
+        *timeout = 0;
+        return true;
+    } else {
+        *timeout = qemu_soonest_timeout(*timeout, deadline);
+    }
+
     return false;
 }
 
@@ -180,7 +189,7 @@ aio_ctx_check(GSource *source)
             return true;
 	}
     }
-    return aio_pending(ctx);
+    return aio_pending(ctx) || (timerlistgroup_deadline_ns(ctx->tlg) == 0);
 }
 
 static gboolean
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (9 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 12/16] aio / timers: Convert mainloop to use timeout Alex Bligh
                                           ` (7 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Convert aio_poll to use deadline based on AioContext's timers.

aio_poll has been changed to return accurately whether progress
has occurred. Prior to this commit, aio_poll always returned
true if g_poll was entered, whether or not any progress was
made. This required a change to tests/test-aio.c, where one
assertion relied on the old, inaccurate return value.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 aio-posix.c      |   20 +++++++++++++-------
 aio-win32.c      |   22 +++++++++++++++++++---
 tests/test-aio.c |    4 ++--
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..2ec419d 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -166,6 +166,10 @@ static bool aio_dispatch(AioContext *ctx)
             g_free(tmp);
         }
     }
+
+    /* Run our timers */
+    progress |= timerlistgroup_run_timers(ctx->tlg);
+
     return progress;
 }
 
@@ -232,9 +236,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     /* wait until next event */
-    ret = g_poll((GPollFD *)ctx->pollfds->data,
-                 ctx->pollfds->len,
-                 blocking ? -1 : 0);
+    ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
+                         ctx->pollfds->len,
+                         blocking ? timerlistgroup_deadline_ns(ctx->tlg) : 0);
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
@@ -245,11 +249,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
                 node->pfd.revents = pfd->revents;
             }
         }
-        if (aio_dispatch(ctx)) {
-            progress = true;
-        }
+    }
+
+    /* Run dispatch even if there were no readable fds to run timers */
+    if (aio_dispatch(ctx)) {
+        progress = true;
     }
 
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..acdc48a 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -98,6 +98,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
     bool busy, progress;
     int count;
+    int timeout;
 
     progress = false;
 
@@ -111,6 +112,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress = true;
     }
 
+    /* Run timers */
+    progress |= timerlistgroup_run_timers(ctx->tlg);
+
     /*
      * Then dispatch any pending callbacks from the GSource.
      *
@@ -174,8 +178,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* wait until next event */
     while (count > 0) {
-        int timeout = blocking ? INFINITE : 0;
-        int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+        int ret;
+
+        timeout = blocking ?
+            qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(ctx->tlg)) : 0;
+        ret = WaitForMultipleObjects(count, events, FALSE, timeout);
 
         /* if we have any signaled events, dispatch event */
         if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
@@ -214,6 +221,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
+    if (blocking) {
+        /* Run the timers a second time. We do this because otherwise
+         * aio_wait will not note progress - and will stop a drain
+         * early - if we have a timer that was not ready to run when
+         * entering the wait but is ready afterwards. This will only
+         * do anything if a timer has expired. */
+        progress |= timerlistgroup_run_timers(ctx->tlg);
+    }
+
     assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/tests/test-aio.c b/tests/test-aio.c
index 2d7ec4c..eedf7f8 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -316,13 +316,13 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 1);
 
     event_notifier_set(&data.e);
     g_assert(aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
-    g_assert(aio_poll(ctx, false));
+    g_assert(!aio_poll(ctx, false));
     g_assert_cmpint(data.n, ==, 2);
 
     event_notifier_set(&dummy.e);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 12/16] aio / timers: Convert mainloop to use timeout
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (10 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 13/16] aio / timers: On timer modification, qemu_notify or aio_notify Alex Bligh
                                           ` (6 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Convert mainloop to use the timeout from the default timerlist
group (i.e. the default timer lists of the three static clocks).
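
The ms-to-ns conversion appears open-coded three times in this
patch; as a helper it would amount to the following (the name
timeout_ms_to_ns is invented here for illustration):

    #include <stdint.h>

    #define SCALE_MS 1000000LL   /* ns per ms, as in qemu/timer.h */

    /* glib timeouts are in ms with -1 meaning "block forever"; the
     * main loop now tracks ns with the same -1 sentinel. */
    static int64_t timeout_ms_to_ns(int64_t timeout_ms)
    {
        return timeout_ms < 0 ? -1 : timeout_ms * SCALE_MS;
    }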

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 main-loop.c |   45 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 34 insertions(+), 11 deletions(-)

diff --git a/main-loop.c b/main-loop.c
index a44fff6..43dfcd7 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -155,10 +155,11 @@ static int max_priority;
 static int glib_pollfds_idx;
 static int glib_n_poll_fds;
 
-static void glib_pollfds_fill(uint32_t *cur_timeout)
+static void glib_pollfds_fill(int64_t *cur_timeout)
 {
     GMainContext *context = g_main_context_default();
     int timeout = 0;
+    int64_t timeout_ns;
     int n;
 
     g_main_context_prepare(context, &max_priority);
@@ -174,9 +175,13 @@ static void glib_pollfds_fill(uint32_t *cur_timeout)
                                  glib_n_poll_fds);
     } while (n != glib_n_poll_fds);
 
-    if (timeout >= 0 && timeout < *cur_timeout) {
-        *cur_timeout = timeout;
+    if (timeout < 0) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
     }
+
+    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
 }
 
 static void glib_pollfds_poll(void)
@@ -191,7 +196,7 @@ static void glib_pollfds_poll(void)
 
 #define MAX_MAIN_LOOP_SPIN (1000)
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     int ret;
     static int spin_counter;
@@ -214,7 +219,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
             notified = true;
         }
 
-        timeout = 1;
+        timeout = SCALE_MS;
     }
 
     if (timeout > 0) {
@@ -224,7 +229,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
         spin_counter++;
     }
 
-    ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
+    ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);
 
     if (timeout > 0) {
         qemu_mutex_lock_iothread();
@@ -373,7 +378,7 @@ static void pollfds_poll(GArray *pollfds, int nfds, fd_set *rfds,
     }
 }
 
-static int os_host_main_loop_wait(uint32_t timeout)
+static int os_host_main_loop_wait(int64_t timeout)
 {
     GMainContext *context = g_main_context_default();
     GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
@@ -382,6 +387,7 @@ static int os_host_main_loop_wait(uint32_t timeout)
     PollingEntry *pe;
     WaitObjects *w = &wait_objects;
     gint poll_timeout;
+    int64_t poll_timeout_ns;
     static struct timeval tv0;
     fd_set rfds, wfds, xfds;
     int nfds;
@@ -419,12 +425,17 @@ static int os_host_main_loop_wait(uint32_t timeout)
         poll_fds[n_poll_fds + i].events = G_IO_IN;
     }
 
-    if (poll_timeout < 0 || timeout < poll_timeout) {
-        poll_timeout = timeout;
+    if (poll_timeout < 0) {
+        poll_timeout_ns = -1;
+    } else {
+        poll_timeout_ns = (int64_t)poll_timeout * (int64_t)SCALE_MS;
     }
 
+    poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);
+
     qemu_mutex_unlock_iothread();
-    g_poll_ret = g_poll(poll_fds, n_poll_fds + w->num, poll_timeout);
+    g_poll_ret = qemu_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);
+
     qemu_mutex_lock_iothread();
     if (g_poll_ret > 0) {
         for (i = 0; i < w->num; i++) {
@@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
 {
     int ret;
     uint32_t timeout = UINT32_MAX;
+    int64_t timeout_ns;
 
     if (nonblocking) {
         timeout = 0;
@@ -462,7 +474,18 @@ int main_loop_wait(int nonblocking)
     slirp_pollfds_fill(gpollfds);
 #endif
     qemu_iohandler_fill(gpollfds);
-    ret = os_host_main_loop_wait(timeout);
+
+    if (timeout == UINT32_MAX) {
+        timeout_ns = -1;
+    } else {
+        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
+    }
+
+    timeout_ns = qemu_soonest_timeout(timeout_ns,
+                                      timerlistgroup_deadline_ns(
+                                          qemu_default_tlg));
+
+    ret = os_host_main_loop_wait(timeout_ns);
     qemu_iohandler_poll(gpollfds, ret);
 #ifdef CONFIG_SLIRP
     slirp_pollfds_poll(gpollfds, (ret < 0));
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 13/16] aio / timers: On timer modification, qemu_notify or aio_notify
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (11 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 12/16] aio / timers: Convert mainloop to use timeout Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 14/16] aio / timers: Use all timerlists in icount warp calculations Alex Bligh
                                           ` (5 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On qemu_mod_timer_ns, ensure qemu_notify or aio_notify is called to
end the appropriate poll(), irrespective of use_icount value.

On qemu_clock_enable, ensure qemu_notify or aio_notify is called for
all QEMUTimerLists attached to the QEMUClock.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   13 ++++++++++---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index d23173c..d3d4701 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -62,6 +62,7 @@ int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
 bool qemu_clock_use_for_deadline(QEMUClock *clock);
 QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
+void qemu_clock_notify(QEMUClock *clock);
 
 QEMUTimerList *timerlist_new(QEMUClockType type);
 void timerlist_free(QEMUTimerList *tl);
diff --git a/qemu-timer.c b/qemu-timer.c
index dd5ed3f..5f65455 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -304,11 +304,20 @@ bool qemu_clock_use_for_deadline(QEMUClock *clock)
     return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
 }
 
+void qemu_clock_notify(QEMUClock *clock)
+{
+    QEMUTimerList *tl;
+    QLIST_FOREACH(tl, &clock->timerlists, list) {
+        timerlist_notify(tl);
+    }
+}
+
 void qemu_clock_enable(QEMUClock *clock, bool enabled)
 {
     bool old = clock->enabled;
     clock->enabled = enabled;
     if (enabled && !old) {
+        qemu_clock_notify(clock);
         qemu_rearm_alarm_timer(alarm_timer);
     }
 }
@@ -529,9 +538,7 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
         }
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
-        if (use_icount) {
-            timerlist_notify(ts->tl);
-        }
+        timerlist_notify(ts->tl);
     }
 }
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 14/16] aio / timers: Use all timerlists in icount warp calculations
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (12 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 13/16] aio / timers: On timer modification, qemu_notify or aio_notify Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 15/16] aio / timers: Remove alarm timers Alex Bligh
                                           ` (4 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Notify all timerlists derived from vm_clock in icount warp
calculations.

When calculating timer delay based on vm_clock deadline, use
all timerlists.

For compatibility, maintain an apparent bug whereby, when using
icount, if no vm_clock timer was set, qemu_clock_deadline
would return INT32_MAX and always set an icount clock expiry
about 2 seconds ahead.

NB: thread safety - when different timerlists sit on different
threads, this will need some locking.
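
The clamp described above, extracted for clarity (a sketch; the
patch open-codes it at both call sites):

    #include <stdint.h>

    /* Map "no deadline" (-1) and far-future deadlines onto INT32_MAX
     * ns, preserving the old ~2 second icount expiry behaviour. */
    static int64_t icount_clamp_deadline(int64_t deadline)
    {
        return (deadline < 0 || deadline > INT32_MAX) ? INT32_MAX
                                                      : deadline;
    }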

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 cpus.c               |   44 ++++++++++++++++++++++++++++++++++++--------
 include/qemu/timer.h |    1 +
 qemu-timer.c         |   16 ++++++++++++++++
 3 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/cpus.c b/cpus.c
index 0f65e76..55b9550 100644
--- a/cpus.c
+++ b/cpus.c
@@ -262,7 +262,7 @@ static void icount_warp_rt(void *opaque)
             qemu_icount_bias += MIN(warp_delta, delta);
         }
         if (qemu_clock_expired(vm_clock)) {
-            qemu_notify_event();
+            qemu_clock_notify(vm_clock);
         }
     }
     vm_clock_warp_start = -1;
@@ -279,7 +279,7 @@ void qtest_clock_warp(int64_t dest)
         qemu_run_timers(vm_clock);
         clock = qemu_get_clock_ns(vm_clock);
     }
-    qemu_notify_event();
+    qemu_clock_notify(vm_clock);
 }
 
 void qemu_clock_warp(QEMUClock *clock)
@@ -314,7 +314,18 @@ void qemu_clock_warp(QEMUClock *clock)
     }
 
     vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
-    deadline = qemu_clock_deadline(vm_clock);
+    /* We want to use the earliest deadline from ALL vm_clocks */
+    deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+    /* Maintain prior (possibly buggy) behaviour where if no deadline
+     * was set (as there is no vm_clock timer) or it is more than
+     * INT32_MAX nanoseconds ahead, we still use INT32_MAX
+     * nanoseconds.
+     */
+    if ((deadline < 0) || (deadline > INT32_MAX)) {
+        deadline = INT32_MAX;
+    }
+
     if (deadline > 0) {
         /*
          * Ensure the vm_clock proceeds even when the virtual CPU goes to
@@ -333,8 +344,8 @@ void qemu_clock_warp(QEMUClock *clock)
          * packets continuously instead of every 100ms.
          */
         qemu_mod_timer(icount_warp_timer, vm_clock_warp_start + deadline);
-    } else {
-        qemu_notify_event();
+    } else if (deadline == 0) {
+        qemu_clock_notify(vm_clock);
     }
 }
 
@@ -866,8 +877,13 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     while (1) {
         tcg_exec_all();
-        if (use_icount && qemu_clock_deadline(vm_clock) <= 0) {
-            qemu_notify_event();
+
+        if (use_icount) {
+            int64_t deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+            if (deadline == 0) {
+                qemu_clock_notify(vm_clock);
+            }
         }
         qemu_tcg_wait_io_event();
     }
@@ -1145,11 +1161,23 @@ static int tcg_cpu_exec(CPUArchState *env)
 #endif
     if (use_icount) {
         int64_t count;
+        int64_t deadline;
         int decr;
         qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
         env->icount_decr.u16.low = 0;
         env->icount_extra = 0;
-        count = qemu_icount_round(qemu_clock_deadline(vm_clock));
+        deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+        /* Maintain prior (possibly buggy) behaviour where if no deadline
+         * was set (as there is no vm_clock timer) or it is more than
+         * INT32_MAX nanoseconds ahead, we still use INT32_MAX
+         * nanoseconds.
+         */
+        if ((deadline < 0) || (deadline > INT32_MAX)) {
+            deadline = INT32_MAX;
+        }
+
+        count = qemu_icount_round(deadline);
         qemu_icount += count;
         decr = (count > 0xffff) ? 0xffff : count;
         count -= decr;
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index d3d4701..b6e5efd 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -60,6 +60,7 @@ int64_t qemu_clock_has_timers(QEMUClock *clock);
 int64_t qemu_clock_expired(QEMUClock *clock);
 int64_t qemu_clock_deadline(QEMUClock *clock);
 int64_t qemu_clock_deadline_ns(QEMUClock *clock);
+int64_t qemu_clock_deadline_ns_all(QEMUClock *clock);
 bool qemu_clock_use_for_deadline(QEMUClock *clock);
 QEMUTimerList *qemu_clock_get_default_timerlist(QEMUClock *clock);
 void qemu_clock_notify(QEMUClock *clock);
diff --git a/qemu-timer.c b/qemu-timer.c
index 5f65455..a7ef047 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -389,6 +389,22 @@ int64_t qemu_clock_deadline_ns(QEMUClock *clock)
     return timerlist_deadline_ns(clock->default_timerlist);
 }
 
+/* Calculate the soonest deadline across all timerlists attached
+ * to the clock. This is used for the icount timeout so we
+ * ignore whether or not the clock should be used in deadline
+ * calculations.
+ */
+int64_t qemu_clock_deadline_ns_all(QEMUClock *clock)
+{
+    int64_t deadline = -1;
+    QEMUTimerList *tl;
+    QLIST_FOREACH(tl, &clock->timerlists, list) {
+        deadline = qemu_soonest_timeout(deadline,
+                                        timerlist_deadline_ns(tl));
+    }
+    return deadline;
+}
+
 QEMUClock *timerlist_get_clock(QEMUTimerList *tl)
 {
     return tl->clock;
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 15/16] aio / timers: Remove alarm timers
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (13 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 14/16] aio / timers: Use all timerlists in icount warp calculations Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 16/16] aio / timers: Add test harness for AioContext timers Alex Bligh
                                           ` (3 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Remove alarm timers from qemu-timer.c now that we use g_poll / ppoll
instead.
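
For reference, the nanosecond-resolution primitive this relies on,
in isolation (Linux-only sketch; where ppoll is unavailable the
series falls back to g_poll with millisecond resolution):

    #define _GNU_SOURCE
    #include <poll.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Sleep 1.5 ms: no fds, just a ns-precision timeout. */
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 1500000 };
        int ret = ppoll(NULL, 0, &ts, NULL);
        printf("ppoll returned %d\n", ret);   /* 0 on timeout */
        return 0;
    }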

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 include/qemu/timer.h |    2 -
 main-loop.c          |    4 -
 qemu-timer.c         |  500 +-------------------------------------------------
 vl.c                 |    4 +-
 4 files changed, 4 insertions(+), 506 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index b6e5efd..8d6a050 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -142,9 +142,7 @@ static inline uint64_t timer_expire_time_ns(QEMUTimer *ts)
 
 bool qemu_run_timers(QEMUClock *clock);
 bool qemu_run_all_timers(void);
-void configure_alarms(char const *opt);
 void init_clocks(void);
-int init_timer_alarm(void);
 
 int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
diff --git a/main-loop.c b/main-loop.c
index 43dfcd7..754d276 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -131,10 +131,6 @@ int qemu_init_main_loop(void)
     GSource *src;
 
     init_clocks();
-    if (init_timer_alarm() < 0) {
-        fprintf(stderr, "could not initialize alarm timer\n");
-        exit(1);
-    }
 
     ret = qemu_signal_init();
     if (ret) {
diff --git a/qemu-timer.c b/qemu-timer.c
index a7ef047..e820e63 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -33,10 +33,6 @@
 #include <pthread.h>
 #endif
 
-#ifdef _WIN32
-#include <mmsystem.h>
-#endif
-
 #ifdef CONFIG_PPOLL
 #include <poll.h>
 #endif
@@ -86,174 +82,11 @@ struct QEMUTimer {
     int scale;
 };
 
-struct qemu_alarm_timer {
-    char const *name;
-    int (*start)(struct qemu_alarm_timer *t);
-    void (*stop)(struct qemu_alarm_timer *t);
-    void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
-#if defined(__linux__)
-    timer_t timer;
-    int fd;
-#elif defined(_WIN32)
-    HANDLE timer;
-#endif
-    bool expired;
-    bool pending;
-};
-
-static struct qemu_alarm_timer *alarm_timer;
-
 static bool qemu_timer_expired_ns(QEMUTimer *timer_head, int64_t current_time)
 {
     return timer_head && (timer_head->expire_time <= current_time);
 }
 
-static int64_t qemu_next_alarm_deadline(void)
-{
-    int64_t delta = INT64_MAX;
-    int64_t rtdelta;
-    int64_t hdelta;
-
-    if (!use_icount && vm_clock->enabled &&
-        vm_clock->default_timerlist->active_timers) {
-        delta = vm_clock->default_timerlist->active_timers->expire_time -
-            qemu_get_clock_ns(vm_clock);
-    }
-    if (host_clock->enabled &&
-        host_clock->default_timerlist->active_timers) {
-        hdelta = host_clock->default_timerlist->active_timers->expire_time -
-            qemu_get_clock_ns(host_clock);
-        if (hdelta < delta) {
-            delta = hdelta;
-        }
-    }
-    if (rt_clock->enabled &&
-        rt_clock->default_timerlist->active_timers) {
-        rtdelta = (rt_clock->default_timerlist->active_timers->expire_time -
-                   qemu_get_clock_ns(rt_clock));
-        if (rtdelta < delta) {
-            delta = rtdelta;
-        }
-    }
-
-    return delta;
-}
-
-static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
-{
-    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
-    if (nearest_delta_ns < INT64_MAX) {
-        t->rearm(t, nearest_delta_ns);
-    }
-}
-
-/* TODO: MIN_TIMER_REARM_NS should be optimized */
-#define MIN_TIMER_REARM_NS 250000
-
-#ifdef _WIN32
-
-static int mm_start_timer(struct qemu_alarm_timer *t);
-static void mm_stop_timer(struct qemu_alarm_timer *t);
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-static int win32_start_timer(struct qemu_alarm_timer *t);
-static void win32_stop_timer(struct qemu_alarm_timer *t);
-static void win32_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#else
-
-static int unix_start_timer(struct qemu_alarm_timer *t);
-static void unix_stop_timer(struct qemu_alarm_timer *t);
-static void unix_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#ifdef __linux__
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t);
-static void dynticks_stop_timer(struct qemu_alarm_timer *t);
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
-
-#endif /* __linux__ */
-
-#endif /* _WIN32 */
-
-static struct qemu_alarm_timer alarm_timers[] = {
-#ifndef _WIN32
-#ifdef __linux__
-    {"dynticks", dynticks_start_timer,
-     dynticks_stop_timer, dynticks_rearm_timer},
-#endif
-    {"unix", unix_start_timer, unix_stop_timer, unix_rearm_timer},
-#else
-    {"mmtimer", mm_start_timer, mm_stop_timer, mm_rearm_timer},
-    {"dynticks", win32_start_timer, win32_stop_timer, win32_rearm_timer},
-#endif
-    {NULL, }
-};
-
-static void show_available_alarms(void)
-{
-    int i;
-
-    printf("Available alarm timers, in order of precedence:\n");
-    for (i = 0; alarm_timers[i].name; i++)
-        printf("%s\n", alarm_timers[i].name);
-}
-
-void configure_alarms(char const *opt)
-{
-    int i;
-    int cur = 0;
-    int count = ARRAY_SIZE(alarm_timers) - 1;
-    char *arg;
-    char *name;
-    struct qemu_alarm_timer tmp;
-
-    if (is_help_option(opt)) {
-        show_available_alarms();
-        exit(0);
-    }
-
-    arg = g_strdup(opt);
-
-    /* Reorder the array */
-    name = strtok(arg, ",");
-    while (name) {
-        for (i = 0; i < count && alarm_timers[i].name; i++) {
-            if (!strcmp(alarm_timers[i].name, name))
-                break;
-        }
-
-        if (i == count) {
-            fprintf(stderr, "Unknown clock %s\n", name);
-            goto next;
-        }
-
-        if (i < cur)
-            /* Ignore */
-            goto next;
-
-	/* Swap */
-        tmp = alarm_timers[i];
-        alarm_timers[i] = alarm_timers[cur];
-        alarm_timers[cur] = tmp;
-
-        cur++;
-next:
-        name = strtok(NULL, ",");
-    }
-
-    g_free(arg);
-
-    if (cur) {
-        /* Disable remaining timers */
-        for (i = cur; i < count; i++)
-            alarm_timers[i].name = NULL;
-    } else {
-        show_available_alarms();
-        exit(1);
-    }
-}
-
 static QEMUTimerList *timerlist_new_from_clock(QEMUClock *clock)
 {
     QEMUTimerList *tl;
@@ -318,7 +151,6 @@ void qemu_clock_enable(QEMUClock *clock, bool enabled)
     clock->enabled = enabled;
     if (enabled && !old) {
         qemu_clock_notify(clock);
-        qemu_rearm_alarm_timer(alarm_timer);
     }
 }
 
@@ -549,9 +381,6 @@ void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
 
     /* Rearm if necessary  */
     if (pt == &ts->tl->active_timers) {
-        if (!alarm_timer->pending) {
-            qemu_rearm_alarm_timer(alarm_timer);
-        }
         /* Interrupt execution to force deadline recalculation.  */
         qemu_clock_warp(ts->tl->clock);
         timerlist_notify(ts->tl);
@@ -710,338 +539,11 @@ uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
 bool qemu_run_all_timers(void)
 {
     bool progress = false;
-    alarm_timer->pending = false;
-
-    /* vm time timers */
     QEMUClockType type;
+
     for (type = 0; type < QEMU_CLOCK_MAX; type++) {
         progress |= qemu_run_timers(qemu_get_clock(type));
     }
 
-    /* rearm timer, if not periodic */
-    if (alarm_timer->expired) {
-        alarm_timer->expired = false;
-        qemu_rearm_alarm_timer(alarm_timer);
-    }
-
     return progress;
 }
-
-#ifdef _WIN32
-static void CALLBACK host_alarm_handler(PVOID lpParam, BOOLEAN unused)
-#else
-static void host_alarm_handler(int host_signum)
-#endif
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t)
-	return;
-
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-#if defined(__linux__)
-
-#include "qemu/compatfd.h"
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigevent ev;
-    timer_t host_timer;
-    struct sigaction act;
-
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-
-    /* 
-     * Initialize ev struct to 0 to avoid valgrind complaining
-     * about uninitialized data in timer_create call
-     */
-    memset(&ev, 0, sizeof(ev));
-    ev.sigev_value.sival_int = 0;
-    ev.sigev_notify = SIGEV_SIGNAL;
-#ifdef CONFIG_SIGEV_THREAD_ID
-    if (qemu_signalfd_available()) {
-        ev.sigev_notify = SIGEV_THREAD_ID;
-        ev._sigev_un._tid = qemu_get_thread_id();
-    }
-#endif /* CONFIG_SIGEV_THREAD_ID */
-    ev.sigev_signo = SIGALRM;
-
-    if (timer_create(CLOCK_REALTIME, &ev, &host_timer)) {
-        perror("timer_create");
-        return -1;
-    }
-
-    t->timer = host_timer;
-
-    return 0;
-}
-
-static void dynticks_stop_timer(struct qemu_alarm_timer *t)
-{
-    timer_t host_timer = t->timer;
-
-    timer_delete(host_timer);
-}
-
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t,
-                                 int64_t nearest_delta_ns)
-{
-    timer_t host_timer = t->timer;
-    struct itimerspec timeout;
-    int64_t current_ns;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    /* check whether a timer is already running */
-    if (timer_gettime(host_timer, &timeout)) {
-        perror("gettime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    current_ns = timeout.it_value.tv_sec * 1000000000LL + timeout.it_value.tv_nsec;
-    if (current_ns && current_ns <= nearest_delta_ns)
-        return;
-
-    timeout.it_interval.tv_sec = 0;
-    timeout.it_interval.tv_nsec = 0; /* 0 for one-shot timer */
-    timeout.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    timeout.it_value.tv_nsec = nearest_delta_ns % 1000000000;
-    if (timer_settime(host_timer, 0 /* RELATIVE */, &timeout, NULL)) {
-        perror("settime");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-#endif /* defined(__linux__) */
-
-#if !defined(_WIN32)
-
-static int unix_start_timer(struct qemu_alarm_timer *t)
-{
-    struct sigaction act;
-
-    /* timer signal */
-    sigfillset(&act.sa_mask);
-    act.sa_flags = 0;
-    act.sa_handler = host_alarm_handler;
-
-    sigaction(SIGALRM, &act, NULL);
-    return 0;
-}
-
-static void unix_rearm_timer(struct qemu_alarm_timer *t,
-                             int64_t nearest_delta_ns)
-{
-    struct itimerval itv;
-    int err;
-
-    if (nearest_delta_ns < MIN_TIMER_REARM_NS)
-        nearest_delta_ns = MIN_TIMER_REARM_NS;
-
-    itv.it_interval.tv_sec = 0;
-    itv.it_interval.tv_usec = 0; /* 0 for one-shot timer */
-    itv.it_value.tv_sec =  nearest_delta_ns / 1000000000;
-    itv.it_value.tv_usec = (nearest_delta_ns % 1000000000) / 1000;
-    err = setitimer(ITIMER_REAL, &itv, NULL);
-    if (err) {
-        perror("setitimer");
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-}
-
-static void unix_stop_timer(struct qemu_alarm_timer *t)
-{
-    struct itimerval itv;
-
-    memset(&itv, 0, sizeof(itv));
-    setitimer(ITIMER_REAL, &itv, NULL);
-}
-
-#endif /* !defined(_WIN32) */
-
-
-#ifdef _WIN32
-
-static MMRESULT mm_timer;
-static TIMECAPS mm_tc;
-
-static void CALLBACK mm_alarm_handler(UINT uTimerID, UINT uMsg,
-                                      DWORD_PTR dwUser, DWORD_PTR dw1,
-                                      DWORD_PTR dw2)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    if (!t) {
-        return;
-    }
-    t->expired = true;
-    t->pending = true;
-    qemu_notify_event();
-}
-
-static int mm_start_timer(struct qemu_alarm_timer *t)
-{
-    timeGetDevCaps(&mm_tc, sizeof(mm_tc));
-    return 0;
-}
-
-static void mm_stop_timer(struct qemu_alarm_timer *t)
-{
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-}
-
-static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta)
-{
-    int64_t nearest_delta_ms = delta / 1000000;
-    if (nearest_delta_ms < mm_tc.wPeriodMin) {
-        nearest_delta_ms = mm_tc.wPeriodMin;
-    } else if (nearest_delta_ms > mm_tc.wPeriodMax) {
-        nearest_delta_ms = mm_tc.wPeriodMax;
-    }
-
-    if (mm_timer) {
-        timeKillEvent(mm_timer);
-    }
-    mm_timer = timeSetEvent((UINT)nearest_delta_ms,
-                            mm_tc.wPeriodMin,
-                            mm_alarm_handler,
-                            (DWORD_PTR)t,
-                            TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
-
-    if (!mm_timer) {
-        fprintf(stderr, "Failed to re-arm win32 alarm timer\n");
-        timeEndPeriod(mm_tc.wPeriodMin);
-        exit(1);
-    }
-}
-
-static int win32_start_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer;
-    BOOLEAN success;
-
-    /* If you call ChangeTimerQueueTimer on a one-shot timer (its period
-       is zero) that has already expired, the timer is not updated.  Since
-       creating a new timer is relatively expensive, set a bogus one-hour
-       interval in the dynticks case.  */
-    success = CreateTimerQueueTimer(&hTimer,
-                          NULL,
-                          host_alarm_handler,
-                          t,
-                          1,
-                          3600000,
-                          WT_EXECUTEINTIMERTHREAD);
-
-    if (!success) {
-        fprintf(stderr, "Failed to initialize win32 alarm timer: %ld\n",
-                GetLastError());
-        return -1;
-    }
-
-    t->timer = hTimer;
-    return 0;
-}
-
-static void win32_stop_timer(struct qemu_alarm_timer *t)
-{
-    HANDLE hTimer = t->timer;
-
-    if (hTimer) {
-        DeleteTimerQueueTimer(NULL, hTimer, NULL);
-    }
-}
-
-static void win32_rearm_timer(struct qemu_alarm_timer *t,
-                              int64_t nearest_delta_ns)
-{
-    HANDLE hTimer = t->timer;
-    int64_t nearest_delta_ms;
-    BOOLEAN success;
-
-    nearest_delta_ms = nearest_delta_ns / 1000000;
-    if (nearest_delta_ms < 1) {
-        nearest_delta_ms = 1;
-    }
-    /* ULONG_MAX can be 32 bit */
-    if (nearest_delta_ms > ULONG_MAX) {
-        nearest_delta_ms = ULONG_MAX;
-    }
-    success = ChangeTimerQueueTimer(NULL,
-                                    hTimer,
-                                    (unsigned long) nearest_delta_ms,
-                                    3600000);
-
-    if (!success) {
-        fprintf(stderr, "Failed to rearm win32 alarm timer: %ld\n",
-                GetLastError());
-        exit(-1);
-    }
-
-}
-
-#endif /* _WIN32 */
-
-static void quit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    alarm_timer = NULL;
-    t->stop(t);
-}
-
-#ifdef CONFIG_POSIX
-static void reinit_timers(void)
-{
-    struct qemu_alarm_timer *t = alarm_timer;
-    t->stop(t);
-    if (t->start(t)) {
-        fprintf(stderr, "Internal timer error: aborting\n");
-        exit(1);
-    }
-    qemu_rearm_alarm_timer(t);
-}
-#endif /* CONFIG_POSIX */
-
-int init_timer_alarm(void)
-{
-    struct qemu_alarm_timer *t = NULL;
-    int i, err = -1;
-
-    if (alarm_timer) {
-        return 0;
-    }
-
-    for (i = 0; alarm_timers[i].name; i++) {
-        t = &alarm_timers[i];
-
-        err = t->start(t);
-        if (!err)
-            break;
-    }
-
-    if (err) {
-        err = -ENOENT;
-        goto fail;
-    }
-
-    atexit(quit_timers);
-#ifdef CONFIG_POSIX
-    pthread_atfork(NULL, NULL, reinit_timers);
-#endif
-    alarm_timer = t;
-    return 0;
-
-fail:
-    return err;
-}
-
diff --git a/vl.c b/vl.c
index f422a1c..4c68668 100644
--- a/vl.c
+++ b/vl.c
@@ -3714,7 +3714,9 @@ int main(int argc, char **argv, char **envp)
                 old_param = 1;
                 break;
             case QEMU_OPTION_clock:
-                configure_alarms(optarg);
+                /* Clock options no longer exist.  Keep this option for
+                 * backward compatibility.
+                 */
                 break;
             case QEMU_OPTION_startdate:
                 configure_rtc_date_offset(optarg, 1);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 16/16] aio / timers: Add test harness for AioContext timers
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (14 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 15/16] aio / timers: Remove alarm timers Alex Bligh
@ 2013-08-06  9:16                         ` Alex Bligh
  2013-08-06  9:29                         ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (2 subsequent siblings)
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

Add a test harness for AioContext timers. The g_source equivalent is
unsatisfactory as it suffers from false wakeups.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 tests/test-aio.c |  134 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 134 insertions(+)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index eedf7f8..8e2fa97 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -32,6 +32,15 @@ typedef struct {
     int max;
 } BHTestData;
 
+typedef struct {
+    QEMUTimer *timer;
+    QEMUTimerList *tl;
+    int n;
+    int max;
+    int64_t ns;
+    AioContext *ctx;
+} TimerTestData;
+
 static void bh_test_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -40,6 +49,25 @@ static void bh_test_cb(void *opaque)
     }
 }
 
+static void timer_test_cb(void *opaque)
+{
+    TimerTestData *data = opaque;
+    if (++data->n < data->max) {
+        qemu_mod_timer(data->timer,
+                       qemu_get_clock_ns(timerlist_get_clock(data->tl)) +
+                       data->ns);
+    }
+}
+
+static void dummy_io_handler_read(void *opaque)
+{
+}
+
+static int dummy_io_handler_flush(void *opaque)
+{
+    return 1;
+}
+
 static void bh_delete_cb(void *opaque)
 {
     BHTestData *data = opaque;
@@ -341,6 +369,63 @@ static void test_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
+                           .max = 2, .tl = ctx->tlg[QEMU_CLOCK_VIRTUAL] };
+    int pipefd[2];
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    aio_poll(ctx, false);
+
+    data.timer = timer_new(data.tl, SCALE_NS, timer_test_cb, &data);
+    qemu_mod_timer(data.timer,
+                   qemu_get_clock_ns(timerlist_get_clock(data.tl)) +
+                   data.ns);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    /* qemu_mod_timer may well cause an event notifier to have gone off,
+     * so clear that
+     */
+    do {} while (aio_poll(ctx, false));
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* qemu_mod_timer called by our callback */
+    do {} while (aio_poll(ctx, false));
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    g_assert(aio_poll(ctx, true));
+    g_assert_cmpint(data.n, ==, 2);
+
+    /* As max is now 2, an event notifier should not have gone off */
+
+    g_assert(!aio_poll(ctx, false));
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
 /* Now the same tests, using the context as a GSource.  They are
  * very similar to the ones above, with g_main_context_iteration
  * replacing aio_poll.  However:
@@ -623,6 +708,53 @@ static void test_source_wait_event_notifier_noflush(void)
     event_notifier_cleanup(&data.e);
 }
 
+static void test_source_timer_schedule(void)
+{
+    TimerTestData data = { .n = 0, .ctx = ctx, .ns = SCALE_MS * 750LL,
+                           .max = 2, .tl = ctx->tlg[QEMU_CLOCK_VIRTUAL] };
+    int pipefd[2];
+    int64_t expiry;
+
+    /* aio_poll will not block to wait for timers to complete unless it has
+     * an fd to wait on. Fixing this breaks other tests. So create a dummy one.
+     */
+    g_assert(!pipe2(pipefd, O_NONBLOCK));
+    aio_set_fd_handler(ctx, pipefd[0],
+                       dummy_io_handler_read, NULL, dummy_io_handler_flush,
+                       NULL);
+    do {} while (g_main_context_iteration(NULL, false));
+
+    data.timer = timer_new(data.tl, SCALE_NS, timer_test_cb, &data);
+    expiry = qemu_get_clock_ns(timerlist_get_clock(data.tl)) +
+        data.ns;
+    qemu_mod_timer(data.timer, expiry);
+
+    g_assert_cmpint(data.n, ==, 0);
+
+    sleep(1);
+    g_assert_cmpint(data.n, ==, 0);
+
+    g_assert(g_main_context_iteration(NULL, false));
+    g_assert_cmpint(data.n, ==, 1);
+
+    /* The comment above was not kidding when it said this wakes up itself */
+    do {
+        g_assert(g_main_context_iteration(NULL, true));
+    } while (qemu_get_clock_ns(
+                 timerlist_get_clock(data.tl)) <= expiry);
+    sleep(1);
+    g_main_context_iteration(NULL, false);
+
+    g_assert_cmpint(data.n, ==, 2);
+
+    aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL, NULL);
+    close(pipefd[0]);
+    close(pipefd[1]);
+
+    qemu_del_timer(data.timer);
+}
+
+
 /* End of tests.  */
 
 int main(int argc, char **argv)
@@ -651,6 +783,7 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
+    g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio-gsource/notify",                  test_source_notify);
     g_test_add_func("/aio-gsource/flush",                   test_source_flush);
@@ -665,5 +798,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio-gsource/event/wait",              test_source_wait_event_notifier);
     g_test_add_func("/aio-gsource/event/wait/no-flush-cb",  test_source_wait_event_notifier_noflush);
     g_test_add_func("/aio-gsource/event/flush",             test_source_flush_event_notifier);
+    g_test_add_func("/aio-gsource/timer/schedule",          test_source_timer_schedule);
     return g_test_run();
 }
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 128+ messages in thread

* [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (15 preceding siblings ...)
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 16/16] aio / timers: Add test harness for AioContext timers Alex Bligh
@ 2013-08-06  9:29                         ` Alex Bligh
  2013-08-06 11:53                         ` Stefan Hajnoczi
  2013-08-06 15:38                         ` Stefan Hajnoczi
  18 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06  9:29 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

[ I managed to screw up the cover letter by including 2 diffstats;
  here's the fixed one]

This patch series adds support for timers attached to an AioContext clock
which get called within aio_poll.

In doing so it removes alarm timers and moves to use ppoll where possible.

This patch set 'sort of' passes make check (see below for caveat)
including a new test harness for the aio timers, but has not been
tested much beyond that. In particular, the win32 changes have not
even been compile tested. Equally, alterations to use_icount
are untested.

Caveat: I have had to alter tests/test-aio.c so the following error
no longer occurs.

ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))

As far as I can tell, this check was incorrect, in that it checked that
aio_poll makes progress when in fact it should not make progress. I
fixed an issue where aio_poll was (as far as I can tell) wrongly
returning true on a timeout, and that generated this error.

Note also the comment on patch 15 in relation to a possible bug
in cpus.c.

Changes since v5:
* Rebase onto master (b9ac5d9)
* Fix spacing in typedef QEMUTimerList
* Rename 'QEMUClocks' extern to 'qemu_clocks'

Changes since v4:
* Rename qemu_timerlist_ functions to timer_list (per Paolo Bonzini)
* Rename qemu_timer_.*timerlist.* to timer_ (per Paolo Bonzini)
* Use enum for QEMUClockType
* Put clocks into an array; remove global variables
* Introduce QEMUTimerListGroup - a timerlist of each type
* Add a QEMUTimerListGroup to AioContext
* Use a callback on timer modification, rather than binding the
  AioContext into the timerlist
* Make cpus.c iterate over all timerlists when it does a notify
* Make cpus.c icount timeout use soonest timeout
  across all timerlists

Changes since v3:
* Split up QEMUClock and QEMUClock list
* Improve commenting
* Fix comment in vl.c
* Change test/test-aio.c to reflect correct behaviour in aio_poll.

Changes since v2:
* Reordered to remove alarm timers last
* Added prctl(PR_SET_TIMERSLACK, 1, ...)
* Renamed qemu_g_poll_ns to qemu_poll_ns
* Moved declaration of above & drop glib types
* Do not use a global list of qemu clocks
* Add AioContext * to QEMUClock
* Split up conversion to use ppoll and timers
* Indentation fix
* Fix aio_win32.c aio_poll to return progress
* aio_notify / qemu_notify when timers are modified
* change comment in deprecation of clock options

Alex Bligh (16):
  aio / timers: add qemu-timer.c utility functions
  aio / timers: add ppoll support with qemu_poll_ns
  aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer
    slack
  aio / timers: Make qemu_run_timers and qemu_run_all_timers return
    progress
  aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  aio / timers: Untangle include files
  aio / timers: Add QEMUTimerListGroup and helper functions
  aio / timers: Add QEMUTimerListGroup to AioContext
  aio / timers: Add a notify callback to QEMUTimerList
  aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  aio / timers: Convert aio_poll to use AioContext timers' deadline
  aio / timers: Convert mainloop to use timeout
  aio / timers: On timer modification, qemu_notify or aio_notify
  aio / timers: Use all timerlists in icount warp calculations
  aio / timers: Remove alarm timers
  aio / timers: Add test harness for AioContext timers

 aio-posix.c               |   20 +-
 aio-win32.c               |   22 +-
 async.c                   |   20 +-
 configure                 |   37 +++
 cpus.c                    |   44 ++-
 dma-helpers.c             |    1 +
 hw/dma/xilinx_axidma.c    |    1 +
 hw/timer/arm_timer.c      |    1 +
 hw/timer/grlib_gptimer.c  |    2 +
 hw/timer/imx_epit.c       |    1 +
 hw/timer/imx_gpt.c        |    1 +
 hw/timer/lm32_timer.c     |    1 +
 hw/timer/puv3_ost.c       |    1 +
 hw/timer/slavio_timer.c   |    1 +
 hw/timer/xilinx_timer.c   |    1 +
 hw/usb/hcd-uhci.c         |    1 +
 include/block/aio.h       |    4 +
 include/block/block_int.h |    1 +
 include/block/coroutine.h |    2 +
 include/qemu/timer.h      |  122 ++++++-
 main-loop.c               |   49 ++-
 migration-exec.c          |    1 +
 migration-fd.c            |    1 +
 migration-tcp.c           |    1 +
 migration-unix.c          |    1 +
 migration.c               |    1 +
 nbd.c                     |    1 +
 net/net.c                 |    1 +
 net/socket.c              |    1 +
 qemu-coroutine-io.c       |    1 +
 qemu-io-cmds.c            |    1 +
 qemu-nbd.c                |    1 +
 qemu-timer.c              |  803 ++++++++++++++++-----------------------------
 slirp/misc.c              |    1 +
 tests/test-aio.c          |  141 +++++++-
 tests/test-thread-pool.c  |    3 +
 thread-pool.c             |    1 +
 ui/vnc-auth-vencrypt.c    |    2 +-
 ui/vnc-ws.c               |    1 +
 vl.c                      |    4 +-
 40 files changed, 736 insertions(+), 564 deletions(-)

-- 
1.7.9.5

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files Alex Bligh
@ 2013-08-06 10:10                         ` Stefan Hajnoczi
  2013-08-06 11:20                           ` Alex Bligh
  2013-08-06 23:52                           ` Alex Bligh
  0 siblings, 2 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 10:10 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Sun, Aug 04, 2013 at 07:09:55PM +0100, Alex Bligh wrote:
> include/qemu/timer.h has no need to include main-loop.h and
> doing so causes an issue for the next patch. Unfortunately
> various files assume including timer.h will pull in main-loop.h.
> Untangle this mess.
> 
> Signed-off-by: Alex Bligh <alex@alex.org.uk>
> ---
>  dma-helpers.c             |    1 +
>  hw/dma/xilinx_axidma.c    |    1 +
>  hw/timer/arm_timer.c      |    1 +
>  hw/timer/grlib_gptimer.c  |    2 ++
>  hw/timer/imx_epit.c       |    1 +
>  hw/timer/imx_gpt.c        |    1 +
>  hw/timer/lm32_timer.c     |    1 +
>  hw/timer/puv3_ost.c       |    1 +
>  hw/timer/slavio_timer.c   |    1 +
>  hw/timer/xilinx_timer.c   |    1 +
>  hw/usb/hcd-uhci.c         |    1 +
>  include/block/block_int.h |    1 +
>  include/block/coroutine.h |    2 ++
>  include/qemu/timer.h      |    1 -
>  migration-exec.c          |    1 +
>  migration-fd.c            |    1 +
>  migration-tcp.c           |    1 +
>  migration-unix.c          |    1 +
>  migration.c               |    1 +
>  nbd.c                     |    1 +
>  net/net.c                 |    1 +
>  net/socket.c              |    1 +
>  qemu-coroutine-io.c       |    1 +
>  qemu-io-cmds.c            |    1 +
>  qemu-nbd.c                |    1 +
>  slirp/misc.c              |    1 +
>  thread-pool.c             |    1 +
>  ui/vnc-auth-vencrypt.c    |    2 +-
>  ui/vnc-ws.c               |    1 +
>  29 files changed, 30 insertions(+), 2 deletions(-)

Fails to build ui/vnc-auth-sasl.c

Please make sure that your ./configure output shows most optional
dependencies are available (you need to install development packages for
these libraries).  Otherwise you will not see build breakage.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-06 10:10                         ` Stefan Hajnoczi
@ 2013-08-06 11:20                           ` Alex Bligh
  2013-08-06 11:48                             ` Stefan Hajnoczi
  2013-08-06 23:52                           ` Alex Bligh
  1 sibling, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 11:20 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth

Stefan,

--On 6 August 2013 12:10:17 +0200 Stefan Hajnoczi <stefanha@gmail.com> 
wrote:

> Fails to build ui/vnc-auth-sasl.c
>
> Please make sure that your ./configure output shows most optional
> dependencies are available (you need to install development packages for
> these libraries).  Otherwise you will not see build breakage.

Thanks. It will no doubt need a:

#include "qemu/main-loop.h"

I'll need to find something to build all this on a little more powerful
than the puny VM on my laptop.

Is this approach to untangling the include files a good one? Most of the
functions that these files needed appear to have little to do with
main-loop per se (though they live in main-loop.c).

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-06 11:20                           ` Alex Bligh
@ 2013-08-06 11:48                             ` Stefan Hajnoczi
  0 siblings, 0 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 11:48 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 12:20:31PM +0100, Alex Bligh wrote:
> Is this untangling include files approach a good one? Most of the functions
> that these things needed appeared to have little to do with main-loop
> per se. (though they are in main-loop.c).

I'm happy with it since qemu_set_fd_handler2() is indeed a main-loop
function.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (16 preceding siblings ...)
  2013-08-06  9:29                         ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
@ 2013-08-06 11:53                         ` Stefan Hajnoczi
  2013-08-06 15:38                         ` Stefan Hajnoczi
  18 siblings, 0 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 11:53 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 10:16:16AM +0100, Alex Bligh wrote:
> Changes since v5:

Please post revisions as separate email threads instead of replying to
the last revision.  It's easy for patches to get overlooked by
maintainers if they are not top-level threads.

No need to resend v6 but please send v7 as a separate thread.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
@ 2013-08-06 12:02                           ` Stefan Hajnoczi
  2013-08-06 12:30                             ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 12:02 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 10:16:17AM +0100, Alex Bligh wrote:
> Add qemu_free_clock and expose qemu_new_clock and clock types.
> 
> Add utility functions to qemu-timer.c for nanosecond timing.
> 
> Add qemu_clock_deadline_ns to calculate deadlines to
> nanosecond accuracy.
> 
> Add utility function qemu_soonest_timeout to calculate soonest deadline.
> 
> Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
> milliseconds for when ppoll is not used.
> 
> Signed-off-by: Alex Bligh <alex@alex.org.uk>
> ---
>  include/qemu/timer.h |   17 ++++++++++++++
>  qemu-timer.c         |   63 +++++++++++++++++++++++++++++++++++++++++++++-----
>  2 files changed, 74 insertions(+), 6 deletions(-)

There is still too much happening in this patch.  Making
qemu_new_clock()/qemu_free_clock() public and moving the clock source
constants can be done in a single patch.

The next patch can change the semantics of qemu_clock_deadline() to
return INT32_MAX when the clock source is disabled.  I'm not sure why
you do this, or whether you checked that existing users continue to work
correctly.  This is worth a separate patch.

Introducing qemu_timeout_ns_to_ms() and qemu_soonest_timeout() could be
done separately or together; I don't care as much there.  Please include
an explanation of why qemu_timeout_ns_to_ms() will be needed in the
future (there are no callers in this patch).

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
@ 2013-08-06 12:26                           ` Stefan Hajnoczi
  2013-08-06 12:49                             ` Alex Bligh
  2013-08-06 13:16                             ` Alex Bligh
  0 siblings, 2 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 12:26 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 10:16:21AM +0100, Alex Bligh wrote:
> Split QEMUClock into QEMUClock and QEMUTimerList so that we can
> have more than one QEMUTimerList associated with the same clock.
> 
> Introduce a default_timerlist concept and make existing
> qemu_clock_* calls that actually should operate on a QEMUTimerList
> call the relevant QEMUTimerList implementations, using the clock's
> default timerlist. This vastly reduces the invasiveness of this
> change and means the API stays constant for existing users.
> 
> Introduce a list of QEMUTimerLists associated with each clock
> so that reenabling the clock can cause all the notifiers
> to be called. Note the code to do the notifications is added
> in a later patch.
> 
> Switch QEMUClockType to an enum. Remove global variables vm_clock,
> host_clock and rt_clock and add compatibility defines. Do not
> fix qemu_next_alarm_deadline as it's going to be deleted.
> 
> Signed-off-by: Alex Bligh <alex@alex.org.uk>
> ---
>  include/qemu/timer.h |   91 +++++++++++++++++++++++--
>  qemu-timer.c         |  185 ++++++++++++++++++++++++++++++++++++++------------
>  2 files changed, 224 insertions(+), 52 deletions(-)

Although in the short term it's easier to keep the old API where
QEMUClock also acts as a timer list, I don't think it's a good idea to
do that.

It makes timers harder to understand for everyone because we now have
timerlists but an extra layer to sort of make QEMUClock like a
timerlist.

The API we should end up with has two concepts: clock sources (rt, vm,
host) and timer lists.  This patch is adding unnecessary indirections
and making the clock source look like a timer list.  I think it's worth
converting existing code once and for all instead of carrying this
baggage forever.
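
Schematically, the end state I have in mind looks something like this
(a sketch; names follow the fields already visible in this series):

    typedef struct QEMUClock {
        QEMUClockType type;
        bool enabled;
        /* every timer list driven by this clock source */
        QLIST_HEAD(, QEMUTimerList) timerlists;
    } QEMUClock;

    typedef struct QEMUTimerList {
        QEMUClock *clock;            /* the clock source driving us */
        QEMUTimer *active_timers;    /* sorted by expire_time */
        QLIST_ENTRY(QEMUTimerList) list;
    } QEMUTimerList;

That is, a clock source carries no timers of its own; timers always
hang off a timer list.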

> +extern QEMUClock *qemu_clocks[QEMU_CLOCK_MAX];
> +
> +static inline QEMUClock *qemu_get_clock(QEMUClockType type)
> +{
> +    return qemu_clocks[type];
> +}
> +
> +/* These three clocks are maintained here with separate variable
> +   names for compatibility only.
> +*/
> +
>  /* The real time clock should be used only for stuff which does not
>     change the virtual machine state, as it is run even if the virtual
>     machine is stopped. The real time clock has a frequency of 1000
>     Hz. */
> -extern QEMUClock *rt_clock;
> +#define rt_clock (qemu_get_clock(QEMU_CLOCK_REALTIME))
>  
>  /* The virtual clock is only run during the emulation. It is stopped
>     when the virtual machine is stopped. Virtual timers use a high
>     precision clock, usually cpu cycles (use ticks_per_sec). */
> -extern QEMUClock *vm_clock;
> +#define vm_clock (qemu_get_clock(QEMU_CLOCK_VIRTUAL))
>  
>  /* The host clock should be use for device models that emulate accurate
>     real time sources. It will continue to run when the virtual machine
>     is suspended, and it will reflect system time changes the host may
>     undergo (e.g. due to NTP). The host clock has the same precision as
>     the virtual clock. */
> -extern QEMUClock *host_clock;
> +#define host_clock (qemu_get_clock(QEMU_CLOCK_HOST))

What is the point of this change?  It's not clear how using
qemu_clocks[] is an improvement over rt_clock, vm_clock, and host_clock.

Or in other words: why is timerlist_new necessary?

>  struct QEMUTimer {
>      int64_t expire_time;	/* in nanoseconds */
> -    QEMUClock *clock;
> +    QEMUTimerList *tl;

'timer_list' is easier to read than just 'tl'.

> -QEMUClock *qemu_new_clock(int type)
> +void timerlist_free(QEMUTimerList *tl)
> +{

Assert that active_timers is empty.

> +bool qemu_clock_use_for_deadline(QEMUClock *clock)
> +{
> +    return !(use_icount && (clock->type = QEMU_CLOCK_VIRTUAL));
> +}

Please use doc comments (see include/qom/object.h for example doc
comment syntax).  No idea why this function is needed or what it will be
used for.
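
i.e. syntax along these lines (with the description filled in with
whatever the function actually means):

    /**
     * qemu_clock_use_for_deadline:
     * @clock: the clock to query
     *
     * Returns: true if deadlines from @clock should feed into poll
     * timeout calculations.
     */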

Also, should it be '==' instead of '='?

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-06 12:02                           ` Stefan Hajnoczi
@ 2013-08-06 12:30                             ` Alex Bligh
  2013-08-06 13:59                               ` Stefan Hajnoczi
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 12:30 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Alex Bligh, Paolo Bonzini, MORITA Kazutaka, rth

Stefan,

--On 6 August 2013 14:02:18 +0200 Stefan Hajnoczi <stefanha@redhat.com> 
wrote:

> There is still too much happening in this patch.  Making
> qemu_new_clock()/qemu_free_clock() public and moving the clock source
> constants can be done in a single patch.

OK I'll split it up.

> The next patch can change the semantics of qemu_clock_deadline() to
> return INT32_MAX when the clock source is disabled.  I'm not sure why
> you do this, or whether you checked that existing users continue to work
> correctly.

Rationale: I suspect it doesn't really matter (see below), but the
rationale was that if the clock is disabled, its timers will never
expire (their deadline is effectively infinite). I was trying to make
the treatment of expire times consistent throughout qemu-timer, and
this was the one place a disabled clock was being treated as if its
timers were going to expire.
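
In other words, the legacy helper ends up as something like this
(a sketch):

    int64_t qemu_clock_deadline(QEMUClock *clock)
    {
        /* INT32_MAX is "effectively never" for poll purposes, so a
         * disabled clock reports no imminent deadline.
         */
        int64_t delta = INT32_MAX;
        if (clock->enabled && clock->default_timerlist->active_timers) {
            delta = clock->default_timerlist->active_timers->expire_time -
                qemu_get_clock_ns(clock);
        }
        if (delta < 0) {
            delta = 0;
        }
        return delta;
    }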

Audit: The only users (at least by the time my patch set is finished) are:

* cpus.c within qtest_warp(), which appears not to consider the case of
  vm_clock being disabled. I do not think this function has been written
  considering the possibility that there are no active vm_clock timers
  or where the deadline is > INT32_MAX ns away.

* qtest_process_command(), evaluating clock_step, which calls the above.

My preference would be to move these to qemu_clock_deadline_ns (without
the INT32_MAX check) and delete the old qemu_clock_deadline routine
entirely, but I don't really understand the full set of circumstances
in which the qtest routines are meant to work.

Of course cpus.c now uses qemu_clock_deadline_ns_all in place of a
couple of the previous uses, and in patch 14 of my series I've commented
that I think the previous use was buggy but have maintained bug-for-bug
compatibility.

I'd particularly like comments on patch 14, which I've mostly blind
coded based on what Paolo asked for. I'm afraid I found the icount
stuff a bit opaque. I'll hold off from v7 until someone's had a look
at these.

> Please include
> an explanation of why qemu_timeout_ns_to_ms() will be needed in the
> future (there are no callers in this patch).

You mean in the commit text as well as the following?

+/* Transition function to convert a nanosecond timeout to ms
+ * This is used where a system does not support ppoll
+ */
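
For context, the body is essentially a round-up division with a clamp;
a sketch (SCALE_MS as defined in the timer headers):

    int qemu_timeout_ns_to_ms(int64_t ns)
    {
        int64_t ms;
        if (ns < 0) {
            return -1;      /* an infinite timeout stays infinite */
        }
        if (!ns) {
            return 0;
        }
        /* Round up: waiting fractionally too long beats busy-waiting */
        ms = (ns + SCALE_MS - 1) / SCALE_MS;
        /* g_poll takes a gint timeout */
        return (ms > (int64_t) INT32_MAX) ? INT32_MAX : (int) ms;
    }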

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext Alex Bligh
@ 2013-08-06 12:30                           ` Stefan Hajnoczi
  2013-08-06 12:50                             ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 12:30 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 10:16:24AM +0100, Alex Bligh wrote:
> @@ -628,6 +629,8 @@ int main(int argc, char **argv)
>  {
>      GSource *src;
>  
> +    init_clocks();
> +

Why add this call now?  Are there any other programs where this should
be added too?

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 09/16] aio / timers: Add a notify callback to QEMUTimerList
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 09/16] aio / timers: Add a notify callback to QEMUTimerList Alex Bligh
@ 2013-08-06 12:34                           ` Stefan Hajnoczi
  2013-08-06 12:50                             ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 12:34 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 10:16:25AM +0100, Alex Bligh wrote:
> Add a notify pointer to QEMUTimerList so it knows what to notify
> on a timer change.
> 
> Signed-off-by: Alex Bligh <alex@alex.org.uk>
> ---
>  async.c              |    7 ++++++-
>  include/qemu/timer.h |    7 ++++++-
>  qemu-timer.c         |   24 ++++++++++++++++++++++--
>  3 files changed, 34 insertions(+), 4 deletions(-)
> 
> diff --git a/async.c b/async.c
> index 99fb5a8..8daa232 100644
> --- a/async.c
> +++ b/async.c
> @@ -234,6 +234,11 @@ void aio_notify(AioContext *ctx)
>      event_notifier_set(&ctx->notifier);
>  }
>  
> +static void aio_timerlist_notify(void *opaque)
> +{
> +    aio_notify((AioContext *)opaque);

void * is automatically converted to any pointer type.  No need for an
explicit cast, a C compiler doesn't emit a warning here.

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
@ 2013-08-06 12:40                           ` Stefan Hajnoczi
  2013-08-06 12:54                             ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 12:40 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 10:16:26AM +0100, Alex Bligh wrote:
> @@ -180,7 +189,7 @@ aio_ctx_check(GSource *source)
>              return true;
>  	}
>      }
> -    return aio_pending(ctx);
> +    return aio_pending(ctx) || (timerlistgroup_deadline_ns(ctx->tlg) >= 0);

Now we always dispatch if there is a timer?  Should this comparison be
timerlistgroup_deadline_ns(ctx->tlg) == 0 instead to only dispatch
expired timers?

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06 12:26                           ` Stefan Hajnoczi
@ 2013-08-06 12:49                             ` Alex Bligh
  2013-08-06 14:29                               ` Stefan Hajnoczi
  2013-08-06 13:16                             ` Alex Bligh
  1 sibling, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 12:49 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Alex Bligh, Paolo Bonzini, MORITA Kazutaka, rth

Stefan,

> Although in the short term it's easier to keep the old API where
> QEMUClock also acts as a timer list, I don't think it's a good idea to
> do that.
>
> It makes timers harder to understand for everyone because we now have
> timerlists but an extra layer to sort of make QEMUClock like a
> timerlist.

I think I disagree here.

At the very least we should put the conversion to use the new API
into a separate patch (possibly a separate patch set). It's fantastically
intrusive.

However it touches so much (it touches just about every driver) that
it's going to be really hard for people to maintain a driver that
works with two versions of qemu. Being able to do that is really useful
(having just backported the modern ceph driver to qemu 1.0 and 1.3).
Moreover it's going to break any developer with an existing
non-upstreamed branch.

The change is pretty mechanical, so what I'd suggest is that we keep
the old API around too (as deprecated) for a little while. At some
stage we can fix existing code and after that point not accept patches
that use the existing API.

Even if the period is just a month (i.e. the old API goes before 1.7),
why break things unnecessarily?

> What is the point of this change?  It's not clear how using
> qemu_clocks[] is an improvement over rt_clock, vm_clock, and host_clock.

Because you can iterate through all the clocks with a for() loop as
is done in six places in qemu-timer.c by the end of the patch
series (look for QEMU_CLOCK_MAX). This also allows us to use
a timerlistgroup (a set of timerlists, one per clock) so each
AioContext can have a timerlist for each clock type, as Paolo
requested in response to v4.
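
For instance, by the end of the series qemu_run_all_timers() reduces
to (see patch 15):

    QEMUClockType type;
    for (type = 0; type < QEMU_CLOCK_MAX; type++) {
        progress |= qemu_run_timers(qemu_get_clock(type));
    }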

> Or in other words: why is timerlist_new necessary?

I think that question is entirely orthogonal. This generates a new
timerlist. In v4 each AioContext had its own timerlist. Now it has
its own timerlistgroup, with one timerlist of each clock type.
The constructor timerlist_new is there to initialise the timerlist,
which broadly amounts to a malloc(), setting ->clock, and inserting
it onto the list of timerlists associated with that clock. How would
we avoid this if we still want multiple timerlists per clock?
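
Roughly (a sketch, final naming aside):

    QEMUTimerList *timerlist_new(QEMUClockType type)
    {
        QEMUTimerList *tl = g_malloc0(sizeof(QEMUTimerList));
        tl->clock = qemu_get_clock(type);
        QLIST_INSERT_HEAD(&tl->clock->timerlists, tl, list);
        return tl;
    }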

>>  struct QEMUTimer {
>>      int64_t expire_time;	/* in nanoseconds */
>> -    QEMUClock *clock;
>> +    QEMUTimerList *tl;
>
> 'timer_list' is easier to read than just 'tl'.

It caused a pile of line wrap issues which made the patch harder
to read, so I shortened it. I can put it back if you like.

>> -QEMUClock *qemu_new_clock(int type)
>> +void timerlist_free(QEMUTimerList *tl)
>> +{
>
> Assert that active_timers is empty.

OK

>> +bool qemu_clock_use_for_deadline(QEMUClock *clock)
>> +{
>> +    return !(use_icount && (clock->type = QEMU_CLOCK_VIRTUAL));
>> +}
>
> Please use doc comments (see include/object/qom.h for example doc
> comment syntax).  No idea why this function is needed or what it will be
> used for.

I will comment it, but it mostly does what it says in the tin. Per
Paolo's comment, the vm_clock should not be used for calculation of
deadlines to ppoll etc. if use_icount is true, because it's not actually
in nanoseconds; rather qemu_notify() or aio_notify() get called by the
vm cpu thread when the relevant instruction counter is exceeded.

> Also, should it be '==' instead of '='?

Good catch!
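
With that fixed, the predicate reads:

    bool qemu_clock_use_for_deadline(QEMUClock *clock)
    {
        /* With icount, vm_clock deadlines are not host nanoseconds,
         * so they must not feed into the ppoll/g_poll timeout.
         */
        return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
    }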

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext
  2013-08-06 12:30                           ` Stefan Hajnoczi
@ 2013-08-06 12:50                             ` Alex Bligh
  2013-08-06 14:45                               ` Stefan Hajnoczi
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 12:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Alex Bligh, Paolo Bonzini, MORITA Kazutaka, rth



--On 6 August 2013 14:30:43 +0200 Stefan Hajnoczi <stefanha@redhat.com> 
wrote:

> On Tue, Aug 06, 2013 at 10:16:24AM +0100, Alex Bligh wrote:
>> @@ -628,6 +629,8 @@ int main(int argc, char **argv)
>>  {
>>      GSource *src;
>>
>> +    init_clocks();
>> +
>
> Why add this call now?

Because otherwise make check SEGVs after the patch.

> Are there any other programs where this should
> be added too?

I believe not (at least make check does not fail anything).

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 09/16] aio / timers: Add a notify callback to QEMUTimerList
  2013-08-06 12:34                           ` Stefan Hajnoczi
@ 2013-08-06 12:50                             ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 12:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Alex Bligh, Paolo Bonzini, MORITA Kazutaka, rth



--On 6 August 2013 14:34:00 +0200 Stefan Hajnoczi <stefanha@redhat.com> 
wrote:

>> +static void aio_timerlist_notify(void *opaque)
>> +{
>> +    aio_notify((AioContext *)opaque);
>
> void * is automatically converted to any pointer type.  No need for an
> explicit cast, a C compiler doesn't emit a warning here.

OK

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers
  2013-08-06 12:40                           ` Stefan Hajnoczi
@ 2013-08-06 12:54                             ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 12:54 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Alex Bligh, Paolo Bonzini, MORITA Kazutaka, rth

Stefan,

--On 6 August 2013 14:40:00 +0200 Stefan Hajnoczi <stefanha@redhat.com> 
wrote:

> On Tue, Aug 06, 2013 at 10:16:26AM +0100, Alex Bligh wrote:
>> @@ -180,7 +189,7 @@ aio_ctx_check(GSource *source)
>>              return true;
>>  	}
>>      }
>> -    return aio_pending(ctx);
>> +    return aio_pending(ctx) || (timerlistgroup_deadline_ns(ctx->tlg) >=
>> 0);
>
> Now we always dispatch if there is a timer?  Should this comparison be
> timerlistgroup_deadline_ns(ctx->tlg) == 0 instead to only dispatch
> expired timers?

I think you are right.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06 12:26                           ` Stefan Hajnoczi
  2013-08-06 12:49                             ` Alex Bligh
@ 2013-08-06 13:16                             ` Alex Bligh
  2013-08-06 14:31                               ` Stefan Hajnoczi
  1 sibling, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 13:16 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Alex Bligh, Paolo Bonzini, MORITA Kazutaka, rth



--On 6 August 2013 14:26:33 +0200 Stefan Hajnoczi <stefanha@redhat.com> 
wrote:

> Please use doc comments (see include/qom/object.h for example doc
> comment syntax).

Should these go in the .h (per include/qom/object.h) or the .c (per
your comment)?

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-06 12:30                             ` Alex Bligh
@ 2013-08-06 13:59                               ` Stefan Hajnoczi
  2013-08-06 14:18                                 ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 13:59 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 01:30:18PM +0100, Alex Bligh wrote:
> --On 6 August 2013 14:02:18 +0200 Stefan Hajnoczi
> <stefanha@redhat.com> wrote:
> My preference would be to move these to qemu_clock_deadline_ns (without
> the INT32_MAX check) and delete the old qemu_clock_deadline routine
> entirely, but I don't really understand the full set of circumstances
> in which the qtest routines are meant to work.

Okay, that's excellent.  It would be great to move to a single function.

The way qtest works is that it executes QEMU in a mode that does not run
guest code.  Instead of running guest code it listens for commands over
a socket.  The wire protocol can peek/poke memory, notify of interrupts,
and warp the clock.

There are test cases that use qtest to test emulated devices.

When qtest either steps the clock or sets it to a completely new value
using qtest_clock_warp() it runs all vm_clock timers that should expire
before the new time.
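
For example, a test can advance the clock over the wire roughly like
this (an illustrative exchange; see qtest.c for the exact grammar):

    client: clock_step 1000000000
    server: OK 1000000000

where the reply carries the new vm_clock value after any timers due
within that step have run.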

Does this help?

> >Please include
> >an explanation of why qemu_timeout_ns_to_ms() will be needed in the
> >future (there are no callers in this patch).
> 
> You mean in the commit text as well as the following?
> 
> +/* Transition function to convert a nanosecond timeout to ms
> + * This is used where a system does not support ppoll
> + */

Usually a doc comment is enough since it explains what the function
does.  If it's a low-level function it may be necessary to give more
context in the commit description.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-06 13:59                               ` Stefan Hajnoczi
@ 2013-08-06 14:18                                 ` Alex Bligh
  2013-08-07  7:21                                   ` Stefan Hajnoczi
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 14:18 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth

Stefan,

--On 6 August 2013 15:59:11 +0200 Stefan Hajnoczi <stefanha@gmail.com> 
wrote:

>> --On 6 August 2013 14:02:18 +0200 Stefan Hajnoczi
>> <stefanha@redhat.com> wrote:
>> My preference would be to move these to qemu_clock_deadline_ns (without
>> the INT32_MAX check) and delete the old qemu_clock_deadline routine
>> entirely, but I don't really understand the full set of circumstances
>> in which the qtest routines are meant to work.
>
> Okay, that's excellent.  It would be great to move to a single function.
>
> The way qtest works is that it executes QEMU in a mode that does not run
> guest code.  Instead of running guest code it listens for commands over
> a socket.  The wire protocol can peek/poke memory, notify of interrupts,
> and warp the clock.
>
> There are test cases that use qtest to test emulated devices.
>
> When qtest either steps the clock or sets it to a completely new value
> using qtest_clock_warp() it runs all vm_clock timers that should expire
> before the new time.
>
> Does this help?

Nearly :-)

How do I actually run the code (i.e. how do I test whether I've broken
it)? I take it that's something different from just 'make check'?

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06 12:49                             ` Alex Bligh
@ 2013-08-06 14:29                               ` Stefan Hajnoczi
  2013-08-06 14:52                                 ` Alex Bligh
  2013-08-06 23:54                                 ` Alex Bligh
  0 siblings, 2 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 14:29 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 01:49:03PM +0100, Alex Bligh wrote:
> Stefan,
> 
> >Although in the short time it's easier to keep the old API where
> >QEMUClock also acts as a timer list, I don't think it's a good idea to
> >do that.
> >
> >It makes timers harder to understand for everyone because we now have
> >timerlists but an extra layer to sort of make QEMUClock like a
> >timerlist.
> 
> I think I disagree here.
> 
> At the very least we should put the conversion to use the new API
> into a separate patch (possibly a separate patch set). It's fantastically
> intrusive.

Yes, it should be a separate patch.

> However it touches so much (it touches just about every driver) that
> it's going to be really hard for people to maintain a driver that
> works with two versions of qemu. This is really useful (having just
> backported the modern ceph driver to qemu 1.0 and 1.3). Moreover it's
> going to break any developer with an existing non-upstreamed branch

That's true and there are other instances of this like the multi-queue
networking or qemu_get_clock_ns().  But upstream cannot be held back
because of downstreams and especially not because of out-of-tree
branches.

> The change is pretty mechanical, so what I'd suggest is that we keep
> the old API around too (as deprecated) for a little while. At some
> stage we can fix existing code and after that point not accept patches
> that use the existing API.
> 
> Even if the period is just a month (i.e. the old API goes before 1.7),
> why break things unnecessarily?

Nothing upstream breaks.  Only out-of-tree code breaks but that's life.

What's important is that upstream will be clean and easy to understand
or debug.  Given how undocumented the QEMU codebase is, leaving legacy
API layers around just does more to confuse new contributors.

That's why I'd really like to transition now instead of leaving things
in a more complex state than before.

> >What is the point of this change?  It's not clear how using
> >qemu_clocks[] is an improvement over rt_clock, vm_clock, and host_clock.
> 
> Because you can iterate through all the clocks with a for() loop as
> is done in six places in qemu-timer.c by the end of the patch
> series (look for QEMU_CLOCK_MAX). This also allows us to use
> a timerlistgroup (a set of timerlists, one for each clock) so
> each AioContext can have a clock of each type, as Paolo requested
> in response to v4.

Later in the patch series I realized how this gets used.  Thanks for
explaining.

We end up with:

AioContext->tlg and default_timerlistgroup.

Regarding naming, I think default_timerlistgroup should be called
main_loop_timerlistgroup instead.  The meaning of "default" is not
obvious.

Now let's think about how callers will create QEMUTimers:

1. AioContext

   timer_new(ctx->tlg[QEMU_CLOCK_RT], SCALE_NS, cb, opaque);

   Or with a wrapper:

   QEMUTimer *aio_new_timer(AioContext *ctx, QEMUClockType type, int scale,
                            QEMUTimerCB *cb, void *opaque)
   {
       return timer_new(ctx->tlg[type], scale, cb, opaque);
   }

   aio_new_timer(ctx, QEMU_CLOCK_RT, SCALE_NS, cb, opaque);

2. main-loop

   /* without legacy qemu_timer_new() */
   timer_new(main_loop_tlg[QEMU_CLOCK_RT], SCALE_NS, cb, opaque);

   Or with a wrapper:

   QEMUTimer *qemu_new_timer(QEMUClockType type, int scale,
                             QEMUTimerCB *cb, void *opaque)
   {
       return timer_new(main_loop_tlg[type], scale, cb, opaque);
   }

   qemu_new_timer(QEMU_CLOCK_RT, SCALE_NS, cb, opaque);

Is this what you have in mind too?
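
For completeness, a caller would then arm the timer along these lines
(a sketch; the timer_mod() and qemu_clock_get_ns() names assume the
renaming this series is heading toward):

    static void my_timer_cb(void *opaque)
    {
        /* dispatched from aio_poll() in the thread running this context */
    }

    static void arm_example(AioContext *ctx)
    {
        QEMUTimer *t = aio_new_timer(ctx, QEMU_CLOCK_RT, SCALE_NS,
                                     my_timer_cb, NULL);

        /* fire ~100ms from now on the rt clock */
        timer_mod(t, qemu_clock_get_ns(QEMU_CLOCK_RT) + 100 * SCALE_MS);
    }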

> >Or in other words: why is timerlist_new necessary?
> 
> I think that question is entirely orthogonal. This generates a new
> timerlist. In v4 each AioContext had its own timerlist. Now it has
> its own timerlistgroup, with one timerlist of each clock type.
> The constructor timerlist_new is there to initialise the timerlist
> which broadly is a malloc(), setting ->clock, and inserting it
> onto the list of timerlists associated with that clock. How would
> we avoid this if we still want multiple timerlists per clock?

I think I'm okay with the array indexing.  Anyway, here were my
thoughts:

I guess I was thinking about keeping global clock sources for rt, vm,
and host.  Then you always use timerlist_new_from_clock() and you don't
need timerlist_new() at all.

But this doesn't allow for the array indexing that you do in
TimerListGroup later.  I didn't know that at this point in the patch
series.

> >> struct QEMUTimer {
> >>     int64_t expire_time;	/* in nanoseconds */
> >>-    QEMUClock *clock;
> >>+    QEMUTimerList *tl;
> >
> >'timer_list' is easier to read than just 'tl'.
> 
> It caused a pile of line wrap issues which made the patch harder
> to read, so I shortened it. I can put it back if you like.

Are you sure it's the QEMUTimer->tl field that causes line wraps?

I took a quick look and it seemed like only the QEMUTimerList *tl
function argument to the deadline functions could cause line wrap.  The
argument variable is unrelated and can stay short since it has a very
narrow context - the reader can be expected to remember the tl argument
while reading the code for the function.

> >>+bool qemu_clock_use_for_deadline(QEMUClock *clock)
> >>+{
> >>+    return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
> >>+}
> >
> >Please use doc comments (see include/object/qom.h for example doc
> >comment syntax).  No idea why this function is needed or what it will be
> >used for.
> 
> I will comment it, but it mostly does what it says on the tin. Per
> Paolo's comment, the vm_clock should not be used for calculation of
> deadlines to ppoll etc. if use_icount is true, because it's not actually
> in nanoseconds; rather qemu_notify() or aio_notify() get called by the
> vm cpu thread when the relevant instruction counter is exceeded.

I didn't know that but the explanation makes sense.  Definitely
something that could be in a comment.  Perhaps it's best to introduce
this small helper function in the patch that actually calls it.
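
To make that concrete, the deadline calculation would skip such clocks
along these lines (a sketch; timerlistgroup_deadline_ns(),
timerlist_deadline_ns() and qemu_soonest_timeout() are assumed helper
names, not necessarily what the series uses):

    /* Sketch: soonest poll deadline across a TimerListGroup, ignoring
     * vm_clock when use_icount is set, since its "time" is an
     * instruction count rather than nanoseconds. */
    int64_t timerlistgroup_deadline_ns(QEMUTimerListGroup tlg)
    {
        int64_t deadline = -1;   /* -1 means "no timeout" */
        QEMUClockType type;

        for (type = 0; type < QEMU_CLOCK_MAX; type++) {
            if (!qemu_clock_use_for_deadline(tlg[type]->clock)) {
                continue;        /* the cpu thread will notify us instead */
            }
            deadline = qemu_soonest_timeout(deadline,
                                            timerlist_deadline_ns(tlg[type]));
        }
        return deadline;
    }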

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06 13:16                             ` Alex Bligh
@ 2013-08-06 14:31                               ` Stefan Hajnoczi
  0 siblings, 0 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 14:31 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 02:16:02PM +0100, Alex Bligh wrote:
> --On 6 August 2013 14:26:33 +0200 Stefan Hajnoczi
> <stefanha@redhat.com> wrote:
> 
> >Please use doc comments (see include/object/qom.h for example doc
> >comment syntax).
> 
> Should these go in the .h (per include/qom/object.h) or the .c (per
> your comment)?

I don't think there is a strict rule.

I like putting doc comments in the .c file for non-external APIs.  That
way it's easier to remember that the doc comment needs to be updated if
the function is modified.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext
  2013-08-06 12:50                             ` Alex Bligh
@ 2013-08-06 14:45                               ` Stefan Hajnoczi
  2013-08-06 14:58                                 ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 14:45 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 01:50:02PM +0100, Alex Bligh wrote:
> 
> 
> --On 6 August 2013 14:30:43 +0200 Stefan Hajnoczi
> <stefanha@redhat.com> wrote:
> 
> >On Tue, Aug 06, 2013 at 10:16:24AM +0100, Alex Bligh wrote:
> >>@@ -628,6 +629,8 @@ int main(int argc, char **argv)
> >> {
> >>     GSource *src;
> >>
> >>+    init_clocks();
> >>+
> >
> >Why add this call now?
> 
> Because otherwise make check SEGVs after the patch.

It wasn't clear from the patch why there would be a crash.  I looked
deeper and timerlistgroup_init() calls qemu_get_clock() indirectly, so
we need to make sure that qemu_clocks[] is initialized to avoid a NULL
pointer dereference.

Now I'm confident that these init_clocks() calls are needed in this patch.

> >Are there any other programs where this should
> >be added too?
> 
> I believe not (at least make check does not fail anything).

I have checked that qemu-io, qemu-img, and qemu-nbd are okay because
they call qemu_init_main_loop(), which calls init_clocks().

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06 14:29                               ` Stefan Hajnoczi
@ 2013-08-06 14:52                                 ` Alex Bligh
  2013-08-07  7:27                                   ` Stefan Hajnoczi
  2013-08-06 23:54                                 ` Alex Bligh
  1 sibling, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 14:52 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth

Stefan,

>> I think I disagree here.
>>
>> At the very least we should put the conversion to use the new API
>> into a separate patch (possibly a separate patch set). It's fantastically
>> intrusive.
>
> Yes, it should be a separate patch.
...
>> Even if the period is just a month (i.e. the old API goes before 1.7),
>> why break things unnecessarily?
>
> Nothing upstream breaks.  Only out-of-tree code breaks but that's life.
>
> What's important is that upstream will be clean and easy to understand
> or debug.  Given how undocumented the QEMU codebase is, leaving legacy
> API layers around just does more to confuse new contributors.
>
> That's why I'd really like to transition now instead of leaving things
> in a more complex state than before.

OK. I'm just concerned about the potential fall out. If that's
what everyone wants, I will do a monster patch to fix this up. Need
that be part of this series? I can't help thinking it would be
better to have the series applied first.

> We end up with:
>
> AioContext->tlg and default_timerlistgroup.
>
> Regarding naming, I think default_timerlistgroup should be called
> main_loop_timerlistgroup instead.  The meaning of "default" is not
> obvious.

I agree. I will change this.

> Now let's think about how callers will create QEMUTimers:
>
> 1. AioContext
>
>    timer_new(ctx->tlg[QEMU_CLOCK_RT], SCALE_NS, cb, opaque);
>
>    Or with a wrapper:
>
>    QEMUTimer *aio_new_timer(AioContext *ctx, QEMUClockType type, int scale,
>                             QEMUTimerCB *cb, void *opaque)
>    {
>        return timer_new(ctx->tlg[type], scale, cb, opaque);
>    }
>
>    aio_new_timer(ctx, QEMU_CLOCK_RT, SCALE_NS, cb, opaque);

I was actually thinking about adding that wrapper anyway.

Do you think we need to wrap timer_mod, timer_del, timer_free
etc. to make aio_timer_mod and so forth, even though they are
a straight pass through?

> 2. main-loop
>
>    /* without legacy qemu_timer_new() */
>    timer_new(main_loop_tlg[QEMU_CLOCK_RT], SCALE_NS, cb, opaque);
>
>    Or with a wrapper:
>
>    QEMUTimer *qemu_new_timer(QEMUClockType type, int scale,
>                              QEMUTimerCB *cb, void *opaque)
>    {
>        return timer_new(main_loop_tlg[type], scale, cb, opaque);
>    }
>
>    qemu_new_timer(QEMU_CLOCK_RT, SCALE_NS, cb, opaque);
>
> Is this what you have in mind too?

Yes.

Certainly qemu_timer_new (as opposed to qemu_new_timer)
would be a good addition.

> But this doesn't allow for the array indexing that you do in
> TimerListGroup later.  I didn't know that at this point in the patch
> series.

Yep. I'll leave that if that's OK.

>> >> struct QEMUTimer {
>> >>     int64_t expire_time;	/* in nanoseconds */
>> >> -    QEMUClock *clock;
>> >> +    QEMUTimerList *tl;
>> >
>> > 'timer_list' is easier to read than just 'tl'.
>>
>> It caused a pile of line wrap issues which made the patch harder
>> to read, so I shortened it. I can put it back if you like.
>
> Are you sure it's the QEMUTimer->tl field that causes line wraps?
>
> I took a quick look and it seemed like only the QEMUTimerList *tl
> function argument to the deadline functions could cause line wrap.  The
> argument variable is unrelated and can stay short since it has a very
> narrow context - the reader can be expected to remember the tl argument
> while reading the code for the function.

From memory it was lots of ->tl expansions within the code that caused
the issue, rather than the arguments to functions. I'll try again. To be
honest it's probably easier to change tl to timer_list throughout.

>
>> >> +bool qemu_clock_use_for_deadline(QEMUClock *clock)
>> >> +{
>> >> +    return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
>> >> +}
>> >
>> > Please use doc comments (see include/object/qom.h for example doc
>> > comment syntax).  No idea why this function is needed or what it will
>> > be used for.
>>
>> I will comment it, but it mostly does what it says on the tin. Per
>> Paolo's comment, the vm_clock should not be used for calculation of
>> deadlines to ppoll etc. if use_icount is true, because it's not actually
>> in nanoseconds; rather qemu_notify() or aio_notify() get called by the
>> vm cpu thread when the relevant instruction counter is exceeded.
>
> I didn't know that but the explanation makes sense.  Definitely
> something that could be in a comment.  Perhaps it's best to introduce
> this small helper function in the patch that actually calls it.

It's preparation for the next patch. I will add a comment in the
commit message.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext
  2013-08-06 14:45                               ` Stefan Hajnoczi
@ 2013-08-06 14:58                                 ` Alex Bligh
  2013-08-07  7:28                                   ` Stefan Hajnoczi
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 14:58 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth

Stefan,

--On 6 August 2013 16:45:12 +0200 Stefan Hajnoczi <stefanha@gmail.com> 
wrote:

>> Because otherwise make check SEGVs after the patch.
>
> It wasn't clear from the patch why there would be a crash.  I looked
> deeper and timerlistgroup_init() calls qemu_get_clock() indirectly, so
> we need to make sure that qemu_clocks[] is initialized to avoid a NULL
> pointer dereference.

Actually now I recall v4 had:

@@ -215,6 +216,12 @@ AioContext *aio_context_new(void)
     aio_set_event_notifier(ctx, &ctx->notifier,
                            (EventNotifierHandler *)
                            event_notifier_test_and_clear, NULL);
+    /* Assert if we don't have rt_clock yet. If you see this assertion
+     * it means you are using AioContext without having first called
+     * init_clocks() in main().
+     */
+    assert(rt_clock);
+    ctx->tl = qemu_new_timerlist(rt_clock);

The equivalent in v7 would be an assert in timerlist_new_from_clock
to check 'clock' is non-NULL. I shall put that in as the reason for
this SEGV is non-obvious.
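
Something like this (a sketch; the body's field names are my reading of
the series, not the exact patch):

    QEMUTimerList *timerlist_new_from_clock(QEMUClock *clock)
    {
        QEMUTimerList *tl;

        /* If this fires, an AioContext was created before init_clocks()
         * populated qemu_clocks[] - the non-obvious SEGV above. */
        assert(clock);

        tl = g_malloc0(sizeof(QEMUTimerList));
        tl->clock = clock;
        QLIST_INSERT_HEAD(&clock->timerlists, tl, list);
        return tl;
    }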

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll
  2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
                                           ` (17 preceding siblings ...)
  2013-08-06 11:53                         ` Stefan Hajnoczi
@ 2013-08-06 15:38                         ` Stefan Hajnoczi
  2013-08-07 15:43                           ` Paolo Bonzini
  18 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-06 15:38 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 10:16:16AM +0100, Alex Bligh wrote:
> This patch series adds support for timers attached to an AioContext clock
> which get called within aio_poll.
> 
> In doing so it removes alarm timers and moves to use ppoll where possible.
> 
> This patch set 'sort of' passes make check (see below for caveat)
> including a new test harness for the aio timers, but has not been
> tested much beyond that. In particular, the win32 changes have not
> even been compile tested. Equally, alterations to use_icount
> are untested.
> 
> Caveat: I have had to alter tests/test-aio.c so the following error
> no longer occurs.
> 
> ERROR:tests/test-aio.c:346:test_wait_event_notifier_noflush: assertion failed: (aio_poll(ctx, false))
> 
> As far as I can tell, this check was incorrect, in that it checks that
> aio_poll makes progress when in fact it should not make progress. I
> fixed an issue where aio_poll was (as far as I can tell) wrongly
> returning true on a timeout, and that generated this error.
> 
> Note also the comment on patch 15 in relation to a possible bug
> in cpus.c.
> 
> Changes since v5:
> * Rebase onto master (b9ac5d9)
> * Fix spacing in typedef QEMUTimerList
> * Rename 'QEMUClocks' extern to 'qemu_clocks'
> 
> Changes since v4:
> * Rename qemu_timerlist_ functions to timer_list (per Paolo Bonzini)
> * Rename qemu_timer_.*timerlist.* to timer_ (per Paolo Bonzini)
> * Use enum for QEMUClockType
> * Put clocks into an array; remove global variables
> * Introduce QEMUTimerListGroup - a timerlist of each type
> * Add a QEMUTimerListGroup to AioContext
> * Use a callback on timer modification, rather than binding in
>   AioContext into the timerlist
> * Make cpus.c iterate over all timerlists when it does a notify
> * Make cpus.c icount timeout use soonest timeout
>   across all timerlists
> 
> Changes since v3:
> * Split up QEMUClock and QEMUClock list
> * Improve commenting
> * Fix comment in vl.c
> * Change test/test-aio.c to reflect correct behaviour in aio_poll.
> 
> Changes since v2:
> * Reordered to remove alarm timers last
> * Added prctl(PR_SET_TIMERSLACK, 1, ...)
> * Renamed qemu_g_poll_ns to qemu_poll_ns
> * Moved declaration of above & drop glib types
> * Do not use a global list of qemu clocks
> * Add AioContext * to QEMUClock
> * Split up conversion to use ppoll and timers
> * Indentation fix
> * Fix aio_win32.c aio_poll to return progress
> * aio_notify / qemu_notify when timers are modified
> * change comment in deprecation of clock options
> 
> Alex Bligh (16):
>   aio / timers: add qemu-timer.c utility functions
>   aio / timers: add ppoll support with qemu_poll_ns
>   aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer
>     slack
>   aio / timers: Make qemu_run_timers and qemu_run_all_timers return
>     progress
>   aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
>   aio / timers: Untangle include files
>   aio / timers: Add QEMUTimerListGroup and helper functions
>   aio / timers: Add QEMUTimerListGroup to AioContext
>   aio / timers: Add a notify callback to QEMUTimerList
>   aio / timers: aio_ctx_prepare sets timeout from AioContext timers
>   aio / timers: Convert aio_poll to use AioContext timers' deadline
>   aio / timers: Convert mainloop to use timeout
>   aio / timers: On timer modification, qemu_notify or aio_notify
>   aio / timers: Use all timerlists in icount warp calculations
>   aio / timers: Remove alarm timers
>   aio / timers: Add test harness for AioContext timers
> 
>  aio-posix.c               |   20 +-
>  aio-win32.c               |   22 +-
>  async.c                   |   20 +-
>  configure                 |   37 +++
>  cpus.c                    |   44 ++-
>  dma-helpers.c             |    1 +
>  hw/dma/xilinx_axidma.c    |    1 +
>  hw/timer/arm_timer.c      |    1 +
>  hw/timer/grlib_gptimer.c  |    2 +
>  hw/timer/imx_epit.c       |    1 +
>  hw/timer/imx_gpt.c        |    1 +
>  hw/timer/lm32_timer.c     |    1 +
>  hw/timer/puv3_ost.c       |    1 +
>  hw/timer/slavio_timer.c   |    1 +
>  hw/timer/xilinx_timer.c   |    1 +
>  hw/usb/hcd-uhci.c         |    1 +
>  include/block/aio.h       |    4 +
>  include/block/block_int.h |    1 +
>  include/block/coroutine.h |    2 +
>  include/qemu/timer.h      |  122 ++++++-
>  main-loop.c               |   49 ++-
>  migration-exec.c          |    1 +
>  migration-fd.c            |    1 +
>  migration-tcp.c           |    1 +
>  migration-unix.c          |    1 +
>  migration.c               |    1 +
>  nbd.c                     |    1 +
>  net/net.c                 |    1 +
>  net/socket.c              |    1 +
>  qemu-coroutine-io.c       |    1 +
>  qemu-io-cmds.c            |    1 +
>  qemu-nbd.c                |    1 +
>  qemu-timer.c              |  803 ++++++++++++++++-----------------------------
>  slirp/misc.c              |    1 +
>  tests/test-aio.c          |  141 +++++++-
>  tests/test-thread-pool.c  |    3 +
>  thread-pool.c             |    1 +
>  ui/vnc-auth-vencrypt.c    |    2 +-
>  ui/vnc-ws.c               |    1 +
>  vl.c                      |    4 +-
>  40 files changed, 736 insertions(+), 564 deletions(-)
> 
> -- 
> 1.7.9.5

For anyone wishing to review this series, here's a diagram showing the
new relationships and a summary of this series:

http://vmsplice.net/~stefan/timerlist.jpg

TimerList is the list of QEMUTimers which are pending on a QEMUClock
clock source.

Previously QEMUClock contained this list inline but the goal is to let
AioContexts have their own independent timer lists.  This makes timers
much more thread-friendly since we don't work on global lists and
callbacks are dispatched in the same thread's AioContext where they were
created.

TimerListGroup holds a timerlist for each clock source ("rt", "vm", and
"host").  The main loop has the default TimerListGroup.  Each AioContext
has its own TimerListGroup.
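
In code terms the picture corresponds to roughly this shape (field
names are my reading of the series, not the exact patch):

    /* One pending-timer list per clock source. */
    typedef struct QEMUTimerList {
        QEMUClock *clock;                  /* "rt", "vm" or "host" source */
        QEMUTimer *active_timers;          /* pending timers, soonest first */
        QLIST_ENTRY(QEMUTimerList) list;   /* on the clock's list of lists */
    } QEMUTimerList;

    /* A timerlist of each clock type; the main loop has one, and every
     * AioContext has its own. */
    typedef QEMUTimerList *QEMUTimerListGroup[QEMU_CLOCK_MAX];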

The other major change is that QEMU now uses the event loop's poll()
timeout value instead of a separate OS-specific timer mechanism (aka the
"alarm timer").

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-06 10:10                         ` Stefan Hajnoczi
  2013-08-06 11:20                           ` Alex Bligh
@ 2013-08-06 23:52                           ` Alex Bligh
  2013-08-07  7:19                             ` Stefan Hajnoczi
  1 sibling, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 23:52 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth

Stefan,

--On 6 August 2013 12:10:17 +0200 Stefan Hajnoczi <stefanha@gmail.com> 
wrote:

> Fails to build ui/vnc-auth-sasl.c
>
> Please make sure that your ./configure output shows most optional
> dependencies are available (you need to install development packages for
> these libraries).  Otherwise you will not see build breakage.

I think I have caught most of these that I can test on Ubuntu Precise in
my v7 patch. I enabled all options I could, and installed all the
dev options I could for linux. Some things would not build, despite
installing the development libraries (by 'not build' I mostly mean
configure would not pick them up when passed --enable-foo with
the obvious dev libraries installed). These are:

host big endian   no
gprof enabled     no
sparse enabled    no
strip binaries    no
profiler          no
static build      no
SDL support       no
mingw32 support   no
vde support       no
RDMA support      no
spice support     no (/)
usb net redir     no
libiscsi support  no
seccomp support   no
GlusterFS support no
libssh2 support   no

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06 14:29                               ` Stefan Hajnoczi
  2013-08-06 14:52                                 ` Alex Bligh
@ 2013-08-06 23:54                                 ` Alex Bligh
  1 sibling, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-06 23:54 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth

Stefan,

--On 6 August 2013 16:29:40 +0200 Stefan Hajnoczi <stefanha@gmail.com> 
wrote:

>> At the very least we should put the conversion to use the new API
>> into a separate patch (possibly a separate patch set). It's fantastically
>> intrusive.
>
> Yes, it should be a separate patch.

OK - this separate patch is not in the v7 series not least because I
want to ensure we all agree on the new API before I try to write some
perl to convert the lot automagically.

Other than that, I think I have picked up everything we talked about for
v7.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-06 23:52                           ` Alex Bligh
@ 2013-08-07  7:19                             ` Stefan Hajnoczi
  2013-08-07 10:14                               ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-07  7:19 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Wed, Aug 07, 2013 at 12:52:57AM +0100, Alex Bligh wrote:
> --On 6 August 2013 12:10:17 +0200 Stefan Hajnoczi
> <stefanha@gmail.com> wrote:
> 
> >Fails to build ui/vnc-auth-sasl.c
> >
> >Please make sure that your ./configure output shows most optional
> >dependencies are available (you need to install development packages for
> >these libraries).  Otherwise you will not see build breakage.
> 
> I think I have caught most of these I can test on Ubuntu Precise in
> my v7 patch. I enabled all options I could, and installed all the
> dev options I could for linux. Some things would not build, despite
> installing the development libraries (by 'not build' I mostly mean
> configure would not pick them up when passed --enable-foo with
> the obvious dev libraries installed). These are:

Some tips on which packages to install:

> host big endian   no
> gprof enabled     no
> sparse enabled    no
> strip binaries    no
> profiler          no
> static build      no

All the ones above are okay, fine to ignore.

> SDL support       no

Do you have libsdl1.2-dev installed?

> mingw32 support   no

Ok.

> vde support       no

You need libvde-dev and/or libvdeplug2-dev.

> RDMA support      no

Ok.

> spice support     no (/)

libspice-protocol-dev and/or libspice-server-dev (can't remember).

> usb net redir     no
> libiscsi support  no
> seccomp support   no
> GlusterFS support no

Ok.

> libssh2 support   no

libssh2-1-dev

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-06 14:18                                 ` Alex Bligh
@ 2013-08-07  7:21                                   ` Stefan Hajnoczi
  2013-08-07 10:14                                     ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-07  7:21 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 03:18:13PM +0100, Alex Bligh wrote:
> Stefan,
> 
> --On 6 August 2013 15:59:11 +0200 Stefan Hajnoczi
> <stefanha@gmail.com> wrote:
> 
> >>--On 6 August 2013 14:02:18 +0200 Stefan Hajnoczi
> >><stefanha@redhat.com> wrote:
> >>My preference would be to move these to qemu_clock_deadline_ns (without
> >>the INT32_MAX check) and delete the old qemu_clock_deadline routine
> >>entirely, but I don't really understand the full set of circumstances
> >>in which the qtest routines are meant to work.
> >
> >Okay, that's excellent.  It would be great to move to a single function.
> >
> >The way qtest works is that it executes QEMU in a mode that does not run
> >guest code.  Instead of running guest code it listens for commands over
> >a socket.  The wire protocol can peek/poke memory, notify of interrupts,
> >and warp the clock.
> >
> >There are test cases that use qtest to test emulated devices.
> >
> >When qtest either steps the clock or sets it to a completely new value
> >using qtest_clock_warp() it runs all vm_clock timers that should expire
> >before the new time.
> >
> >Does this help?
> 
> Nearly :-)
> 
> How do I actually run the code (i.e. how do I test whether I've broken
> it)? I take it that's something different from just 'make check'?

make check includes qtest test cases like rtc-test, i440fx-test,
fdc-test, and others.  As long as they continue to pass all is good.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-06 14:52                                 ` Alex Bligh
@ 2013-08-07  7:27                                   ` Stefan Hajnoczi
  2013-08-07 10:16                                     ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-07  7:27 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 03:52:37PM +0100, Alex Bligh wrote:
> Stefan,
> 
> >>I think I disagree here.
> >>
> >>At the very least we should put the conversion to use the new API
> >>into a separate patch (possibly a separate patch set). It's fantastically
> >>intrusive.
> >
> >Yes, it should be a separate patch.
> ...
> >>Even if the period is just a month (i.e. the old API goes before 1.7),
> >>why break things unnecessarily?
> >
> >Nothing upstream breaks.  Only out-of-tree code breaks but that's life.
> >
> >What's important is that upstream will be clean and easy to understand
> >or debug.  Given how undocumented the QEMU codebase is, leaving legacy
> >API layers around just does more to confuse new contributors.
> >
> >That's why I'd really like to transition now instead of leaving things
> >in a more complex state than before.
> 
> OK. I'm just concerned about the potential fall out. If that's
> what everyone wants, I will do a monster patch to fix this up. Need
> that be part of this series? I can't help thinking it would be
> better to have the series applied first.
> 
> >We end up with:
> >
> >AioContext->tlg and default_timerlistgroup.
> >
> >Regarding naming, I think default_timerlistgroup should be called
> >main_loop_timerlistgroup instead.  The meaning of "default" is not
> >obvious.
> 
> I agree. I will change this.
> 
> >Now let's think about how callers will create QEMUTimers:
> >
> >1. AioContext
> >
> >   timer_new(ctx->tlg[QEMU_CLOCK_RT], SCALE_NS, cb, opaque);
> >
> >   Or with a wrapper:
> >
> >   QEMUTimer *aio_new_timer(AioContext *ctx, QEMUClockType type, int scale,
> >                            QEMUTimerCB *cb, void *opaque)
> >   {
> >       return timer_new(ctx->tlg[type], scale, cb, opaque);
> >   }
> >
> >   aio_new_timer(ctx, QEMU_CLOCK_RT, SCALE_NS, cb, opaque);
> 
> I was actually thinking about adding that wrapper anyway.
> 
> Do you think we need to wrap timer_mod, timer_del, timer_free
> etc. to make aio_timer_mod and so forth, even though they are
> a straight pass through?

Those wrappers are not necessary.  Once the caller has their QEMUTimer
pointer they should use the QEMUTimer APIs directly.

> >2. main-loop
> >
> >   /* without legacy qemu_timer_new() */
> >   timer_new(main_loop_tlg[QEMU_CLOCK_RT], SCALE_NS, cb, opaque);
> >
> >   Or with a wrapper:
> >
> >   QEMUTimer *qemu_new_timer(QEMUClockType type, int scale,
> >                             QEMUTimerCB *cb, void *opaque)
> >   {
> >       return timer_new(main_loop_tlg[type], scale, cb, opaque);
> >   }
> >
> >   qemu_new_timer(QEMU_CLOCK_RT, SCALE_NS, cb, opaque);
> >
> >Is this what you have in mind too?
> 
> Yes.
> 
> Certainly qemu_timer_new (as opposed to qemu_new_timer)
> would be a good addition.

Okay, great.  This makes the conversion from legacy QEMUClock functions
pretty straightforward.  It can be done mechanically in a single big
patch that converts the tree.

> >But this doesn't allow for the array indexing that you do in
> >TimerListGroup later.  I didn't know that at this point in the patch
> >series.
> 
> Yep. I'll leave that if that's OK.

Yes, I'm convinced now that it's worth having.

> >>>> struct QEMUTimer {
> >>>>     int64_t expire_time;	/* in nanoseconds */
> >>>> -    QEMUClock *clock;
> >>>> +    QEMUTimerList *tl;
> >>>
> >>> 'timer_list' is easier to read than just 'tl'.
> >>
> >>It caused a pile of line wrap issues which made the patch harder
> >>to read, so I shortened it. I can put it back if you like.
> >
> >Are you sure it's the QEMUTimer->tl field that causes line wraps?
> >
> >I took a quick look and it seemed like only the QEMUTimerList *tl
> >function argument to the deadline functions could cause line wrap.  The
> >argument variable is unrelated and can stay short since it has a very
> >narrow context - the reader can be expected to remember the tl argument
> >while reading the code for the function.
> 
> From memory it was lots of ->tl expansions within the code that caused
> the issue, rather than the arguments to functions. I'll try again. To be
> honest it's probably easier to change tl to timer_list throughout.

If you convert both that's good too.

> >
> >>>> +bool qemu_clock_use_for_deadline(QEMUClock *clock)
> >>>> +{
> >>>> +    return !(use_icount && (clock->type == QEMU_CLOCK_VIRTUAL));
> >>>> +}
> >>>
> >>> Please use doc comments (see include/object/qom.h for example doc
> >>> comment syntax).  No idea why this function is needed or what it will
> >>> be used for.
> >>
> >>I will comment it, but it mostly does what it says on the tin. Per
> >>Paolo's comment, the vm_clock should not be used for calculation of
> >>deadlines to ppoll etc. if use_icount is true, because it's not actually
> >>in nanoseconds; rather qemu_notify() or aio_notify() get called by the
> >>vm cpu thread when the relevant instruction counter is exceeded.
> >
> >I didn't know that but the explanation makes sense.  Definitely
> >something that could be in a comment.  Perhaps it's best to introduce
> >this small helper function in the patch that actually calls it.
> 
> It's preparation for the next patch. I will add a comment in the
> commit message.

Thanks!

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext
  2013-08-06 14:58                                 ` Alex Bligh
@ 2013-08-07  7:28                                   ` Stefan Hajnoczi
  0 siblings, 0 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-07  7:28 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel, liu ping fan,
	Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka, rth

On Tue, Aug 06, 2013 at 03:58:40PM +0100, Alex Bligh wrote:
> --On 6 August 2013 16:45:12 +0200 Stefan Hajnoczi
> <stefanha@gmail.com> wrote:
> 
> >>Because otherwise make check SEGVs after the patch.
> >
> >It wasn't clear from the patch why there would be a crash.  I looked
> >deeper and timerlistgroup_init() calls qemu_get_clock() indirectly, so
> >we need to make sure that qemu_clocks[] is initialized to avoid a NULL
> >pointer dereference.
> 
> Actually now I recall v4 had:
> 
> @@ -215,6 +216,12 @@ AioContext *aio_context_new(void)
>     aio_set_event_notifier(ctx, &ctx->notifier,
>                            (EventNotifierHandler *)
>                            event_notifier_test_and_clear, NULL);
> +    /* Assert if we don't have rt_clock yet. If you see this assertion
> +     * it means you are using AioContext without having first called
> +     * init_clocks() in main().
> +     */
> +    assert(rt_clock);
> +    ctx->tl = qemu_new_timerlist(rt_clock);
> 
> The equivalent in v7 would be an assert in timerlist_new_from_clock
> to check 'clock' is non-NULL. I shall put that in as the reason for
> this SEGV is non-obvious.

Nice, the comment makes the SEGV clear.

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-07  7:19                             ` Stefan Hajnoczi
@ 2013-08-07 10:14                               ` Alex Bligh
  2013-08-08  8:34                                 ` Stefan Hajnoczi
  0 siblings, 1 reply; 128+ messages in thread
From: Alex Bligh @ 2013-08-07 10:14 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth

Stefan,

> Some tips on which packages to install:
>
>> host big endian   no
>> gprof enabled     no
>> sparse enabled    no
>> strip binaries    no
>> profiler          no
>> static build      no
>
> All the ones above are okay, fine to ignore.
>
>> SDL support       no
>
> Do you have libsdl1.2-dev installed?

ii  libsdl1.2-dev                             1.2.14-6.4ubuntu3

but still doesn't work

>> vde support       no
>
> You need libvde-dev and/or libvdeplug2-dev.

ii  libvde-dev                                2.2.3-3build2

but still doesn't work

>> spice support     no (/)
>
> libspice-protocol-dev and/or libspice-server-dev (can't remember).

ii  gir1.2-spice-client-glib-2.0   0.9-0ubuntu1   GObject for communicating with Spice servers (GObject-Introspection)
ii  gir1.2-spice-client-gtk-2.0    0.9-0ubuntu1   GTK2 widget for SPICE clients (GObject-Introspection)
ii  gir1.2-spice-client-gtk-3.0    0.9-0ubuntu1   GTK3 widget for SPICE clients (GObject-Introspection)
ii  libspice-client-glib-2.0-1     0.9-0ubuntu1   GObject for communicating with Spice servers (runtime library)
ii  libspice-client-glib-2.0-dev   0.9-0ubuntu1   GObject for communicating with Spice servers (development files)
ii  libspice-client-gtk-2.0-1      0.9-0ubuntu1   GTK2 widget for SPICE clients (runtime library)
ii  libspice-client-gtk-2.0-dev    0.9-0ubuntu1   GTK2 widget for SPICE clients (development files)
ii  libspice-client-gtk-3.0-1      0.9-0ubuntu1   GTK3 widget for SPICE clients (runtime library)
ii  libspice-client-gtk-3.0-dev    0.9-0ubuntu1   GTK3 widget for SPICE clients (development files)
ii  libspice-protocol-dev          0.10.1-1       SPICE protocol headers
ii  libspice-server-dev            0.10.0-1       Header files and development documentation for spice-server
ii  libspice-server1               0.10.0-1       Implements the server side of the SPICE protocol

but still doesn't work

>> usb net redir     no
>> libiscsi support  no
>> seccomp support   no
>> GlusterFS support no
>
> Ok.
>
>> libssh2 support   no
>
> libssh2-1-dev

I thought I'd installed that but I hadn't. I have now and it builds
fine with v7.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions
  2013-08-07  7:21                                   ` Stefan Hajnoczi
@ 2013-08-07 10:14                                     ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-07 10:14 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth



--On 7 August 2013 09:21:17 +0200 Stefan Hajnoczi <stefanha@gmail.com> 
wrote:

>> How do I actually run the code (i.e. how do I test whether I've broken
>> it)? I take it that's something different from just 'make check'?
>
> make check includes qtest test cases like rtc-test, i440fx-test,
> fdc-test, and others.  As long as they continue to pass all is good.

Oh great. We're fine then.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList
  2013-08-07  7:27                                   ` Stefan Hajnoczi
@ 2013-08-07 10:16                                     ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-07 10:16 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, Paolo Bonzini, MORITA Kazutaka,
	rth



--On 7 August 2013 09:27:36 +0200 Stefan Hajnoczi <stefanha@gmail.com> 
wrote:

>> Do you think we need to wrap timer_mod, timer_del, timer_free
>> etc. to make aio_timer_mod and so forth, even though they are
>> a straight pass through?
>
> Those wrappers are not necessary.  Once the caller has their QEMUTimer
> pointer they should use the QEMUTimer APIs directly.

Good. That's the approach I took in v7. I'm progressing well with a perl
program to do automatic conversion.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll
  2013-08-06 15:38                         ` Stefan Hajnoczi
@ 2013-08-07 15:43                           ` Paolo Bonzini
  2013-08-07 17:20                             ` Alex Bligh
  0 siblings, 1 reply; 128+ messages in thread
From: Paolo Bonzini @ 2013-08-07 15:43 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, MORITA Kazutaka, rth

> For anyone wishing to review this series, here's a diagram showing the
> new relationships and a summary of this series:
> 
> http://vmsplice.net/~stefan/timerlist.jpg
> 
> TimerList is the list of QEMUTimers which are pending on a QEMUClock
> clock source.

I just started reviewing this and the diagram may be different in
v7, anyway: why do you need a default TimerListGroup?  I would have
thought it is enough to use the default AioContext's own TLG.

Paolo

> Previously QEMUClock contained this list inline but the goal is to let
> AioContexts have their own independent timer lists.  This makes timers
> much more thread-friendly since we don't work on global lists and
> callbacks are dispatched in the same thread's AioContext where they were
> created.
> 
> TimerListGroup holds a timerlist for each clock source ("rt", "vm", and
> "host").  The main loop has the default TimerListGroup.  Each AioContext
> has its own TimerListGroup.
> 
> The other major change is that QEMU now uses the event loop's poll()
> timeout value instead of a separate OS-specific timer mechanism (aka the
> "alarm timer").
> 
> Stefan
> 

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll
  2013-08-07 15:43                           ` Paolo Bonzini
@ 2013-08-07 17:20                             ` Alex Bligh
  0 siblings, 0 replies; 128+ messages in thread
From: Alex Bligh @ 2013-08-07 17:20 UTC (permalink / raw)
  To: Paolo Bonzini, Stefan Hajnoczi
  Cc: Kevin Wolf, Anthony Liguori, Alex Bligh, qemu-devel,
	liu ping fan, Stefan Hajnoczi, MORITA Kazutaka, rth

Paolo,

--On 7 August 2013 11:43:58 -0400 Paolo Bonzini <pbonzini@redhat.com> wrote:

>> For anyone wishing to review this series, here's a diagram showing the
>> new relationships and a summary of this series:
>>
>> http://vmsplice.net/~stefan/timerlist.jpg
>>
>> TimerList is the list of QEMUTimers which are pending on a QEMUClock
>> clock source.
>
> I just started reviewing this and the diagram may be different in
> v7,

No, the data structures are the same, just bits renamed.

> anyway: why do you need a default TimerListGroup?  I would have
> thought it is enough to use the default AioContext's own TLG.

The default TimerListGroup has been renamed main_loop_tlg (or similar)
in order to make this clearer.

Right now, we have a peculiar double set of polling and waiting,
an inner one in AioContext and an outer one in main_loop. Existing
timer users may not be safe to be called within the inner AioContext
and may expect to be run only from main_loop. Also, I believe there
are some binaries that don't even have an AioContext at the moment,
or may not have them at all times when timers are needed.

I understand this may be simplified in the future, in which case
if we always (in every binary) have a default AioContext AND it
exists early enough, we can remove the main_loop_tlg and make
the qemu_ functions use the default AioContext. That's far from
a huge change but I'd rather not do it as part of this series
as I suspect there is a risk of breakage.

-- 
Alex Bligh

^ permalink raw reply	[flat|nested] 128+ messages in thread

* Re: [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files
  2013-08-07 10:14                               ` Alex Bligh
@ 2013-08-08  8:34                                 ` Stefan Hajnoczi
  0 siblings, 0 replies; 128+ messages in thread
From: Stefan Hajnoczi @ 2013-08-08  8:34 UTC (permalink / raw)
  To: Alex Bligh
  Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, qemu-devel,
	liu ping fan, Paolo Bonzini, MORITA Kazutaka, rth

On Wed, Aug 07, 2013 at 11:14:30AM +0100, Alex Bligh wrote:
> >Some tips on which packages to install:
> >
> >>host big endian   no
> >>gprof enabled     no
> >>sparse enabled    no
> >>strip binaries    no
> >>profiler          no
> >>static build      no
> >
> >All the ones above are okay, fine to ignore.
> >
> >>SDL support       no
> >
> >Do you have libsdl1.2-dev installed?
> 
> ii  libsdl1.2-dev                             1.2.14-6.4ubuntu3
> 
> but still doesn't work
> 
> >>vde support       no
> >
> >You need libvde-dev and/or libvdeplug2-dev.
> 
> ii  libvde-dev                                2.2.3-3build2
> 
> but still doesn't work
> 
> >>spice support     no (/)
> >
> >libspice-protocol-dev and/or libspice-server-dev (can't remember).
> 
> ii  gir1.2-spice-client-glib-2.0   0.9-0ubuntu1   GObject for communicating with Spice servers (GObject-Introspection)
> ii  gir1.2-spice-client-gtk-2.0    0.9-0ubuntu1   GTK2 widget for SPICE clients (GObject-Introspection)
> ii  gir1.2-spice-client-gtk-3.0    0.9-0ubuntu1   GTK3 widget for SPICE clients (GObject-Introspection)
> ii  libspice-client-glib-2.0-1     0.9-0ubuntu1   GObject for communicating with Spice servers (runtime library)
> ii  libspice-client-glib-2.0-dev   0.9-0ubuntu1   GObject for communicating with Spice servers (development files)
> ii  libspice-client-gtk-2.0-1      0.9-0ubuntu1   GTK2 widget for SPICE clients (runtime library)
> ii  libspice-client-gtk-2.0-dev    0.9-0ubuntu1   GTK2 widget for SPICE clients (development files)
> ii  libspice-client-gtk-3.0-1      0.9-0ubuntu1   GTK3 widget for SPICE clients (runtime library)
> ii  libspice-client-gtk-3.0-dev    0.9-0ubuntu1   GTK3 widget for SPICE clients (development files)
> ii  libspice-protocol-dev          0.10.1-1       SPICE protocol headers
> ii  libspice-server-dev            0.10.0-1       Header files and development documentation for spice-server
> ii  libspice-server1               0.10.0-1       Implements the server side of the SPICE protocol
> 
> but still doesn't work

Spice can be tricky because you need a sufficiently new version of the
headers.

In general the process of troubleshooting this is to run ./configure and
then chase up the unexpected "no" results in config.log where you can
see the compile test that failed.

Stefan

^ permalink raw reply	[flat|nested] 128+ messages in thread

end of thread, other threads:[~2013-08-08  8:34 UTC | newest]

Thread overview: 128+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1E8E204.8000201@redhat.com>
2013-07-20 18:06 ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Alex Bligh
2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 1/7] aio / timers: Remove alarm timers Alex Bligh
2013-07-25  9:10     ` Stefan Hajnoczi
2013-07-25  9:11       ` Paolo Bonzini
2013-07-25  9:38         ` Alex Bligh
2013-07-25  9:37       ` Alex Bligh
2013-07-25  9:38         ` Paolo Bonzini
2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 2/7] aio / timers: qemu-timer.c utility functions and add list of clocks Alex Bligh
2013-07-23 21:09     ` Richard Henderson
2013-07-23 21:34       ` Alex Bligh
2013-07-25  9:19     ` Stefan Hajnoczi
2013-07-25  9:46       ` Alex Bligh
2013-07-26  8:26         ` Stefan Hajnoczi
2013-07-25  9:21     ` Stefan Hajnoczi
2013-07-25  9:46       ` Alex Bligh
2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 3/7] aio / timers: add ppoll support with qemu_g_poll_ns Alex Bligh
2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 4/7] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 5/7] aio / timers: Add a clock to AioContext Alex Bligh
2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 6/7] aio / timers: Switch to ppoll, run AioContext timers in aio_poll/aio_dispatch Alex Bligh
2013-07-25  9:33     ` Stefan Hajnoczi
2013-07-25 14:53       ` Alex Bligh
2013-07-26  8:34         ` Stefan Hajnoczi
2013-07-25 18:51       ` Alex Bligh
2013-07-20 18:06   ` [Qemu-devel] [PATCHv2] [RFC 7/7] aio / timers: Add test harness for AioContext timers Alex Bligh
2013-07-25  9:37   ` [Qemu-devel] [PATCHv2] [RFC 0/7] aio / timers: Add AioContext timers and use ppoll Stefan Hajnoczi
2013-07-25 22:16     ` [Qemu-devel] [RFC] [PATCHv3 00/12] " Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 01/12] aio / timers: add qemu-timer.c utility functions Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 02/12] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 03/12] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 04/12] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 05/12] aio / timers: Add a clock to AioContext Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 06/12] aio / timers: Add an AioContext pointer to QEMUClock Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 07/12] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 08/12] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 09/12] aio / timers: convert mainloop to use timeout Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 10/12] aio / timers: on timer modification, qemu_notify or aio_notify Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 11/12] aio / timers: Remove alarm timers Alex Bligh
2013-07-25 22:16       ` [Qemu-devel] [RFC] [PATCHv3 12/12] aio / timers: Add test harness for AioContext timers Alex Bligh
2013-07-25 22:22       ` [Qemu-devel] [RFC] [PATCHv3 00/12] aio / timers: Add AioContext timers and use ppoll Alex Bligh
2013-07-26 18:37         ` [Qemu-devel] [RFC] [PATCHv4 00/13] " Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 01/13] aio / timers: add qemu-timer.c utility functions Alex Bligh
2013-08-01 12:07             ` Paolo Bonzini
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 02/13] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 03/13] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 04/13] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 05/13] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 06/13] aio / timers: Add a QEMUTimerList to AioContext Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 07/13] aio / timers: Add an AioContext pointer to QEMUTimerList Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 08/13] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 09/13] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout Alex Bligh
2013-08-01 12:41             ` Paolo Bonzini
2013-08-01 13:43               ` Alex Bligh
2013-08-01 14:14                 ` Paolo Bonzini
2013-08-02 13:09                   ` Alex Bligh
2013-08-04 18:09                     ` [Qemu-devel] [RFC] [PATCHv5 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 02/16] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 06/16] aio / timers: Untangle include files Alex Bligh
2013-08-06 10:10                         ` Stefan Hajnoczi
2013-08-06 11:20                           ` Alex Bligh
2013-08-06 11:48                             ` Stefan Hajnoczi
2013-08-06 23:52                           ` Alex Bligh
2013-08-07  7:19                             ` Stefan Hajnoczi
2013-08-07 10:14                               ` Alex Bligh
2013-08-08  8:34                                 ` Stefan Hajnoczi
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 07/16] aio / timers: Add QEMUTimerListGroup and helper functions Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 08/16] aio / timers: Add QEMUTimerListGroup to AioContext Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 09/16] aio / timers: Add a notify callback to QEMUTimerList Alex Bligh
2013-08-04 18:09                       ` [Qemu-devel] [RFC] [PATCHv5 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 12/16] aio / timers: Convert mainloop to use timeout Alex Bligh
2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 13/16] aio / timers: On timer modification, qemu_notify or aio_notify Alex Bligh
2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 14/16] aio / timers: Use all timerlists in icount warp calculations Alex Bligh
2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 15/16] aio / timers: Remove alarm timers Alex Bligh
2013-08-04 18:10                       ` [Qemu-devel] [RFC] [PATCHv5 16/16] aio / timers: Add test harness for AioContext timers Alex Bligh
2013-08-06  9:16                       ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 01/16] aio / timers: add qemu-timer.c utility functions Alex Bligh
2013-08-06 12:02                           ` Stefan Hajnoczi
2013-08-06 12:30                             ` Alex Bligh
2013-08-06 13:59                               ` Stefan Hajnoczi
2013-08-06 14:18                                 ` Alex Bligh
2013-08-07  7:21                                   ` Stefan Hajnoczi
2013-08-07 10:14                                     ` Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 02/16] aio / timers: add ppoll support with qemu_poll_ns Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 03/16] aio / timers: Add prctl(PR_SET_TIMERSLACK, 1, ...) to reduce timer slack Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 04/16] aio / timers: Make qemu_run_timers and qemu_run_all_timers return progress Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 05/16] aio / timers: Split QEMUClock into QEMUClock and QEMUTimerList Alex Bligh
2013-08-06 12:26                           ` Stefan Hajnoczi
2013-08-06 12:49                             ` Alex Bligh
2013-08-06 14:29                               ` Stefan Hajnoczi
2013-08-06 14:52                                 ` Alex Bligh
2013-08-07  7:27                                   ` Stefan Hajnoczi
2013-08-07 10:16                                     ` Alex Bligh
2013-08-06 23:54                                 ` Alex Bligh
2013-08-06 13:16                             ` Alex Bligh
2013-08-06 14:31                               ` Stefan Hajnoczi
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 06/16] aio / timers: Untangle include files Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 07/16] aio / timers: Add QEMUTimerListGroup and helper functions Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 08/16] aio / timers: Add QEMUTimerListGroup to AioContext Alex Bligh
2013-08-06 12:30                           ` Stefan Hajnoczi
2013-08-06 12:50                             ` Alex Bligh
2013-08-06 14:45                               ` Stefan Hajnoczi
2013-08-06 14:58                                 ` Alex Bligh
2013-08-07  7:28                                   ` Stefan Hajnoczi
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 09/16] aio / timers: Add a notify callback to QEMUTimerList Alex Bligh
2013-08-06 12:34                           ` Stefan Hajnoczi
2013-08-06 12:50                             ` Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 10/16] aio / timers: aio_ctx_prepare sets timeout from AioContext timers Alex Bligh
2013-08-06 12:40                           ` Stefan Hajnoczi
2013-08-06 12:54                             ` Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 11/16] aio / timers: Convert aio_poll to use AioContext timers' deadline Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 12/16] aio / timers: Convert mainloop to use timeout Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 13/16] aio / timers: On timer modification, qemu_notify or aio_notify Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 14/16] aio / timers: Use all timerlists in icount warp calculations Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 15/16] aio / timers: Remove alarm timers Alex Bligh
2013-08-06  9:16                         ` [Qemu-devel] [RFC] [PATCHv6 16/16] aio / timers: Add test harness for AioContext timers Alex Bligh
2013-08-06  9:29                         ` [Qemu-devel] [RFC] [PATCHv6 00/16] aio / timers: Add AioContext timers and use ppoll Alex Bligh
2013-08-06 11:53                         ` Stefan Hajnoczi
2013-08-06 15:38                         ` Stefan Hajnoczi
2013-08-07 15:43                           ` Paolo Bonzini
2013-08-07 17:20                             ` Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 11/13] aio / timers: on timer modification, qemu_notify or aio_notify Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 12/13] aio / timers: Remove alarm timers Alex Bligh
2013-07-26 18:37           ` [Qemu-devel] [RFC] [PATCHv4 13/13] aio / timers: Add test harness for AioContext timers Alex Bligh
