* [Qemu-devel] timer issue on 1.7.0 and later
@ 2014-02-07 18:15 Rob Herring
  2014-02-08 11:48 ` Alex Bligh
From: Rob Herring @ 2014-02-07 18:15 UTC
  To: qemu-devel, alex; +Cc: Peter Maydell

I've bisected a problem with system emulation and SMP kernels using
per cpu timers to this commit. I can reproduce this problem on ARM
emulation with both ARM generic timers (only in 1.7.0) and ARM MPCore
timers. Using a single broadcast timer in the guest kernel works fine.
My host is Ubuntu 13.10.

commit b1bbfe72ec1ebf302d97f886cc646466c0abd679
Author: Alex Bligh <alex@alex.org.uk>
Date:   Wed Aug 21 16:02:55 2013 +0100

    aio / timers: On timer modification, qemu_notify or aio_notify

    On qemu_mod_timer_ns, ensure qemu_notify or aio_notify is called to
    end the appropriate poll(), irrespective of use_icount value.

    On qemu_clock_enable, ensure qemu_notify or aio_notify is called for
    all QEMUTimerLists attached to the QEMUClock.

    Signed-off-by: Alex Bligh <alex@alex.org.uk>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


This can be reproduced with a simple busybox initramfs and spawning
several instances of a simple shell script to load the cores:

while [ 1 ]; do echo rob > /dev/null; done &

The symptom is that user interaction becomes sluggish and jerky, and then
the kernel prints messages about soft lockups, RCU stalls, and/or lines like:

hrtimer: interrupt took 3030033000 ns
[sched_delayed] sched: RT throttling activated

I also intermittently see a hang on boot after hitting this warning:

[    0.640204] WARNING: CPU: 0 PID: 0 at
/home/rob/proj/git/linux-2.6/kernel/time/clockevents.c:212
clockevents_program_event+0x50/0x138()

which is from here:

if (unlikely(expires.tv64 < 0)) {
        WARN_ON_ONCE(1);
        return -ETIME;
}

I'm not sure whether this warning is caused by the same commit, but it
looks like I'm getting incorrect timer values from QEMU.


It appears to me that this bug report may also be related:

https://bugs.launchpad.net/qemu/+bug/1222034

Rob


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-07 18:15 [Qemu-devel] timer issue on 1.7.0 and later Rob Herring
@ 2014-02-08 11:48 ` Alex Bligh
  2014-02-08 13:26   ` Paolo Bonzini
From: Alex Bligh @ 2014-02-08 11:48 UTC
  To: Rob Herring; +Cc: Peter Maydell, qemu-devel, Alex Bligh

Rob,

On 7 Feb 2014, at 18:15, Rob Herring wrote:

> I've bisected a problem with system emulation and SMP kernels using
> per cpu timers to this commit. I can reproduce this problem on ARM
> emulation with both ARM generic timers (only in 1.7.0) and ARM MPCore
> timers. Using a single broadcast timer in the guest kernel works fine.
> My host is ubuntu 13.10.

I don't know the ARM emulation well, but from the description this looks
like a problem where timers are firing too often. The timer changes have
tended to uncover bugs elsewhere in QEMU, for instance code setting timer
expiry times to something very near zero. So far (touch wood) the timer
changes themselves have been relatively clean.

What I'd suggest you do is run QEMU under gdb, and when you have
seen the sluggish behaviour, set a breakpoint in timerlist_run_timers
just before the line saying cb(opaque). If I'm right, the breakpoint
should be hit immediately. Step into the callback and note which one it is.
It should always (or nearly always) be the same timer (note that the process
of debugging will itself cause other timers to expire).
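
Something along these lines (a sketch only - the exact breakpoint
placement will depend on your tree, so adjust to taste):

(gdb) break timerlist_run_timers
(gdb) run <your usual QEMU command line>
... wait until the guest goes sluggish; when the breakpoint fires:
(gdb) next         # repeat until you are on the cb(opaque) line
(gdb) step         # steps into the timer callback
(gdb) backtrace    # shows which timer/device the callback belongs to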

You then want to find where that timer is set and see if the expiry
value looks silly. Normally the issue is that timer_mod takes an
argument which (i) is an absolute time, not an interval, and (ii) is
measured in nanoseconds unless the timer's scale has been
set otherwise (quite often I've seen code read the current time
in nanoseconds and then add an offset in milliseconds).
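
To illustrate the usual mistake (the device state 's' and its 'timer'
field here are made up for the example, not taken from any real model,
and the timer is assumed to use the default nanosecond scale):

    /* Wrong: 10 is an absolute deadline 10 ns after the epoch,
     * so the timer fires (almost) immediately. */
    timer_mod(s->timer, 10);

    /* Intended: an absolute deadline 10 ms from now, in nanoseconds. */
    timer_mod(s->timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 10 * SCALE_MS);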

If you find which timer it is but can't work out why it's doing it,
I can take another look.

Alex


> 
> commit b1bbfe72ec1ebf302d97f886cc646466c0abd679
> Author: Alex Bligh <alex@alex.org.uk>
> Date:   Wed Aug 21 16:02:55 2013 +0100
> 
>    aio / timers: On timer modification, qemu_notify or aio_notify
> 
>    On qemu_mod_timer_ns, ensure qemu_notify or aio_notify is called to
>    end the appropriate poll(), irrespective of use_icount value.
> 
>    On qemu_clock_enable, ensure qemu_notify or aio_notify is called for
>    all QEMUTimerLists attached to the QEMUClock.
> 
>    Signed-off-by: Alex Bligh <alex@alex.org.uk>
>    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> 
> 
> This can be reproduced with a simple busybox initramfs and spawning
> several instances of a simple shell script to load the cores:
> 
> while [ 1 ]; do echo rob > /dev/null; done &
> 
> The symptom is user interaction become sluggish and jerky, and then
> kernel messages about soft lockup, rcu stalls and/or like this:
> 
> hrtimer: interrupt took 3030033000 ns
> [sched_delayed] sched: RT throttling activated
> 
> I also intermittently hang on boot hitting this warning:
> 
> [    0.640204] WARNING: CPU: 0 PID: 0 at
> /home/rob/proj/git/linux-2.6/kernel/time/clockevents.c:212
> clockevents_program_event+0x50/0x138()
> 
> which is from here:
> 
> if (unlikely(expires.tv64 < 0)) {
> WARN_ON_ONCE(1);
> return -ETIME;
> }
> 
> I'm not sure if this warning is caused by the same commit or not, but
> it seems like I'm getting wrong timer values from qemu.
> 
> 
> It appears to me that this bug report may also be related:
> 
> https://bugs.launchpad.net/qemu/+bug/1222034
> 
> Rob
> 
> 

-- 
Alex Bligh


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-08 11:48 ` Alex Bligh
@ 2014-02-08 13:26   ` Paolo Bonzini
  2014-02-08 15:20     ` Alex Bligh
From: Paolo Bonzini @ 2014-02-08 13:26 UTC
  To: Alex Bligh, Rob Herring; +Cc: Peter Maydell, qemu-devel

On 08/02/2014 12:48, Alex Bligh wrote:
> Rob,
>
> On 7 Feb 2014, at 18:15, Rob Herring wrote:
>
>> I've bisected a problem with system emulation and SMP kernels using
>> per cpu timers to this commit. I can reproduce this problem on ARM
>> emulation with both ARM generic timers (only in 1.7.0) and ARM MPCore
>> timers. Using a single broadcast timer in the guest kernel works fine.
>> My host is ubuntu 13.10.
>
> I don't know the ARM emulation well, but from the description this looks
> like a problem where timers are firing too often. The timer changes have
> tended to uncover bugs elsewhere in the QEMU, for instance setting timer
> expiry times to something very near zero. So far (touch wood) the timer
> changes themselves have been relatively clean.
>
> What I'd suggest you do is run qemu within gdb, and when you have
> seen the sluggish behaviour, set a breakpoint in timerlist_run_timers
> just before the line saying cb(opaque). If I'm right your breakpoint
> should immediately hit. Step in to the callback and note which it is.
> It should always (or nearly always) be the same timer (note the process
> of debugging will cause other timers to expire).

Alex, could you add a trace event to timerlist_run_timers and timer_mod?
For timerlist_run_timers it should include the timer (ts), the callback
and the opaque value.
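
For example, something along these lines in trace-events (the names and
format strings here are only a sketch of what I mean, not a patch):

    timer_mod(void *ts, int64_t expire_time) "ts %p expire_time %" PRId64
    timerlist_run_timers(void *ts, void *cb, void *opaque) "ts %p cb %p opaque %p"

with the corresponding trace_timerlist_run_timers(ts, cb, opaque) call
placed just before cb(opaque).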

This would make it simpler to find if this is the cause, and to debug it 
without introducing as much perturbation.

Paolo

> You then want to find where that timer is set and see if the expiry
> value looks silly. Normally the issue is that timer_mod takes an
> argument (i) which is absolute time, not an interval, and (ii) is
> measured in nanoseconds unless the timer's scale has been
> set otherwise (quite often I've seen it read the current time
> in nanoseconds then add an offset in milliseconds).
>
> If you find which timer it is but can't work out why it's doing it,
> I can take another look.


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-08 13:26   ` Paolo Bonzini
@ 2014-02-08 15:20     ` Alex Bligh
  2014-02-10 17:15       ` Rob Herring
From: Alex Bligh @ 2014-02-08 15:20 UTC
  To: Paolo Bonzini; +Cc: Peter Maydell, Rob Herring, qemu-devel, Alex Bligh

Paolo,

On 8 Feb 2014, at 13:26, Paolo Bonzini wrote:

>> 
>> What I'd suggest you do is run qemu within gdb, and when you have
>> seen the sluggish behaviour, set a breakpoint in timerlist_run_timers
>> just before the line saying cb(opaque). If I'm right your breakpoint
>> should immediately hit. Step in to the callback and note which it is.
>> It should always (or nearly always) be the same timer (note the process
>> of debugging will cause other timers to expire).
> 
> Alex, could you add a trace event to timerlist_run_timers and mod_timer?  For timerlist_run_timer it should include the timer (ts), the callback and the opaque value.
> 
> This would make it simpler to find if this is the cause, and to debug it without introducing as much perturbation.

A while ago I wrote some instrumentation. I can't recall how far I got with
testing it, but here it is rebased onto master without so much as a compile test
(top 4 commits - ignore the 5th):
   https://github.com/abligh/qemu/commit/e711816da9831c97f7d8a111b610ef42f7ba69fa

As is evident, this does not use the QEMU trace infrastructure (and it needs
to be rewritten to do that). However, it does usefully instrument the timers
in that it records where they are registered (by file and line number),
counts expiries and short expiries (currently those less than 1 microsecond),
measures average expiry length, etc. What I was trying to do was to get a single
file out that would make it very obvious whether timers were the cause
of slowness and, if so, which timer was guilty.
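
The per-timer record is roughly of this shape (a from-memory sketch of
what the patch does, not the actual code):

    typedef struct TimerStats {
        const char *file;           /* where the timer was registered */
        int         line;
        uint64_t    expiries;       /* total expiries seen */
        uint64_t    short_expiries; /* expiries scheduled < 1us ahead */
        int64_t     total_length;   /* sum of intervals, for the average */
    } TimerStats;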

I agree re the tracepoints. How much of the rest of this is useful? Is there
currently a debug mechanism in QEMU where you can say "dump some stats out"
that doesn't require adding command-line options and the like?

-- 
Alex Bligh


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-08 15:20     ` Alex Bligh
@ 2014-02-10 17:15       ` Rob Herring
  2014-02-10 18:54         ` Alex Bligh
  2014-02-13 15:36         ` Peter Maydell
From: Rob Herring @ 2014-02-10 17:15 UTC
  To: Alex Bligh; +Cc: Paolo Bonzini, qemu-devel, Peter Maydell

On Sat, Feb 8, 2014 at 9:20 AM, Alex Bligh <alex@alex.org.uk> wrote:
> Paolo,
>
> On 8 Feb 2014, at 13:26, Paolo Bonzini wrote:
>
>>>
>>> What I'd suggest you do is run qemu within gdb, and when you have
>>> seen the sluggish behaviour, set a breakpoint in timerlist_run_timers
>>> just before the line saying cb(opaque). If I'm right your breakpoint
>>> should immediately hit. Step in to the callback and note which it is.
>>> It should always (or nearly always) be the same timer (note the process
>>> of debugging will cause other timers to expire).
>>
>> Alex, could you add a trace event to timerlist_run_timers and mod_timer?  For timerlist_run_timer it should include the timer (ts), the callback and the opaque value.
>>
>> This would make it simpler to find if this is the cause, and to debug it without introducing as much perturbation.
>
> A while ago a wrote some instrumentation. I can't recall how far I got with
> testing it, but here it is rebased onto master without so much as a compile test
> (top 4 commits - ignore the 5th):
>    https://github.com/abligh/qemu/commit/e711816da9831c97f7d8a111b610ef42f7ba69fa

This doesn't appear to be too useful. The AvgLength is large because
INT64_MAX / GTIMER_SCALE is used as the next timer value when no timer
is needed. I also changed the code to just call timer_del instead of
setting a max timeout, and the lengths then look normal (~10 ms).

Timer list at 0x7fae6463d680 clock 1:
           Address       Expiries      AvgLength       NumShort Source
    0x7fae646659e0            673 13704855334071269              0 cpu.c:229
    0x7fae646650f0            691    -6386455212              0 cpu.c:229
    0x7fae6480e630              1  4294967294556              0 ptimer.c:229

This seems to be more related to the interaction between the two timers,
as the same timer works fine for single-core use. The relevant code is
target-arm/helper.c:gt_recalc_timer.

I did find one bug that would seem to cause interrupts to be missed or
to fire twice, but fixing it didn't make any difference:

diff --git a/target-arm/helper.c b/target-arm/helper.c
index ca5b000..325515f 100644
--- a/target-arm/helper.c
+++ b/target-arm/helper.c
@@ -905,7 +905,7 @@ static int gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
     if ((oldval ^ value) & 1) {
         /* Enable toggled */
         gt_recalc_timer(cpu, timeridx);
-    } else if ((oldval & value) & 2) {
+    } else if ((oldval ^ value) & 2) {
         /* IMASK toggled: don't need to recalculate,
          * just set the interrupt line based on ISTATUS
          */

Rob


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-10 17:15       ` Rob Herring
@ 2014-02-10 18:54         ` Alex Bligh
  2014-02-13 15:36         ` Peter Maydell
From: Alex Bligh @ 2014-02-10 18:54 UTC
  To: Rob Herring; +Cc: Paolo Bonzini, qemu-devel, Alex Bligh, Peter Maydell

Rob,

>> A while ago a wrote some instrumentation. I can't recall how far I got with
>> testing it, but here it is rebased onto master without so much as a compile test
>> (top 4 commits - ignore the 5th):
>>   https://github.com/abligh/qemu/commit/e711816da9831c97f7d8a111b610ef42f7ba69fa
> 
> This doesn't appear to be too useful.

Ha - I didn't expect you to run this code (this was mainly to ask Paolo
what he wanted in it), as I haven't even used it myself. I'm surprised it
didn't SEGV everywhere!

> The AvgLength is large because
> INT64_MAX / GTIMER_SCALE is used as the next timer value when no timer
> is needed.

That's pretty ugly. Perhaps timer_mod should take an expire time of 0 to
mean 'make the timer already expired'; currently this causes it to
immediately expire as far as I can see.

> I also changed the code to just call timer_del instead of
> setting a max timeout and the length looks normal (~10ms).

OK

> Timer list at 0x7fae6463d680 clock 1:
>           Address       Expiries      AvgLength       NumShort Source
>    0x7fae646659e0            673 13704855334071269              0 cpu.c:229
>    0x7fae646650f0            691    -6386455212              0 cpu.c:229
>    0x7fae6480e630              1  4294967294556              0 ptimer.c:229

This is without your change to use timer_del? I guess the negative one
is integer wraparound.

The interesting point here is 0 sub-millisecond expiries (if that
bit of code is working).

Do you think that line 229 is bogus? My copy of master says there's
a timer at lines 225/226 and 227/228 of the ARM cpu.c and at
line 229 of ptimer.c, but it's a bit of a coincidence, and I'd
have thought cpu.c would have two timers going.

> This seems to be more related to interaction between the 2 timers as
> the same timer works fine for single core use. The relevant code is
> target-arm/helper.c:gt_recalc_timer.

This code confuses me a little. GTIMER_SCALE is 16. That means the clock
operates in periods of 16 ns, which is not something I'd envisaged, but
I suppose it should work. That does indeed give a 62.5 MHz timer, or would
do if programmed with '1'. Given that, I am not /quite/ sure why the
clamping is so worrisome, given that as far as I can tell 2^63 nanoseconds
is about 292 years.
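
(Spelling out the arithmetic: a 16 ns period is 1 / 16e-9 s = 62.5 MHz,
and 2^63 ns is roughly 9.2e18 ns = 9.2e9 s, or about 292 years.)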

I wonder if what is happening is that with two cores (for some reason)
it's constantly modifying the timers (when they have yet to expire),
even though this doesn't actually change when anything expires.

Is use_icount set to true? If so, it might be worth looking at
the icount code.

If not, I'm wondering whether what is happening is that timer_mod is
being called sufficiently frequently that each call causes a
qemu_notify_event. This breaks out of the main-loop ppoll() because the
expiry time for the ppoll has changed (marginally).

One optimisation here would be to not do a qemu_notify_event if the
deadline has been extended.
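
i.e. something along these lines in the timer_mod path (a sketch of the
idea only, not a tested patch; the deadline values would have to be
captured around the re-insert):

    static void timerlist_notify_if_earlier(QEMUTimerList *timer_list,
                                            int64_t old_deadline,
                                            int64_t new_deadline)
    {
        /* Only kick the event loop when the new head deadline is
         * earlier than the one it is already sleeping towards. */
        if (new_deadline < old_deadline) {
            timerlist_notify(timer_list); /* qemu_notify_event()/aio_notify() */
        }
    }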

Summary: I'm interested in quite how frequently timer_mod is being
called. If it's 'far more frequently than the timer actually
expires' I think that could cause an issue.

> I did find one bug that would seem to cause interrupts to be missed or
> fire twice, but it didn't make any difference:
> 
> diff --git a/target-arm/helper.c b/target-arm/helper.c
> index ca5b000..325515f 100644
> --- a/target-arm/helper.c
> +++ b/target-arm/helper.c
> @@ -905,7 +905,7 @@ static int gt_ctl_write(CPUARMState *env, const
> ARMCPRegInfo *ri,
>     if ((oldval ^ value) & 1) {
>         /* Enable toggled */
>         gt_recalc_timer(cpu, timeridx);
> -    } else if ((oldval & value) & 2) {
> +    } else if ((oldval ^ value) & 2) {
>         /* IMASK toggled: don't need to recalculate,
>          * just set the interrupt line based on ISTATUS
>          */


I'm not familiar enough with the arm code to help here.

-- 
Alex Bligh


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-10 17:15       ` Rob Herring
  2014-02-10 18:54         ` Alex Bligh
@ 2014-02-13 15:36         ` Peter Maydell
  2014-02-13 16:09           ` Alex Bligh
From: Peter Maydell @ 2014-02-13 15:36 UTC
  To: Rob Herring; +Cc: Paolo Bonzini, QEMU Developers, Alex Bligh

On 10 February 2014 17:15, Rob Herring <robherring2@gmail.com> wrote:
> This doesn't appear to be too useful. The AvgLength is large because
> INT64_MAX / GTIMER_SCALE is used as the next timer value when no timer
> is needed.

The code already uses timer_del() when no next timer is needed
(that's the "timer disabled" case in gt_recalc_timer()). We only
set the timer to INT64_MAX / GTIMER_SCALE for the case where the
architecture demands an interrupt occurs in the really distant
future. Admittedly the chances of QEMU still running in 200 years'
time are quite low, but it seemed at the time better to just
write the code to behave correctly, since I assumed that calling
timer_mod with a large timeout wasn't going to be much more
expensive than calling timer_del. (Looking at the code, we end
up walking the list of timers twice rather than once.)
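
(For reference, the path in question looks roughly like this - paraphrased
from memory rather than quoted, so the details may be slightly off:)

    void timer_mod_ns(QEMUTimer *ts, int64_t expire_time)
    {
        QEMUTimerList *timer_list = ts->timer_list;
        bool rearm;

        qemu_mutex_lock(&timer_list->active_timers_lock);
        timer_del_locked(timer_list, ts);                 /* walk 1: unlink */
        rearm = timer_mod_ns_locked(timer_list, ts,
                                    expire_time);         /* walk 2: sorted re-insert */
        qemu_mutex_unlock(&timer_list->active_timers_lock);

        if (rearm) {
            timerlist_rearm(timer_list);  /* may end up in timerlist_notify() */
        }
    }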

thanks
-- PMM


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-13 15:36         ` Peter Maydell
@ 2014-02-13 16:09           ` Alex Bligh
  2014-02-13 16:20             ` Peter Maydell
From: Alex Bligh @ 2014-02-13 16:09 UTC
  To: Peter Maydell; +Cc: Paolo Bonzini, Rob Herring, QEMU Developers, Alex Bligh


On 13 Feb 2014, at 15:36, Peter Maydell wrote:

> On 10 February 2014 17:15, Rob Herring <robherring2@gmail.com> wrote:
>> This doesn't appear to be too useful. The AvgLength is large because
>> INT64_MAX / GTIMER_SCALE is used as the next timer value when no timer
>> is needed.
> 
> The code already uses timer_del() when no next timer is needed
> (that's the "timer disabled" case in gt_recalc_timer()). We only
> set the timer to INT64_MAX / GTIMER_SCALE for the case where the
> architecture demands an interrupt occurs in the really distant
> future. Admittedly the chances of QEMU still running in 200 years
> time are quite low, but it seemed at the time to be better to just
> write the code to behave correctly, since I assumed that calling
> timer_mod with a large timeout wasn't going to be really more
> expensive than calling timer_del. (Looking at the code, we end
> up walking the list of timers twice rather than once.)

I suspect the issue is not walking the lists, but calling
qemu_notify, breaking out of the main-loop select, etc.; that
happens on a timer_mod but not on a timer_del. We could
fix this so that it only happened if the timer's expiry
time was reduced in timer_mod (I think).

-- 
Alex Bligh


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-13 16:09           ` Alex Bligh
@ 2014-02-13 16:20             ` Peter Maydell
  2014-02-13 16:31               ` Alex Bligh
From: Peter Maydell @ 2014-02-13 16:20 UTC
  To: Alex Bligh; +Cc: Paolo Bonzini, Rob Herring, QEMU Developers

On 13 February 2014 16:09, Alex Bligh <alex@alex.org.uk> wrote:
> I suspect the issue is not walking the lists, but calling
> qemu_notify, breaking out of mainloop select etc. etc.; that
> happens on a timer_modify but not on a timer_del. We could
> fix this so that it only happened if the timer's expiry
> time was reduced in timer_mod (I think).

Surely you only want to do all that work if the fiddling
with the timer means the next most immediate deadline
has changed? If you have two timers, and A is going to
expire before B, then even reducing B's expiry time shouldn't
trigger renotifying unless it's reduced so much it will
expire before A.

thanks
-- PMM


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-13 16:20             ` Peter Maydell
@ 2014-02-13 16:31               ` Alex Bligh
  2014-02-13 17:04                 ` Peter Maydell
From: Alex Bligh @ 2014-02-13 16:31 UTC
  To: Peter Maydell; +Cc: Paolo Bonzini, Rob Herring, QEMU Developers, Alex Bligh


On 13 Feb 2014, at 16:20, Peter Maydell wrote:

> On 13 February 2014 16:09, Alex Bligh <alex@alex.org.uk> wrote:
>> I suspect the issue is not walking the lists, but calling
>> qemu_notify, breaking out of mainloop select etc. etc.; that
>> happens on a timer_modify but not on a timer_del. We could
>> fix this so that it only happened if the timer's expiry
>> time was reduced in timer_mod (I think).
> 
> Surely you only want to do all that work if the fiddling
> with the timer means the next most immediate deadline
> has changed? If you have two timers, and A is going to
> expire before B, then even reducing B's expiry time shouldn't
> trigger renotifying unless it's reduced so much it will
> expire before A.

Correct.

The current code (timer_mod_ns_locked) runs the rearm code only if the
modified timer ends up at the front of the timer queue. So if you modify
B (in your example above), whether you extend or reduce the time, it
will only 'rearm' if B now occurs before A. However, if you modify A, it
will currently rearm whether you reduce the expiry time of A (correct)
or extend it, so long as it doesn't now expire after B (this could
perhaps be optimised).

So it's not quite as bad as you think.
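
From memory, the insertion logic is roughly this (a paraphrase rather
than an exact quote):

    static bool timer_mod_ns_locked(QEMUTimerList *timer_list,
                                    QEMUTimer *ts, int64_t expire_time)
    {
        QEMUTimer **pt, *t;

        /* insert into the active list, which is kept sorted by expiry */
        pt = &timer_list->active_timers;
        for (;;) {
            t = *pt;
            if (!timer_expired_ns(t, expire_time)) {
                break;
            }
            pt = &t->next;
        }
        ts->expire_time = expire_time;
        ts->next = *pt;
        *pt = ts;

        /* rearm (and hence notify) only when we went in at the head */
        return pt == &timer_list->active_timers;
    }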

-- 
Alex Bligh


* Re: [Qemu-devel] timer issue on 1.7.0 and later
  2014-02-13 16:31               ` Alex Bligh
@ 2014-02-13 17:04                 ` Peter Maydell
From: Peter Maydell @ 2014-02-13 17:04 UTC
  To: Alex Bligh; +Cc: Paolo Bonzini, Rob Herring, QEMU Developers

On 13 February 2014 16:31, Alex Bligh <alex@alex.org.uk> wrote:
> The current code (timer_mod_ns_locked) runs the rearm code
> if the modified timer is at the front of the timer queue
> (only). So if you modify B (in your example above) whether
> you extend or reduce the time, it will only 'rearm' if
> B now occurs before A. However, if you modify A, it will
> currently rearm whether you reduce the expiry time of
> A (correct) or whether you extend it so long as it doesn't
> now expire after B (this could perhaps be optimised).
>
> So it's not quite as bad as you think.

A little experimentation with the debugger on an SMP
A9 guest suggests this may be a red herring anyway. Once
the guest has started, the only thing that was causing calls
to timerlist_notify was the GUI, and if you run with
-display none we don't call timerlist_notify at all
at the point where we seem to be stalling.

thanks
-- PMM

