From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 01 Aug 2013 14:43:48 +0100
From: Alex Bligh
Subject: Re: [Qemu-devel] [RFC] [PATCHv4 10/13] aio / timers: Convert mainloop to use timeout
In-Reply-To: <20130801124152.GF5245@mail.corp.redhat.com>
References: <34B083D4DB6D2988A571E420@nimrod.local> <1374863862-32517-1-git-send-email-alex@alex.org.uk> <1374863862-32517-11-git-send-email-alex@alex.org.uk> <20130801124152.GF5245@mail.corp.redhat.com>
To: Paolo Bonzini
Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, qemu-devel@nongnu.org, Alex Bligh, rth@twiddle.net

Paolo,

>> @@ -449,6 +460,7 @@ int main_loop_wait(int nonblocking)
>>  {
>>      int ret;
>>      uint32_t timeout = UINT32_MAX;
>> +    int64_t timeout_ns;
>>
>>      if (nonblocking) {
>>          timeout = 0;
>> @@ -462,7 +474,21 @@ int main_loop_wait(int nonblocking)
>>      slirp_pollfds_fill(gpollfds);
>>  #endif
>>      qemu_iohandler_fill(gpollfds);
>> -    ret = os_host_main_loop_wait(timeout);
>> +
>> +    if (timeout == UINT32_MAX) {
>> +        timeout_ns = -1;
>> +    } else {
>> +        timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS);
>> +    }
>> +
>> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
>> +                                      qemu_clock_deadline_ns(rt_clock));
>> +
>> +    timeout_ns = qemu_soonest_timeout(timeout_ns,
>> +                                      qemu_clock_deadline_ns(vm_clock));
>
> This must not be included if use_icount.

Really? qemu_run_all_timers was running all 3 timers irrespective of
use_icount, and doing a qemu_notify if anything expired, so I'm merely
mimicking the existing behaviour here.

I'm not quite sure what use_icount does. Does vm_clock get disabled if
it is set? (in which case it won't matter)

> Allowing only one rt_clock clock for each AioContext is a simplification,
> but I'm worried that it will be a problem later. For example, the block
> layer wants to use vm_clock. Perhaps QEMUTimerList should really have
> three lists, one for each clock type?

Well, currently each QEMUClock has a default QEMUTimerList, so that
wouldn't work well - see below.

The way it's done at the moment is that the QEMUTimerList user can
create as many QEMUTimerLists as he wants. So AioContext asks for one
of one type. It could equally ask for three - one of each type. I think
that's probably adequate.

> Once you do this, you get some complications due to more data structures,
> but other code is simplified noticeably. For example, you lose the
> concept of a default timerlist (it's just the timerlist of the default
> AioContext).

Yep - but per the above, that change is really intrusive (I think I
touched well over a hundred files). The problem is that lots of things
refer to vm_clock to set a timer (when it's a clock, so they should use
a timer list) and also to vm_clock to read the time (which is a clock
function). Changing vm_clock into vm_timerlist plus vm_clock was truly
horrible. Hence the default timerlist concept, which I admit is not
great, but it saved a horribly intrusive patch, not all of which I
could test. I did write that patch, and scrapped it.

> And because all timerlists have an AioContext,

Well, old callers, particularly those not using an AioContext, would
not have an AioContext, would they?

> you do not
> need to special case aio_notify() vs. qemu_notify_event().
Well, if I do a v5, I am going to make the constructor for creating a
timerlist take a callback function saying what should happen when the
clock is enabled etc., and, if none is specified, call
qemu_notify_event(). The AioContext user would specify a callback that
calls aio_notify(). This would be a bit nicer.

> There are a couple of places to be careful about, of course. For
> example,
>
>     if (use_icount && qemu_clock_deadline(vm_clock) <= 0) {
>         qemu_notify_event();
>     }
>
> in cpus.c must be changed to iterate over all timerlists.

I was trying hard to avoid anything having to iterate over all
timerlists, and to leave the timerlist per-thread where possible. This
may well fail for the clock warp stuff. I probably need to do exactly
the same here as on qemu_clock_enable() if use_icount is true. WDYT?

-- 
Alex Bligh