From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 17 Jul 2013 11:02:30 +0800
From: Stefan Hajnoczi
Message-ID: <20130717030230.GA27807@stefanha-thinkpad.redhat.com>
References: <51E4613D.9000106@redhat.com> <44590808AF4A6E7DC093637A@nimrod.local> <51E4E54A.10908@redhat.com> <51E4F77C.2090509@redhat.com> <794E19D97CCC267CCBFA8397@Ximines.local> <51E56A1A.50502@redhat.com> <19631228D7B62545DC6A2928@Ximines.local> <51E57AF3.1050409@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] [RFC] aio/async: Add timed bottom-halves
To: Alex Bligh
Cc: Kevin Wolf, Anthony Liguori, qemu-devel@nongnu.org, Stefan Hajnoczi, Paolo Bonzini, rth@twiddle.net

On Tue, Jul 16, 2013 at 10:24:38PM +0100, Alex Bligh wrote:
> --On 16 July 2013 18:55:15 +0200 Paolo Bonzini wrote:
>
> >> What do you think? In the end I thought the schedule_bh_at stuff
> >> was simpler.
> >
> > It is simpler, but I'm not sure it is the right API. Of course, if
> > Kevin and Stefan says it is, I have no problem with that.
>
> For the sake of having something to comment on, I just sent v3 of this
> patch to the list.
> This is basically a 'minimal change' version that fixes the issue with
> aio_poll (I think). It passes make check.

I would prefer to stick with QEMUTimer instead of introducing an
AioContext-specific concept that does something very similar. This can be
done by introducing a per-AioContext QEMUClock.

Legacy QEMUTimers will not run during aio_poll() because they are
associated with vm_clock, host_clock, or rt_clock. Only QEMUTimers
associated with this AioContext's aio_ctx_clock will run.

In other words, the main loop will run vm_clock, host_clock, and rt_clock
timers, while each AioContext will run its own aio_ctx_clock timers.

A few notes about QEMUTimer and QEMUClock:

 * A QEMUClock can be enabled/disabled. Disabled clocks suppress timer
   expiration until re-enabled.

 * A QEMUClock can use an arbitrary time source; this is used to present a
   virtual time based on the instruction counter when icount mode is
   active.

 * A QEMUTimer is associated with a QEMUClock. This allows, for example,
   timers that only expire while vm_clock is enabled.

 * Modifying a QEMUTimer calls qemu_notify_event(), since the modification
   may happen in a vcpu thread while the iothread is blocked.

The steps to achieve this:

1. Drop alarm timers from qemu-timer.c and instead calculate the g_poll()
   timeout for the main loop.

2. Introduce a per-AioContext aio_ctx_clock that can be passed to
   qemu_new_timer() to create a QEMUTimer that expires during aio_poll().

3. Calculate the g_poll() timeout for aio_ctx_clock in aio_poll().

Stefan