From: Avi Kivity
Date: Tue, 23 Jun 2009 14:23:48 +0300
Subject: [Qemu-devel] Re: [PATCH 06/11] QMP: Introduce asynchronous events infrastructure
To: "Daniel P. Berrange"
Cc: aliguori@us.ibm.com, ehabkost@redhat.com, jan.kiszka@siemens.com, dlaor@redhat.com, qemu-devel@nongnu.org, Luiz Capitulino
Message-ID: <4A40BB44.5020608@redhat.com>
In-Reply-To: <20090623103251.GH6881@redhat.com>
References: <20090623012911.16b8c4d5@doriath> <20090623103251.GH6881@redhat.com>

On 06/23/2009 01:32 PM, Daniel P. Berrange wrote:
>> +/* Asynchronous events main function */
>> +void monitor_notify_event(MonitorEvent event)
>> +{
>> +    if (!monitor_ctrl_mode(cur_mon))
>> +        return;
>> +
>> +    assert(event < EVENT_MAX);
>> +    monitor_puts(cur_mon, "* EVENT ");
>> +
>> +    switch (event) {
>> +    case EVENT_MAX:
>> +        // Avoid gcc warning, will never get here
>> +        break;
>> +    }
>> +
>> +    monitor_puts(cur_mon, "\n");
>> +}
>> +
>
> If a client is not reading from the monitor channel quickly enough, then
> would this cause QEMU to block once the FD buffer is full? A QEMU-level
> buffer might give more leeway, but we don't want it to grow unbounded, so
> ultimately we'll end up having to drop events. At that point you'd want
> to send an event to the client indicating that the event queue overflowed,
> so it can take remedial action to re-sync its state.

IMO relying on kernel buffers is sufficient here. An 8k buffer will hold
hundreds of events. If the system is so congested that management cannot
consume events rapidly enough, then it will also be so congested that
guest vcpus will be starved. Blocking when the buffers are full is
acceptable, IMO.

-- 
error compiling committee.c: too many arguments to function
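
Avi's arithmetic holds up: an event line of the form "* EVENT <name>\n" runs
on the order of 16-32 bytes, so an 8 KiB socket buffer absorbs roughly
256-512 queued events before the writer would block.

For the alternative Daniel describes, here is a minimal, self-contained C
sketch of a bounded event queue that drops events once full and emits one
synthetic OVERFLOW event on the next drain, so the client knows to re-sync
its state. Every name in it (EventQueue, event_queue_push, EVENT_OVERFLOW,
and so on) is a hypothetical illustration, not part of the patch under
review or of QEMU's monitor API.

#include <stdbool.h>
#include <stdio.h>

#define EVENT_QUEUE_CAP 8   /* deliberately tiny for the demo */

typedef enum {
    EVENT_RESET,
    EVENT_SHUTDOWN,
    EVENT_OVERFLOW,         /* synthetic: tells the client events were lost */
    EVENT_MAX
} MonitorEvent;

typedef struct {
    MonitorEvent ring[EVENT_QUEUE_CAP];
    int head, count;
    bool overflowed;
} EventQueue;

/* Enqueue an event; once the queue is full, drop it but remember that we
 * did, so the client can later be told to re-sync. */
static void event_queue_push(EventQueue *q, MonitorEvent ev)
{
    if (q->count == EVENT_QUEUE_CAP) {
        q->overflowed = true;
        return;
    }
    q->ring[(q->head + q->count) % EVENT_QUEUE_CAP] = ev;
    q->count++;
}

/* Drain the queue to the (possibly slow) client. If anything was dropped,
 * emit a single OVERFLOW event first, per Daniel's suggestion. */
static void event_queue_drain(EventQueue *q, FILE *out)
{
    if (q->overflowed) {
        fprintf(out, "* EVENT OVERFLOW\n");
        q->overflowed = false;
    }
    while (q->count > 0) {
        fprintf(out, "* EVENT %d\n", q->ring[q->head]);
        q->head = (q->head + 1) % EVENT_QUEUE_CAP;
        q->count--;
    }
}

int main(void)
{
    EventQueue q = {0};
    for (int i = 0; i < 12; i++)    /* push more than the queue holds */
        event_queue_push(&q, EVENT_RESET);
    event_queue_drain(&q, stdout);  /* prints OVERFLOW, then 8 events */
    return 0;
}

The trade-off is the one both sides name: the bounded queue never blocks
the guest, but the client must treat OVERFLOW as "query everything again",
whereas relying on the socket buffer keeps every event at the cost of
blocking the writer once the client falls far enough behind.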