From: Ian Campbell
Subject: Re: Interesting observation with network event notification and batching
Date: Mon, 17 Jun 2013 11:16:53 +0100
Message-ID: <1371464213.23802.20.camel@zakaz.uk.xensource.com>
In-Reply-To: <51BEFBBD02000078000DEC3F@nat28.tlf.novell.com>
To: Jan Beulich
Cc: Wei Liu, Konrad Rzeszutek Wilk, stefano.stabellini@eu.citrix.com,
    xen-devel@lists.xen.org, annie.li@oracle.com,
    andrew.bennieston@citrix.com

On Mon, 2013-06-17 at 11:06 +0100, Jan Beulich wrote:
> >>> On 17.06.13 at 11:38, Ian Campbell wrote:
> > On Sun, 2013-06-16 at 10:54 +0100, Wei Liu wrote:
> >> > > Konrad, IIRC you once mentioned you discovered something with
> >> > > event notification, what's that?
> >> >
> >> > They were bizarre. I naively expected the number of physical NIC
> >> > interrupts to be around the same as the VIF's, or less. And I
> >> > figured that the number of interrupts would be constant regardless
> >> > of the size of the packets. In other words #packets == #interrupts.
> >> >
> >>
> >> It could be that the frontend notifies the backend for every packet
> >> it sends. This is not desirable and I don't expect the ring to
> >> behave that way.
> >
> > It is probably worth checking that things are working how we think
> > they should, i.e. that netback's calls to RING_FINAL_CHECK_FOR_...
> > and netfront's calls to RING_PUSH_..._AND_CHECK_NOTIFY are placed at
> > suitable points to maximise batching.
> >
> > Is the RING_FINAL_CHECK_FOR_REQUESTS inside the
> > xen_netbk_tx_build_gops loop right? This would push the req_event
> > pointer to just after the last request, meaning the next request
> > enqueued by the frontend would cause a notification -- even though
> > the backend is actually still continuing to process requests and
> > would have picked up that packet without further notification. In
> > this case there is a fair bit of work left in the backend for this
> > iteration, i.e. plenty of opportunity for the frontend to queue more
> > requests.
> >
> > The comments in ring.h say:
> >  * These macros will set the req_event/rsp_event field to trigger a
> >  * notification on the very next message that is enqueued. If you want to
> >  * create batches of work (i.e., only receive a notification after several
> >  * messages have been enqueued) then you will need to create a customised
> >  * version of the FINAL_CHECK macro in your own code, which sets the event
> >  * field appropriately.
> >
> > Perhaps we want to just use RING_HAS_UNCONSUMED_REQUESTS in that loop
> > (and other similar loops) and add a FINAL check at the very end?
>
> But then again the macro doesn't update req_event when there
> are unconsumed requests already upon entry to the macro.

My concern was that when we process the last request currently on the
ring we immediately move req_event forward, even though netback goes on
to do a bunch more work (including e.g. the grant copies) before
looping back and looking for more work.
That's a potentially large window for the frontend to enqueue, and then
needlessly notify, a new packet. In the worst case that degenerates
into the pathological behaviour of a notification for every single
packet.
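Concretely, something along these lines is the shape I have in mind --
an untested sketch only, with the function name made up for
illustration, but using the real ring.h macros: keep polling with the
cheap RING_HAS_UNCONSUMED_REQUESTS while there is work, and only arm
req_event via the FINAL check once the loop has genuinely run dry:

/* Hypothetical sketch of the xen_netbk_tx_build_gops inner loop. */
static void tx_build_gops_sketch(struct xenvif *vif)
{
	int work_to_do;

	for (;;) {
		if (!RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) {
			/*
			 * Only now that we are about to go idle do we
			 * advance req_event. The FINAL check also
			 * re-tests afterwards, closing the race with a
			 * request enqueued in the meantime.
			 */
			RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx,
						      work_to_do);
			if (!work_to_do)
				break;
		}

		/* ... consume the next request, build grant ops ... */
	}
}

That way the frontend only sees req_event move once we really have
nothing left to do. Going further, as the ring.h comment suggests, a
customised FINAL_CHECK could set req_event several slots ahead to batch
harder, but then we'd need some fallback (e.g. a timer) so that a
partial batch doesn't sit unprocessed forever.

Ian.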