From: Rusty Russell
To: "Michael S. Tsirkin"
Cc: linux-kernel@vger.kernel.org, Carsten Otte, Christian Borntraeger,
    linux390@de.ibm.com, Martin Schwidefsky, Heiko Carstens, Shirley Ma,
    lguest@lists.ozlabs.org, virtualization@lists.linux-foundation.org,
    netdev@vger.kernel.org, linux-s390@vger.kernel.org, kvm@vger.kernel.org,
    Krishna Kumar, Tom Lendacky, steved@us.ibm.com, habanero@linux.vnet.ibm.com
Subject: Re: [PATCHv2 10/14] virtio_net: limit xmit polling
In-Reply-To: <20110528200204.GB7046@redhat.com>
References: <877h9kvlps.fsf@rustcorp.com.au> <20110522121008.GA12155@redhat.com>
    <87boyutbjg.fsf@rustcorp.com.au> <20110523111900.GB27212@redhat.com>
    <8739k3k1fb.fsf@rustcorp.com.au> <20110525060759.GC26352@redhat.com>
    <87vcwyjg2w.fsf@rustcorp.com.au> <20110528200204.GB7046@redhat.com>
Date: Mon, 30 May 2011 15:57:39 +0930
Message-ID: <8739jwk8is.fsf@rustcorp.com.au>

On Sat, 28 May 2011 23:02:04 +0300, "Michael S. Tsirkin" wrote:
> On Thu, May 26, 2011 at 12:58:23PM +0930, Rusty Russell wrote:
> > ie. free two packets for every one we're about to add.  For steady state
> > that would work really well.
>
> Sure, with indirect buffers, but if we
> don't use indirect (and we discussed switching indirect off
> dynamically in the past) this becomes harder to
> be sure about. I think I understand why but
> does not a simple capacity check make it more obvious?

...

> > Then we hit the case where the ring
> > seems full after we do the add: at that point, screw latency, and just
> > try to free all the buffers we can.
>
> I see. But the code currently does this:
>
> 	for(..)
> 		get_buf
> 	add_buf
> 	if (capacity < max_sk_frags+2) {
> 		if (!enable_cb)
> 			for(..)
> 				get_buf
> 	}
>
> In other words the second get_buf is only called
> in the unlikely case of race condition.
>
> So we'll need to add *another* call to get_buf.
> Is it just me or is this becoming messy?

Yes, good point.  I really wonder if anyone would be able to measure the
difference between simply freeing 2 every time (with possible extra stalls
for strange cases) and the more complete version.

But it runs against my grain to implement heuristics when one more call
would make it provably reliable.

Please find a way to make that for loop less ugly though!

Thanks,
Rusty.
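
For readers following the thread, here is a minimal, self-contained sketch of
the policy Rusty describes: reclaim at most two used buffers per transmit in
the steady state, and fall back to reclaiming everything completed only when
the ring still looks too full after the add.  This is a toy C model, not the
virtio_net patch under review; the toy_ring structure, the reclaim() helper,
and the RING_SIZE/MAX_FRAGS constants are illustrative stand-ins for the real
virtqueue_get_buf()/virtqueue_add_buf() calls and MAX_SKB_FRAGS.

    /*
     * Toy model (not the actual virtio_net code): a fixed-size ring where
     * each transmit reclaims at most two completed descriptors, and only a
     * nearly-full ring triggers a reclaim-everything pass.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define RING_SIZE 8
    #define MAX_FRAGS 2     /* stand-in for the real MAX_SKB_FRAGS */

    struct toy_ring {
        int in_flight;      /* descriptors the "device" currently owns */
        int completed;      /* of those, how many the device has finished */
    };

    static int ring_capacity(const struct toy_ring *r)
    {
        return RING_SIZE - r->in_flight;
    }

    /* Reclaim at most 'limit' completed descriptors ("get_buf"). */
    static int reclaim(struct toy_ring *r, int limit)
    {
        int freed = 0;

        while (freed < limit && r->completed > 0) {
            r->completed--;
            r->in_flight--;
            freed++;
        }
        return freed;
    }

    /* One transmit under the "free two, add, then check capacity" policy. */
    static bool toy_xmit(struct toy_ring *r)
    {
        /* Steady state: two reclaims per add keep the ring from filling. */
        reclaim(r, 2);

        if (ring_capacity(r) == 0)
            return false;   /* the real driver would stop the queue here */

        r->in_flight++;     /* "add_buf": hand one descriptor to the device */

        /* Ring looks nearly full: screw latency, free all we can. */
        if (ring_capacity(r) < MAX_FRAGS + 2)
            reclaim(r, RING_SIZE);

        return true;
    }

    int main(void)
    {
        struct toy_ring r = { 0, 0 };

        for (int i = 0; i < 20; i++) {
            /* Pretend the device completes one outstanding buffer
             * every third iteration, so the ring slowly fills up. */
            if (i % 3 == 0 && r.in_flight > r.completed)
                r.completed++;

            printf("xmit %2d: %-9s (capacity now %d)\n", i,
                   toy_xmit(&r) ? "ok" : "ring full", ring_capacity(&r));
        }
        return 0;
    }

Run as-is, the loop exercises both paths: in the steady state each transmit
pays for at most two reclaims, and the free-everything branch fires only once
capacity drops below the worst-case room needed for one more packet, mirroring
the "capacity < max_sk_frags+2" check quoted above.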