From: Krishna Kumar2
Subject: Re: [PATCHv2 10/14] virtio_net: limit xmit polling
Date: Tue, 24 May 2011 18:20:35 +0530
In-Reply-To: <20110524112901.GB17087@redhat.com>
References: <877h9kvlps.fsf@rustcorp.com.au>
	<20110522121008.GA12155@redhat.com>
	<87boyutbjg.fsf@rustcorp.com.au>
	<20110523111900.GB27212@redhat.com>
	<20110524091255.GB16886@redhat.com>
	<20110524112901.GB17087@redhat.com>
To: "Michael S. Tsirkin"
Cc: habanero@linux.vnet.ibm.com, lguest@lists.ozlabs.org, Shirley Ma,
	kvm@vger.kernel.org, Carsten Otte, linux-s390@vger.kernel.org,
	Heiko Carstens, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, steved@us.ibm.com,
	Christian Borntraeger, Tom Lendacky, netdev@vger.kernel.org,
	Martin Schwidefsky, linux390@de.ibm.com
List-Id: virtualization@lists.linuxfoundation.org

"Michael S. Tsirkin" wrote on 05/24/2011 04:59:39 PM:

> > > > Maybe Rusty means it is a simpler model to free the amount
> > > > of space that this xmit needs. We will still fail anyway
> > > > at some time, but it is unlikely, since the earlier iteration
> > > > freed up at least the space that it was going to use.
> > >
> > > Not sure I understand. We can't know space is freed in the previous
> > > iteration as buffers might not have been used by then.
> >
> > Yes, the first few iterations may not have freed up space, but
> > later ones should. The amount of free space should increase
> > from then on, especially since we try to free double of what
> > we consume.
>
> Hmm. This is only an upper limit on the # of entries in the queue.
> Assume that vq size is 4 and we transmit 4 entries without
> getting anything in the used ring. The next transmit will fail.
>
> So I don't really see why it's unlikely that we reach the packet
> drop code with your patch.

I was assuming 256 entries :) I will try to get some numbers
tomorrow to see how often this happens.

> > I am not sure why it was changed, since returning TX_BUSY
> > seems more efficient IMHO. qdisc_restart() handles requeued
> > packets much better than a stopped queue, as a significant
> > part of this code is skipped if gso_skb is present.
>
> I think this is the argument:
> http://www.mail-archive.com/virtualization@lists.linux-foundation.org/msg06364.html

Thanks for digging up that thread! Yes, that one skb would get sent
first, ahead of possibly higher-priority skbs. However, from a
performance point of view, the TX_BUSY path skips a lot of checks
and code for all subsequent packets until the device is restarted.
I can test performance with both approaches and report what I find
(the requeue code has become very simple and clean, from "horribly
complex", thanks to Herbert and Dave).

> > (qdisc will eventually start dropping packets when tx_queue_len is
>
> tx_queue_len is a pretty large buffer so maybe no.

I remember seeing tons of drops (in pfifo_fast_enqueue) when xmit
returns TX_BUSY.

> I think the packet drops from the scheduler queue can also be
> done intelligently (e.g. with CHOKe) which should
> work better than dropping a random packet?

I am not sure about that - choke_enqueue compares the arriving skb
against a randomly picked skb already in the queue, dropping both
when they belong to the same flow, and it does this only during
congestion.
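
To make that concrete, here is a minimal sketch of the CHOKe idea as
I understand it - the helpers (queue_is_congested(),
pick_random_queued_skb(), same_flow(), drop_queued_skb(),
enqueue_tail()) are hypothetical stand-ins, not the actual sch_choke
code:

/* Minimal sketch of the CHOKe idea; not the actual sch_choke code.
 * All helpers here are hypothetical stand-ins.
 */
static int choke_style_enqueue(struct Qdisc *sch, struct sk_buff *skb)
{
	if (queue_is_congested(sch)) {
		/* Compare the arriving packet with one picked at
		 * random from the queue. */
		struct sk_buff *victim = pick_random_queued_skb(sch);

		if (victim && same_flow(victim, skb)) {
			/* Same flow: drop both, penalizing the flow
			 * occupying more of the queue. */
			drop_queued_skb(sch, victim);
			kfree_skb(skb);
			return NET_XMIT_CN;
		}
	}
	return enqueue_tail(sch, skb);
}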
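
And going back to the TX_BUSY question above, this is roughly the
driver-side choice I am describing - an illustrative sample, not the
actual virtio_net start_xmit (tx_ring_full(), drop_or_queue() and
queue_to_ring() are made-up helpers):

/* Illustrative sample xmit; not the actual virtio_net code. */
static netdev_tx_t sample_start_xmit(struct sk_buff *skb,
				     struct net_device *dev)
{
	if (tx_ring_full(dev)) {
#ifdef USE_TX_BUSY
		/* Option A: push the skb back to the qdisc. It is
		 * kept as gso_skb, and qdisc_restart() retries it
		 * with much of the enqueue path skipped. */
		return NETDEV_TX_BUSY;
#else
		/* Option B (current virtio_net behaviour): stop the
		 * queue; the skb must still be consumed (queued or
		 * dropped) before returning TX_OK. */
		netif_stop_queue(dev);
		drop_or_queue(dev, skb);
		return NETDEV_TX_OK;
#endif
	}
	queue_to_ring(dev, skb);
	return NETDEV_TX_OK;
}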
But for my "sample driver xmit", returning TX_BUSY could still allow to be used with CHOKe. thanks, - KK