From: "Michael S. Tsirkin" <mst@redhat.com>
To: Rusty Russell <rusty@rustcorp.com.au>
Cc: linux-kernel@vger.kernel.org, Carsten Otte <cotte@de.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>, linux390@de.ibm.com,
	Martin Schwidefsky <schwidefsky@de.ibm.com>,
	Heiko Carstens <heiko.carstens@de.ibm.com>, Shirley Ma <xma@us.ibm.com>,
	lguest@lists.ozlabs.org, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-s390@vger.kernel.org, kvm@vger.kernel.org,
	Krishna Kumar <krkumar2@in.ibm.com>, Tom Lendacky <tahm@linux.vnet.ibm.com>,
	steved@us.ibm.com, habanero@linux.vnet.ibm.com
Subject: Re: [PATCHv2 10/14] virtio_net: limit xmit polling
Date: Sat, 28 May 2011 23:02:04 +0300
Message-ID: <20110528200204.GB7046@redhat.com>
In-Reply-To: <87vcwyjg2w.fsf@rustcorp.com.au>

On Thu, May 26, 2011 at 12:58:23PM +0930, Rusty Russell wrote:
> On Wed, 25 May 2011 09:07:59 +0300, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Wed, May 25, 2011 at 11:05:04AM +0930, Rusty Russell wrote:
> > Hmm, I'm not sure I got it; I need to think about this.
> > I'd like to go back and document how my design was supposed to work.
> > This really should have been in the commit log or even a comment.
> > I thought we need a min, not a max.
> > We start with this:
> >
> > 	while ((c = (virtqueue_get_capacity(vq) < 2 + MAX_SKB_FRAGS)) &&
> > 	       (skb = get_buf()))
> > 		kfree_skb(skb);
> > 	return !c;
> >
> > This is clean and simple, right? And it's exactly asking for what we need.
>
> No, I started from the other direction:
>
> 	for (i = 0; i < 2; i++) {
> 		skb = get_buf();
> 		if (!skb)
> 			break;
> 		kfree_skb(skb);
> 	}
>
> ie. free two packets for every one we're about to add.  For steady state
> that would work really well.

Sure, with indirect buffers; but if we don't use indirect (and we have
discussed switching indirect off dynamically in the past) this becomes
harder to be sure about. I think I understand why, but doesn't a simple
capacity check make it more obvious?
> Then we hit the case where the ring
> seems full after we do the add: at that point, screw latency, and just
> try to free all the buffers we can.

I see. But the code currently does this:

	for (..)
		get_buf();
	add_buf();
	if (capacity < 2 + MAX_SKB_FRAGS) {
		if (!enable_cb())
			for (..)
				get_buf();
	}

In other words, the second get_buf is only called in the unlikely case
of a race condition. So we'll need to add *another* call to get_buf.
Is it just me, or is this becoming messy?

I was also worried that we are adding more "modes" to the code: high
and low latency depending on the relative speeds of host and guest,
which would be hard to trigger and test. That's why I tried hard to
make the code behave the same all the time, and free up just a bit
more than the minimum necessary.

> > on the normal path min == 2 so we're low latency but we keep ahead on
> > average. min == 0 for the "we're out of capacity, we may have to stop
> > the queue".
> >
> > Does the above make sense at all?
>
> It makes sense, but I think it's a classic case where incremental
> improvements aren't as good as starting from scratch.
>
> Cheers,
> Rusty.

The only difference on the good path seems to be an extra capacity
check, so I don't expect the difference will be measurable, do you?

-- 
MST