From: Jason Wang <jasowang@redhat.com>
To: Rick Jones <rick.jones2@hp.com>
Cc: mst@redhat.com, mashirle@us.ibm.com, krkumar2@in.ibm.com,
	habanero@linux.vnet.ibm.com, rusty@rustcorp.com.au,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, edumazet@google.com,
	tahm@linux.vnet.ibm.com, jwhan@filewood.snu.ac.kr,
	davem@davemloft.net, akong@redhat.com, kvm@vger.kernel.org,
	sri@us.ibm.com
Subject: Re: [net-next RFC V5 0/5] Multiqueue virtio-net
Date: Mon, 09 Jul 2012 11:23:25 +0800
Message-ID: <4FFA4EAD.7000707@redhat.com>
In-Reply-To: <4FF710FD.2090100@hp.com>

On 07/07/2012 12:23 AM, Rick Jones wrote:
> On 07/06/2012 12:42 AM, Jason Wang wrote:
>> I'm not a TCP expert, but the changes look reasonable:
>> - we can do the full-sized TSO check in tcp_tso_should_defer() only for
>> Westwood, according to TCP Westwood
>> - run tcp_tso_should_defer() for tso_segs = 1 when TSO is enabled.
>
> I'm sure Eric and David will weigh in on the TCP change.  My initial
> inclination would have been to say "well, if multiqueue is draining
> faster, that means ACKs come back faster, which means the 'race'
> between more data being queued by netperf and ACKs will go more to the
> ACKs, which means the segments being sent will be smaller - as
> TCP_NODELAY is not set, the Nagle algorithm is in force, which means
> once there is data outstanding on the connection, no more will be sent
> until either the outstanding data is ACKed, or there is an
> accumulation of > MSS worth of data to send."
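
For reference, the Nagle behaviour described above is controlled by the
TCP_NODELAY socket option, which is what netperf's test-specific -D option
sets. A minimal Python sketch (host and port are made-up placeholders), just
to show the option involved:

    import socket

    # With TCP_NODELAY set, small sends go out immediately instead of being
    # held until the outstanding data is ACKed or a full MSS has accumulated,
    # i.e. Nagle is disabled.  Host and port below are placeholders.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.connect(("192.0.2.1", 5001))
    s.sendall(b"small request")
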
>
>>> Also, how are you combining the concurrent netperf results?  Are you
>>> taking sums of what netperf reports, or are you gathering statistics
>>> outside of netperf?
>>>
>>
>> The throughput numbers were just summed from the netperf results, as the
>> netperf manual suggests.  The CPU utilization was measured with mpstat.
>
> Which mechanism did you use to address skew error?  The netperf manual
> describes more than one:

This mechanism was missing from my tests; I will add it to my test scripts.
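
As a sketch of what the current aggregation amounts to (and where skew error
creeps in) - simply summing what each instance reports, with a hypothetical
results-file layout of one throughput value per line:

    # Naive aggregation: sum the throughput each netperf instance reported.
    # If the instances do not start and stop at the same time, this sum
    # overstates the true aggregate (skew error).  The file layout is a
    # made-up placeholder: one throughput value (10^6 bits/s) per line.
    def aggregate(path="throughputs.txt"):
        with open(path) as f:
            return sum(float(line) for line in f if line.strip())

    print("aggregate throughput: %.2f 10^6 bits/s" % aggregate())
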
>
> http://www.netperf.org/svn/netperf2/trunk/doc/netperf.html#Using-Netperf-to-Measure-Aggregate-Performance 
>
>
> Personally, my preference these days is to use the "demo mode" method 
> of aggregate results as it can be rather faster than (ab)using the 
> confidence intervals mechanism, which I suspect may not really scale 
> all that well to large numbers of concurrent netperfs.

In my tests, the confidence interval was hard to achieve even in the RR test
when I pinned the vhost threads and vCPUs to specific processors, so I
didn't use it.
>
> I also tend to use the --enable-burst configure option to allow me to 
> minimize the number of concurrent netperfs in the first place.  Set 
> TCP_NODELAY (the test-specific -D option) and then have several 
> transactions outstanding at one time (test-specific -b option with a 
> number of additional in-flight transactions).
>
> This is expressed in the runemomniaggdemo.sh script:
>
> http://www.netperf.org/svn/netperf2/trunk/doc/examples/runemomniaggdemo.sh 
>
>
> which uses the find_max_burst.sh script:
>
> http://www.netperf.org/svn/netperf2/trunk/doc/examples/find_max_burst.sh
>
> to pick the burst size to use in the concurrent netperfs, the results 
> of which can be post-processed with:
>
> http://www.netperf.org/svn/netperf2/trunk/doc/examples/post_proc.py
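
Roughly, the kind of invocation those scripts drive might look like the
sketch below (assuming netperf was configured with --enable-burst; the host
name, burst size and instance count are made-up placeholders):

    import subprocess

    # Launch a few concurrent TCP_RR netperfs with TCP_NODELAY (test-specific
    # -D) and several transactions in flight (test-specific -b, only present
    # when netperf is built with --enable-burst).
    host, burst, instances = "guest1", 4, 2
    procs = [
        subprocess.Popen(
            ["netperf", "-H", host, "-t", "TCP_RR", "-l", "30",
             "--", "-D", "-b", str(burst)],
            stdout=subprocess.PIPE, text=True)
        for _ in range(instances)
    ]
    for p in procs:
        out, _ = p.communicate()
        print(out)
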
>
> The nice feature of the "demo mode" mechanism is that, when it is coupled
> with systems with reasonably synchronized clocks (e.g. NTP), it can be used
> for many-to-many testing in addition to one-to-many testing (which cannot
> be dealt with by the confidence-interval method of addressing skew error).
>

Yes, "demo mode" looks helpful. I will have a look at these scripts, thanks.
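
As a rough sketch of how the demo-mode interim results could be combined
(parsing left out; assume each instance has already been reduced to
(timestamp, throughput) samples - this is simplified compared to what
post_proc.py actually does):

    # Keep only the interval common to all instances, so start/stop skew does
    # not inflate the sum.  With reasonably synchronized clocks this works
    # across machines too.  The sample data below is made up.
    def aggregate_demo(instances):
        start = max(samples[0][0] for samples in instances)
        stop = min(samples[-1][0] for samples in instances)
        total = 0.0
        for samples in instances:
            window = [tp for ts, tp in samples if start <= ts <= stop]
            if window:
                total += sum(window) / len(window)  # mean rate in the window
        return total

    inst_a = [(1.0, 900.0), (2.0, 950.0), (3.0, 940.0)]
    inst_b = [(1.5, 800.0), (2.5, 820.0), (3.5, 810.0)]
    print("aggregate ~ %.0f 10^6 bits/s" % aggregate_demo([inst_a, inst_b]))
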
>>> A single instance TCP_RR test would help confirm/refute any
>>> non-trivial change in (effective) path length between the two cases.
>>>
>>
>> Yes, I will test this, thanks.
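
A single-instance check like that could be as simple as the sketch below
(placeholder host; -c and -C request local and remote CPU utilization):

    import subprocess

    # Single-instance TCP_RR run; the transactions/s it reports roughly
    # tracks per-transaction path length, so comparing single-queue and
    # multiqueue runs hints at any path-length change.  Host is a placeholder.
    subprocess.run(["netperf", "-H", "guest1", "-t", "TCP_RR",
                    "-l", "30", "-c", "-C"])
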
>
> Excellent.
>
> happy benchmarking,
>
> rick jones
>


Thread overview: 46+ messages
2012-07-05 10:29 [net-next RFC V5 0/5] Multiqueue virtio-net Jason Wang
2012-07-05 10:29 ` [net-next RFC V5 1/5] virtio_net: Introduce VIRTIO_NET_F_MULTIQUEUE Jason Wang
2012-07-05 10:29 ` [net-next RFC V5 2/5] virtio_ring: move queue_index to vring_virtqueue Jason Wang
2012-07-05 11:40   ` Sasha Levin
2012-07-06  3:17     ` Jason Wang
2012-07-26  8:20     ` Paolo Bonzini
2012-07-30  3:30       ` Jason Wang
2012-07-05 10:29 ` [net-next RFC V5 3/5] virtio: intorduce an API to set affinity for a virtqueue Jason Wang
2012-07-27 14:38   ` Paolo Bonzini
2012-07-29 20:40     ` Michael S. Tsirkin
2012-07-30  6:27       ` Paolo Bonzini
2012-08-09 15:14         ` Paolo Bonzini
2012-08-09 15:13   ` Paolo Bonzini
2012-08-09 15:35     ` Avi Kivity
2012-07-05 10:29 ` [net-next RFC V5 4/5] virtio_net: multiqueue support Jason Wang
2012-07-05 20:02   ` Amos Kong
2012-07-06  7:45     ` Jason Wang
2012-07-20 13:40   ` Michael S. Tsirkin
2012-07-21 12:02     ` Sasha Levin
2012-07-23  5:54       ` Jason Wang
2012-07-23  9:28         ` Sasha Levin
2012-07-30  3:29           ` Jason Wang
2012-07-29  9:44       ` Michael S. Tsirkin
2012-07-30  3:26         ` Jason Wang
2012-07-30 13:00         ` Sasha Levin
2012-07-23  5:48     ` Jason Wang
2012-07-29  9:50       ` Michael S. Tsirkin
2012-07-30  5:15         ` Jason Wang
2012-07-05 10:29 ` [net-next RFC V5 5/5] virtio_net: support negotiating the number of queues through ctrl vq Jason Wang
2012-07-05 12:51   ` Sasha Levin
2012-07-05 20:07     ` Amos Kong
2012-07-06  7:46       ` Jason Wang
2012-07-06  3:20     ` Jason Wang
2012-07-06  6:38       ` Stephen Hemminger
2012-07-06  9:26         ` Jason Wang
2012-07-06  8:10       ` Sasha Levin
2012-07-09 20:13   ` Ben Hutchings
2012-07-20 12:33   ` Michael S. Tsirkin
2012-07-23  5:32     ` Jason Wang
2012-07-05 17:45 ` [net-next RFC V5 0/5] Multiqueue virtio-net Rick Jones
2012-07-06  7:42   ` Jason Wang
2012-07-06 16:23     ` Rick Jones
2012-07-09  3:23       ` Jason Wang [this message]
2012-07-09 16:46         ` Rick Jones
2012-07-08  8:19 ` Ronen Hod
2012-07-09  5:35   ` Jason Wang
