From: Wei Liu
Subject: Re: Interesting observation with network event notification and batching
Date: Wed, 3 Jul 2013 16:18:25 +0100
Message-ID: <20130703151825.GO7483@zion.uk.xensource.com>
In-Reply-To: <51D1B407.9040105@citrix.com>
References: <20130612101451.GF2765@zion.uk.xensource.com>
 <20130628161542.GF16643@zion.uk.xensource.com>
 <51D13456.1040609@oracle.com>
 <20130701085436.GA7483@zion.uk.xensource.com>
 <51D1A74C.1090705@oracle.com>
 <20130701160628.GI7483@zion.uk.xensource.com>
 <51D1B407.9040105@citrix.com>
To: Andrew Bennieston
Cc: annie li, xen-devel@lists.xen.org, Wei Liu, ian.campbell@citrix.com,
 stefano.stabellini@eu.citrix.com
List-Id: xen-devel@lists.xenproject.org

On Mon, Jul 01, 2013 at 05:53:27PM +0100, Andrew Bennieston wrote:
[...]
> >I would say that removing the copy in netback can scale better.
> >
> >>Moreover, I also have a feeling that we got the persistent grant
> >>performance from tests with default netperf parameters, just like
> >>Wei's hack, which does not get better performance without large
> >>packets. So let me try some tests with large packets.
> >>
> >
> >Sadly enough, I found out today that this sort of test seems to be
> >quite inconsistent. On an Intel 10G NIC the throughput is actually
> >higher without forcing iperf / netperf to generate large packets.
>
> When I made performance measurements using iperf, I found that for
> a given point in the parameter space (e.g. for a fixed number of
> guests, interfaces, fixed parameters to iperf, fixed test run
> duration, etc.) the variation was typically _smaller than_ +/- 1
> Gbit/s on a 10G NIC.
>
> I notice that your results don't include any error bars or
> indication of standard deviation...
>
> With this sort of data (or, really, any data), measuring at least 5
> times helps to get an idea of the fluctuations present (i.e. a
> measure of statistical uncertainty), which can then be quoted as a
> mean +/- standard deviation. Having the standard deviation (or
> another estimator for the uncertainty in the results) allows us to
> better determine how significant this difference in results really
> is.
>
> For example, is the high throughput you quoted (~14 Gbit/s) an
> upward fluctuation, and the low value (~6) a downward fluctuation?
> Having a mean and standard deviation would allow us to determine
> just how (in)compatible these values are.
>
> Assuming a Gaussian distribution (and when sampled sufficiently
> many times, "everything" tends to a Gaussian), there is an almost
> 5% chance that a result lies more than 2 standard deviations from
> the mean (and a 0.3% chance that it lies more than 3 s.d. from the
> mean!). Results that appear "high" or "low" may, therefore, not be
> entirely unexpected. Having a measure of the standard deviation
> provides some basis against which to determine how likely it is
> that a measured value is just a statistical fluctuation, or whether
> it is a significant result.
>
> Another thing I noticed is that you're running the iperf test for
> only 5 seconds. I have found in the past that iperf (or, more
> likely, TCP) takes a while to "ramp up" (even with all parameters
> fixed, e.g. "-l <length> -w <window>") and that tests run for 2
> minutes or more (e.g. "-t 120") give much more stable results.
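
Point taken on the statistics. For reference, this bookkeeping amounts
to a few lines of Python. A minimal sketch, not the exact script used;
the sample values are the first set of iperf numbers from the results
further down:

    import statistics

    # Five repeated runs of one configuration, in Gbit/s. These are
    # the COPY SCHEME "iperf -t 120" numbers quoted below.
    samples = [6.19, 6.23, 6.26, 6.25, 6.27]

    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)   # sample s.d. (n - 1 denominator)
    print("throughput: %.2f +/- %.3f Gbit/s" % (mean, sd))

    # Under the Gaussian assumption ~4.6% of runs land more than
    # 2 s.d. from the mean, so an occasional "high" or "low" run is
    # expected rather than alarming.
    outliers = [s for s in samples if abs(s - mean) > 2 * sd]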
>
> Andrew.
>

Here you go: results for the newly conducted benchmarks. I was about
to graph them but found it not really worth it, since these are only
single-stream results. For the iperf tests the unit is Gbit/s; for the
netperf tests it is Mbit/s.

COPY SCHEME

iperf -c 10.80.237.127 -t 120
  6.19  6.23  6.26  6.25  6.27
  mean 6.24, s.d. 0.032

iperf -c 10.80.237.127 -t 120 -l 131072
  6.07  6.07  6.03  6.06  6.06
  mean 6.058, s.d. 0.016

netperf -H 10.80.237.127 -l120 -f m
  5662.55  5636.6  5641.52  5631.39  5630.98
  mean 5640.608, s.d. 13.0

netperf -H 10.80.237.127 -l120 -f m -- -s 131072 -S 131072
  5831.19  5833.03  5829.54  5838.89  5830.5
  mean 5832.63, s.d. 3.73

PERMANENT MAP SCHEME

iperf -c 10.80.237.127 -t 120
  2.42  2.41  2.41  2.42  2.43
  mean 2.418, s.d. 0.0084

iperf -c 10.80.237.127 -t 120 -l 131072
  14.3  14.2  14.2  14.4  14.3
  mean 14.28, s.d. 0.084

netperf -H 10.80.237.127 -l120 -f m
  4632.27  4630.08  4633.18  4641.25  4632.23
  mean 4633.802, s.d. 4.32

netperf -H 10.80.237.127 -l120 -f m -- -s 131072 -S 131072
  10556.04  10532.89  10541.83  10552.77  10546.77
  mean 10546.06, s.d. 9.17

A short run of iperf / netperf was conducted before each test run so
that the system was "warmed up". The results show that the
single-stream performance is quite stable. Also, there's not much
difference between running the tests for 5s and for 120s.

Wei.

> >
> >
> >Wei.
> >
> >>Thanks
> >>Annie
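
P.S. The runs above amount to something like the following harness.
This is a hypothetical sketch, not the exact commands used; it assumes
iperf 2's CSV report mode ("-y C") is available, and the netperf runs
are analogous:

    import statistics
    import subprocess

    SERVER = "10.80.237.127"

    def run_iperf(duration):
        """One iperf client run; returns throughput in Gbit/s."""
        out = subprocess.check_output(
            ["iperf", "-c", SERVER, "-t", str(duration), "-y", "C"],
            universal_newlines=True)
        # In CSV report mode the last field of the final line is the
        # bandwidth in bits/sec.
        return float(out.strip().splitlines()[-1].split(",")[-1]) / 1e9

    run_iperf(10)  # short warm-up run, result discarded
    samples = [run_iperf(120) for _ in range(5)]
    print("mean %.2f Gbit/s, s.d. %.3f"
          % (statistics.mean(samples), statistics.stdev(samples)))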