From: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
To: Or Gerlitz <or.gerlitz@gmail.com>
Cc: Rick Jones <rick.jones2@hp.com>,
"davem@davemloft.net" <davem@davemloft.net>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"yevgenyp@mellanox.co.il" <yevgenyp@mellanox.co.il>,
"ogerlitz@mellanox.com" <ogerlitz@mellanox.com>,
"amirv@mellanox.com" <amirv@mellanox.com>,
"brking@linux.vnet.ibm.com" <brking@linux.vnet.ibm.com>,
"leitao@linux.vnet.ibm.com" <leitao@linux.vnet.ibm.com>,
"klebers@linux.vnet.ibm.com" <klebers@linux.vnet.ibm.com>,
"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
"anton@samba.org" <anton@samba.org>
Subject: Re: [PATCH] mlx4_en: map entire pages to increase throughput
Date: Mon, 16 Jul 2012 17:57:08 -0300 [thread overview]
Message-ID: <20120716205708.GB16137@oc1711230544.ibm.com> (raw)
In-Reply-To: <CAJZOPZL3F+xdHSFfhg7v9A6DDjT6CPK=kgwyzcE6c0pGYFyupg@mail.gmail.com>
On Mon, Jul 16, 2012 at 11:43:33PM +0300, Or Gerlitz wrote:
> On Mon, Jul 16, 2012 at 10:42 PM, Rick Jones <rick.jones2@hp.com> wrote:
>
> > I was thinking more along the lines of an additional comparison,
> > explicitly using netperf TCP_RR or something like it, not just the packets
> > per second from a bulk transfer test.
>
>
> TCP_STREAM from this setup before the patch would be good to know as well
>
Hi, Or.

Does the stream test I ran with uperf, using 64000-byte messages, fit?
TCP_NODELAY makes no difference in this case: I get around 3 Gb/s
before the patch and around 9 Gb/s after it.
Before the patch:
# ./uperf-1.0.3-beta/src/uperf -m tcp.xml
Starting 16 threads running profile:tcp_stream ... 0.00 seconds
Txn1 0 /1.00(s) = 0 16op/s
Txn2 20.81GB /59.26(s) = 3.02Gb/s 5914op/s
Txn3 0 /0.00(s) = 0 128295op/s
-------------------------------------------------------------------------------------------------------------------------------
Total 20.81GB /61.37(s) = 2.91Gb/s 5712op/s
Netstat statistics for this run
-------------------------------------------------------------------------------------------------------------------------------
Nic opkts/s ipkts/s obits/s ibits/s
eth6 252459 31694 3.06Gb/s 16.74Mb/s
eth0 2 18 3.87Kb/s 14.28Kb/s
-------------------------------------------------------------------------------------------------------------------------------
Run Statistics
Hostname Time Data Throughput Operations
Errors
-------------------------------------------------------------------------------------------------------------------------------
10.0.0.2 61.47s 20.81GB 2.91Gb/s 350528
0.00
master 61.37s 20.81GB 2.91Gb/s 350528
0.00
-------------------------------------------------------------------------------------------------------------------------------
Difference(%) -0.16% 0.00% 0.16% 0.00%
0.00%
After the patch:
# ./uperf-1.0.3-beta/src/uperf -m tcp.xml
Starting 16 threads running profile:tcp_stream ... 0.00 seconds
Txn1 0 /1.00(s) = 0 16op/s
Txn2 64.50GB /60.27(s) = 9.19Gb/s 17975op/s
Txn3 0 /0.00(s) = 0
-------------------------------------------------------------------------------------------------------------------------------
Total 64.50GB /62.27(s) = 8.90Gb/s 17397op/s
Netstat statistics for this run
-------------------------------------------------------------------------------------------------------------------------------
Nic opkts/s ipkts/s obits/s ibits/s
eth6 769428 96018 9.31Gb/s 50.72Mb/s
eth0 1 15 2.48Kb/s 13.59Kb/s
-------------------------------------------------------------------------------------------------------------------------------
Run Statistics
Hostname Time Data Throughput Operations
Errors
-------------------------------------------------------------------------------------------------------------------------------
10.0.0.2 62.27s 64.36GB 8.88Gb/s 1081096
0.00
master 62.27s 64.50GB 8.90Gb/s 1083325
0.00
-------------------------------------------------------------------------------------------------------------------------------
Difference(%) -0.00% 0.21% 0.21% 0.21%
0.00%
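As a quick sanity check (not part of the runs themselves), the Gb/s totals above can be reproduced from the transferred data and elapsed time, assuming uperf reports GB as 2^30 bytes and Gb/s as 10^9 bits per second:

```python
def gbps(gigabytes, seconds):
    # uperf's "GB" is 2^30 bytes; "Gb/s" is 10^9 bits per second.
    return gigabytes * 2**30 * 8 / seconds / 1e9

before = gbps(20.81, 61.37)  # total of the run before the patch
after = gbps(64.50, 62.27)   # total of the run after the patch
print(f"before: {before:.2f} Gb/s, after: {after:.2f} Gb/s, "
      f"speedup: {after / before:.2f}x")
```

This matches the 2.91 Gb/s and 8.90 Gb/s totals reported above, roughly a 3x improvement.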
Profile tcp.xml:
<?xml version="1.0"?>
<profile name="TCP_STREAM">
<group nthreads="16">
<transaction iterations="1">
<flowop type="connect" options="remotehost=10.0.0.2 protocol=tcp tcp_nodelay"/>
</transaction>
<transaction duration="60">
<flowop type="write" options="count=160 size=64000"/>
</transaction>
<transaction iterations="1">
<flowop type="disconnect" />
</transaction>
</group>
</profile>
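For reference, the profile is driven in the usual uperf master/slave fashion (a sketch; the receiver address is the remotehost from the profile):

```shell
# On the receiver (10.0.0.2 in the profile): start uperf in slave mode.
uperf -s

# On the sender: run the master against the profile above.
./uperf-1.0.3-beta/src/uperf -m tcp.xml
```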
Thread overview:
2012-07-16 17:01 [PATCH] mlx4_en: map entire pages to increase throughput Thadeu Lima de Souza Cascardo
2012-07-16 17:27 ` Rick Jones
2012-07-16 19:06 ` Thadeu Lima de Souza Cascardo
2012-07-16 19:42 ` Rick Jones
2012-07-16 20:36 ` Or Gerlitz
2012-07-16 20:43 ` Or Gerlitz
2012-07-16 20:57 ` Thadeu Lima de Souza Cascardo [this message]
2012-07-18 14:59 ` Or Gerlitz
2012-07-16 20:47 ` Thadeu Lima de Souza Cascardo
2012-07-16 21:08 ` Rick Jones
2012-07-17 5:29 ` David Miller
2012-07-17 12:42 ` David Laight
2012-07-17 12:50 ` David Miller
2012-07-17 13:36 ` David Laight
2012-07-17 13:46 ` David Miller
2012-07-17 13:50 ` Eric Dumazet
2012-07-17 18:17 ` Rick Jones
2012-07-17 20:10 ` Brian King
2012-07-17 20:20 ` David Miller
2012-07-19 17:53 ` David Miller