All of lore.kernel.org
* [LTP] [PATCH] busy_poll* : revise poll_cmp
@ 2018-03-19  8:18 sunlianwen
  2018-03-19 12:25 ` Alexey Kodanev
  0 siblings, 1 reply; 7+ messages in thread
From: sunlianwen @ 2018-03-19  8:18 UTC (permalink / raw)
  To: ltp

Setting the per-socket low-latency busy poll value to 50 takes
more time than setting it to 0. That means the value of res_50
is bigger than res_0, so the value of poll_cmp is always less
than zero and the test fails.

Signed-off-by: Lianwen Sun <sunlw.fnst@cn.fujitsu.com>
---
  testcases/network/busy_poll/busy_poll01.sh | 2 +-
  testcases/network/busy_poll/busy_poll02.sh | 2 +-
  2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/testcases/network/busy_poll/busy_poll01.sh b/testcases/network/busy_poll/busy_poll01.sh
index 3c3035600..119ae0176 100755
--- a/testcases/network/busy_poll/busy_poll01.sh
+++ b/testcases/network/busy_poll/busy_poll01.sh
@@ -60,7 +60,7 @@ for x in 50 0; do
         tst_netload -H $(tst_ipaddr rhost) -d res_$x
  done

-poll_cmp=$(( 100 - ($(cat res_50) * 100) / $(cat res_0) ))
+poll_cmp=$(( 100 - ($(cat res_0) * 100) / $(cat res_50) ))

  if [ "$poll_cmp" -lt 1 ]; then
         tst_resm TFAIL "busy poll result is '$poll_cmp' %"
diff --git a/testcases/network/busy_poll/busy_poll02.sh b/testcases/network/busy_poll/busy_poll02.sh
index 427857996..05b5cf8c4 100755
--- a/testcases/network/busy_poll/busy_poll02.sh
+++ b/testcases/network/busy_poll/busy_poll02.sh
@@ -51,7 +51,7 @@ for x in 50 0; do
         tst_netload -H $(tst_ipaddr rhost) -d res_$x -b $x
  done

-poll_cmp=$(( 100 - ($(cat res_50) * 100) / $(cat res_0) ))
+poll_cmp=$(( 100 - ($(cat res_0) * 100) / $(cat res_50) ))

  if [ "$poll_cmp" -lt 1 ]; then
         tst_resm TFAIL "busy poll result is '$poll_cmp' %"
-- 
2.16.2
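For reference, the comparison both scripts compute can be sketched standalone. The timing values below are hypothetical; in the real test they are the elapsed times in milliseconds written by `tst_netload -d res_<value>`:

```shell
# Sketch of the poll_cmp arithmetic in busy_poll01.sh/busy_poll02.sh.
# The res_* values here are made up for illustration; in the test they
# are the elapsed times (ms) written by 'tst_netload -d res_<value>'.
echo 40000 > res_50   # busy poll set to 50: expected to finish faster
echo 50000 > res_0    # busy poll disabled (0)

# Positive when the busy-polling run (res_50) was faster than res_0,
# i.e. poll_cmp is the improvement in percent.
poll_cmp=$(( 100 - ($(cat res_50) * 100) / $(cat res_0) ))
echo "$poll_cmp"   # 100 - 80 = 20 (% improvement)
```

With the original formula a positive poll_cmp means busy polling helped, which is why the test fails (`poll_cmp < 1`) when res_50 comes out slower.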






* [LTP] [PATCH] busy_poll* : revise poll_cmp
  2018-03-19  8:18 [LTP] [PATCH] busy_poll* : revise poll_cmp sunlianwen
@ 2018-03-19 12:25 ` Alexey Kodanev
  2018-03-20  0:18   ` sunlianwen
  2018-03-20  0:20   ` sunlianwen
  0 siblings, 2 replies; 7+ messages in thread
From: Alexey Kodanev @ 2018-03-19 12:25 UTC (permalink / raw)
  To: ltp

On 19.03.2018 11:18, sunlianwen wrote:
> Setting the per-socket low-latency busy poll value to 50 takes
> more time than setting it to 0. That means the value of res_50
> is bigger than res_0, so the value of poll_cmp is always less
> than zero and the test fails.

No, if busy poll is enabled, i.e. the value is set above 0, we should
expect a performance gain in this test.

With what driver and kernel version do you get such results?
Is busy polling really supported by that driver?

Thanks,
Alexey


* [LTP] [PATCH] busy_poll* : revise poll_cmp
  2018-03-19 12:25 ` Alexey Kodanev
@ 2018-03-20  0:18   ` sunlianwen
  2018-03-21 10:50     ` Alexey Kodanev
  2018-03-20  0:20   ` sunlianwen
  1 sibling, 1 reply; 7+ messages in thread
From: sunlianwen @ 2018-03-20  0:18 UTC (permalink / raw)
  To: ltp

Hi Alexey

You are right, my patch is wrong. I debugged this case again

and found that the driver is virtio_net, which does not support busy poll.

Thanks for your advice.

Best Regards,

Lianwen Sun.


On 03/19/2018 08:25 PM, Alexey Kodanev wrote:
> On 19.03.2018 11:18, sunlianwen wrote:
>> Setting the per-socket low-latency busy poll value to 50 takes
>> more time than setting it to 0. That means the value of res_50
>> is bigger than res_0, so the value of poll_cmp is always less
>> than zero and the test fails.
> No, if busy poll is enabled, i.e. the value is set above 0, we should
> expect a performance gain in this test.
>
> With what driver and kernel version do you get such results?
> Is busy polling really supported by that driver?
>
> Thanks,
> Alexey
>
>
>





* [LTP] [PATCH] busy_poll* : revise poll_cmp
  2018-03-19 12:25 ` Alexey Kodanev
  2018-03-20  0:18   ` sunlianwen
@ 2018-03-20  0:20   ` sunlianwen
  1 sibling, 0 replies; 7+ messages in thread
From: sunlianwen @ 2018-03-20  0:20 UTC (permalink / raw)
  To: ltp



On 03/19/2018 08:25 PM, Alexey Kodanev wrote:
> On 19.03.2018 11:18, sunlianwen wrote:
>> Setting the per-socket low-latency busy poll value to 50 takes
>> more time than setting it to 0. That means the value of res_50
>> is bigger than res_0, so the value of poll_cmp is always less
>> than zero and the test fails.
> No, if busy poll is enabled, i.e. the value is set above 0, we should
> expect a performance gain in this test.
>
> With what driver and kernel version do you get such results?
> Is busy polling really supported by that driver?
The kernel version is 4.16-rc5, and the driver is virtio_net.
> Thanks,
> Alexey
>
>
>
Thanks,
Lianwen Sun




* [LTP] [PATCH] busy_poll* : revise poll_cmp
  2018-03-20  0:18   ` sunlianwen
@ 2018-03-21 10:50     ` Alexey Kodanev
  2018-03-22  8:03       ` sunlianwen
  0 siblings, 1 reply; 7+ messages in thread
From: Alexey Kodanev @ 2018-03-21 10:50 UTC (permalink / raw)
  To: ltp

On 03/20/2018 03:18 AM, sunlianwen wrote:
> Hi Alexey
> 
> You are right, my patch is wrong. I debugged this case again
>
> and found that the driver is virtio_net, which does not support busy poll.


There is busy poll support in virtio_net... maybe the problem is in the
underlying configuration/driver, or in the latency between the guest and the
other host? You could also try netperf -H remote_host -t TCP_RR with and
without busy polling:

# sysctl net.core.busy_read=50
# sysctl net.core.busy_poll=50

Thanks,
Alexey


* [LTP] [PATCH] busy_poll* : revise poll_cmp
  2018-03-21 10:50     ` Alexey Kodanev
@ 2018-03-22  8:03       ` sunlianwen
  2018-03-22 11:14         ` Alexey Kodanev
  0 siblings, 1 reply; 7+ messages in thread
From: sunlianwen @ 2018-03-22  8:03 UTC (permalink / raw)
  To: ltp

Hi Alexey

On 03/21/2018 06:50 PM, Alexey Kodanev wrote:
> On 03/20/2018 03:18 AM, sunlianwen wrote:
>> Hi Alexey
>>
>> You are right, my patch is wrong. I debugged this case again
>>
>> and found that the driver is virtio_net, which does not support busy poll.
>
> There is busy poll support in virtio_net... maybe the problem is in the
> underlying configuration/driver, or in the latency between the guest and the
> other host? You could also try netperf -H remote_host -t TCP_RR with and
> without busy polling:
>
> # sysctl net.core.busy_read=50
> # sysctl net.core.busy_poll=50
>
Thanks for your advice. I also found the patch "virtio_net: remove custom busy_poll":
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/net/virtio_net.c?id=ceef438d613f6d
I am not sure whether this patch means virtio_net does not support busy poll.

Below is the debug info, following your advice.

# sysctl net.core.busy_read=0
net.core.busy_read = 0

# sysctl net.core.busy_poll=0
net.core.busy_poll = 0

# netperf -H 192.168.122.248 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       10.00    30101.63
16384  87380

# sysctl net.core.busy_read=50
net.core.busy_read = 50
# sysctl net.core.busy_poll=50
net.core.busy_poll = 50

# netperf -H 192.168.122.248 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       10.00    37968.90
16384  87380

-----------------------------------------------------------------------
<<<test_output>>>
incrementing stop
busy_poll01 1 TINFO: Network config (local -- remote):
busy_poll01 1 TINFO: eth1 -- eth1
busy_poll01 1 TINFO: 192.168.1.41/24 -- 192.168.1.20/24
busy_poll01 1 TINFO: fd00:1:1:1::1/64 -- fd00:1:1:1::2/64
busy_poll01 1 TINFO: set low latency busy poll to 50
busy_poll01 1 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
busy_poll01 1 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_50 -g 44175'
busy_poll01 1 TPASS: netstress passed, time spent '53265' ms
busy_poll01 2 TINFO: set low latency busy poll to 0
busy_poll01 2 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
busy_poll01 2 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_0 -g 46767'
busy_poll01 2 TPASS: netstress passed, time spent '23393' ms
busy_poll01 3 TFAIL: busy poll result is '-127' %
<<<execution_status>>>
initiation_status="ok"
duration=79 termination_type=exited termination_id=1 corefile=no
cutime=148 cstime=6930
<<<test_end>>>
INFO: ltp-pan reported some tests FAIL
LTP Version: 20180118

###############################################################

             Done executing testcases.
             LTP Version:  20180118
###############################################################
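The '-127' in the TFAIL above follows directly from the test's integer arithmetic applied to the two measured times (53265 ms with busy poll 50, 23393 ms with busy poll 0); a standalone sketch:

```shell
# Reproduce the TFAIL percentage from the log above: tst_netload stored
# the elapsed times (ms) in res_50 and res_0.
echo 53265 > res_50   # time spent with busy poll set to 50
echo 23393 > res_0    # time spent with busy poll set to 0

# 100 - (53265*100)/23393 = 100 - 227 = -127 with shell integer division.
poll_cmp=$(( 100 - ($(cat res_50) * 100) / $(cat res_0) ))
echo "busy poll result is '$poll_cmp' %"   # prints -127, matching the TFAIL
```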

Thanks,
Lianwen Sun




* [LTP] [PATCH] busy_poll* : revise poll_cmp
  2018-03-22  8:03       ` sunlianwen
@ 2018-03-22 11:14         ` Alexey Kodanev
  0 siblings, 0 replies; 7+ messages in thread
From: Alexey Kodanev @ 2018-03-22 11:14 UTC (permalink / raw)
  To: ltp

On 22.03.2018 11:03, sunlianwen wrote:
> Hi Alexey
> 
> On 03/21/2018 06:50 PM, Alexey Kodanev wrote:
>> On 03/20/2018 03:18 AM, sunlianwen wrote:
>>> Hi Alexey
>>>
>>> You are right, my patch is wrong. I debugged this case again
>>>
>>> and found that the driver is virtio_net, which does not support busy poll.
>>
>> There is busy poll support in virtio_net... maybe the problem is in the
>> underlying configuration/driver, or in the latency between the guest and the
>> other host? You could also try netperf -H remote_host -t TCP_RR with and
>> without busy polling:
>>
>> # sysctl net.core.busy_read=50
>> # sysctl net.core.busy_poll=50
>>
> Thanks for your advice. I also found the patch "virtio_net: remove custom busy_poll":
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/net/virtio_net.c?id=ceef438d613f6d
> I am not sure whether this patch means virtio_net does not support busy poll.

It means virtio_net is now using the generic busy polling implementation.

> 
> Below is debuginfo follow your advise.
> 
> # sysctl net.core.busy_read=0
> net.core.busy_read = 0
> 
> # sysctl net.core.busy_poll=0
> net.core.busy_poll = 0
> 
> # netperf -H 192.168.122.248 -t TCP_RR
> MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
> 
> 16384  87380  1        1       10.00    30101.63
> 16384  87380
> 
> # sysctl net.core.busy_read=50
> net.core.busy_read = 50
> # sysctl net.core.busy_poll=50
> net.core.busy_poll = 50
> 
> # netperf -H 192.168.122.248 -t TCP_RR
> MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.122.248 () port 0 AF_INET : first burst 0
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
> 
> 16384  87380  1        1       10.00    37968.90
> 16384  87380
> 

Looks like busy polling is working; the transaction rate is clearly higher
than without busy polling.

> -----------------------------------------------------------------------
> <<<test_output>>>
> incrementing stop
> busy_poll01 1 TINFO: Network config (local -- remote):
> busy_poll01 1 TINFO: eth1 -- eth1
> busy_poll01 1 TINFO: 192.168.1.41/24 -- 192.168.1.20/24

You ran netperf against the 192.168.122.248 server, while LTP is using
different hosts? Could you check netperf and LTP on the same hosts/interfaces?


> busy_poll01 1 TINFO: fd00:1:1:1::1/64 -- fd00:1:1:1::2/64
> busy_poll01 1 TINFO: set low latency busy poll to 50
> busy_poll01 1 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
> busy_poll01 1 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_50 -g 44175'
> busy_poll01 1 TPASS: netstress passed, time spent '53265' ms
> busy_poll01 2 TINFO: set low latency busy poll to 0
> busy_poll01 2 TINFO: run server 'netstress -R 500000 -B /tmp/ltp-EmybkMxKgu/busy_poll01.IIOgfKYQ6P'
> busy_poll01 2 TINFO: run client 'netstress -l -H 192.168.1.20 -a 2 -r 500000 -d res_0 -g 46767'
> busy_poll01 2 TPASS: netstress passed, time spent '23393' ms
> busy_poll01 3 TFAIL: busy poll result is '-127' %
> <<<execution_status>>>
> initiation_status="ok"
> duration=79 termination_type=exited termination_id=1 corefile=no
> cutime=148 cstime=6930
> <<<test_end>>>
> INFO: ltp-pan reported some tests FAIL
> LTP Version: 20180118
> 
>        ###############################################################
> 
>             Done executing testcases.
>             LTP Version:  20180118
>        ###############################################################
> 
> Thanks,
> Lianwen Sun



end of thread, other threads:[~2018-03-22 11:14 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-19  8:18 [LTP] [PATCH] busy_poll* : revise poll_cmp sunlianwen
2018-03-19 12:25 ` Alexey Kodanev
2018-03-20  0:18   ` sunlianwen
2018-03-21 10:50     ` Alexey Kodanev
2018-03-22  8:03       ` sunlianwen
2018-03-22 11:14         ` Alexey Kodanev
2018-03-20  0:20   ` sunlianwen
