* Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1
@ 2013-11-27  5:15 James Yu
       [not found] ` <CAFMB=kByTv2MKmyxS7AsJ-7jA30jxaJiDzFXcnd9MH34ag3urA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: James Yu @ 2013-11-27  5:15 UTC (permalink / raw)
  To: dev-VfR2kkLFssw, James Yu

Running one-directional traffic from a Spirent traffic generator to l2fwd
running inside a guest OS on a RHEL 6.2 KVM host, I ran into a performance
issue and need to increase the number of rxd and txd from 256 to 1024.
There were not enough free slots for packets to be transmitted in this routine:
      virtio_send_packet() {
      ....
        /* give up when the TX ring cannot hold the whole descriptor chain */
        if (tq->freeslots < nseg + 1) {
                return -1;
        }
      ....
      }
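
For context, my understanding is that freeslots only grows back when completed
descriptors are reclaimed from the used ring. A rough sketch of that idea
(vq_reclaim_used, last_used and chain_len are illustrative names, not the
PMD's actual code):

      /* Hypothetical sketch: reclaim completed TX descriptors so that
       * freeslots can grow back before virtio_send_packet() gives up. */
      static void vq_reclaim_used(struct tx_queue *tq)
      {
              /* the device advances vr.used->idx as it consumes buffers */
              while (tq->last_used != tq->vr.used->idx) {
                      struct vring_used_elem *e =
                              &tq->vr.used->ring[tq->last_used % tq->vr.num];
                      /* return the whole descriptor chain to the free pool */
                      tq->freeslots += tq->chain_len[e->id];
                      tq->last_used++;
              }
      }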

Can I solve the performance issue with one of the following?
1. Increase the number of rxd and txd from 256 to 1024 (see the setup sketch
after this list for where those counts are passed).
        This should keep packets from being dropped for lack of free slots in
the ring, but l2fwd then fails to run and reports that the number must be 256.
2. Increase MAX_PKT_BURST.
        This is not ideal, since it increases latency while improving
throughput.
3. Some other mechanism you know of?
        Is there any other approach that leaves enough free slots to store the
packets before they are passed down to PCI?
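
For reference, this is roughly where the descriptor counts get applied in an
l2fwd-style setup. A minimal sketch only (setup_port and the threshold values
are illustrative, exact config structs differ between DPDK versions, and 1024
is what I would like to use, not what this PMD accepts):

      #include <string.h>
      #include <rte_ethdev.h>

      #define NB_RXD 1024   /* desired; the virtio PMD only accepts 256 */
      #define NB_TXD 1024

      /* threshold values comparable to the l2fwd defaults of that era */
      static const struct rte_eth_rxconf rx_conf = {
              .rx_thresh = { .pthresh = 8, .hthresh = 8, .wthresh = 4 },
      };
      static const struct rte_eth_txconf tx_conf = {
              .tx_thresh = { .pthresh = 36, .hthresh = 0, .wthresh = 0 },
      };

      static int setup_port(uint8_t port_id, struct rte_mempool *mb_pool)
      {
              struct rte_eth_conf port_conf;
              int ret;

              memset(&port_conf, 0, sizeof(port_conf));
              ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
              if (ret < 0)
                      return ret;

              /* nb_rx_desc / nb_tx_desc are passed here; the PMD rejects
               * sizes its virtqueue cannot support, hence "must be 256" */
              ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD, 0 /* socket */,
                                           &rx_conf, mb_pool);
              if (ret < 0)
                      return ret;
              ret = rte_eth_tx_queue_setup(port_id, 0, NB_TXD, 0 /* socket */,
                                           &tx_conf);
              if (ret < 0)
                      return ret;

              return rte_eth_dev_start(port_id);
      }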


Thanks

James


These are the performance numbers I measured from the l2fwd printout for the
receiving part. I added code inside l2fwd for the TX part; a simplified sketch
of that addition follows.
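
The TX part I added is essentially the following (simplified sketch; send_burst
and the drop handling are illustrative, not the exact code I ran):

      /* send a burst on queue 0; whatever the TX ring cannot take
       * is freed and counted as dropped */
      static void send_burst(uint8_t port_id, struct rte_mbuf **pkts, uint16_t n)
      {
              uint16_t sent = rte_eth_tx_burst(port_id, 0, pkts, n);

              /* when the virtio TX ring has no free slots, sent < n */
              while (sent < n)
                      rte_pktmbuf_free(pkts[sent++]);
      }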
====================================================================================
vhost-net is enabled on KVM host, # of cache buffer 4096, Ubuntu 12.04.3
LTS (3.2.0-53-generic); kvm 1.2.0, libvirtd: 0.9.8
64 Bytes/pkt from Spirent @ 223k pps, running test for 10 seconds.
====================================================================================
DPDK 1.3 + virtio + 256 txd/rxd + nice -19 priority (l2fwd, guest kvm
process)
bash command: nice -n -19
/root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0
-b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -d
/root/dpdk/virtio-net-pmd-1.1/librte_pmd_virtio.so -- -q 1 -p 1
====================================================================================
Spirent -> l2fwd (receiving, 10G) (RX on KVM guest)
    MAX_PKT_BURST     pps sustained over 10 seconds (<1% loss)
    -----------------------------------------------------------
    32                 74k pps
    64                 80k pps
    128               126k pps
    256               133k pps

l2fwd -> Spirent (10G port) (transmitting) (one-directional, single-port
(port 0) setup)
    MAX_PKT_BURST     pps (<1% packet loss)
    32                 88k pps


**********************************
The same test run on e1000 ports

====================================================================================
DPDK 1.3 + e1000 + 1024 txd/rxd + nice -19 priority (l2fwd, guest kvm
process)
bash command: nice -n -19
/root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0
-b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -- -q 1 -p 1
====================================================================================
Spirent -> l2fwd (RECEIVING, 10G)
    MAX_PKT_BURST     pps (<= 1% packet loss)
    32                110k pps

l2fwd -> Spirent (10G port) (TRANSMITTING) (one-directional, single-port
(port 0) setup)
    MAX_PKT_BURST     pkts transmitted from l2fwd
    32                171k pps (0% dropped)
    240               203k pps (6% dropped; 130k pps received on eth6,
                      assumed to be the Spirent port) **
**: not enough free slots in the TX ring
==> This indicates the effect of a small txd/rxd count (256): when more traffic
is generated, packets cannot be sent due to a lack of free slots in the TX
ring. I guess this is the symptom that occurs with virtio-net.


* Re: Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1
       [not found] ` <CAFMB=kByTv2MKmyxS7AsJ-7jA30jxaJiDzFXcnd9MH34ag3urA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-11-27  6:26   ` Stephen Hemminger
       [not found]     ` <20131126222606.3b99a80b-We1ePj4FEcvRI77zikRAJc56i+j3xesD0e7PPNI6Mm0@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: Stephen Hemminger @ 2013-11-27  6:26 UTC (permalink / raw)
  To: James Yu; +Cc: dev-VfR2kkLFssw

On Tue, 26 Nov 2013 21:15:02 -0800
James Yu <ypyu2011-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> Running one-directional traffic from a Spirent traffic generator to l2fwd
> running inside a guest OS on a RHEL 6.2 KVM host, I ran into a performance
> issue and need to increase the number of rxd and txd from 256 to 1024.
> [rest of the original message and test results snipped]

The number of slots with virtio is a parameter negotiated with the host,
so unless the host (KVM) gives the device more slots, it won't work.
I have a better virtio driver, and one of the features being added is
multiqueue and merged TX buffer support, which would give a bigger queue.
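
To make that concrete: with the legacy virtio PCI interface the guest reads the
queue size the host chose, it does not set it. A rough sketch (register offsets
from the legacy 0.9.x layout; vio_inw/vio_outw stand in for whatever I/O
accessors the driver uses):

      #include <stdint.h>

      #define VIRTIO_PCI_QUEUE_NUM  12   /* read: size of the selected queue */
      #define VIRTIO_PCI_QUEUE_SEL  14   /* write: select a queue index */

      /* I/O port accessors over the device BAR, provided by the driver */
      extern uint16_t vio_inw(uint16_t port);
      extern void vio_outw(uint16_t port, uint16_t val);

      static uint16_t virtio_queue_size(uint16_t iobase, uint16_t queue_idx)
      {
              /* select the queue, then read the size the host configured;
               * the guest cannot ask for a larger ring than this value */
              vio_outw(iobase + VIRTIO_PCI_QUEUE_SEL, queue_idx);
              return vio_inw(iobase + VIRTIO_PCI_QUEUE_NUM);
      }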


* Re: Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1
       [not found]     ` <20131126222606.3b99a80b-We1ePj4FEcvRI77zikRAJc56i+j3xesD0e7PPNI6Mm0@public.gmane.org>
@ 2013-11-27 20:06       ` James Yu
  0 siblings, 0 replies; 3+ messages in thread
From: James Yu @ 2013-11-27 20:06 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev-VfR2kkLFssw

Can you share your virtio driver with me?

Do you mean creating multiple queues, each with 256 txd/rxd? Packets could
then be stored in the free slots of those queues. But how would the virtio
PMD code feed those slots down to the hardware so they get delivered?
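
My rough mental model of how one queue hands a packet to the host (a generic
virtqueue sketch, not the PMD's actual code; vio_outw is an assumed I/O
accessor) is:

      #include <stdint.h>
      #include <linux/virtio_ring.h>   /* struct vring, vring_desc, vring_avail */
      #include <rte_atomic.h>          /* rte_wmb() */

      #define VIRTIO_PCI_QUEUE_NOTIFY 16   /* legacy virtio-pci notify register */

      extern void vio_outw(uint16_t port, uint16_t val);

      static void vq_enqueue_and_kick(struct vring *vr, uint16_t desc_idx,
                                      uint64_t buf_phys, uint32_t len,
                                      uint16_t iobase, uint16_t queue_idx)
      {
              /* fill one descriptor with the packet buffer */
              vr->desc[desc_idx].addr  = buf_phys;
              vr->desc[desc_idx].len   = len;
              vr->desc[desc_idx].flags = 0;   /* host reads this buffer */

              /* publish it in the avail ring, then make the index visible */
              vr->avail->ring[vr->avail->idx % vr->num] = desc_idx;
              rte_wmb();
              vr->avail->idx++;

              /* notify ("kick") the host so vhost-net picks it up */
              vio_outw(iobase + VIRTIO_PCI_QUEUE_NOTIFY, queue_idx);
      }

With multiple queues I assume each has its own ring and its own notify index,
so each can be filled and kicked independently.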

The other question: I was using vhost-net on the KVM host, which is supposed
to be transparent to the DPDK + virtio PMD code. Could that be causing a
problem in packet delivery?

Thanks



On Tue, Nov 26, 2013 at 10:26 PM, Stephen Hemminger <
stephen-OTpzqLSitTUnbdJkjeBofR2eb7JE58TQ@public.gmane.org> wrote:

> On Tue, 26 Nov 2013 21:15:02 -0800
> James Yu <ypyu2011-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> > [original message and test results snipped]
>
> The number of slots with virtio is a parameter negotiated with the host,
> so unless the host (KVM) gives the device more slots, it won't work.
> I have a better virtio driver, and one of the features being added is
> multiqueue and merged TX buffer support, which would give a bigger queue.
>

