* iptables use only one cpu core.
@ 2010-02-26  4:55 MontyRee
From: MontyRee @ 2010-02-26  4:55 UTC (permalink / raw)
  To: netfilter


Hi all.
 
 
I ran a NAT performance test on an iptables-enabled system.

When I send a lot of queries to the iptables-enabled NAT system,
I find that the NAT system uses only one CPU core, even though the
system has four cores.

So the performance (TPS) result was not good.

Does anyone know why this happens,
or is there a way to distribute the load across the cores?
 
 
 
Thanks.
  		 	   		  


* Re: iptables use only one cpu core.
From: Marek Kierdelewicz @ 2010-02-26  7:07 UTC (permalink / raw)
  To: MontyRee; +Cc: netfilter

>Hi all.

Hello
 
>When I send a lot of queries to the iptables-enabled NAT system,
>I find that the NAT system uses only one CPU core, even though the
>system has four cores. So the performance (TPS) result was not good.
 
You are probably using just one NIC that is single-queue, so all of
its interrupts can only be delivered to a single core.
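
You can confirm this by watching the per-CPU columns in
/proc/interrupts (a quick check; it assumes your interface is named
eth0):

grep eth0 /proc/interrupts

If only one CPU column keeps counting up, that core is handling all of
the network interrupts. There are two good solutions: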

1) Put n NICs into the box, where n = number of cores. Distribute
traffic between those NICs (Etherchannel on the switch side + bonding
[1] on the Linux side; a bonding sketch follows the script). Then use
SMP affinity settings to bind the different NICs (IRQs) to different
cores. That can be done with the simple script below:

#!/bin/sh
# Look up each NIC's IRQ number; tr strips the leading whitespace,
# which would otherwise break the /proc path below.
ETH0_IRQ=`grep eth0 /proc/interrupts | cut -d: -f1 | tr -d ' '`
ETH1_IRQ=`grep eth1 /proc/interrupts | cut -d: -f1 | tr -d ' '`
ETH2_IRQ=`grep eth2 /proc/interrupts | cut -d: -f1 | tr -d ' '`
ETH3_IRQ=`grep eth3 /proc/interrupts | cut -d: -f1 | tr -d ' '`
# smp_affinity takes a hex CPU bitmask: 1 = core 0, 2 = core 1,
# 4 = core 2, 8 = core 3.
echo 1 > /proc/irq/$ETH0_IRQ/smp_affinity
echo 2 > /proc/irq/$ETH1_IRQ/smp_affinity
echo 4 > /proc/irq/$ETH2_IRQ/smp_affinity
echo 8 > /proc/irq/$ETH3_IRQ/smp_affinity
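
The bonding side could look like this (a minimal sketch following the
classic ifenslave procedure from the bonding docs [1]; the mode and
the 192.0.2.1 address are example values, not taken from your setup):

# balance-xor hashes each flow to one slave, so packets of one flow
# stay in order while different flows spread across the four NICs.
modprobe bonding mode=balance-xor miimon=100
ifconfig bond0 192.0.2.1 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2 eth3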

2) Use an Intel NIC with a chip >= 82575 (igb driver) [2]. Those NICs
support MULTIQUEUE; in your case you can have 4 separate RX IRQ
vectors on one NIC. Basically, the hardware on the NIC chip does the
same thing the bonding/etherchannel setup did in 1); a sketch for
spreading those per-queue IRQs over the cores follows.
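
Something like this could bind each queue vector to its own core (a
minimal sketch; it assumes the driver registers per-queue vectors with
names like eth0-rx-0 .. eth0-rx-3 in /proc/interrupts, which varies
between driver versions, so check yours first):

#!/bin/sh
CORE=0
for IRQ in `grep eth0- /proc/interrupts | cut -d: -f1 | tr -d ' '`
do
        # Build a one-bit hex mask selecting the next core in turn.
        MASK=`printf %x $((1 << CORE))`
        echo $MASK > /proc/irq/$IRQ/smp_affinity
        CORE=$((CORE + 1))
done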

[1]http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
[2]http://download.intel.com/design/network/applnots/319935.pdf


