From: Badalian Vyacheslav <slavon@bigtelecom.ru>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: "Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@intel.com>,
	Linux Netdev List <netdev@vger.kernel.org>
Subject: Re: ixgbe question
Date: Tue, 24 Nov 2009 11:46:46 +0300	[thread overview]
Message-ID: <4B0B9D76.8090009@bigtelecom.ru> (raw)
In-Reply-To: <4B0B8F52.3010005@gmail.com>

Eric Dumazet writes:
> Waskiewicz Jr, Peter P wrote:
>> Ok, I was confused earlier.  I thought you were saying that all packets 
>> were headed into a single Rx queue.  This is different.
>>
>> Do you know what version of irqbalance you're running, or if it's running 
>> at all?  We've seen issues with irqbalance where it won't recognize the 
>> ethernet device if the driver has been reloaded.  In that case, it won't 
>> balance the interrupts at all.  If the default affinity was set to one 
>> CPU, then well, you're screwed.
>>
>> My suggestion in this case is after you reload ixgbe and start your tests, 
>> see if it all goes to one CPU.  If it does, then restart irqbalance 
>> (service irqbalance restart - or just kill it and restart by hand).  Then 
>> start running your test, and in 10 seconds you should see the interrupts 
>> move and spread out.
>>
>> Let me know if this helps,
> 
> Sure it helps !
> 
> I tried without irqbalance and with irqbalance (Ubuntu 9.10 ships irqbalance 0.55-4)
> I can see irqbalance setting smp_affinities to 5555 or AAAA with no direct effect.
> 
> I do receive 16 different irqs, but all serviced on one cpu.
> 
> The only way to get irqs serviced on different cpus is to manually force irq affinities to be exclusive
> (one bit set in the mask, not several), and that is not optimal for moderate loads.
> 
> echo 1 >`echo /proc/irq/*/fiber1-TxRx-0/../smp_affinity`
> echo 1 >`echo /proc/irq/*/fiber1-TxRx-1/../smp_affinity`
> echo 4 >`echo /proc/irq/*/fiber1-TxRx-2/../smp_affinity`
> echo 4 >`echo /proc/irq/*/fiber1-TxRx-3/../smp_affinity`
> echo 10 >`echo /proc/irq/*/fiber1-TxRx-4/../smp_affinity`
> echo 10 >`echo /proc/irq/*/fiber1-TxRx-5/../smp_affinity`
> echo 40 >`echo /proc/irq/*/fiber1-TxRx-6/../smp_affinity`
> echo 40 >`echo /proc/irq/*/fiber1-TxRx-7/../smp_affinity`
> echo 100 >`echo /proc/irq/*/fiber1-TxRx-8/../smp_affinity`
> echo 100 >`echo /proc/irq/*/fiber1-TxRx-9/../smp_affinity`
> echo 400 >`echo /proc/irq/*/fiber1-TxRx-10/../smp_affinity`
> echo 400 >`echo /proc/irq/*/fiber1-TxRx-11/../smp_affinity`
> echo 1000 >`echo /proc/irq/*/fiber1-TxRx-12/../smp_affinity`
> echo 1000 >`echo /proc/irq/*/fiber1-TxRx-13/../smp_affinity`
> echo 4000 >`echo /proc/irq/*/fiber1-TxRx-14/../smp_affinity`
> echo 4000 >`echo /proc/irq/*/fiber1-TxRx-15/../smp_affinity`
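
That mask table can also be generated instead of typed by hand. A minimal sketch (my own, not from the driver; it assumes the same layout as above, queues 2q and 2q+1 sharing the even-numbered CPU):

```shell
#!/bin/sh
# Compute the smp_affinity mask for one TxRx queue, reproducing the
# pairing above: queues 0,1 -> CPU0, 2,3 -> CPU2, 4,5 -> CPU4, ...
mask_for_queue() {
    cpu=$(( ($1 / 2) * 2 ))        # even CPU shared by the queue pair
    printf '%x' $(( 1 << cpu ))    # exactly one bit set, printed in hex
}

for q in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15; do
    m=$(mask_for_queue "$q")
    echo "fiber1-TxRx-$q -> $m"
    # To actually apply it (root required), same path trick as above:
    # echo "$m" > $(echo /proc/irq/*/fiber1-TxRx-$q/../smp_affinity)
done
```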
> 
> 
> One other problem is that after a reload of the ixgbe driver, the link comes up 95% of the time
> at 1 Gbps, and I could not find an easy way to force it to 10 Gbps.
> 
> I run the following script many times and stop it when 10 Gbps speed is reached.
> 
> ethtool -A fiber0 rx off tx off
> ip link set fiber0 down
> ip link set fiber1 down
> sleep 2
> ethtool fiber0
> ethtool -s fiber0 speed 10000
> ethtool -s fiber1 speed 10000
> ethtool -r fiber0 &
> ethtool -r fiber1 &
> ethtool fiber0
> ip link set fiber1 up &
> ip link set fiber0 up &
> ethtool fiber0
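
Instead of rerunning the whole script by hand, the renegotiation could be retried in a loop until ethtool reports 10 Gbps. A hypothetical sketch (mine, untested on this hardware; the sed pattern assumes ethtool's usual "Speed: 10000Mb/s" output line):

```shell
#!/bin/sh
# Extract the numeric link speed from `ethtool <dev>` output on stdin,
# e.g. "	Speed: 10000Mb/s" -> "10000"
link_speed() {
    sed -n 's|.*Speed: \([0-9][0-9]*\)Mb/s.*|\1|p'
}

# Keep restarting autonegotiation until the device links at 10 Gbps.
wait_for_10g() {
    dev=$1
    while [ "$(ethtool "$dev" | link_speed)" != "10000" ]; do
        ethtool -r "$dev"   # restart link autonegotiation
        sleep 2
    done
}
# Usage: wait_for_10g fiber0 ; wait_for_10g fiber1
```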
> 
> [   33.625689] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 2.0.44-k2
> [   33.625692] ixgbe: Copyright (c) 1999-2009 Intel Corporation.
> [   33.625741] ixgbe 0000:07:00.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
> [   33.625760] ixgbe 0000:07:00.0: setting latency timer to 64
> [   33.735579] ixgbe 0000:07:00.0: irq 100 for MSI/MSI-X
> [   33.735583] ixgbe 0000:07:00.0: irq 101 for MSI/MSI-X
> [   33.735585] ixgbe 0000:07:00.0: irq 102 for MSI/MSI-X
> [   33.735587] ixgbe 0000:07:00.0: irq 103 for MSI/MSI-X
> [   33.735589] ixgbe 0000:07:00.0: irq 104 for MSI/MSI-X
> [   33.735591] ixgbe 0000:07:00.0: irq 105 for MSI/MSI-X
> [   33.735593] ixgbe 0000:07:00.0: irq 106 for MSI/MSI-X
> [   33.735595] ixgbe 0000:07:00.0: irq 107 for MSI/MSI-X
> [   33.735597] ixgbe 0000:07:00.0: irq 108 for MSI/MSI-X
> [   33.735599] ixgbe 0000:07:00.0: irq 109 for MSI/MSI-X
> [   33.735602] ixgbe 0000:07:00.0: irq 110 for MSI/MSI-X
> [   33.735604] ixgbe 0000:07:00.0: irq 111 for MSI/MSI-X
> [   33.735606] ixgbe 0000:07:00.0: irq 112 for MSI/MSI-X
> [   33.735608] ixgbe 0000:07:00.0: irq 113 for MSI/MSI-X
> [   33.735610] ixgbe 0000:07:00.0: irq 114 for MSI/MSI-X
> [   33.735612] ixgbe 0000:07:00.0: irq 115 for MSI/MSI-X
> [   33.735614] ixgbe 0000:07:00.0: irq 116 for MSI/MSI-X
> [   33.735633] ixgbe: 0000:07:00.0: ixgbe_init_interrupt_scheme: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16
> [   33.735638] ixgbe 0000:07:00.0: (PCI Express:5.0Gb/s:Width x8) 00:1b:21:4a:fe:54
> [   33.735722] ixgbe 0000:07:00.0: MAC: 2, PHY: 11, SFP+: 5, PBA No: e66562-003
> [   33.738111] ixgbe 0000:07:00.0: Intel(R) 10 Gigabit Network Connection
> [   33.738135] ixgbe 0000:07:00.1: PCI INT B -> GSI 42 (level, low) -> IRQ 42
> [   33.738151] ixgbe 0000:07:00.1: setting latency timer to 64
> [   33.853526] ixgbe 0000:07:00.1: irq 117 for MSI/MSI-X
> [   33.853529] ixgbe 0000:07:00.1: irq 118 for MSI/MSI-X
> [   33.853532] ixgbe 0000:07:00.1: irq 119 for MSI/MSI-X
> [   33.853534] ixgbe 0000:07:00.1: irq 120 for MSI/MSI-X
> [   33.853536] ixgbe 0000:07:00.1: irq 121 for MSI/MSI-X
> [   33.853538] ixgbe 0000:07:00.1: irq 122 for MSI/MSI-X
> [   33.853540] ixgbe 0000:07:00.1: irq 123 for MSI/MSI-X
> [   33.853542] ixgbe 0000:07:00.1: irq 124 for MSI/MSI-X
> [   33.853544] ixgbe 0000:07:00.1: irq 125 for MSI/MSI-X
> [   33.853546] ixgbe 0000:07:00.1: irq 126 for MSI/MSI-X
> [   33.853548] ixgbe 0000:07:00.1: irq 127 for MSI/MSI-X
> [   33.853550] ixgbe 0000:07:00.1: irq 128 for MSI/MSI-X
> [   33.853552] ixgbe 0000:07:00.1: irq 129 for MSI/MSI-X
> [   33.853554] ixgbe 0000:07:00.1: irq 130 for MSI/MSI-X
> [   33.853556] ixgbe 0000:07:00.1: irq 131 for MSI/MSI-X
> [   33.853558] ixgbe 0000:07:00.1: irq 132 for MSI/MSI-X
> [   33.853560] ixgbe 0000:07:00.1: irq 133 for MSI/MSI-X
> [   33.853580] ixgbe: 0000:07:00.1: ixgbe_init_interrupt_scheme: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16
> [   33.853585] ixgbe 0000:07:00.1: (PCI Express:5.0Gb/s:Width x8) 00:1b:21:4a:fe:55
> [   33.853669] ixgbe 0000:07:00.1: MAC: 2, PHY: 11, SFP+: 5, PBA No: e66562-003
> [   33.855956] ixgbe 0000:07:00.1: Intel(R) 10 Gigabit Network Connection
> 
> [   85.208233] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: RX/TX
> [   85.237453] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: RX/TX
> [   96.080713] ixgbe: fiber1 NIC Link is Down
> [  102.094610] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  102.119572] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  142.524691] ixgbe: fiber1 NIC Link is Down
> [  148.421332] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  148.449465] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  160.728643] ixgbe: fiber1 NIC Link is Down
> [  172.832301] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  173.659038] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  184.554501] ixgbe: fiber0 NIC Link is Down
> [  185.376273] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  186.493598] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  190.564383] ixgbe: fiber0 NIC Link is Down
> [  191.391149] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  192.484492] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  192.545424] ixgbe: fiber1 NIC Link is Down
> [  205.858197] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  206.684940] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  211.991875] ixgbe: fiber1 NIC Link is Down
> [  220.833478] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  220.833630] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  229.804853] ixgbe: fiber1 NIC Link is Down
> [  248.395672] ixgbe: fiber0 NIC Link is Up 1 Gbps, Flow Control: None
> [  249.222408] ixgbe: fiber1 NIC Link is Up 1 Gbps, Flow Control: None
> [  484.631598] ixgbe: fiber1 NIC Link is Down
> [  490.138931] ixgbe: fiber1 NIC Link is Up 10 Gbps, Flow Control: None
> [  490.167880] ixgbe: fiber0 NIC Link is Up 10 Gbps, Flow Control: None

Maybe it's Flow Director?
Multiqueue on this network card works only if you pin each queue to one cpu core in smp_affinity :(
From the README:


Intel(R) Ethernet Flow Director
-------------------------------
Supports advanced filters that direct receive packets by their flows to
different queues. Enables tight control on routing a flow in the platform.
Matches flows and CPU cores for flow affinity. Supports multiple parameters
for flexible flow classification and load balancing.

Flow director is enabled only if the kernel is multiple TX queue capable.

An included script (set_irq_affinity.sh) automates setting the IRQ to CPU
affinity.

You can verify that the driver is using Flow Director by looking at the counter
in ethtool: fdir_miss and fdir_match.
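
For example, the two counters can be pulled out of the statistics dump like this (a sketch, not from the README; counter names as quoted above):

```shell
#!/bin/sh
# Filter the Flow Director counters out of an `ethtool -S <dev>` dump
# read from stdin.
fdir_counters() {
    grep -E 'fdir_(match|miss)'
}
# Usage: ethtool -S fiber0 | fdir_counters
```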

The following three parameters impact Flow Director.


FdirMode
--------
Valid Range: 0-2 (0=off, 1=ATR, 2=Perfect filter mode)
Default Value: 1

  Flow Director filtering modes.


FdirPballoc
-----------
Valid Range: 0-2 (0=64k, 1=128k, 2=256k)
Default Value: 0

  Flow Director allocated packet buffer size.


AtrSampleRate
--------------
Valid Range: 1-100
Default Value: 20

  Software ATR Tx packet sample rate. For example, when set to 20, every 20th
  packet is examined to see whether it will create a new flow.
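
These three are ixgbe module parameters, so (assuming the stock module-option syntax; the values here are just the documented defaults) they would be set at driver load time:

```shell
# Example only -- parameter names from the README above, values are the
# documented defaults. Reload the driver for the options to take effect:
#   modprobe -r ixgbe && modprobe ixgbe FdirMode=1 FdirPballoc=0 AtrSampleRate=20
# or persistently, via /etc/modprobe.d/ixgbe.conf:
#   options ixgbe FdirMode=1 FdirPballoc=0 AtrSampleRate=20
```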





