* Channel bonding with e1000
From: Carsten Aulbert @ 2008-09-05  8:36 UTC
  To: netdev

Hi,

I have a brief question and would like to ask for a little assistance:

On a few data servers we intend to do channel bonding. The boxes have
two NICs on the motherboard and two extra ones on an expansion card:

04:00.0 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
Controller Copper (rev 01)
04:00.1 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
Controller Copper (rev 01)
05:02.0 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
Controller (rev 03)
05:02.1 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
Controller (rev 03)

My simple question is: does it matter which two ports I bond together in
a setup with MTU=9000?
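
For concreteness, the kind of setup we have in mind would look roughly
like this (just a sketch -- interface names and the address are
placeholders, and balance-rr is only one possible mode):

    # load the bonding driver; bond0 is created automatically
    modprobe bonding mode=balance-rr miimon=100
    ip link set bond0 up

    # enslave two of the four ports, then raise the MTU to 9000
    ifenslave bond0 eth0 eth2
    ip link set bond0 mtu 9000
    ip addr add 10.0.0.10/24 dev bond0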

Thanks for a brief answer

Carsten

-- 
Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
Phone/Fax: +49 511 762-17185 / -17193
http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/31

* Re: Channel bonding with e1000
From: Jay Vosburgh @ 2008-09-05 14:37 UTC
  To: Carsten Aulbert; +Cc: netdev

Carsten Aulbert <carsten.aulbert@aei.mpg.de> wrote:
[...]
>On a few data servers we intend to do channel bonding. The boxes have
>two NICs on the motherboard and two extra ones on an expansion card:
>
>04:00.0 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
>Controller Copper (rev 01)
>04:00.1 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
>Controller Copper (rev 01)
>05:02.0 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
>Controller (rev 03)
>05:02.1 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
>Controller (rev 03)
>
>My simple question is: does it matter which two ports I bond together in
>a setup with MTU=9000?

	Nope.

	If you're concerned about tolerance to failure, you may want to
pick one port from each (one built-in port, one from the card) so if the
expansion card dies you're not totally cut off.
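
	For instance (a sketch, assuming the ESB2 ports come up as
eth0/eth1 and the 82546GB ports as eth2/eth3 -- adjust to your actual
names):

    # active-backup: if the add-on card dies, the onboard port takes over
    modprobe bonding mode=active-backup miimon=100
    ip link set bond0 up
    ifenslave bond0 eth0 eth2   # eth0 onboard (ESB2), eth2 on the card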

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com

* Re: Channel bonding with e1000
From: Bill Fink @ 2008-09-06  1:36 UTC
  To: Carsten Aulbert; +Cc: netdev

On Fri, 05 Sep 2008, Carsten Aulbert wrote:

> I have a brief question and would like to ask for a little assistance:
> 
> On a few data servers we intend to do channel bonding. The boxes have
> two NICs on the motherboard and two extra ones on an expansion card:
> 
> 04:00.0 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
> Controller Copper (rev 01)
> 04:00.1 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
> Controller Copper (rev 01)
> 05:02.0 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
> Controller (rev 03)
> 05:02.1 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
> Controller (rev 03)
> 
> My simple question is: does it matter which two ports I bond together
> in a setup with MTU=9000?

I don't know the specifics of your case, but the built-in NICs on the
motherboard sometimes have less memory buffering than the better
add-on NICs.

Also check the respective PCI buses of the onboard NICs versus the
add-on NICs.  If one is PCI versus PCI-X or PCI-E, or they differ in
bus speed or width, this can significantly impact performance,
especially when doing bonding.
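
For example, something along these lines shows what each device's bus
can do (addresses taken from your lspci listing; the grep patterns are
just guesses at the interesting lines):

    # PCIe devices report link width/speed under LnkCap/LnkSta
    lspci -vv -s 04:00.0 | grep -iE 'lnk(cap|sta)'

    # PCI/PCI-X devices report bus width and clock in the PCI-X capability
    lspci -vv -s 05:02.0 | grep -iA2 'pci-x'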

Of course nothing beats some quick performance tests to determine
which is the better performing combination.
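
Something as simple as iperf between two of the boxes would do (a
sketch; nuttcp or netperf would work just as well):

    # on the receiving box
    iperf -s

    # on the sending box: 30-second run, two parallel TCP streams
    iperf -c <receiver> -t 30 -P 2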

						-Bill

* Re: Channel bonding with e1000
From: Carsten Aulbert @ 2008-09-06  8:53 UTC
  To: Bill Fink; +Cc: netdev

Hi all,

Bill Fink wrote:
> Also check the respective PCI buses of the onboard NICs versus the
> add-on NICs.  If one is PCI versus PCI-X or PCI-E, or they differ in
> bus speed or width, this can significantly impact performance,
> especially when doing bonding.

I need to go back to Supermicro's manual for that one. The add-on ones
are definitely PCIe.

> 
> Of course nothing beats some quick performance tests to determine
> which is the better performing combination.
> 

Wisely put :)

Thanks for the input so far. We will start some testing sooner rather
than later to get more out of our file servers!

Cheers

Carsten

* Re: Channel bonding with e1000
From: Brandeburg, Jesse @ 2008-09-08 18:37 UTC
  To: Carsten Aulbert; +Cc: netdev

On Fri, 5 Sep 2008, Carsten Aulbert wrote:
> I have a brief question and would like to ask for a little assistance:
> 
> On a few data servers we intend to do channel bonding. The boxes have
> two NICs on the motherboard and two extra ones on an expansion card:
> 
> 04:00.0 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
> Controller Copper (rev 01)
> 04:00.1 Ethernet controller: Intel Corporation 631xESB/632xESB DPT LAN
> Controller Copper (rev 01)

To help you with your consideration: this chip is embedded in the ESB2 
southbridge, and is connected (technically) over PCIe.

> 05:02.0 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
> Controller (rev 03)
> 05:02.1 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
> Controller (rev 03)

This chip is connected over PCI-X and should be significantly slower
and/or show higher CPU utilization than the ESB2-based chip.

Both have 64kB of internal FIFO per port, split between tx and rx.

> My simple question is: does it matter which two ports I bond together
> in a setup with MTU=9000?

It shouldn't matter, but I would take into consideration that the ESB2 
ports should be faster.
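
One way to tell the ports apart is to map the interface names to the
PCI addresses above, e.g. (a sketch):

    # bus-info matches the lspci address, e.g. 0000:04:00.0 for ESB2
    for i in eth0 eth1 eth2 eth3; do
        echo -n "$i: "; ethtool -i $i | grep bus-info
    done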

Jesse

PS: In the future, questions like this could be cc:'d to
e1000-devel@lists.sourceforge.net, where all the Intel wired developers
hang out (in addition to netdev).

* Re: Channel bonding with e1000
From: Carsten Aulbert @ 2008-09-09  6:21 UTC
  To: Brandeburg, Jesse; +Cc: netdev, e1000-devel

Hi Jesse,

Brandeburg, Jesse wrote:
>> 05:02.0 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
>> Controller (rev 03)
>> 05:02.1 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet
>> Controller (rev 03)
> 
> This chip is connected over PCI-X and should be significantly slower
> and/or show higher CPU utilization than the ESB2-based chip.

At first I couldn't believe it, since I put some of the cards in myself
and those were PCIe x1 cards. But checking with lshw, it looks like you
are right (and you're the expert anyway):

  *-pci:1
                description: PCI bridge
                product: 6311ESB/6321ESB PCI Express to PCI-X Bridge
                vendor: Intel Corporation
                physical id: 0.3
                bus info: pci@01:00.3
                version: 01
                width: 32 bits
                clock: 33MHz
                capabilities: pci normal_decode bus_master cap_list
              *-network:0 DISABLED
                   description: Ethernet interface
                   product: 82546GB Gigabit Ethernet Controller
                   vendor: Intel Corporation
                   physical id: 2
                   bus info: pci@05:02.0
                   logical name: eth2
                   version: 03
                   serial: 00:1b:21:0d:c4:2c
                   capacity: 1GB/s
                   width: 64 bits
                   clock: 66MHz
                   capabilities: bus_master cap_list ethernet physical
tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
                   configuration: autonegotiation=on broadcast=yes
driver=e1000 driverversion=7.3.20-k2-NAPI firmware=N/A latency=52
link=no mingnt=255 multicast=yes port=twisted pair
                   resources: iomemory:d8080000-d809ffff
iomemory:d8000000-d803ffff ioport:3000-303f irq:28
              *-network:1
                   description: Ethernet interface
                   product: 82546GB Gigabit Ethernet Controller
                   vendor: Intel Corporation
                   physical id: 2.1
                   bus info: pci@05:02.1
                   logical name: eth3
                   version: 03
                   serial: 00:1b:21:0d:c4:2d
                   size: 100MB/s
                   capacity: 1GB/s
                   width: 64 bits
                   clock: 66MHz
                   capabilities: bus_master cap_list ethernet physical
tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
                   configuration: autonegotiation=on broadcast=yes
driver=e1000 driverversion=7.3.20-k2-NAPI duplex=full firmware=N/A
ip=172.28.11.4 latency=52 link=yes mingnt=255 multicast=yes
port=twisted pair speed=100MB/s
                   resources: iomemory:d80a0000-d80bffff
iomemory:d8040000-d807ffff ioport:3040-307f irq:29
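
(As an aside, lshw shows eth3 negotiated at only 100MB/s; worth
double-checking with ethtool before we run any bonding tests:)

    # verify negotiated speed/duplex on the 82546GB port
    ethtool eth3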

> 
> It shouldn't matter, but I would take into consideration that the ESB2 
> ports should be faster.
> 

We'll start looking into this soon and try to get some tests underway.

> 
> PS in the future questions like this could be cc:'d to 
> e1000-devel@lists.sourceforge.net where all the Intel wired developers 
> hang out (in addition to netdev)

Sorry, I should have remembered that from ~6-9 months ago. I'll start
cc:'ing mid-thread (and apologies for that, too).

Cheers

Carsten
