* bonding mode 1 works as designed. Or not?
From: Heiko Gerstung @ 2006-02-14 21:38 UTC
  To: linux-kernel

Hi, there!

I just set up bonding for a 2.6.12 box in active-backup mode (mode 1)
and found out that every packet is duplicated, despite the fact that the
documentation (Documentation/networking/bonding.txt) says:

"active-backup or 1
                 Active-backup policy: Only one slave in the bond is
                 active.  A different slave becomes active if, and only
                 if, the active slave fails.  The bond's MAC address is
                 externally visible on only one port (network adapter)
                 to avoid confusing the switch.  This mode provides
                 fault tolerance.  The primary option affects the
                 behavior of this mode."


My understanding of this mode is:
eth0 and eth1 are in a bonding group with mode=1 and miimon=100. eth0 is
the active slave and is used as long as its physical link is available
(checked via MII monitoring); meanwhile eth1 is totally passive, neither
passing received packets up to the kernel nor sending packets, even if
the kernel asks it to. As soon as the eth0 link status changes to
"down", eth1 is activated and used, and eth0 remains silent and deaf
until it becomes the active slave again.
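
For reference, the setup is roughly the following (the address below is
just a placeholder for my box):

  # load the bonding driver in active-backup mode with MII link polling
  modprobe bonding mode=1 miimon=100

  # bring up the bond and enslave both NICs (eth0 first, so that it
  # becomes the initially active slave)
  ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1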

Any comments on that? Is the documentation wrong OR is there a bug in
the implementation of the bonding module?

Thank you in advance,
kind regards,

Heiko





* Re: bonding mode 1 works as designed. Or not?
From: Willy Tarreau @ 2006-02-14 21:47 UTC
  To: Heiko Gerstung; +Cc: linux-kernel

Hi Heiko,

On Tue, Feb 14, 2006 at 10:38:02PM +0100, Heiko Gerstung wrote:
> Hi, there!
> 
> I just set up bonding for a 2.6.12 box in active-backup mode (mode 1)
> and found out that every packet is duplicated, despite the fact that the
> documentation (Documentation/networking/bonding.txt) says:
> 
> "active-backup or 1
>                 Active-backup policy: Only one slave in the bond is
>                 active.  A different slave becomes active if, and only
>                 if, the active slave fails.  The bond's MAC address is
>                 externally visible on only one port (network adapter)
>                 to avoid confusing the switch.  This mode provides
>                 fault tolerance.  The primary option affects the
>                 behavior of this mode."
> 
> 
> My understanding of this mode is:
> eth0 and eth1 are in a bonding group with mode=1 and miimon=100. eth0 is
> the active slave and is used as long as its physical link is available
> (checked via MII monitoring); meanwhile eth1 is totally passive, neither
> passing received packets up to the kernel nor sending packets, even if
> the kernel asks it to. As soon as the eth0 link status changes to
> "down", eth1 is activated and used, and eth0 remains silent and deaf
> until it becomes the active slave again.
> 
> Any comments on that? Is the documentation wrong OR is there a bug in
> the implementation of the bonding module?

Neither; what's wrong is your understanding described above :-)
In fact, bonding is used to select an OUTPUT device. If some traffic
manages to enter through the backup interface, it will reach the kernel.
This can be useful for implementing link health-checks, for instance.
However, the only packets you should receive that way are multicast and
broadcast packets, so by design this should be very limited anyway.
After several years of using it, it has not caused me any trouble,
including in environments involving multicast for VRRP.
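
If you want to see what actually reaches the backup slave, a quick check
(assuming tcpdump is available; eth1 is the backup interface from your
description):

  # show only unicast frames arriving on the backup slave; in a sane
  # switched setup this should stay almost silent
  tcpdump -i eth1 not ether multicast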

> Thank you in advance,
> kind regards,
> 
> Heiko

Regards,
willy



* Re: bonding mode 1 works as designed. Or not?
From: Heiko Gerstung @ 2006-02-14 21:52 UTC
  To: Willy Tarreau; +Cc: linux-kernel

Hi Willy,

Willy Tarreau wrote:
>> [...] eth0 and eth1 are in a bonding group with mode=1 and miimon=100.
>> eth0 is the active slave and is used as long as its physical link is
>> available (checked via MII monitoring); meanwhile eth1 is totally
>> passive, neither passing received packets up to the kernel nor sending
>> packets, even if the kernel asks it to. As soon as the eth0 link status
>> changes to "down", eth1 is activated and used, and eth0 remains silent
>> and deaf until it becomes the active slave again.
>>
>> Any comments on that? Is the documentation wrong OR is there a bug in
>> the implementation of the bonding module?
>>     
>
> Neither; what's wrong is your understanding described above :-)
> In fact, bonding is used to select an OUTPUT device. If some traffic
> manages to enter through the backup interface, it will reach the kernel.
> This can be useful for implementing link health-checks, for instance.
> However, the only packets you should receive that way are multicast and
> broadcast packets, so by design this should be very limited anyway.
> After several years of using it, it has not caused me any trouble,
> including in environments involving multicast for VRRP.
>
>   
Unfortunately, ping replies come in on both interfaces, as does all
other traffic (like ssh or web traffic). Everything works, but the
system load caused by network traffic is nearly doubled this way, and
the duplicates may confuse a number of applications.

Is there a way to stop the non-active slave(s) from "listening", i.e. to
drop all traffic they receive? If so, where could I do that?
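
For what it's worth, the bond itself looks sane here; /proc reports
something along these lines (quoting from memory, the exact wording may
differ):

  cat /proc/net/bonding/bond0
  Bonding Mode: fault-tolerance (active-backup)
  Currently Active Slave: eth0
  MII Status: up
  ...
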
> Regards,
> willy
>
>   
Thank you for your reply,
kind regards,
Heiko



* Re: bonding mode 1 works as designed. Or not?
From: Stephen Hemminger @ 2006-02-14 22:04 UTC
  To: linux-kernel

On Tue, 14 Feb 2006 22:52:56 +0100
Heiko Gerstung <heiko@am-anger-1.de> wrote:

> Hi Willy,
> 
> Willy Tarreau wrote:
> >> [...] eth0 and eth1 are in a bonding group with mode=1 and miimon=100.
> >> eth0 is the active slave and is used as long as its physical link is
> >> available (checked via MII monitoring); meanwhile eth1 is totally
> >> passive, neither passing received packets up to the kernel nor sending
> >> packets, even if the kernel asks it to. As soon as the eth0 link status
> >> changes to "down", eth1 is activated and used, and eth0 remains silent
> >> and deaf until it becomes the active slave again.
> >>
> >> Any comments on that? Is the documentation wrong OR is there a bug in
> >> the implementation of the bonding module?
> >>     
> >
> > Neither; what's wrong is your understanding described above :-)
> > In fact, bonding is used to select an OUTPUT device. If some traffic
> > manages to enter through the backup interface, it will reach the kernel.
> > This can be useful for implementing link health-checks, for instance.
> > However, the only packets you should receive that way are multicast and
> > broadcast packets, so by design this should be very limited anyway.
> > After several years of using it, it has not caused me any trouble,
> > including in environments involving multicast for VRRP.
> >
> >   
> Unfortunately, ping replies come in on both interfaces, as does all
> other traffic (like ssh or web traffic). Everything works, but the
> system load caused by network traffic is nearly doubled this way, and
> the duplicates may confuse a number of applications.
> 
> Is there a way to stop the non-active slave(s) from "listening", i.e. to
> drop all traffic they receive? If so, where could I do that?
> > Regards,
> > willy
> >
> >   

You will probably get a better answer if you ask the developers
directly.

BONDING DRIVER
P:   Chad Tindel
M:   ctindel@users.sourceforge.net
P:   Jay Vosburgh
M:   fubar@us.ibm.com
L:   bonding-devel@lists.sourceforge.net
W:   http://sourceforge.net/projects/bonding




* Re: bonding mode 1 works as designed. Or not?
From: Willy Tarreau @ 2006-02-14 23:49 UTC
  To: Heiko Gerstung; +Cc: linux-kernel

On Tue, Feb 14, 2006 at 10:52:56PM +0100, Heiko Gerstung wrote:
> Hi Willy,
> 
> Willy Tarreau wrote:
> >>[...] eth0 and eth1 are in a bonding group with mode=1 and miimon=100.
> >>eth0 is the active slave and is used as long as its physical link is
> >>available (checked via MII monitoring); meanwhile eth1 is totally
> >>passive, neither passing received packets up to the kernel nor sending
> >>packets, even if the kernel asks it to. As soon as the eth0 link status
> >>changes to "down", eth1 is activated and used, and eth0 remains silent
> >>and deaf until it becomes the active slave again.
> >>
> >>Any comments on that? Is the documentation wrong OR is there a bug in
> >>the implementation of the bonding module?
> >>    
> >
> >Neither; what's wrong is your understanding described above :-)
> >In fact, bonding is used to select an OUTPUT device. If some traffic
> >manages to enter through the backup interface, it will reach the kernel.
> >This can be useful for implementing link health-checks, for instance.
> >However, the only packets you should receive that way are multicast and
> >broadcast packets, so by design this should be very limited anyway.
> >After several years of using it, it has not caused me any trouble,
> >including in environments involving multicast for VRRP.
> >
> >  
> Unfortunately, ping replies come in on both interfaces, as does all
> other traffic (like ssh or web traffic). Everything works, but the
> system load caused by network traffic is nearly doubled this way, and
> the duplicates may confuse a number of applications.

So either you are using a hub instead of a switch, or your switch is
duplicating the traffic. You will agree that a unicast packet is not
expected to show up on two different ports of the same switch when
mirroring is disabled and MAC learning has not been disabled?

> Is there a way to stop the non-active slave(s) from "listening", i.e. to
> drop all traffic they receive? If so, where could I do that?

I don't see how. IMHO it would be far simpler to fix the switch's
configuration.
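
One more thing worth checking on the Linux side (assuming iproute2 is
available): in active-backup mode all slaves carry the bond's MAC
address by default, so if the switch floods unicast frames the way a
hub would, both slaves will see them.

  # both slaves should report the same MAC address as bond0
  ip link show eth0
  ip link show eth1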

> >Regards,
> >willy
> >
> >  
> Thank you for your reply,
> kind regards,
> Heiko

Regards,
Willy


