netdev.vger.kernel.org archive mirror
* igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Chris Boot @ 2011-12-23  9:42 UTC
  To: netdev, lkml

Hi folks,

As per Eric Dumazet and Dave Miller, I'm opening up a separate thread on 
this issue.

I have two identical servers in a cluster for running KVM virtual 
machines. They each have a single connection to the Internet (irrelevant 
for this) and two gigabit connections between each other for cluster 
replication, etc... These two connections are in a balance-rr bonded 
connection, which is itself a member of a bridge that the VMs attach to. 
I'm running v3.2-rc6-140-gb9e26df on Debian Wheezy.

When the bridge is brought up, IPv4 works fine but IPv6 does not. I can 
use neither the automatic link-local on the bridge nor the static global 
address I assign. Neither machine can perform neighbour discovery over 
the link until I put the bond members (eth0 and eth1) into promiscuous 
mode.  I can do this either with tcpdump or 'ip link set dev ethX 
promisc on' and this is enough to make the link spring to life.
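
[ A minimal sketch of the workaround described above, using the commands
  mentioned in this report; eth0 and eth1 are the bond members: ]

    # either of these is enough to bring IPv6 on the link to life
    ip link set dev eth0 promisc on
    ip link set dev eth1 promisc on
    # or leave a capture running, which also flips the slave into promisc mode
    tcpdump -i eth0 -n ip6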

This cluster is not currently live so I can easily test patches and 
various configurations.

The relevant parts of /etc/network/interfaces:

iface bond0 inet manual
         slaves eth0 eth1
         bond-mode balance-rr
         bond-miimon 100
         bond-downdelay 200
         bond-updelay 200

iface br0 inet static
         address [snip]
         netmask 255.255.255.224
         bridge_ports bond0
         bridge_stp off
         bridge_fd 0
         bridge_maxwait 5
iface br0 inet6 static
         address [snip]
         netmask 64
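
[ A rough manual equivalent of the ifupdown stanzas above, assuming Debian's
  ifenslave and bridge-utils tools; addresses elided as in the original: ]

    modprobe bonding mode=balance-rr miimon=100 downdelay=200 updelay=200
    ip link set dev bond0 up
    ifenslave bond0 eth0 eth1     # enslave both gigabit links
    brctl addbr br0
    brctl addif br0 bond0         # bond0 is the bridge's only port
    brctl stp br0 off
    brctl setfd br0 0
    ip link set dev br0 up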

lspci:
02:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit 
Network Connection [8086:1521] (rev 01)
02:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit 
Network Connection [8086:1521] (rev 01)

These use the 'igb' driver.

Thanks,
Chris

-- 
Chris Boot
bootc@bootc.net


* Re: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Nicolas de Pesloüan @ 2011-12-23 10:48 UTC
  To: netdev; +Cc: Chris Boot

[ Forwarded to netdev, because two previous e-mails were erroneously sent in HTML ]

Le 23/12/2011 11:15, Chris Boot a écrit :
> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>
>>
>> Le 23 déc. 2011 10:42, "Chris Boot" <bootc@bootc.net <mailto:bootc@bootc.net>> a écrit :
>> >
>> > Hi folks,
>> >
>> > As per Eric Dumazet and Dave Miller, I'm opening up a separate thread on this issue.
>> >
>> > I have two identical servers in a cluster for running KVM virtual machines. They each have a
>> single connection to the Internet (irrelevant for this) and two gigabit connections between each
>> other for cluster replication, etc... These two connections are in a balance-rr bonded connection,
>> which is itself member of a bridge that the VMs attach to. I'm running v3.2-rc6-140-gb9e26df on
>> Debian Wheezy.
>> >
>> > When the bridge is brought up, IPv4 works fine but IPv6 does not. I can use neither the
>> automatic link-local on the bridge nor the static global address I assign. Neither machine can
>> perform neighbour discovery over the link until I put the bond members (eth0 and eth1) into
>> promiscuous mode.  I can do this either with tcpdump or 'ip link set dev ethX promisc on' and this
>> is enough to make the link spring to life.
>>
>> As far as I remember, setting bond0 to promisc should set the bonding member to promisc too.
>> And inserting bond0 into br0 should set bond0 to promisc... So everything should be in promisc
>> mode anyway... but you shouldn't have to do it by hand.
>>
>
> Sorry, I should have added that I tried this. Setting bond0 or br0 to promisc has no effect. I
> discovered this by running tcpdump on br0 first, then bond0, then eventually each bond member in
> turn. Only at the last stage did things jump to life.
>
>> >
>> > This cluster is not currently live so I can easily test patches and various configurations.
>>
>> Can you try to remove the bonding part, connecting eth0 and eth1 directly to br0 and see if it
>> works better? (This is a test only. I perfectly understand that you would lose balance-rr in this
>> setup.)
>>
>
> Good call. Let's see.
>
> I took br0 and bond0 apart, took eth0 and eth1 out of enforced promisc mode, then manually built a
> br0 with eth0 in only so I didn't cause a network loop. Adding eth0 to br0 did not make it go into
> promisc mode, but IPv6 does work over this setup. I also made sure ip -6 neigh was empty on both
> machines before I started.
>
> I then decided to try the test with just the bond0 in balance-rr mode. Once again I took everything
> down and ensured no promisc mode and no ip -6 neigh. I noticed bond0 wasn't getting a link-local and
> I found out for some reason /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers so I
> set it to 0. That brought things to life.
>
> So then I put it all back together again and it didn't work. I once again noticed disable_ipv6 was
> set on the bond0 interfaces, now part of the bridge. Toggling this on the _bond_ interface made
> things work again.
>
> What's setting disable_ipv6? Should this be having an impact if the port is part of a bridge?
>
> Chris
>
> --
> Chris Boot
> bootc@bootc.net
>


* Re: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Chris Boot @ 2011-12-23 10:56 UTC
  To: Nicolas de Pesloüan; +Cc: netdev

On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
> [ Forwarded to netdev, because two previous e-mails were erroneously sent in 
> HTML ]
>
> Le 23/12/2011 11:15, Chris Boot a écrit :
>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>
>>>
>>> Le 23 déc. 2011 10:42, "Chris Boot" <bootc@bootc.net 
>>> <mailto:bootc@bootc.net>> a écrit :
>>> >
>>> > Hi folks,
>>> >
>>> > As per Eric Dumazet and Dave Miller, I'm opening up a separate 
>>> thread on this issue.
>>> >
>>> > I have two identical servers in a cluster for running KVM virtual 
>>> machines. They each have a
>>> single connection to the Internet (irrelevant for this) and two 
>>> gigabit connections between each
>>> other for cluster replication, etc... These two connections are in a 
>>> balance-rr bonded connection,
>>> which is itself member of a bridge that the VMs attach to. I'm 
>>> running v3.2-rc6-140-gb9e26df on
>>> Debian Wheezy.
>>> >
>>> > When the bridge is brought up, IPv4 works fine but IPv6 does not. 
>>> I can use neither the
>>> automatic link-local on the bridge nor the static global address I 
>>> assign. Neither machine can
>>> perform neighbour discovery over the link until I put the bond 
>>> members (eth0 and eth1) into
>>> promiscuous mode.  I can do this either with tcpdump or 'ip link set 
>>> dev ethX promisc on' and this
>>> is enough to make the link spring to life.
>>>
>>> As far as I remember, setting bond0 to promisc should set the 
>>> bonding member to promisc too.
>>> And inserting bond0 into br0 should set bond0 to promisc... So 
>>> everything should be in promisc
>>> mode anyway... but you shouldn't have to do it by hand.
>>>
>>
>> Sorry, I should have added that I tried this. Setting bond0 or br0 to 
>> promisc has no effect. I
>> discovered this by running tcpdump on br0 first, then bond0, then 
>> eventually each bond member in
>> turn. Only at the last stage did things jump to life.
>>
>>> >
>>> > This cluster is not currently live so I can easily test patches 
>>> and various configurations.
>>>
>>> Can you try to remove the bonding part, connecting eth0 and eth1 
>>> directly to br0 and see if it
>>> works better? (This is a test only. I perfectly understand that you 
>>> would lose balance-rr in this
>>> setup.)
>>>
>>
>> Good call. Let's see.
>>
>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced 
>> promisc mode, then manually built a
>> br0 with eth0 in only so I didn't cause a network loop. Adding eth0 
>> to br0 did not make it go into
>> promisc mode, but IPv6 does work over this setup. I also made sure ip 
>> -6 neigh was empty on both
>> machines before I started.
>>
>> I then decided to try the test with just the bond0 in balance-rr 
>> mode. Once again I took everything
>> down and ensured no promisc mode and no ip -6 neigh. I noticed bond0 
>> wasn't getting a link-local and
>> I found out for some reason 
>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers so I
>> set it to 0. That brought things to life.
>>
>> So then I put it all back together again and it didn't work. I once 
>> again noticed disable_ipv6 was
>> set on the bond0 interfaces, now part of the bridge. Toggling this on 
>> the _bond_ interface made
>> things work again.
>>
>> What's setting disable_ipv6? Should this be having an impact if the 
>> port is part of a bridge?

Hmm, as a further update... I brought up my VMs on the bridge with 
disable_ipv6 turned off. The VMs on one host couldn't see what was on 
the other side of the bridge (on the other server) until I turned 
promisc back on manually. So it's not entirely disable_ipv6's fault.

Chris

-- 
Chris Boot
bootc@bootc.net


* Re: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Chris Boot @ 2011-12-27 21:53 UTC
  To: Nicolas de Pesloüan; +Cc: netdev

On 23/12/2011 10:56, Chris Boot wrote:
> On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
>> [ Forwarded to netdev, because two previous e-mails were erroneously sent in
>> HTML ]
>>
>> Le 23/12/2011 11:15, Chris Boot a écrit :
>>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>>
>>>>
>>>> Le 23 déc. 2011 10:42, "Chris Boot" <bootc@bootc.net
>>>> <mailto:bootc@bootc.net>> a écrit :
>>>> >
>>>> > Hi folks,
>>>> >
>>>> > As per Eric Dumazet and Dave Miller, I'm opening up a separate
>>>> thread on this issue.
>>>> >
>>>> > I have two identical servers in a cluster for running KVM virtual
>>>> machines. They each have a
>>>> single connection to the Internet (irrelevant for this) and two
>>>> gigabit connections between each
>>>> other for cluster replication, etc... These two connections are in a
>>>> balance-rr bonded connection,
>>>> which is itself member of a bridge that the VMs attach to. I'm
>>>> running v3.2-rc6-140-gb9e26df on
>>>> Debian Wheezy.
>>>> >
>>>> > When the bridge is brought up, IPv4 works fine but IPv6 does not.
>>>> I can use neither the
>>>> automatic link-local on the bridge nor the static global address I
>>>> assign. Neither machine can
>>>> perform neighbour discovery over the link until I put the bond
>>>> members (eth0 and eth1) into
>>>> promiscuous mode. I can do this either with tcpdump or 'ip link set
>>>> dev ethX promisc on' and this
>>>> is enough to make the link spring to life.
>>>>
>>>> As far as I remember, setting bond0 to promisc should set the
>>>> bonding member to promisc too.
>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>> everything should be in promisc
>>>> mode anyway... but you shouldn't have to do it by hand.
>>>>
>>>
>>> Sorry, I should have added that I tried this. Setting bond0 or br0 to
>>> promisc has no effect. I
>>> discovered this by running tcpdump on br0 first, then bond0, then
>>> eventually each bond member in
>>> turn. Only at the last stage did things jump to life.
>>>
>>>> >
>>>> > This cluster is not currently live so I can easily test patches
>>>> and various configurations.
>>>>
>>>> Can you try to remove the bonding part, connecting eth0 and eth1
>>>> directly to br0 and see if it
>>>> works better? (This is a test only. I perfectly understand that you
>>>> would lose balance-rr in this
>>>> setup.)
>>>>
>>>
>>> Good call. Let's see.
>>>
>>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced
>>> promisc mode, then manually built a
>>> br0 with eth0 in only so I didn't cause a network loop. Adding eth0
>>> to br0 did not make it go into
>>> promisc mode, but IPv6 does work over this setup. I also made sure ip
>>> -6 neigh was empty on both
>>> machines before I started.
>>>
>>> I then decided to try the test with just the bond0 in balance-rr
>>> mode. Once again I took everything
>>> down and ensured no promisc mode and no ip -6 neigh. I noticed bond0
>>> wasn't getting a link-local and
>>> I found out for some reason
>>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers so I
>>> set it to 0. That brought things to life.
>>>
>>> So then I put it all back together again and it didn't work. I once
>>> again noticed disable_ipv6 was
>>> set on the bond0 interfaces, now part of the bridge. Toggling this on
>>> the _bond_ interface made
>>> things work again.
>>>
>>> What's setting disable_ipv6? Should this be having an impact if the
>>> port is part of a bridge?
>
> Hmm, as a further update... I brought up my VMs on the bridge with
> disable_ipv6 turned off. The VMs on one host couldn't see what was on
> the other side of the bridge (on the other server) until I turned
> promisc back on manually. So it's not entirely disable_ipv6's fault.

Hi,

I don't want this to get lost around the Christmas break, so I'm just 
resending it. I'm still seeing the same behaviour as before.

From above:

>>>> As far as I remember, setting bond0 to promisc should set the
>>>> bonding member to promisc too.
>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>> everything should be in promisc
>>>> mode anyway... but you shouldn't have to do it by hand.

This definitely doesn't happen, at least according to 'ip link show | 
grep PROMISC'.
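
[ A quick per-device check along those lines, assuming the interface names
  from the original report: ]

    for dev in br0 bond0 eth0 eth1; do
        ip link show "$dev" | head -n1   # PROMISC appears in the <...> flags when set
    done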

Chris

-- 
Chris Boot
bootc@bootc.net


* RE: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Wyborny, Carolyn @ 2012-01-03 23:23 UTC
  To: Chris Boot, Nicolas de Pesloüan; +Cc: netdev, e1000-devel



>-----Original Message-----
>From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org]
>On Behalf Of Chris Boot
>Sent: Tuesday, December 27, 2011 1:53 PM
>To: Nicolas de Pesloüan
>Cc: netdev
>Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>promiscuous mode
>
>On 23/12/2011 10:56, Chris Boot wrote:
>> On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
>>> [ Forwarded to netdev, because two previous e-mails were erroneously sent in
>>> HTML ]
>>>
>>> Le 23/12/2011 11:15, Chris Boot a écrit :
>>>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>>>
>>>>>
>>>>> Le 23 déc. 2011 10:42, "Chris Boot" <bootc@bootc.net
>>>>> <mailto:bootc@bootc.net>> a écrit :
>>>>> >
>>>>> > Hi folks,
>>>>> >
>>>>> > As per Eric Dumazet and Dave Miller, I'm opening up a separate
>>>>> thread on this issue.
>>>>> >
>>>>> > I have two identical servers in a cluster for running KVM virtual
>>>>> machines. They each have a
>>>>> single connection to the Internet (irrelevant for this) and two
>>>>> gigabit connections between each
>>>>> other for cluster replication, etc... These two connections are in
>a
>>>>> balance-rr bonded connection,
>>>>> which is itself member of a bridge that the VMs attach to. I'm
>>>>> running v3.2-rc6-140-gb9e26df on
>>>>> Debian Wheezy.
>>>>> >
>>>>> > When the bridge is brought up, IPv4 works fine but IPv6 does not.
>>>>> I can use neither the
>>>>> automatic link-local on the bridge nor the static global address I
>>>>> assign. Neither machine can
>>>>> perform neighbour discovery over the link until I put the bond
>>>>> members (eth0 and eth1) into
>>>>> promiscuous mode. I can do this either with tcpdump or 'ip link set
>>>>> dev ethX promisc on' and this
>>>>> is enough to make the link spring to life.
>>>>>
>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>> bonding member to promisc too.
>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>> everything should be in promisc
>>>>> mode anyway... but you shouldn't have to do it by hand.
>>>>>
>>>>
>>>> Sorry, I should have added that I tried this. Setting bond0 or br0
>to
>>>> promisc has no effect. I
>>>> discovered this by running tcpdump on br0 first, then bond0, then
>>>> eventually each bond member in
>>>> turn. Only at the last stage did things jump to life.
>>>>
>>>>> >
>>>>> > This cluster is not currently live so I can easily test patches
>>>>> and various configurations.
>>>>>
>>>>> Can you try to remove the bonding part, connecting eth0 and eth1
>>>>> directly to br0 and see if it
>>>>> works better? (This is a test only. I perfectly understand that you
>>>>> would lose balance-rr in this
>>>>> setup.)
>>>>>
>>>>
>>>> Good call. Let's see.
>>>>
>>>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced
>>>> promisc mode, then manually built a
>>>> br0 with eth0 in only so I didn't cause a network loop. Adding eth0
>>>> to br0 did not make it go into
>>>> promisc mode, but IPv6 does work over this setup. I also made sure
>ip
>>>> -6 neigh was empty on both
>>>> machines before I started.
>>>>
>>>> I then decided to try the test with just the bond0 in balance-rr
>>>> mode. Once again I took everything
>>>> down and ensured no promisc mode and no ip -6 neigh. I noticed bond0
>>>> wasn't getting a link-local and
>>>> I found out for some reason
>>>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers
>so I
>>>> set it to 0. That brought things to life.
>>>>
>>>> So then I put it all back together again and it didn't work. I once
>>>> again noticed disable_ipv6 was
>>>> set on the bond0 interfaces, now part of the bridge. Toggling this
>on
>>>> the _bond_ interface made
>>>> things work again.
>>>>
>>>> What's setting disable_ipv6? Should this be having an impact if the
>>>> port is part of a bridge?
>>
>> Hmm, as a further update... I brought up my VMs on the bridge with
>> disable_ipv6 turned off. The VMs on one host couldn't see what was on
>> the other side of the bridge (on the other server) until I turned
>> promisc back on manually. So it's not entirely disable_ipv6's fault.
>
>Hi,
>
>I don't want this to get lost around the Christmas break, so I'm just
>resending it. I'm still seeing the same behaviour as before.
>
> From above:
>
>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>> bonding member to promisc too.
>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>> everything should be in promisc
>>>>> mode anyway... but you shouldn't have to do it by hand.
>
>This definitely doesn't happen, at least according to 'ip link show |
>grep PROMISC'.
>
>Chris
>
>--
>Chris Boot
>bootc@bootc.net

Sorry for the delay in responding.  I'm not sure what is going on here, and our bonding expert is still out on holidays.  However, we'll try to reproduce this.  When I get some more advice, I may ask for some more data.

Thanks,

Carolyn
Carolyn Wyborny
Linux Development
LAN Access Division
Intel Corporation


* RE: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Wyborny, Carolyn @ 2012-01-04 16:00 UTC
  To: Wyborny, Carolyn, Chris Boot, Nicolas de Pesloüan
  Cc: netdev, e1000-devel



>-----Original Message-----
>From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org]
>On Behalf Of Wyborny, Carolyn
>Sent: Tuesday, January 03, 2012 3:24 PM
>To: Chris Boot; Nicolas de Pesloüan
>Cc: netdev; e1000-devel@lists.sourceforge.net
>Subject: RE: igb + balance-rr + bridge + IPv6 = no go without
>promiscuous mode
>
>
>
>>-----Original Message-----
>>From: netdev-owner@vger.kernel.org [mailto:netdev-
>owner@vger.kernel.org]
>>On Behalf Of Chris Boot
>>Sent: Tuesday, December 27, 2011 1:53 PM
>>To: Nicolas de Pesloüan
>>Cc: netdev
>>Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>>promiscuous mode
>>
>>On 23/12/2011 10:56, Chris Boot wrote:
>>> On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
>>>> [ Forwarded to netdev, because two previous e-mails were erroneously sent in
>>>> HTML ]
>>>>
>>>> Le 23/12/2011 11:15, Chris Boot a écrit :
>>>>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>>>>
>>>>>>
>>>>>> Le 23 déc. 2011 10:42, "Chris Boot" <bootc@bootc.net
>>>>>> <mailto:bootc@bootc.net>> a écrit :
>>>>>> >
>>>>>> > Hi folks,
>>>>>> >
>>>>>> > As per Eric Dumazet and Dave Miller, I'm opening up a separate
>>>>>> thread on this issue.
>>>>>> >
>>>>>> > I have two identical servers in a cluster for running KVM
>virtual
>>>>>> machines. They each have a
>>>>>> single connection to the Internet (irrelevant for this) and two
>>>>>> gigabit connections between each
>>>>>> other for cluster replication, etc... These two connections are in
>>a
>>>>>> balance-rr bonded connection,
>>>>>> which is itself member of a bridge that the VMs attach to. I'm
>>>>>> running v3.2-rc6-140-gb9e26df on
>>>>>> Debian Wheezy.
>>>>>> >
>>>>>> > When the bridge is brought up, IPv4 works fine but IPv6 does
>not.
>>>>>> I can use neither the
>>>>>> automatic link-local on the bridge nor the static global address I
>>>>>> assign. Neither machine can
>>>>>> perform neighbour discovery over the link until I put the bond
>>>>>> members (eth0 and eth1) into
>>>>>> promiscuous mode. I can do this either with tcpdump or 'ip link
>set
>>>>>> dev ethX promisc on' and this
>>>>>> is enough to make the link spring to life.
>>>>>>
>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>> bonding member to promisc too.
>>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>>> everything should be in promisc
>>>>>> mode anyway... but you shouldn't have to do it by hand.
>>>>>>
>>>>>
>>>>> Sorry, I should have added that I tried this. Setting bond0 or br0
>>to
>>>>> promisc has no effect. I
>>>>> discovered this by running tcpdump on br0 first, then bond0, then
>>>>> eventually each bond member in
>>>>> turn. Only at the last stage did things jump to life.
>>>>>
>>>>>> >
>>>>>> > This cluster is not currently live so I can easily test patches
>>>>>> and various configurations.
>>>>>>
>>>>>> Can you try to remove the bonding part, connecting eth0 and eth1
>>>>>> directly to br0 and see if it
>>>>>> works better? (This is a test only. I perfectly understand that you
>>>>>> would lose balance-rr in this
>>>>>> setup.)
>>>>>>
>>>>>
>>>>> Good call. Let's see.
>>>>>
>>>>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced
>>>>> promisc mode, then manually built a
>>>>> br0 with eth0 in only so I didn't cause a network loop. Adding eth0
>>>>> to br0 did not make it go into
>>>>> promisc mode, but IPv6 does work over this setup. I also made sure
>>ip
>>>>> -6 neigh was empty on both
>>>>> machines before I started.
>>>>>
>>>>> I then decided to try the test with just the bond0 in balance-rr
>>>>> mode. Once again I took everything
>>>>> down and ensured no promisc mode and no ip -6 neigh. I noticed
>bond0
>>>>> wasn't getting a link-local and
>>>>> I found out for some reason
>>>>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers
>>so I
>>>>> set it to 0. That brought things to life.
>>>>>
>>>>> So then I put it all back together again and it didn't work. I once
>>>>> again noticed disable_ipv6 was
>>>>> set on the bond0 interfaces, now part of the bridge. Toggling this
>>on
>>>>> the _bond_ interface made
>>>>> things work again.
>>>>>
>>>>> What's setting disable_ipv6? Should this be having an impact if the
>>>>> port is part of a bridge?
>>>
>>> Hmm, as a further update... I brought up my VMs on the bridge with
>>> disable_ipv6 turned off. The VMs on one host couldn't see what was on
>>> the other side of the bridge (on the other server) until I turned
>>> promisc back on manually. So it's not entirely disable_ipv6's fault.
>>
>>Hi,
>>
>>I don't want this to get lost around the Christmas break, so I'm just
>>resending it. I'm still seeing the same behaviour as before.
>>
>> From above:
>>
>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>> bonding member to promisc too.
>>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>>> everything should be in promisc
>>>>>> mode anyway... but you shouldn't have to do it by hand.
>>
>>This definitely doesn't happen, at least according to 'ip link show |
>>grep PROMISC'.
>>
>>Chris
>>
>>--
>>Chris Boot
>>bootc@bootc.net
>
>Sorry for the delay in responding.  I'm not sure what is going on here,
>and our bonding expert is still out on holidays.  However, we'll try to
>reproduce this.  When I get some more advice, I may ask for some more data.
>
>Thanks,
>
>Carolyn
>Carolyn Wyborny
>Linux Development
>LAN Access Division
>Intel Corporation

Hello,

Check your ip_forward configuration on your bridge to make sure it's configured to forward IPv6 packets. Also, please send the contents of /etc/modprobe.d/bonding.conf and your routing table, and we'll continue to work on this.
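
[ A sketch of gathering the data requested here, assuming standard Debian
  paths and tools: ]

    sysctl net.ipv6.conf.all.forwarding   # IPv6 forwarding state
    cat /etc/modprobe.d/bonding.conf      # bonding module options, if the file exists
    ip -6 route show                      # IPv6 routing table
    ip route show                         # IPv4 routing table, for comparison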

Thanks,

Carolyn

Carolyn Wyborny
Linux Development
LAN Access Division
Intel Corporation




* Re: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Chris Boot @ 2012-01-04 16:58 UTC
  To: Wyborny, Carolyn; +Cc: Nicolas de Pesloüan, netdev, e1000-devel

On 04/01/2012 16:00, Wyborny, Carolyn wrote:
>
>
>> -----Original Message-----
>> From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org]
>> On Behalf Of Wyborny, Carolyn
>> Sent: Tuesday, January 03, 2012 3:24 PM
>> To: Chris Boot; Nicolas de Pesloüan
>> Cc: netdev; e1000-devel@lists.sourceforge.net
>> Subject: RE: igb + balance-rr + bridge + IPv6 = no go without
>> promiscuous mode
>>
>>
>>
>>> -----Original Message-----
>>> From: netdev-owner@vger.kernel.org [mailto:netdev-
>> owner@vger.kernel.org]
>>> On Behalf Of Chris Boot
>>> Sent: Tuesday, December 27, 2011 1:53 PM
>>> To: Nicolas de Pesloüan
>>> Cc: netdev
>>> Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>>> promiscuous mode
>>>
>>> On 23/12/2011 10:56, Chris Boot wrote:
>>>> On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
>>>>> [ Forwarded to netdev, because two previous e-mails were erroneously sent in
>>>>> HTML ]
>>>>>
>>>>> Le 23/12/2011 11:15, Chris Boot a écrit :
>>>>>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>>>>>
>>>>>>>
>>>>>>> Le 23 déc. 2011 10:42, "Chris Boot"<bootc@bootc.net
>>>>>>> <mailto:bootc@bootc.net>>  a écrit :
>>>>>>>>
>>>>>>>> Hi folks,
>>>>>>>>
>>>>>>>> As per Eric Dumazet and Dave Miller, I'm opening up a separate
>>>>>>> thread on this issue.
>>>>>>>>
>>>>>>>> I have two identical servers in a cluster for running KVM
>> virtual
>>>>>>> machines. They each have a
>>>>>>> single connection to the Internet (irrelevant for this) and two
>>>>>>> gigabit connections between each
>>>>>>> other for cluster replication, etc... These two connections are in
>>> a
>>>>>>> balance-rr bonded connection,
>>>>>>> which is itself member of a bridge that the VMs attach to. I'm
>>>>>>> running v3.2-rc6-140-gb9e26df on
>>>>>>> Debian Wheezy.
>>>>>>>>
>>>>>>>> When the bridge is brought up, IPv4 works fine but IPv6 does
>> not.
>>>>>>> I can use neither the
>>>>>>> automatic link-local on the bridge nor the static global address I
>>>>>>> assign. Neither machine can
>>>>>>> perform neighbour discovery over the link until I put the bond
>>>>>>> members (eth0 and eth1) into
>>>>>>> promiscuous mode. I can do this either with tcpdump or 'ip link
>> set
>>>>>>> dev ethX promisc on' and this
>>>>>>> is enough to make the link spring to life.
>>>>>>>
>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>> bonding member to promisc too.
>>>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>>>> everything should be in promisc
>>>>>>> mode anyway... but you shouldn't have to do it by hand.
>>>>>>>
>>>>>>
>>>>>> Sorry, I should have added that I tried this. Setting bond0 or br0
>>> to
>>>>>> promisc has no effect. I
>>>>>> discovered this by running tcpdump on br0 first, then bond0, then
>>>>>> eventually each bond member in
>>>>>> turn. Only at the last stage did things jump to life.
>>>>>>
>>>>>>>>
>>>>>>>> This cluster is not currently live so I can easily test patches
>>>>>>> and various configurations.
>>>>>>>
>>>>>>> Can you try to remove the bonding part, connecting eth0 and eth1
>>>>>>> directly to br0 and see if it
>>>>>>> works better? (This is a test only. I perfectly understand that you
>>>>>>> would lose balance-rr in this
>>>>>>> setup.)
>>>>>>>
>>>>>>
>>>>>> Good call. Let's see.
>>>>>>
>>>>>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced
>>>>>> promisc mode, then manually built a
>>>>>> br0 with eth0 in only so I didn't cause a network loop. Adding eth0
>>>>>> to br0 did not make it go into
>>>>>> promisc mode, but IPv6 does work over this setup. I also made sure
>>> ip
>>>>>> -6 neigh was empty on both
>>>>>> machines before I started.
>>>>>>
>>>>>> I then decided to try the test with just the bond0 in balance-rr
>>>>>> mode. Once again I took everything
>>>>>> down and ensured no promisc mode and no ip -6 neigh. I noticed
>> bond0
>>>>>> wasn't getting a link-local and
>>>>>> I found out for some reason
>>>>>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers
>>> so I
>>>>>> set it to 0. That brought things to life.
>>>>>>
>>>>>> So then I put it all back together again and it didn't work. I once
>>>>>> again noticed disable_ipv6 was
>>>>>> set on the bond0 interfaces, now part of the bridge. Toggling this
>>> on
>>>>>> the _bond_ interface made
>>>>>> things work again.
>>>>>>
>>>>>> What's setting disable_ipv6? Should this be having an impact if the
>>>>>> port is part of a bridge?
>>>>
>>>> Hmm, as a further update... I brought up my VMs on the bridge with
>>>> disable_ipv6 turned off. The VMs on one host couldn't see what was on
>>>> the other side of the bridge (on the other server) until I turned
>>>> promisc back on manually. So it's not entirely disable_ipv6's fault.
>>>
>>> Hi,
>>>
>>> I don't want this to get lost around the Christmas break, so I'm just
>>> resending it. I'm still seeing the same behaviour as before.
>>>
>>>  From above:
>>>
>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>> bonding member to promisc too.
>>>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>>>> everything should be in promisc
>>>>>>> mode anyway... but you shouldn't have to do it by hand.
>>>
>>> This definitely doesn't happen, at least according to 'ip link show |
>>> grep PROMISC'.
>>>
>>> Chris
>>>
>>> --
>>> Chris Boot
>>> bootc@bootc.net
>>
>> Sorry for the delay in responding.  I'm not sure what is going on here,
>> and our bonding expert is still out on holidays.  However, we'll try to
>> reproduce this.  When I get some more advice, I may ask for some more data.
>>
>> Thanks,
>>
>> Carolyn
>> Carolyn Wyborny
>> Linux Development
>> LAN Access Division
>> Intel Corporation
>
> Hello,
>
> Check your ip_forward configuration on your bridge to make sure it's configured to forward IPv6 packets. Also, please send the contents of /etc/modprobe.d/bonding.conf and your routing table, and we'll continue to work on this.

Hi Carolyn,

Surely ip_forward only needs to be set if I want to _route_ IPv6 
packets rather than simply have them go through the bridge untouched? I 
don't want the host to route IPv6 at all. Setting this also has the 
unintended side effect of disabling SLAAC, which I wish to keep enabled.
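
[ The SLAAC interaction described here matches the kernel's default accept_ra
  behaviour; a hedged sketch of checking it on this setup: ]

    sysctl net.ipv6.conf.br0.forwarding   # 1 would make the host a router
    sysctl net.ipv6.conf.br0.accept_ra    # at the default of 1, RAs are ignored once forwarding=1
    # accept_ra=2 keeps RA processing (and hence SLAAC) alive even with forwarding on
    sysctl -w net.ipv6.conf.br0.accept_ra=2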

I don't have a /etc/modprobe.d/bonding.conf; I'm using Debian and 
configuring my bonding and bridging using the configuration I pasted in 
my original email. Here it is again:

> iface bond0 inet manual
>         slaves eth0 eth1
>         bond-mode balance-rr
>         bond-miimon 100
>         bond-downdelay 200
>         bond-updelay 200
>
> iface br0 inet static
>         address [snip]
>         netmask 255.255.255.224
>         bridge_ports bond0
>         bridge_stp off
>         bridge_fd 0
>         bridge_maxwait 5
> iface br0 inet6 static
>         address [snip]
>         netmask 64

Despite the static IPv6 address, I use SLAAC to grab a default gateway.

My IPv6 routing table:

2001:8b0:49:200::/64 dev br0  proto kernel  metric 256  expires 2592317sec
fe80::/64 dev br0  proto kernel  metric 256
fe80::/64 dev bond1  proto kernel  metric 256
fe80::/64 dev vnet0  proto kernel  metric 256
fe80::/64 dev vnet1  proto kernel  metric 256
fe80::/64 dev vnet2  proto kernel  metric 256
fe80::/64 dev vnet3  proto kernel  metric 256
fe80::/64 dev vnet4  proto kernel  metric 256
default via fe80::5652:ff:fe16:15a0 dev br0  proto kernel  metric 1024 
expires 1793sec

HTH,
Chris

-- 
Chris Boot
bootc@bootc.net


* Re: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Neil Horman @ 2012-01-04 18:10 UTC
  To: Chris Boot
  Cc: Wyborny, Carolyn, Nicolas de Pesloüan, netdev, e1000-devel

On Wed, Jan 04, 2012 at 04:58:28PM +0000, Chris Boot wrote:
> On 04/01/2012 16:00, Wyborny, Carolyn wrote:
> >
> >
> >>-----Original Message-----
> >>From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org]
> >>On Behalf Of Wyborny, Carolyn
> >>Sent: Tuesday, January 03, 2012 3:24 PM
> >>To: Chris Boot; Nicolas de Pesloüan
> >>Cc: netdev; e1000-devel@lists.sourceforge.net
> >>Subject: RE: igb + balance-rr + bridge + IPv6 = no go without
> >>promiscuous mode
> >>
> >>
> >>
> >>>-----Original Message-----
> >>>From: netdev-owner@vger.kernel.org [mailto:netdev-
> >>owner@vger.kernel.org]
> >>>On Behalf Of Chris Boot
> >>>Sent: Tuesday, December 27, 2011 1:53 PM
> >>>To: Nicolas de Pesloüan
> >>>Cc: netdev
> >>>Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
> >>>promiscuous mode
> >>>
> >>>On 23/12/2011 10:56, Chris Boot wrote:
> >>>>On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
> >>>>>[ Forwarded to netdev, because two previous e-mails were erroneously sent in
> >>>>>HTML ]
> >>>>>
> >>>>>Le 23/12/2011 11:15, Chris Boot a écrit :
> >>>>>>On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>Le 23 déc. 2011 10:42, "Chris Boot"<bootc@bootc.net
> >>>>>>><mailto:bootc@bootc.net>>  a écrit :
> >>>>>>>>
> >>>>>>>>Hi folks,
> >>>>>>>>
> >>>>>>>>As per Eric Dumazet and Dave Miller, I'm opening up a separate
> >>>>>>>thread on this issue.
> >>>>>>>>
> >>>>>>>>I have two identical servers in a cluster for running KVM
> >>virtual
> >>>>>>>machines. They each have a
> >>>>>>>single connection to the Internet (irrelevant for this) and two
> >>>>>>>gigabit connections between each
> >>>>>>>other for cluster replication, etc... These two connections are in
> >>>a
> >>>>>>>balance-rr bonded connection,
> >>>>>>>which is itself member of a bridge that the VMs attach to. I'm
> >>>>>>>running v3.2-rc6-140-gb9e26df on
> >>>>>>>Debian Wheezy.
> >>>>>>>>
> >>>>>>>>When the bridge is brought up, IPv4 works fine but IPv6 does
> >>not.
> >>>>>>>I can use neither the
> >>>>>>>automatic link-local on the bridge nor the static global address I
> >>>>>>>assign. Neither machine can
> >>>>>>>perform neighbour discovery over the link until I put the bond
> >>>>>>>members (eth0 and eth1) into
> >>>>>>>promiscuous mode. I can do this either with tcpdump or 'ip link
> >>set
> >>>>>>>dev ethX promisc on' and this
> >>>>>>>is enough to make the link spring to life.
> >>>>>>>
> >>>>>>>As far as I remember, setting bond0 to promisc should set the
> >>>>>>>bonding member to promisc too.
> >>>>>>>And inserting bond0 into br0 should set bond0 to promisc... So
> >>>>>>>everything should be in promisc
> >>>>>>>mode anyway... but you shouldn't have to do it by hand.
> >>>>>>>
> >>>>>>
> >>>>>>Sorry, I should have added that I tried this. Setting bond0 or br0
> >>>to
> >>>>>>promisc has no effect. I
> >>>>>>discovered this by running tcpdump on br0 first, then bond0, then
> >>>>>>eventually each bond member in
> >>>>>>turn. Only at the last stage did things jump to life.
> >>>>>>
> >>>>>>>>
> >>>>>>>>This cluster is not currently live so I can easily test patches
> >>>>>>>and various configurations.
> >>>>>>>
> >>>>>>>Can you try to remove the bonding part, connecting eth0 and eth1
> >>>>>>>directly to br0 and see if it
> >>>>>>>works better? (This is a test only. I perfectly understand that you
> >>>>>>>would lose balance-rr in this
> >>>>>>>setup.)
> >>>>>>>
> >>>>>>
> >>>>>>Good call. Let's see.
> >>>>>>
> >>>>>>I took br0 and bond0 apart, took eth0 and eth1 out of enforced
> >>>>>>promisc mode, then manually built a
> >>>>>>br0 with eth0 in only so I didn't cause a network loop. Adding eth0
> >>>>>>to br0 did not make it go into
> >>>>>>promisc mode, but IPv6 does work over this setup. I also made sure
> >>>ip
> >>>>>>-6 neigh was empty on both
> >>>>>>machines before I started.
> >>>>>>
> >>>>>>I then decided to try the test with just the bond0 in balance-rr
> >>>>>>mode. Once again I took everything
> >>>>>>down and ensured no promisc mode and no ip -6 neigh. I noticed
> >>bond0
> >>>>>>wasn't getting a link-local and
> >>>>>>I found out for some reason
> >>>>>>/proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both servers
> >>>so I
> >>>>>>set it to 0. That brought things to life.
> >>>>>>
> >>>>>>So then I put it all back together again and it didn't work. I once
> >>>>>>again noticed disable_ipv6 was
> >>>>>>set on the bond0 interfaces, now part of the bridge. Toggling this
> >>>on
> >>>>>>the _bond_ interface made
> >>>>>>things work again.
> >>>>>>
> >>>>>>What's setting disable_ipv6? Should this be having an impact if the
> >>>>>>port is part of a bridge?
> >>>>
> >>>>Hmm, as a further update... I brought up my VMs on the bridge with
> >>>>disable_ipv6 turned off. The VMs on one host couldn't see what was on
> >>>>the other side of the bridge (on the other server) until I turned
> >>>>promisc back on manually. So it's not entirely disable_ipv6's fault.
> >>>
> >>>Hi,
> >>>
> >>>I don't want this to get lost around the Christmas break, so I'm just
> >>>resending it. I'm still seeing the same behaviour as before.
> >>>
> >>> From above:
> >>>
> >>>>>>>As far as I remember, setting bond0 to promisc should set the
> >>>>>>>bonding member to promisc too.
> >>>>>>>And inserting bond0 into br0 should set bond0 to promisc... So
> >>>>>>>everything should be in promisc
> >>>>>>>mode anyway... but you shouldn't have to do it by hand.
> >>>
> >>>This definitely doesn't happen, at least according to 'ip link show |
> >>>grep PROMISC'.
> >>>
> >>>Chris
> >>>
> >>>--
> >>>Chris Boot
> >>>bootc@bootc.net
> >>
> >>Sorry for the delay in responding.  I'm not sure what is going on here,
> >>and our bonding expert is still out on holidays.  However, we'll try to
> >>reproduce this.  When I get some more advice, I may ask for some more data.
> >>
> >>Thanks,
> >>
> >>Carolyn
> >>Carolyn Wyborny
> >>Linux Development
> >>LAN Access Division
> >>Intel Corporation
> >
> >Hello,
> >
> >Check your ip_forward configuration on your bridge to make sure it's configured to forward IPv6 packets. Also, please send the contents of /etc/modprobe.d/bonding.conf and your routing table, and we'll continue to work on this.
> 
> Hi Carolyn,
> 
> Surely ip_forward only needs to be set if I want to _route_ IPv6
> packets rather than simply have them go through the bridge untouched? I
> don't want the host to route IPv6 at all. Setting this also has the
> unintended side effect of disabling SLAAC, which I wish to keep enabled.
> 
> I don't have a /etc/modprobe.d/bonding.conf; I'm using Debian and
> configuring my bonding and bridging using the configuration I pasted
> in my original email. Here it is again:
> 
> >iface bond0 inet manual
> >        slaves eth0 eth1
> >        bond-mode balance-rr
> >        bond-miimon 100
> >        bond-downdelay 200
> >        bond-updelay 200
> >
> >iface br0 inet static
> >        address [snip]
> >        netmask 255.255.255.224
> >        bridge_ports bond0
> >        bridge_stp off
> >        bridge_fd 0
> >        bridge_maxwait 5
> >iface br0 inet6 static
> >        address [snip]
> >        netmask 64
> 
> Despite the static IPv6 address, I use SLAAC to grab a default gateway.
> 
> My IPv6 routing table:
> 
> 2001:8b0:49:200::/64 dev br0  proto kernel  metric 256  expires 2592317sec
> fe80::/64 dev br0  proto kernel  metric 256
> fe80::/64 dev bond1  proto kernel  metric 256
> fe80::/64 dev vnet0  proto kernel  metric 256
> fe80::/64 dev vnet1  proto kernel  metric 256
> fe80::/64 dev vnet2  proto kernel  metric 256
> fe80::/64 dev vnet3  proto kernel  metric 256
> fe80::/64 dev vnet4  proto kernel  metric 256
> default via fe80::5652:ff:fe16:15a0 dev br0  proto kernel  metric
> 1024 expires 1793sec
> 
> HTH,
> Chris
> 
> -- 
> Chris Boot
> bootc@bootc.net
> 
Can you post the output of 'ip -6 addr show'?  My first thought would be that
something in the bond is looping back ndisc frames and causing duplicate address
detection to fail.  That will show up as the dadfailed flag in the addr show
output.  If that's the case, we'd need to figure out why that's happening, but
you could likely work around it by setting
/proc/sys/net/ipv6/conf/<ifname>/accept_dad to 0 for the time being.
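
[ A sketch of the checks suggested here, assuming the interface names used
  earlier in the thread: ]

    ip -6 addr show dev br0    # failed DAD shows as "tentative dadfailed" on the address
    # temporary workaround: disable duplicate address detection on the bridge port
    echo 0 > /proc/sys/net/ipv6/conf/bond0/accept_dad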

Best
Neil


* RE: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Wyborny, Carolyn @ 2012-01-09 17:19 UTC
  To: Chris Boot; +Cc: Nicolas de Pesloüan, netdev, e1000-devel



>-----Original Message-----
>From: Chris Boot [mailto:bootc@bootc.net]
>Sent: Wednesday, January 04, 2012 8:58 AM
>To: Wyborny, Carolyn
>Cc: Nicolas de Pesloüan; netdev; e1000-devel@lists.sourceforge.net
>Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>promiscuous mode
>
>On 04/01/2012 16:00, Wyborny, Carolyn wrote:
>>
>>
>>> -----Original Message-----
>>> From: netdev-owner@vger.kernel.org [mailto:netdev-
>owner@vger.kernel.org]
>>> On Behalf Of Wyborny, Carolyn
>>> Sent: Tuesday, January 03, 2012 3:24 PM
>>> To: Chris Boot; Nicolas de Pesloüan
>>> Cc: netdev; e1000-devel@lists.sourceforge.net
>>> Subject: RE: igb + balance-rr + bridge + IPv6 = no go without
>>> promiscuous mode
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: netdev-owner@vger.kernel.org [mailto:netdev-
>>> owner@vger.kernel.org]
>>>> On Behalf Of Chris Boot
>>>> Sent: Tuesday, December 27, 2011 1:53 PM
>>>> To: Nicolas de Pesloüan
>>>> Cc: netdev
>>>> Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>>>> promiscuous mode
>>>>
>>>> On 23/12/2011 10:56, Chris Boot wrote:
>>>>> On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
>>>>>> [ Forwarded to netdev, because two previous e-mails were erroneously sent in
>>>>>> HTML ]
>>>>>>
>>>>>> Le 23/12/2011 11:15, Chris Boot a écrit :
>>>>>>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> Le 23 déc. 2011 10:42, "Chris Boot"<bootc@bootc.net
>>>>>>>> <mailto:bootc@bootc.net>>  a écrit :
>>>>>>>>>
>>>>>>>>> Hi folks,
>>>>>>>>>
>>>>>>>>> As per Eric Dumazet and Dave Miller, I'm opening up a separate
>>>>>>>> thread on this issue.
>>>>>>>>>
>>>>>>>>> I have two identical servers in a cluster for running KVM
>>> virtual
>>>>>>>> machines. They each have a
>>>>>>>> single connection to the Internet (irrelevant for this) and two
>>>>>>>> gigabit connections between each
>>>>>>>> other for cluster replication, etc... These two connections are
>in
>>>> a
>>>>>>>> balance-rr bonded connection,
>>>>>>>> which is itself member of a bridge that the VMs attach to. I'm
>>>>>>>> running v3.2-rc6-140-gb9e26df on
>>>>>>>> Debian Wheezy.
>>>>>>>>>
>>>>>>>>> When the bridge is brought up, IPv4 works fine but IPv6 does
>>> not.
>>>>>>>> I can use neither the
>>>>>>>> automatic link-local on the bridge nor the static global address I
>>>>>>>> assign. Neither machine can
>>>>>>>> perform neighbour discovery over the link until I put the bond
>>>>>>>> members (eth0 and eth1) into
>>>>>>>> promiscuous mode. I can do this either with tcpdump or 'ip link
>>> set
>>>>>>>> dev ethX promisc on' and this
>>>>>>>> is enough to make the link spring to life.
>>>>>>>>
>>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>>> bonding member to promisc too.
>>>>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>>>>> everything should be in promisc
>>>>>>>> mode anyway... but you shouldn't have to do it by hand.
>>>>>>>>
>>>>>>>
>>>>>>> Sorry, I should have added that I tried this. Setting bond0 or
>br0
>>>> to
>>>>>>> promisc has no effect. I
>>>>>>> discovered this by running tcpdump on br0 first, then bond0, then
>>>>>>> eventually each bond member in
>>>>>>> turn. Only at the last stage did things jump to life.
>>>>>>>
>>>>>>>>>
>>>>>>>>> This cluster is not currently live so I can easily test patches
>>>>>>>> and various configurations.
>>>>>>>>
>>>>>>>> Can you try to remove the bonding part, connecting eth0 and eth1
>>>>>>>> directly to br0 and see if it
>>>>>>>> works better? (This is a test only. I perfectly understand that you
>>>>>>>> would lose balance-rr in this
>>>>>>>> setup.)
>>>>>>>>
>>>>>>>
>>>>>>> Good call. Let's see.
>>>>>>>
>>>>>>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced
>>>>>>> promisc mode, then manually built a
>>>>>>> br0 with eth0 in only so I didn't cause a network loop. Adding
>eth0
>>>>>>> to br0 did not make it go into
>>>>>>> promisc mode, but IPv6 does work over this setup. I also made
>sure
>>>> ip
>>>>>>> -6 neigh was empty on both
>>>>>>> machines before I started.
>>>>>>>
>>>>>>> I then decided to try the test with just the bond0 in balance-rr
>>>>>>> mode. Once again I took everything
>>>>>>> down and ensured no promisc mode and no ip -6 neigh. I noticed
>>> bond0
>>>>>>> wasn't getting a link-local and
>>>>>>> I found out for some reason
>>>>>>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both
>servers
>>>> so I
>>>>>>> set it to 0. That brought things to life.
>>>>>>>
>>>>>>> So then I put it all back together again and it didn't work. I
>once
>>>>>>> again noticed disable_ipv6 was
>>>>>>> set on the bond0 interfaces, now part of the bridge. Toggling
>this
>>>> on
>>>>>>> the _bond_ interface made
>>>>>>> things work again.
>>>>>>>
>>>>>>> What's setting disable_ipv6? Should this be having an impact if
>the
>>>>>>> port is part of a bridge?
>>>>>
>>>>> Hmm, as a further update... I brought up my VMs on the bridge with
>>>>> disable_ipv6 turned off. The VMs on one host couldn't see what was
>on
>>>>> the other side of the bridge (on the other server) until I turned
>>>>> promisc back on manually. So it's not entirely disable_ipv6's
>fault.
>>>>
>>>> Hi,
>>>>
>>>> I don't want this to get lost around the Christmas break, so I'm
>just
>>>> resending it. I'm still seeing the same behaviour as before.
>>>>
>>>>  From above:
>>>>
>>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>>> bonding member to promisc too.
>>>>>>>> And inserting bond0 into br0 should set bond0 to promisc... So
>>>>>>>> everything should be in promisc
>>>>>>>> mode anyway... but you shouldn't have to do it by hand.
>>>>
>>>> This definitely doesn't happen, at least according to 'ip link show
>|
>>>> grep PROMISC'.
>>>>
>>>> Chris
>>>>
>>>> --
>>>> Chris Boot
>>>> bootc@bootc.net
>>>
>>> Sorry for the delay in responding.  I'm not sure what is going on here,
>>> and our bonding expert is still out on holidays.  However, we'll try to
>>> reproduce this.  When I get some more advice, I may ask for some more data.
>>>
>>> Thanks,
>>>
>>> Carolyn
>>> Carolyn Wyborny
>>> Linux Development
>>> LAN Access Division
>>> Intel Corporation
>>
>> Hello,
>>
>> Check your ip_forward configuration on your bridge to make sure it's
>> configured to forward IPv6 packets. Also, please send the contents of
>> /etc/modprobe.d/bonding.conf and your routing table, and we'll
>> continue to work on this.
>
>Hi Carolyn,
>
>Surely ip_forward only needs to be set if I want to _route_ IPv6
>packets rather than simply have them go through the bridge untouched? I
>don't want the host to route IPv6 at all. Setting this also has the
>unintended side effect of disabling SLAAC, which I wish to keep enabled.
>
>I don't have a /etc/modprobe.d/bonding.conf; I'm using Debian and
>configuring my bonding and bridging using the configuration I pasted in
>my original email. Here it is again:
>
>> iface bond0 inet manual
>>         slaves eth0 eth1
>>         bond-mode balance-rr
>>         bond-miimon 100
>>         bond-downdelay 200
>>         bond-updelay 200
>>
>> iface br0 inet static
>>         address [snip]
>>         netmask 255.255.255.224
>>         bridge_ports bond0
>>         bridge_stp off
>>         bridge_fd 0
>>         bridge_maxwait 5
>> iface br0 inet6 static
>>         address [snip]
>>         netmask 64
>
>Despite the static IPv6 address, I use SLAAC to grab a default gateway.
>
>My IPv6 routing table:
>
>2001:8b0:49:200::/64 dev br0  proto kernel  metric 256  expires
>2592317sec
>fe80::/64 dev br0  proto kernel  metric 256
>fe80::/64 dev bond1  proto kernel  metric 256
>fe80::/64 dev vnet0  proto kernel  metric 256
>fe80::/64 dev vnet1  proto kernel  metric 256
>fe80::/64 dev vnet2  proto kernel  metric 256
>fe80::/64 dev vnet3  proto kernel  metric 256
>fe80::/64 dev vnet4  proto kernel  metric 256
>default via fe80::5652:ff:fe16:15a0 dev br0  proto kernel  metric 1024
>expires 1793sec
>
>HTH,
>Chris
>
>--
>Chris Boot
>bootc@bootc.net

This does seem more like a bonding problem than a driver problem, and we haven't seen much IPv6 use with bonding, so we may be in new territory here.  Do you have any other adapters, ours or anyone else's, to try the same setup and see if the problem persists?  There is a situation between the bonding driver and all our drivers where the promiscuous setting is not passed down to the driver.  We are not sure whether this is expected, and it has not been addressed yet, but it explains why promiscuous mode has to be set directly on the device.

Thanks,

Carolyn

Carolyn Wyborny
Linux Development
LAN Access Division
Intel Corporation


 


* Re: igb + balance-rr + bridge + IPv6 = no go without promiscuous mode
From: Chris Boot @ 2012-01-09 19:44 UTC
  To: Wyborny, Carolyn; +Cc: Nicolas de Pesloüan, netdev, e1000-devel

On 09/01/2012 17:19, Wyborny, Carolyn wrote:
>
>
>> -----Original Message-----
>> From: Chris Boot [mailto:bootc@bootc.net]
>> Sent: Wednesday, January 04, 2012 8:58 AM
>> To: Wyborny, Carolyn
>> Cc: Nicolas de Pesloüan; netdev; e1000-devel@lists.sourceforge.net
>> Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>> promiscuous mode
>>
>> On 04/01/2012 16:00, Wyborny, Carolyn wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: netdev-owner@vger.kernel.org [mailto:netdev-
>> owner@vger.kernel.org]
>>>> On Behalf Of Wyborny, Carolyn
>>>> Sent: Tuesday, January 03, 2012 3:24 PM
>>>> To: Chris Boot; Nicolas de Pesloüan
>>>> Cc: netdev; e1000-devel@lists.sourceforge.net
>>>> Subject: RE: igb + balance-rr + bridge + IPv6 = no go without
>>>> promiscuous mode
>>>>
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: netdev-owner@vger.kernel.org [mailto:netdev-
>>>> owner@vger.kernel.org]
>>>>> On Behalf Of Chris Boot
>>>>> Sent: Tuesday, December 27, 2011 1:53 PM
>>>>> To: Nicolas de Pesloüan
>>>>> Cc: netdev
>>>>> Subject: Re: igb + balance-rr + bridge + IPv6 = no go without
>>>>> promiscuous mode
>>>>>
>>>>> On 23/12/2011 10:56, Chris Boot wrote:
>>>>>> On 23/12/2011 10:48, Nicolas de Pesloüan wrote:
>>>>>>> [ Forwarded to netdev, because two previous e-mails were erroneously sent in
>>>>>>> HTML ]
>>>>>>>
>>>>>>> Le 23/12/2011 11:15, Chris Boot a écrit :
>>>>>>>> On 23/12/2011 09:52, Nicolas de Pesloüan wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Le 23 déc. 2011 10:42, "Chris Boot"<bootc@bootc.net
>>>>>>>>> <mailto:bootc@bootc.net>>   a écrit :
>>>>>>>>>>
>>>>>>>>>> Hi folks,
>>>>>>>>>>
>>>>>>>>>> As per Eric Dumazet and Dave Miller, I'm opening up a separate
>>>>>>>>> thread on this issue.
>>>>>>>>>>
>>>>>>>>>> I have two identical servers in a cluster for running KVM
>>>> virtual
>>>>>>>>> machines. They each have a
>>>>>>>>> single connection to the Internet (irrelevant for this) and two
>>>>>>>>> gigabit connections between each
>>>>>>>>> other for cluster replication, etc... These two connections are
>> in
>>>>> a
>>>>>>>>> balance-rr bonded connection,
>>>>>>>>> which is itself member of a bridge that the VMs attach to. I'm
>>>>>>>>> running v3.2-rc6-140-gb9e26df on
>>>>>>>>> Debian Wheezy.
>>>>>>>>>>
>>>>>>>>>> When the bridge is brought up, IPv4 works fine but IPv6 does
>>>> not.
>>>>>>>>> I can use neither the
>>>>>>>>> automatic link-local on the bridge nor the static global address I
>>>>>>>>> assign. Neither machine can
>>>>>>>>> perform neighbour discovery over the link until I put the bond
>>>>>>>>> members (eth0 and eth1) into
>>>>>>>>> promiscuous mode. I can do this either with tcpdump or 'ip link
>>>> set
>>>>>>>>> dev ethX promisc on' and this
>>>>>>>>> is enough to make the link spring to life.
>>>>>>>>>
>>>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>>>> bonding member to promisc too. And inserting bond0 into br0
>>>>>>>>> should set bond0 to promisc... So everything should be in
>>>>>>>>> promisc mode anyway, but you shouldn't have to do it by hand.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Sorry, I should have added that I tried this. Setting bond0 or
>>>>>>>> br0 to promisc has no effect. I discovered this by running
>>>>>>>> tcpdump on br0 first, then bond0, then eventually each bond
>>>>>>>> member in turn. Only at the last stage did things jump to life.
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This cluster is not currently live so I can easily test
>>>>>>>>>> patches and various configurations.
>>>>>>>>>
>>>>>>>>> Can you try removing the bonding part, connecting eth0 and eth1
>>>>>>>>> directly to br0, and see if it works better? (This is a test
>>>>>>>>> only. I perfectly understand that you would lose balance-rr in
>>>>>>>>> this setup.)
>>>>>>>>>
>>>>>>>>
>>>>>>>> Good call. Let's see.
>>>>>>>>
>>>>>>>> I took br0 and bond0 apart, took eth0 and eth1 out of enforced
>>>>>>>> promisc mode, then manually built a br0 with only eth0 in it so
>>>>>>>> I didn't cause a network loop. Adding eth0 to br0 did not make
>>>>>>>> it go into promisc mode, but IPv6 does work over this setup. I
>>>>>>>> also made sure 'ip -6 neigh' was empty on both machines before
>>>>>>>> I started.
>>>>>>>>
>>>>>>>> I then decided to try the test with just bond0 in balance-rr
>>>>>>>> mode. Once again I took everything down and ensured no promisc
>>>>>>>> mode and no ip -6 neighbours. I noticed bond0 wasn't getting a
>>>>>>>> link-local address, and I found that for some reason
>>>>>>>> /proc/sys/net/ipv6/conf/bond0/disable_ipv6 was set on both
>>>>>>>> servers, so I set it to 0. That brought things to life.
>>>>>>>>
>>>>>>>> So then I put it all back together again and it didn't work. I
>>>>>>>> once again noticed disable_ipv6 was set on the bond0 interfaces,
>>>>>>>> now part of the bridge. Toggling this on the _bond_ interface
>>>>>>>> made things work again.
>>>>>>>>
>>>>>>>> What's setting disable_ipv6? Should it have any impact when the
>>>>>>>> port is part of a bridge?
>>>>>>
>>>>>> Hmm, as a further update... I brought up my VMs on the bridge with
>>>>>> disable_ipv6 turned off. The VMs on one host couldn't see what was
>>>>>> on the other side of the bridge (on the other server) until I
>>>>>> turned promisc back on manually. So it's not entirely
>>>>>> disable_ipv6's fault.
>>>>>
>>>>> Hi,
>>>>>
>>>>> I don't want this to get lost around the Christmas break, so I'm
>>>>> just resending it. I'm still seeing the same behaviour as before.
>>>>>
>>>>>   From above:
>>>>>
>>>>>>>>> As far as I remember, setting bond0 to promisc should set the
>>>>>>>>> bonding member to promisc too. And inserting bond0 into br0
>>>>>>>>> should set bond0 to promisc... So everything should be in
>>>>>>>>> promisc mode anyway, but you shouldn't have to do it by hand.
>>>>>
>>>>> This definitely doesn't happen, at least according to
>>>>> 'ip link show | grep PROMISC'.
>>>>>
>>>>> Chris
>>>>>
>>>>> --
>>>>> Chris Boot
>>>>> bootc@bootc.net
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe netdev" in
>>>>> the body of a message to majordomo@vger.kernel.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>
>>>> Sorry for the delay in responding.  I'm not sure what is going on
>>>> here, and I'm not our bonding expert, who is still out on holidays.
>>>> However, we'll try to reproduce this.  When I get some more advice,
>>>> I may be asking for some more data.
>>>>
>>>> Thanks,
>>>>
>>>> Carolyn
>>>> Carolyn Wyborny
>>>> Linux Development
>>>> LAN Access Division
>>>> Intel Corporation
>>>
>>> Hello,
>>>
>>> Check the ip_forward configuration on your bridge to make sure it's
>>> configured to forward IPv6 packets.  Also, please send the contents
>>> of /etc/modprobe.d/bonding.conf and the contents of your routing
>>> table, and we'll continue to work on this.
>>
>> Hi Carolyn,
>>
>> Surely ip_forward only needs to be set if I want to _route_ IPv6
>> packets, rather than simply have them pass through the bridge
>> untouched? I don't want the host to route IPv6 at all. Setting this
>> also has the unintended side effect of disabling SLAAC, which I wish
>> to keep enabled.
>>
>> I don't have a /etc/modprobe.d/bonding.conf; I'm using Debian and
>> configuring my bonding and bridging using the configuration I pasted in
>> my original email. Here it is again:
>>
>>> iface bond0 inet manual
>>>          slaves eth0 eth1
>>>          bond-mode balance-rr
>>>          bond-miimon 100
>>>          bond-downdelay 200
>>>          bond-updelay 200
>>>
>>> iface br0 inet static
>>>          address [snip]
>>>          netmask 255.255.255.224
>>>          bridge_ports bond0
>>>          bridge_stp off
>>>          bridge_fd 0
>>>          bridge_maxwait 5
>>> iface br0 inet6 static
>>>          address [snip]
>>>          netmask 64
>>
>> Despite the static IPv6 address, I use SLAAC to grab a default gateway.
>>
>> My IPv6 routing table:
>>
>> 2001:8b0:49:200::/64 dev br0  proto kernel  metric 256  expires 2592317sec
>> fe80::/64 dev br0  proto kernel  metric 256
>> fe80::/64 dev bond1  proto kernel  metric 256
>> fe80::/64 dev vnet0  proto kernel  metric 256
>> fe80::/64 dev vnet1  proto kernel  metric 256
>> fe80::/64 dev vnet2  proto kernel  metric 256
>> fe80::/64 dev vnet3  proto kernel  metric 256
>> fe80::/64 dev vnet4  proto kernel  metric 256
>> default via fe80::5652:ff:fe16:15a0 dev br0  proto kernel  metric 1024  expires 1793sec
>>
>> HTH,
>> Chris
>>
>> --
>> Chris Boot
>> bootc@bootc.net
>
> This does seem more like a bonding problem than a driver problem, and we haven't seen much IPv6 use with bonding, so we may be in new territory here.  Do you have any other adapters, ours or anyone else's, to try the same setup with and see whether the problem persists?

Unfortunately these machines are now in production use, and I can work 
around the problem by manually setting promiscuous mode. I can test 
various software configurations and kernels, but I can't change the 
hardware around at all.
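
For reference, a minimal sketch of persisting that workaround with
'up' hooks in /etc/network/interfaces, based on the stanza quoted
earlier (this papers over the symptom rather than fixing whatever
stops the flag propagating):

iface bond0 inet manual
        slaves eth0 eth1
        bond-mode balance-rr
        bond-miimon 100
        bond-downdelay 200
        bond-updelay 200
        # Workaround: force promiscuous mode on the slaves at ifup
        # time, since it is not being propagated down automatically.
        up ip link set dev eth0 promisc on
        up ip link set dev eth1 promisc on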

> There is a situation between the bonding driver and all of our drivers where the promiscuous setting is not passed down to the driver.  We are not sure whether this is expected, and it has not been addressed yet, but it would explain why promiscuous mode has to be set directly on the device.

If this is a known problem, it does seem like the likely culprit. Can 
someone confirm that adding a port to a bridge should set promiscuous 
mode on the port? And that setting promiscuous mode on a bond should 
set promiscuous mode on each of the bond's slaves as well?
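
For what it's worth, the expected behaviour can be checked without
production hardware using dummy interfaces and the bonding sysfs
interface. A rough sketch ('bondtest' and 'brtest' are made-up names,
and whether PROMISC shows up on the slaves may vary by kernel
version):

    # If the modules are already loaded, the parameters are ignored.
    modprobe dummy numdummies=2
    modprobe bonding max_bonds=0
    echo +bondtest > /sys/class/net/bonding_masters
    echo balance-rr > /sys/class/net/bondtest/bonding/mode
    ip link set dev bondtest up
    echo +dummy0 > /sys/class/net/bondtest/bonding/slaves
    echo +dummy1 > /sys/class/net/bondtest/bonding/slaves
    brctl addbr brtest
    brctl addif brtest bondtest
    ip link set dev brtest up
    # If promiscuity propagates, bit 0x100 (IFF_PROMISC) should now
    # be set on dummy0/dummy1 as well as on bondtest:
    cat /sys/class/net/bondtest/flags
    cat /sys/class/net/dummy0/flags
    cat /sys/class/net/dummy1/flags
    # Tear down:
    ip link set dev brtest down
    brctl delif brtest bondtest
    brctl delbr brtest
    ip link set dev bondtest down
    echo -bondtest > /sys/class/net/bonding_masters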

Thanks,
Chris

-- 
Chris Boot
bootc@bootc.net

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2012-01-09 19:44 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-12-23  9:42 igb + balance-rr + bridge + IPv6 = no go without promiscuous mode Chris Boot
     [not found] ` <CAADHFRBWR0JJNmPzQA3=40s5a6fZ3TJEoC0aWzo+3wXruEZC5A@mail.gmail.com>
     [not found]   ` <4EF454C7.8020305@bootc.net>
2011-12-23 10:48     ` Nicolas de Pesloüan
2011-12-23 10:56       ` Chris Boot
2011-12-27 21:53         ` Chris Boot
2012-01-03 23:23           ` Wyborny, Carolyn
2012-01-04 16:00             ` Wyborny, Carolyn
2012-01-04 16:58               ` Chris Boot
2012-01-04 18:10                 ` Neil Horman
2012-01-09 17:19                 ` Wyborny, Carolyn
2012-01-09 19:44                   ` Chris Boot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).