linux-kernel.vger.kernel.org archive mirror
* Slowness forming TIPC cluster with explicit node addresses
@ 2019-07-25 23:37 Chris Packham
  2019-07-26 13:31 ` Jon Maloy
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Packham @ 2019-07-25 23:37 UTC (permalink / raw)
  To: tipc-discussion; +Cc: netdev, linux-kernel

Hi,

I'm having problems forming a TIPC cluster between 2 nodes.

These are the basic steps I'm going through on each node:

modprobe tipc
ip link set eth2 up
tipc node set addr 1.1.5 # or 1.1.6
tipc bearer enable media eth dev eth0

Then, to confirm whether the cluster has formed, I use "tipc link list":

[root@node-5 ~]# tipc link list
broadcast-link: up
...

Looking at tcpdump, I can see the two nodes are sending packets:

22:30:05.782320 TIPC v2.0 1.1.5 > 0.0.0, headerlength 60 bytes,
MessageSize 76 bytes, Neighbor Detection Protocol internal, messageType
Link request
22:30:05.863555 TIPC v2.0 1.1.6 > 0.0.0, headerlength 60 bytes,
MessageSize 76 bytes, Neighbor Detection Protocol internal, messageType
Link request

Eventually (after a few minutes) the link does come up:

[root@node-6 ~]# tipc link list
broadcast-link: up
1001006:eth2-1001005:eth2: up

[root@node-5 ~]# tipc link list
broadcast-link: up
1001005:eth2-1001006:eth2: up

When I remove the "tipc node set addr" step, things kick into life
straight away:

[root@node-5 ~]# tipc link list
broadcast-link: up
0050b61bd2aa:eth2-0050b61e6dfa:eth2: up

So there appears to be some difference in behaviour between having an
explicit node address and using the default. Unfortunately our
application relies on setting the node addresses.

[root@node-5 ~]# uname -a
Linux linuxbox 5.2.0-at1+ #8 SMP Thu Jul 25 23:22:41 UTC 2019 ppc
GNU/Linux

Any thoughts on the problem?


* RE: Slowness forming TIPC cluster with explicit node addresses
  2019-07-25 23:37 Slowness forming TIPC cluster with explicit node addresses Chris Packham
@ 2019-07-26 13:31 ` Jon Maloy
  2019-07-28 21:04   ` Chris Packham
  0 siblings, 1 reply; 8+ messages in thread
From: Jon Maloy @ 2019-07-26 13:31 UTC (permalink / raw)
  To: Chris Packham, tipc-discussion; +Cc: netdev, linux-kernel



> -----Original Message-----
> From: netdev-owner@vger.kernel.org <netdev-owner@vger.kernel.org> On
> Behalf Of Chris Packham
> Sent: 25-Jul-19 19:37
> To: tipc-discussion@lists.sourceforge.net
> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org
> Subject: Slowness forming TIPC cluster with explicit node addresses
> 
> Hi,
> 
> I'm having problems forming a TIPC cluster between 2 nodes.
> 
> These are the basic steps I'm going through on each node:
> 
> modprobe tipc
> ip link set eth2 up
> tipc node set addr 1.1.5 # or 1.1.6
> tipc bearer enable media eth dev eth0

eth2, I assume...

> 
> [...]
> 
> So there appears to be some difference in behaviour between having an
> explicit node address and using the default. Unfortunately our application
> relies on setting the node addresses.

I do this many times a day without any problems. If there were any time difference, I would expect the 'auto configurable' version to be slower, because it involves a DAD (duplicate address detection) step.
Are you sure you don't have any other nodes running in your system?

///jon




* Re: Slowness forming TIPC cluster with explicit node addresses
  2019-07-26 13:31 ` Jon Maloy
@ 2019-07-28 21:04   ` Chris Packham
  2019-08-02  5:11     ` Chris Packham
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Packham @ 2019-07-28 21:04 UTC (permalink / raw)
  To: jon.maloy, tipc-discussion; +Cc: netdev, linux-kernel

On Fri, 2019-07-26 at 13:31 +0000, Jon Maloy wrote:
> 
> > 
> > -----Original Message-----
> > From: netdev-owner@vger.kernel.org <netdev-owner@vger.kernel.org>
> > On
> > Behalf Of Chris Packham
> > Sent: 25-Jul-19 19:37
> > To: tipc-discussion@lists.sourceforge.net
> > Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org
> > Subject: Slowness forming TIPC cluster with explicit node addresses
> > 
> > Hi,
> > 
> > I'm having problems forming a TIPC cluster between 2 nodes.
> > 
> > These are the basic steps I'm going through on each node:
> > 
> > modprobe tipc
> > ip link set eth2 up
> > tipc node set addr 1.1.5 # or 1.1.6
> > tipc bearer enable media eth dev eth0
> eth2, I assume...
> 

Yes, sorry, I keep switching between Ethernet ports for testing so I
hand-edited the email.

> 
> > [...]
> 
> I do this many times a day without any problems. If there were any
> time difference, I would expect the 'auto configurable' version to be
> slower, because it involves a DAD (duplicate address detection) step.
> Are you sure you don't have any other nodes running in your system?
> 
> ///jon
> 

Nope, the two nodes are connected back to back. Does the number of
Ethernet interfaces make a difference? As you can see I've got 3 on
each node. One is completely disconnected, one is for booting over TFTP
(only used by U-Boot) and the other is the USB Ethernet I'm using for
testing.



* Re: Slowness forming TIPC cluster with explicit node addresses
  2019-07-28 21:04   ` Chris Packham
@ 2019-08-02  5:11     ` Chris Packham
  2019-08-04 21:53       ` Jon Maloy
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Packham @ 2019-08-02  5:11 UTC (permalink / raw)
  To: jon.maloy, tipc-discussion; +Cc: netdev, linux-kernel

On Mon, 2019-07-29 at 09:04 +1200, Chris Packham wrote:
> On Fri, 2019-07-26 at 13:31 +0000, Jon Maloy wrote:
> > [...]
> > I do this many times a day without any problems. If there were any
> > time difference, I would expect the 'auto configurable' version to
> > be slower, because it involves a DAD (duplicate address detection)
> > step.
> > Are you sure you don't have any other nodes running in your system?
> > 
> > ///jon
> > 
> Nope, the two nodes are connected back to back. Does the number of
> Ethernet interfaces make a difference? As you can see I've got 3 on
> each node. One is completely disconnected, one is for booting over
> TFTP (only used by U-Boot) and the other is the USB Ethernet I'm using
> for testing.
> 

So I can still reproduce this on nodes that only have one network
interface and are the only things connected.

I did find one thing that helps:

diff --git a/net/tipc/discover.c b/net/tipc/discover.c
index c138d68e8a69..49921dad404a 100644
--- a/net/tipc/discover.c
+++ b/net/tipc/discover.c
@@ -358,10 +358,10 @@ int tipc_disc_create(struct net *net, struct tipc_bearer *b,
        tipc_disc_init_msg(net, d->skb, DSC_REQ_MSG, b);
 
        /* Do we need an address trial period first ? */
-       if (!tipc_own_addr(net)) {
+//     if (!tipc_own_addr(net)) {
                tn->addr_trial_end = jiffies + msecs_to_jiffies(1000);
                msg_set_type(buf_msg(d->skb), DSC_TRIAL_MSG);
-       }
+//     }
        memcpy(&d->dest, dest, sizeof(*dest));
        d->net = net;
        d->bearer_id = b->identity;

I think because the duplicate address detection is skipped with
pre-configured addresses, the short init phase is skipped too. Would it
make sense to unconditionally do the trial step? Or is there some better
way to get things to transition with pre-assigned addresses?
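
For reference, this is my rough paraphrase of the receive-side gate in
net/tipc/discover.c that I think is in play (a sketch from my reading of
the code, not the exact source):

/* In tipc_disc_addr_trial_msg(): regular link requests are only
 * accepted once the trial window has passed.
 */
bool trial = time_before(jiffies, tn->addr_trial_end);

/* ... trial/suggestion handling elided ... */

if (mtyp != DSC_TRIAL_MSG)
        return trial;   /* true => caller drops the message */

So if addr_trial_end is never initialised on a node with a
pre-configured address, whether a peer's link request gets through
depends on how that zero value happens to compare against jiffies.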


* RE: Slowness forming TIPC cluster with explicit node addresses
  2019-08-02  5:11     ` Chris Packham
@ 2019-08-04 21:53       ` Jon Maloy
  2019-08-04 23:04         ` Chris Packham
  0 siblings, 1 reply; 8+ messages in thread
From: Jon Maloy @ 2019-08-04 21:53 UTC (permalink / raw)
  To: Chris Packham, tipc-discussion; +Cc: netdev, linux-kernel



> -----Original Message-----
> From: netdev-owner@vger.kernel.org <netdev-owner@vger.kernel.org> On
> Behalf Of Chris Packham
> Sent: 2-Aug-19 01:11
> To: Jon Maloy <jon.maloy@ericsson.com>; tipc-
> discussion@lists.sourceforge.net
> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org
> Subject: Re: Slowness forming TIPC cluster with explicit node addresses
> 
> On Mon, 2019-07-29 at 09:04 +1200, Chris Packham wrote:
> [...]
> 
> I think because the duplicate address detection is skipped with
> pre-configured addresses, the short init phase is skipped too. Would it
> make sense to unconditionally do the trial step? Or is there some better
> way to get things to transition with pre-assigned addresses?

I am on vacation until the end of next week, so I can't give you any good analysis right now.
Doing the trial step doesn't make much sense to me; it would only delay the setup unnecessarily (if only by one second).
Can you check the initial value of addr_trial_end when there is a pre-configured address?
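
E.g. with something like this (untested sketch) in tipc_disc_create(),
next to the trial check you commented out:

/* Debug aid only, not for merging: dump the trial window state */
pr_info("tipc: own=%x addr_trial_end=%lu jiffies=%lu\n",
        tipc_own_addr(net), tn->addr_trial_end, jiffies);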

///jon



* Re: Slowness forming TIPC cluster with explicit node addresses
  2019-08-04 21:53       ` Jon Maloy
@ 2019-08-04 23:04         ` Chris Packham
  2019-08-07  2:55           ` Jon Maloy
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Packham @ 2019-08-04 23:04 UTC (permalink / raw)
  To: jon.maloy, tipc-discussion; +Cc: netdev, linux-kernel

On Sun, 2019-08-04 at 21:53 +0000, Jon Maloy wrote:
> 
> [...]
>
> I am on vacation until the end of next week, so I can't give you any
> good analysis right now.

Thanks for taking the time to respond.

> Doing the trial step doesn't make much sense to me; it would only
> delay the setup unnecessarily (if only by one second).
> Can you check the initial value of addr_trial_end when there is a
> pre-configured address?

I had the same thought. For both my devices 'addr_trial_end == 0', so I
think tipc_disc_addr_trial_msg() should end up with trial == false.
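
Although, writing that down makes me wonder about the comparison
itself. Going from my reading of discover.c (so treat this as a sketch
rather than the actual source), the check is:

/* time_before(a, b) expands to a signed comparison:
 *     (long)((a) - (b)) < 0
 * and jiffies is initialised to INITIAL_JIFFIES, roughly
 * (unsigned long)(unsigned int)(-300 * HZ). On a 32-bit box like this
 * ppc one, (long)jiffies therefore stays negative for about the first
 * five minutes after boot, so time_before(jiffies, 0) would be true in
 * that window even though addr_trial_end is 0 -- which lines up
 * suspiciously well with the link coming up "after a few minutes".
 */
bool trial = time_before(jiffies, tn->addr_trial_end);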



* RE: Slowness forming TIPC cluster with explicit node addresses
  2019-08-04 23:04         ` Chris Packham
@ 2019-08-07  2:55           ` Jon Maloy
  2019-08-07  3:45             ` Chris Packham
  0 siblings, 1 reply; 8+ messages in thread
From: Jon Maloy @ 2019-08-07  2:55 UTC (permalink / raw)
  To: Chris Packham, tipc-discussion; +Cc: netdev, linux-kernel



> -----Original Message-----
> From: Chris Packham <Chris.Packham@alliedtelesis.co.nz>
> Sent: 4-Aug-19 19:05
> To: Jon Maloy <jon.maloy@ericsson.com>; tipc-
> discussion@lists.sourceforge.net
> Cc: netdev@vger.kernel.org; linux-kernel@vger.kernel.org
> Subject: Re: Slowness forming TIPC cluster with explicit node addresses
> 
> On Sun, 2019-08-04 at 21:53 +0000, Jon Maloy wrote:
> > [...]
> 
> Thanks for taking the time to respond.
> 
> > Doing the trial step doesn't make much sense to me; it would only
> > delay the setup unnecessarily (if only by one second).
> > Can you check the initial value of addr_trial_end when there is a
> > pre-configured address?
> 
> I had the same thought. For both my devices 'addr_trial_end == 0', so I
> think tipc_disc_addr_trial_msg() should end up with trial == false.

I suggest you try initializing it to jiffies and see what happens.

///jon



* Re: Slowness forming TIPC cluster with explicit node addresses
  2019-08-07  2:55           ` Jon Maloy
@ 2019-08-07  3:45             ` Chris Packham
  0 siblings, 0 replies; 8+ messages in thread
From: Chris Packham @ 2019-08-07  3:45 UTC (permalink / raw)
  To: jon.maloy, tipc-discussion; +Cc: netdev, linux-kernel

Hi Jon,

On Wed, 2019-08-07 at 02:55 +0000, Jon Maloy wrote:
> 
> > [...]
> > I had the same thought. For both my devices 'addr_trial_end == 0',
> > so I think tipc_disc_addr_trial_msg() should end up with trial ==
> > false.
> I suggest you try initializing it to jiffies and see what happens.
> 

Setting addr_trial_end to jiffies seems to do the trick. I'll prepare a
patch and send it through.
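
Something along these lines is what I have in mind (untested sketch;
assuming tipc_set_node_addr() in net/tipc/addr.c, where the
pre-configured address gets applied, is the right spot):

--- a/net/tipc/addr.c
+++ b/net/tipc/addr.c
@@ ... @@ void tipc_set_node_addr(struct net *net, u32 addr)
 	tn->trial_addr = addr;
+	/* A pre-configured address needs no trial period, so mark the
+	 * trial window as already over instead of leaving it at 0.
+	 */
+	tn->addr_trial_end = jiffies;
 	pr_info("32-bit node address hash set to %x\n", addr);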



Thread overview: 8 messages
2019-07-25 23:37 Slowness forming TIPC cluster with explicit node addresses Chris Packham
2019-07-26 13:31 ` Jon Maloy
2019-07-28 21:04   ` Chris Packham
2019-08-02  5:11     ` Chris Packham
2019-08-04 21:53       ` Jon Maloy
2019-08-04 23:04         ` Chris Packham
2019-08-07  2:55           ` Jon Maloy
2019-08-07  3:45             ` Chris Packham
