* [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: Richard Hartmann @ 2010-03-26 15:50 UTC (permalink / raw)
  To: linux-kernel, netdev, linux-ppp

Hi all,

as you may be aware, it is recommended to switch off fragmentation when
doing PPP multi-link. If you don't, you will see packet loss and massive
spikes in round-trip times; an increase of 1.5 seconds(!) is what we
usually see.

Every Cisco CPE offers an option to switch off fragmentation for
multi-link, and other manufacturers likely offer one as well.
We implemented a really ugly hack that allows us to do the same with
the Linux kernel. I can confirm that it gets rid of the problem 100%
of the time.

We are fully aware that this code is nowhere near ready for inclusion
into the kernel. What we hope to achieve is that someone with the skills
to do this properly will introduce an option to turn off fragmentation
on PPP multi-link, or do away with it completely.

Some stats for a run of 4 hours each:

No patch:
    129 lost fragments, 575803 reordered
    127/3960 discarded fragments/bytes, 0 lost received

Patch:
    0 lost fragments, 0 reordered
    0/0 discarded fragments/bytes, 0 lost received

Unfortunately, I don't have paste-able stats for ping times available
at the moment.


Any and all feedback on this is appreciated,
Richard

PS: Our image deployed in the field uses 2.6.32, which is why we were
forced to develop for and test against 2.6.32 instead of linux-next;
sorry for that.


--- /usr/src/linux-2.6.32.3/drivers/net/ppp_generic.c.orig	2010-03-25
16:56:05.000000000 +0100
+++ /usr/src/linux-2.6.32.3/drivers/net/ppp_generic.c	2010-03-26
14:42:42.000000000 +0100
@@ -123,6 +123,7 @@
 	struct net_device *dev;		/* network interface device a4 */
 	int		closing;	/* is device closing down? a8 */
 #ifdef CONFIG_PPP_MULTILINK
+	int		rrsched;	/* round robin scheduler for packet distribution */
 	int		nxchan;		/* next channel to send something on */
 	u32		nxseq;		/* next sequence number to send */
 	int		mrru;		/* MP: max reconst. receive unit */
@@ -1261,6 +1262,7 @@
 	struct list_head *list;
 	struct channel *pch;
 	struct sk_buff *skb = ppp->xmit_pending;
+	int i;

 	if (!skb)
 		return;
@@ -1292,10 +1294,40 @@
 	}

 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
-		return;
+	ppp->rrsched++;
+//    printk(KERN_ERR "ppp: multi new packet, rrsched = %d\n", ppp->rrsched);
+
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+//    	printk(KERN_ERR "ppp: channel %d ... \n", i);
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+//    		printk(KERN_ERR "use channel %d\n", i);
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+//				++ppp->nxseq;
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+//	printk(KERN_ERR "keep in queue\n");
+	return;
+
+
+//	/* Multilink: fragment the packet over as many links
+//	   as can take the packet at the moment. */
+//	if (!ppp_mp_explode(ppp, skb))
+//		return;
 #endif /* CONFIG_PPP_MULTILINK */

 	ppp->xmit_pending = NULL;


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: Alan Cox @ 2010-03-26 15:58 UTC (permalink / raw)
  To: Richard Hartmann; +Cc: linux-kernel, netdev, linux-ppp

> We are fully aware that this code is not nearly up for inclusion into
> the kernel. What we hope to achieve is that someone with the skills to
> do this properly will introduce an option to turn off fragmentation on
> PPP multi-link or to just do away with it completely.

You should be able to manage that I'm sure:

The main thing will be to take this chunk, strip out the // stuff and
then make it a function of its own so you've got clean code in the main
path of the form

#ifdef CONFIG_PPP_MULTILINK
	if (ml_explode) {
		if (!ppp_mp_explode(...)
	..
	} else {
		ppp_mp_roundrobin(...)
	}

and your bits as a new routine. You can then do this

static int ml_explode = 1;
module_param(ml_explode, int, 0600);
MODULE_PARM_DESC(ml_expode, "Set this to zero to disabling multilink \
	fragmentation when talking to cisco devices");

which will let you load the module with the option ml_explode = 0 if you
want that property.

Making it runtime per-link selectable would be nicer, but that's a bit
more work.

Alan



* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: Joe Perches @ 2010-03-26 16:33 UTC (permalink / raw)
  To: Alan Cox; +Cc: Richard Hartmann, linux-kernel, netdev, linux-ppp

On Fri, 2010-03-26 at 16:02 +0000, Alan Cox wrote:
> > We are fully aware that this code is not nearly up for inclusion into
> > the kernel. What we hope to achieve is that someone with the skills to
> > do this properly will introduce an option to turn off fragmentation on
> > PPP multi-link or to just do away with it completely.
> 
> You should be able to manage that I'm sure:
[]
> MODULE_PARM_DESC(ml_expode, "Set this to zero to disabling multilink \
> 	fragmentation when talking to cisco devices");

trivial:

It's better to use something like:

MODULE_PARM_DESC(ml_expode, "Set this to zero to disable multilink "
			    "fragmentation when talking to cisco devices");

so that odd spacing from the continuation line isn't embedded in the
description.



* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: Richard Hartmann @ 2010-03-26 16:39 UTC (permalink / raw)
  To: Alan Cox; +Cc: linux-kernel, netdev, linux-ppp

On Fri, Mar 26, 2010 at 17:02, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:

> You should be able to manage that I'm sure:

Hopefully. We will see :)


> MODULE_PARM_DESC(ml_expode, "Set this to zero to disabling multilink \
>        fragmentation when talking to cisco devices");

To be exact, this is not a Linux vs. Cisco issue, but a Linux vs. world
issue.

At least the last-level support of Telefonica and QSC was definite and
adamant about not enabling fragmentation on _any_ PPP multi-link, ever.

Also, I am not sure if it would not be better to default to no
fragmentation and enable it optionally. I am aware that changing default
behaviour is always a bit of a problem but to the best of my knowledge
enabling fragmentation is a bug in any and all real-world applications.


Richard


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: Alexander E. Patrakov @ 2010-03-26 16:59 UTC (permalink / raw)
  To: Alan Cox; +Cc: Richard Hartmann, linux-kernel, netdev, linux-ppp

26.03.2010 21:02, Alan Cox wrote:

> You can then do this
>
> static int ml_explode = 1;
> module_param(ml_explode, int, 0600);
> MODULE_PARM_DESC(ml_expode, "Set this to zero to disabling multilink \
> 	fragmentation when talking to cisco devices");
>
> which will let you load the module with the option ml_explode = 0 if you
> want that property.
>
> Making it runtime per link selectable would be nicer but thats a bit more
> work.

Doesn't it work already via echoing values to 
/sys/module/ppp/generic/parameters/ml_explode in the above code?

-- 
Alexander E. Patrakov


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: David Miller @ 2010-03-26 16:59 UTC (permalink / raw)
  To: richih.mailinglist; +Cc: alan, linux-kernel, netdev, linux-ppp

From: Richard Hartmann <richih.mailinglist@gmail.com>
Date: Fri, 26 Mar 2010 17:39:27 +0100

> At least the last-level support of Telefonica and QSC was definite
> and adamant about not enabling fragmentation on _any_ PPP
> multi-link, ever.

I'm sure we can find just as many counter examples.

I doubt we'll ever be able to change the default here,
sorry.


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: James Carlson @ 2010-03-26 17:04 UTC (permalink / raw)
  To: Richard Hartmann; +Cc: Alan Cox, linux-kernel, netdev, linux-ppp

Richard Hartmann wrote:
> Also, I am not sure if it would not be better to default to no
> fragmentation and enable it optionally. I am aware that changing default
> behaviour is always a bit of a problem but to the best of my knowledge
> enabling fragmentation is a bug in any and all real-world applications.

It worked well and was enabled by default on all the Bay Networks
equipment I used ~15 years ago.  And I know for certain that we tested
with other gear (Ascend and Clam, probably) that did it right.

If it works with the equipment you're using, it's a useful feature in
that it can balance out the latencies among the links, resulting in much
lower overall latency observed by higher layers -- especially so on
lower-speed links where MP is more likely to be used.  Without it,
you're left either waiting for the one slow link choking on a big packet
to catch up, or (worse) disabling the sequence headers altogether,
resulting in reordering unless you're really "clever."

It's a darned shame that lame implementations would force a change in
the default ...

-- 
James Carlson         42.703N 71.076W         <carlsonj@workingcode.com>


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: Alan Cox @ 2010-03-26 17:04 UTC (permalink / raw)
  To: Alexander E. Patrakov; +Cc: Richard Hartmann, linux-kernel, netdev, linux-ppp

On Fri, 26 Mar 2010 22:00:23 +0500
"Alexander E. Patrakov" <patrakov@gmail.com> wrote:

> 26.03.2010 21:02, Alan Cox wrote:
> 
> > You can then do this
> >
> > static int ml_explode = 1;
> > module_param(ml_explode, int, 0600);
> > MODULE_PARM_DESC(ml_expode, "Set this to zero to disabling multilink \
> > 	fragmentation when talking to cisco devices");
> >
> > which will let you load the module with the option ml_explode = 0 if you
> > want that property.
> >
> > Making it runtime per link selectable would be nicer but thats a bit more
> > work.
> 
> Doesn't it work already via echoing values to 
> /sys/module/ppp/generic/parameters/ml_explode in the above code?

That's runtime (which is why I set 0600 permissions in the example), but
not per link.

Alan


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: David Miller @ 2010-03-26 17:04 UTC (permalink / raw)
  To: richih.mailinglist; +Cc: alan, linux-kernel, netdev, linux-ppp


BTW, it seems your email client very thoroughly corrupted your
original patch submission.

This is a common problem with gmail, please read:

	linux/Documentation/email-clients.txt

so that you can learn how to adjust your email client settings such
that your outgoing patches are usable by us.

Thanks.


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
From: Richard Hartmann @ 2010-03-31  9:01 UTC (permalink / raw)
  To: linux-kernel, netdev, linux-ppp

[-- Attachment #1: Type: text/plain, Size: 6366 bytes --]

Hi all,

this is our attempt at a cleaner version. It is still far from perfect,
and packets sometimes seem to gain one or two bytes in size, which means
you will run into "normal" fragmentation if you set your MTU to the
possible maximum (as you are then over said max). But it does what it
should, can be changed at run time, and does not panic the kernel.

Feedback appreciated, code even more so.


Thanks,
Richard

PS: The main patch is inline, both the main and the debug patch are
attached.


--- drivers/net/ppp_generic.c.orig      2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c   2010-03-30 19:56:15.000000000 +0200
@@ -129,6 +129,7 @@
        u32             nextseq;        /* MP: seq no of next packet */
        u32             minseq;         /* MP: min of most recent seqnos */
        struct sk_buff_head mrq;        /* MP: receive reconstruction queue */
+       int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
        struct sock_filter *pass_filter;        /* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B      0x80            /* this fragment begins a packet */
 #define E      0x40            /* this fragment ends a packet */

+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1292,10 +1305,18 @@
 	}

 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+	}
+	else {
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */

 	ppp->xmit_pending = NULL;
@@ -1304,6 +1325,38 @@

 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1405,21 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+
+	if(ppp_ml_noexplode) {
+	}
+	else {
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}

 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1432,7 @@
 	totlen = len;
 	nbigger	= len %	nfree;

+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1432,33 +1494,40 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+
+		if(ppp_ml_noexplode) {
+			nfree--;
+		}
+		else {
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
 		}

 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.

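As an aside for reviewers: the speed-weighted fragment size computed in the hunk above can be sanity-checked in userspace with plain integer arithmetic. A minimal sketch (the link speeds, payload size and multilink header length below are invented for illustration, not taken from the patch):

```python
def ml_flen(totfree, nzero, totlen, hdrlen, totspeed, speed):
    """Integer math mirroring the flen formula in the hunk above:
    fragments are sized roughly in proportion to channel speed."""
    return ((totfree - nzero) * (totlen + hdrlen * totfree)) \
        // ((totspeed * totfree) // speed) - hdrlen

# Two free channels at speeds 1000 and 2000 (arbitrary units),
# a 1400-byte payload and a 6-byte header (all invented values):
slow = ml_flen(totfree=2, nzero=0, totlen=1400, hdrlen=6,
               totspeed=3000, speed=1000)
fast = ml_flen(totfree=2, nzero=0, totlen=1400, hdrlen=6,
               totspeed=3000, speed=2000)
print(slow, fast)  # 464 935 -- the faster link gets ~2x the bytes
```

The two fragment sizes sum to just under totlen; the nbigger bookkeeping in the real code hands out the remaining bytes.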
[-- Attachment #2: ppp_ml_noexplode.patch --]
[-- Type: text/x-diff, Size: 5747 bytes --]

--- drivers/net/ppp_generic.c.orig	2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c	2010-03-30 19:56:15.000000000 +0200
@@ -129,6 +129,7 @@
 	u32		nextseq;	/* MP: seq no of next packet */
 	u32		minseq;		/* MP: min of most recent seqnos */
 	struct sk_buff_head mrq;	/* MP: receive reconstruction queue */
+	int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
 	struct sock_filter *pass_filter;	/* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B	0x80		/* this fragment begins a packet */
 #define E	0x40		/* this fragment ends a packet */
 
+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1292,10 +1305,18 @@
 	}
 
 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+	}
+	else {
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */
 
 	ppp->xmit_pending = NULL;
@@ -1304,6 +1325,38 @@
 
 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1405,21 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+
+	if(ppp_ml_noexplode) {
+	}
+	else {
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}
 
 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1432,7 @@
 	totlen = len;
 	nbigger	= len %	nfree;
 
+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1432,33 +1494,40 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+
+		if(ppp_ml_noexplode) {
+			nfree--;
+		}
+		else {
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
 		}
 
 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.

[-- Attachment #3: ppp_ml_noexplode-with_debug.patch --]
[-- Type: text/x-diff, Size: 7913 bytes --]

--- drivers/net/ppp_generic.c.orig	2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c.patched_with_debug	2010-03-30 20:03:31.000000000 +0200
@@ -129,6 +129,7 @@
 	u32		nextseq;	/* MP: seq no of next packet */
 	u32		minseq;		/* MP: min of most recent seqnos */
 	struct sk_buff_head mrq;	/* MP: receive reconstruction queue */
+	int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
 	struct sock_filter *pass_filter;	/* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B	0x80		/* this fragment begins a packet */
 #define E	0x40		/* this fragment ends a packet */
 
+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1273,6 +1286,7 @@
 		return;
 	}
 
+printk(KERN_ERR "send packet\n");
 	if ((ppp->flags & SC_MULTILINK) == 0) {
 		/* not doing multilink: send it down the first channel */
 		list = list->next;
@@ -1292,11 +1306,24 @@
 	}
 
 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+printk(KERN_ERR "BEGIN SEND RR\n");
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+printk(KERN_ERR "END SEND RR\n");
+	}
+	else {
+printk(KERN_ERR "BEGIN SEND MULTILINK\n");
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+printk(KERN_ERR "END SEND MULTILINK\n");
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */
+printk(KERN_ERR "END SEND DROP PACKET\n");
 
 	ppp->xmit_pending = NULL;
 	kfree_skb(skb);
@@ -1304,6 +1331,41 @@
 
 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+printk(KERN_ERR "  RR counter=%d, len=%d, devmtu=%d\n", ppp->rrsched, skb->len, ppp->dev->mtu);
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+printk(KERN_ERR "  RR send via %d, chmtu=%d\n", i, pch->chan->mtu);
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+printk(KERN_ERR "  RR dropped at %d\n", i);
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1414,25 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+printk(KERN_ERR "  ML nfree=%d, navail=%d, nzero=%d, totfree=%d, totspeed=%d\n", nfree,navail,nzero,totfree,totspeed);
+
+	if(ppp_ml_noexplode) {
+printk(KERN_ERR "  ML no explode A\n");
+	}
+	else {
+printk(KERN_ERR "  ML explode A\n");
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+printk(KERN_ERR "  ML wait A\n");
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}
 
 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1445,8 @@
 	totlen = len;
 	nbigger	= len %	nfree;
 
+printk(KERN_ERR "  ML len=%d, totlen=%d, nbigger=%d\n", len, totlen, nbigger);
+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1381,10 +1457,12 @@
 			break;
 		}
 	}
+printk(KERN_ERR "  ML skip to channel=%d\n", i);
 
 	/* create a	fragment for each channel */
 	bits = B;
 	while (len	> 0) {
+printk(KERN_ERR "  ML while len=%d, i=%d\n", len, i);
 		list = list->next;
 		if (list ==	&ppp->channels)	{
 			i =	0;
@@ -1432,45 +1510,58 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+printk(KERN_ERR "  ML fragmentlen=%d\n", flen);
+
+		if(ppp_ml_noexplode) {
+printk(KERN_ERR "  ML no explode B \n");
+			nfree--;
+		}
+		else {
+printk(KERN_ERR "  ML explode B \n");
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
+printk(KERN_ERR "  ML new fragmentlen=%d\n", flen);
 		}
 
 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.
 		 *Skip the channel in this case
 		 */
 		if (flen <=	0) {
+printk(KERN_ERR "  ML fragmentlen is zero\n");
 			pch->avail = 2;
 			spin_unlock_bh(&pch->downl);
 			continue;
 		}
 
 		mtu	= pch->chan->mtu - hdrlen;
+printk(KERN_ERR "  ML mtu=%d (chan-mtu=%d)\n", mtu, pch->chan->mtu);
 		if (mtu	< 4)
 			mtu	= 4;
 		if (flen > mtu)
@@ -1502,6 +1593,7 @@
 		if (!skb_queue_empty(&pch->file.xq)	||
 			!chan->ops->start_xmit(chan, frag))
 			skb_queue_tail(&pch->file.xq, frag);
+printk(KERN_ERR "  ML sent packet with seq: %d\n", ppp->nxseq);
 		pch->had_frag =	1;
 		p += flen;
 		len	-= flen;
@@ -1510,6 +1602,7 @@
 		spin_unlock_bh(&pch->downl);
 	}
 	ppp->nxchan	= i;
+printk(KERN_ERR "  ML END SEND\n");
 
 	return 1;
 


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
@ 2010-03-31  9:01   ` Richard Hartmann
  0 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-03-31  9:01 UTC (permalink / raw)
  To: linux-kernel, netdev, linux-ppp

[-- Attachment #1: Type: text/plain, Size: 6366 bytes --]

Hi all,

this is our attempt at a cleaner version. It is still far from perfect:
packets sometimes gain one to two bytes in size, which means you will
run into "normal" fragmentation if you set your MTU to the possible
maximum (as you are then over said maximum). Still, it does what it
should, can be changed at run-time, and no longer panics the kernel.

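To make the MTU point concrete with invented numbers: if the device MTU is set to the link maximum, those one to two extra bytes push the frame over it, e.g.:

```python
# Invented numbers: a link whose maximum frame payload is 1492 bytes.
link_max = 1492
dev_mtu = 1492            # MTU set "to the possible maximum"
growth = 2                # the 1-2 bytes of growth observed here

frame = dev_mtu + growth  # packet as it actually hits the link
print(frame > link_max)   # True -> falls back to normal fragmentation
```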
Feedback appreciated, code even more so.


Thanks,
Richard

PS: The main patch is inline, both the main and the debug patch are
attached.

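For readers skimming the diff: the new ppp_mp_roundrobin() boils down to a counter taken modulo the channel count. A minimal userspace sketch of that selection rule (the channel names are invented; rrsched and n_channels correspond to the fields the patch uses):

```python
def pick_channel(rrsched, channels):
    """Round-robin pick as in the patch: the incremented rrsched
    counter, modulo the channel count, selects the next channel."""
    return channels[rrsched % len(channels)]

# Three bonded links (invented names): successive packets rotate.
chans = ["chan0", "chan1", "chan2"]
order = [pick_channel(seq, chans) for seq in range(1, 7)]
print(order)  # ['chan1', 'chan2', 'chan0', 'chan1', 'chan2', 'chan0']
```

Because whole packets rotate across links instead of being split into fragments, there is nothing to reorder or reassemble, which is what zeroes the lost/reordered-fragment counters in the stats above.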

--- drivers/net/ppp_generic.c.orig      2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c   2010-03-30 19:56:15.000000000 +0200
@@ -129,6 +129,7 @@
        u32             nextseq;        /* MP: seq no of next packet */
        u32             minseq;         /* MP: min of most recent seqnos */
        struct sk_buff_head mrq;        /* MP: receive reconstruction queue */
+       int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
        struct sock_filter *pass_filter;        /* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B      0x80            /* this fragment begins a packet */
 #define E      0x40            /* this fragment ends a packet */

+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1292,10 +1305,18 @@
 	}

 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+	}
+	else {
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */

 	ppp->xmit_pending = NULL;
@@ -1304,6 +1325,38 @@

 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1405,21 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+
+	if(ppp_ml_noexplode) {
+	}
+	else {
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}

 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1432,7 @@
 	totlen = len;
 	nbigger	= len %	nfree;

+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1432,33 +1494,40 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+
+		if(ppp_ml_noexplode) {
+			nfree--;
+		}
+		else {
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
 		}

 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.

[-- Attachment #2: ppp_ml_noexplode.patch --]
[-- Type: text/x-diff, Size: 5747 bytes --]

--- drivers/net/ppp_generic.c.orig	2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c	2010-03-30 19:56:15.000000000 +0200
@@ -129,6 +129,7 @@
 	u32		nextseq;	/* MP: seq no of next packet */
 	u32		minseq;		/* MP: min of most recent seqnos */
 	struct sk_buff_head mrq;	/* MP: receive reconstruction queue */
+	int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
 	struct sock_filter *pass_filter;	/* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B	0x80		/* this fragment begins a packet */
 #define E	0x40		/* this fragment ends a packet */
 
+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1292,10 +1305,18 @@
 	}
 
 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+	}
+	else {
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */
 
 	ppp->xmit_pending = NULL;
@@ -1304,6 +1325,38 @@
 
 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1405,21 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+
+	if(ppp_ml_noexplode) {
+	}
+	else {
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}
 
 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1432,7 @@
 	totlen = len;
 	nbigger	= len %	nfree;
 
+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1432,33 +1494,40 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+
+		if(ppp_ml_noexplode) {
+			nfree--;
+		}
+		else {
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
 		}
 
 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.

[-- Attachment #3: ppp_ml_noexplode-with_debug.patch --]
[-- Type: text/x-diff, Size: 7913 bytes --]

--- drivers/net/ppp_generic.c.orig	2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c.patched_with_debug	2010-03-30 20:03:31.000000000 +0200
@@ -129,6 +129,7 @@
 	u32		nextseq;	/* MP: seq no of next packet */
 	u32		minseq;		/* MP: min of most recent seqnos */
 	struct sk_buff_head mrq;	/* MP: receive reconstruction queue */
+	int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
 	struct sock_filter *pass_filter;	/* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B	0x80		/* this fragment begins a packet */
 #define E	0x40		/* this fragment ends a packet */
 
+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1273,6 +1286,7 @@
 		return;
 	}
 
+printk(KERN_ERR "send packet\n");
 	if ((ppp->flags & SC_MULTILINK) == 0) {
 		/* not doing multilink: send it down the first channel */
 		list = list->next;
@@ -1292,11 +1306,24 @@
 	}
 
 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+printk(KERN_ERR "BEGIN SEND RR\n");
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+printk(KERN_ERR "END SEND RR\n");
+	}
+	else {
+printk(KERN_ERR "BEGIN SEND MULTILINK\n");
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+printk(KERN_ERR "END SEND MULTILINK\n");
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */
+printk(KERN_ERR "END SEND DROP PACKET\n");
 
 	ppp->xmit_pending = NULL;
 	kfree_skb(skb);
@@ -1304,6 +1331,41 @@
 
 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+printk(KERN_ERR "  RR counter=%d, len=%d, devmtu=%d\n", ppp->rrsched, skb->len, ppp->dev->mtu);
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+printk(KERN_ERR "  RR send via %d, chmtu=%d\n", i, pch->chan->mtu);
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+printk(KERN_ERR "  RR dropped at %d\n", i);
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1414,25 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+printk(KERN_ERR "  ML nfree=%d, navail=%d, nzero=%d, totfree=%d, totspeed=%d\n", nfree,navail,nzero,totfree,totspeed);
+
+	if(ppp_ml_noexplode) {
+printk(KERN_ERR "  ML no explode A\n");
+	}
+	else {
+printk(KERN_ERR "  ML explode A\n");
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+printk(KERN_ERR "  ML wait A\n");
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}
 
 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1445,8 @@
 	totlen = len;
 	nbigger	= len %	nfree;
 
+printk(KERN_ERR "  ML len=%d, totlen=%d, nbigger=%d\n", len, totlen, nbigger);
+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1381,10 +1457,12 @@
 			break;
 		}
 	}
+printk(KERN_ERR "  ML skip to channel=%d\n", i);
 
 	/* create a	fragment for each channel */
 	bits = B;
 	while (len	> 0) {
+printk(KERN_ERR "  ML while len=%d, i=%d\n", len, i);
 		list = list->next;
 		if (list ==	&ppp->channels)	{
 			i =	0;
@@ -1432,45 +1510,58 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+printk(KERN_ERR "  ML fragmentlen=%d\n", flen);
+
+		if(ppp_ml_noexplode) {
+printk(KERN_ERR "  ML no explode B \n");
+			nfree--;
+		}
+		else {
+printk(KERN_ERR "  ML explode B \n");
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
+printk(KERN_ERR "  ML new fragmentlen=%d\n", flen);
 		}
 
 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.
 		 *Skip the channel in this case
 		 */
 		if (flen <=	0) {
+printk(KERN_ERR "  ML fragmentlen is zero\n");
 			pch->avail = 2;
 			spin_unlock_bh(&pch->downl);
 			continue;
 		}
 
 		mtu	= pch->chan->mtu - hdrlen;
+printk(KERN_ERR "  ML mtu=%d (chan-mtu=%d)\n", mtu, pch->chan->mtu);
 		if (mtu	< 4)
 			mtu	= 4;
 		if (flen > mtu)
@@ -1502,6 +1593,7 @@
 		if (!skb_queue_empty(&pch->file.xq)	||
 			!chan->ops->start_xmit(chan, frag))
 			skb_queue_tail(&pch->file.xq, frag);
+printk(KERN_ERR "  ML sent packet with seq: %d\n", ppp->nxseq);
 		pch->had_frag =	1;
 		p += flen;
 		len	-= flen;
@@ -1510,6 +1602,7 @@
 		spin_unlock_bh(&pch->downl);
 	}
 	ppp->nxchan	= i;
+printk(KERN_ERR "  ML END SEND\n");
 
 	return 1;
 


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
@ 2010-03-31  9:01   ` Richard Hartmann
  0 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-03-31  9:01 UTC (permalink / raw)
  To: linux-kernel, netdev, linux-ppp

[-- Attachment #1: Type: text/plain, Size: 6366 bytes --]

Hi all,

this is our attempt at a cleaner version. It is still far from perfect:
packets sometimes seem to gain one to two bytes in size, which means
that you will run into "normal" fragmentation if you set your MTU to the
possible maximum (as you are then over said maximum). Still, it does
what it should, can be changed at run-time, and does not panic the
kernel.

Feedback appreciated, code even more so.


Thanks,
Richard

PS: The main patch is inline, both the main and the debug patch are
attached.


--- drivers/net/ppp_generic.c.orig      2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c   2010-03-30 19:56:15.000000000 +0200
@@ -129,6 +129,7 @@
        u32             nextseq;        /* MP: seq no of next packet */
        u32             minseq;         /* MP: min of most recent seqnos */
        struct sk_buff_head mrq;        /* MP: receive reconstruction queue */
+       int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
        struct sock_filter *pass_filter;        /* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B      0x80            /* this fragment begins a packet */
 #define E      0x40            /* this fragment ends a packet */

+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1292,10 +1305,18 @@
 	}

 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+	}
+	else {
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */

 	ppp->xmit_pending = NULL;
@@ -1304,6 +1325,38 @@

 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1405,21 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+
+	if(ppp_ml_noexplode) {
+	}
+	else {
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}

 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1432,7 @@
 	totlen = len;
 	nbigger	= len %	nfree;

+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1432,33 +1494,40 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+
+		if(ppp_ml_noexplode) {
+			nfree--;
+		}
+		else {
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
 		}

 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.

[-- Attachment #2: ppp_ml_noexplode.patch --]
[-- Type: text/x-diff, Size: 5747 bytes --]

--- drivers/net/ppp_generic.c.orig	2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c	2010-03-30 19:56:15.000000000 +0200
@@ -129,6 +129,7 @@
 	u32		nextseq;	/* MP: seq no of next packet */
 	u32		minseq;		/* MP: min of most recent seqnos */
 	struct sk_buff_head mrq;	/* MP: receive reconstruction queue */
+	int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
 	struct sock_filter *pass_filter;	/* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B	0x80		/* this fragment begins a packet */
 #define E	0x40		/* this fragment ends a packet */
 
+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1292,10 +1305,18 @@
 	}
 
 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+	}
+	else {
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */
 
 	ppp->xmit_pending = NULL;
@@ -1304,6 +1325,38 @@
 
 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1405,21 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+
+	if(ppp_ml_noexplode) {
+	}
+	else {
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}
 
 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1432,7 @@
 	totlen = len;
 	nbigger	= len %	nfree;
 
+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1432,33 +1494,40 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+
+		if(ppp_ml_noexplode) {
+			nfree--;
+		}
+		else {
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
 		}
 
 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.

[-- Attachment #3: ppp_ml_noexplode-with_debug.patch --]
[-- Type: text/x-diff, Size: 7913 bytes --]

--- drivers/net/ppp_generic.c.orig	2010-03-25 16:56:05.000000000 +0100
+++ drivers/net/ppp_generic.c.patched_with_debug	2010-03-30 20:03:31.000000000 +0200
@@ -129,6 +129,7 @@
 	u32		nextseq;	/* MP: seq no of next packet */
 	u32		minseq;		/* MP: min of most recent seqnos */
 	struct sk_buff_head mrq;	/* MP: receive reconstruction queue */
+	int     rrsched;    /* round robin scheduler for packet distribution */
 #endif /* CONFIG_PPP_MULTILINK */
 #ifdef CONFIG_PPP_FILTER
 	struct sock_filter *pass_filter;	/* filter for packets to pass */
@@ -227,6 +228,17 @@
 #define B	0x80		/* this fragment begins a packet */
 #define E	0x40		/* this fragment ends a packet */
 
+#ifdef CONFIG_PPP_MULTILINK
+/* alternate fragmentation algorithm and multilink behaviour options added by uli.staerk@globalways.net */
+static int ppp_ml_noexplode = 0;
+module_param(ppp_ml_noexplode, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noexplode, "Set this to any other values than zero to avoid fragmentation over connected channels");
+
+static int ppp_ml_noheader = 0;
+module_param(ppp_ml_noheader, int, 0600);
+MODULE_PARM_DESC(ppp_ml_noheader, "Set this to any other value than zero to remove the ppp-multilink protocol header (pppoes.protocol not 0x003d) which enables fragmentation and reordering");
+#endif /* CONFIG_PPP_MULTILINK */
+
 /* Compare multilink sequence numbers (assumed to be 32 bits wide) */
 #define seq_before(a, b)	((s32)((a) - (b)) < 0)
 #define seq_after(a, b)		((s32)((a) - (b)) > 0)
@@ -250,6 +262,7 @@
 static void ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb);
 static struct sk_buff *ppp_mp_reconstruct(struct ppp *ppp);
 static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb);
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb);
 #endif /* CONFIG_PPP_MULTILINK */
 static int ppp_set_compress(struct ppp *ppp, unsigned long arg);
 static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
@@ -1273,6 +1286,7 @@
 		return;
 	}
 
+printk(KERN_ERR "send packet\n");
 	if ((ppp->flags & SC_MULTILINK) == 0) {
 		/* not doing multilink: send it down the first channel */
 		list = list->next;
@@ -1292,11 +1306,24 @@
 	}
 
 #ifdef CONFIG_PPP_MULTILINK
-	/* Multilink: fragment the packet over as many links
-	   as can take the packet at the moment. */
-	if (!ppp_mp_explode(ppp, skb))
+	/* send packet without multilink header */
+	if(ppp_ml_noheader) {
+printk(KERN_ERR "BEGIN SEND RR\n");
+		ppp_mp_roundrobin(ppp, skb);
 		return;
+printk(KERN_ERR "END SEND RR\n");
+	}
+	else {
+printk(KERN_ERR "BEGIN SEND MULTILINK\n");
+		/* Multilink: fragment the packet over as many links
+		   as can take the packet at the moment. */
+		if (!ppp_mp_explode(ppp, skb)) {
+printk(KERN_ERR "END SEND MULTILINK\n");
+			return;
+		}
+	}
 #endif /* CONFIG_PPP_MULTILINK */
+printk(KERN_ERR "END SEND DROP PACKET\n");
 
 	ppp->xmit_pending = NULL;
 	kfree_skb(skb);
@@ -1304,6 +1331,41 @@
 
 #ifdef CONFIG_PPP_MULTILINK
 /*
+ * Send packet through the next channel (round robin)
+ */
+static void ppp_mp_roundrobin(struct ppp *ppp, struct sk_buff *skb)
+{
+	int i;
+	struct channel *pch;
+
+	ppp->rrsched++;
+printk(KERN_ERR "  RR counter=%d, len=%d, devmtu=%d\n", ppp->rrsched, skb->len, ppp->dev->mtu);
+	i = 0;
+	list_for_each_entry(pch, &ppp->channels, clist)      {
+		if(pch->chan == NULL) continue;
+
+		if (ppp->rrsched % ppp->n_channels == i) {
+printk(KERN_ERR "  RR send via %d, chmtu=%d\n", i, pch->chan->mtu);
+			spin_lock_bh(&pch->downl);
+			if (pch->chan) {
+				if (pch->chan->ops->start_xmit(pch->chan, skb)) {
+					ppp->xmit_pending = NULL;
+				}
+			} else {
+printk(KERN_ERR "  RR dropped at %d\n", i);
+				/* channel got unregistered */
+				kfree_skb(skb);
+				ppp->xmit_pending = NULL;
+			}
+			spin_unlock_bh(&pch->downl);
+			return;
+		}
+		i++;
+	}
+	return;
+}
+
+/*
  * Divide a packet to be transmitted into fragments and
  * send them out the individual links.
  */
@@ -1352,13 +1414,25 @@
 		}
 		++i;
 	}
-	/*
-	 * Don't start sending this	packet unless at least half	of
-	 * the channels	are	free.  This	gives much better TCP
-	 * performance if we have a	lot	of channels.
-	 */
-	if (nfree == 0 || nfree	< navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending	*/
+
+printk(KERN_ERR "  ML nfree=%d, navail=%d, nzero=%d, totfree=%d, totspeed=%d\n", nfree,navail,nzero,totfree,totspeed);
+
+	if(ppp_ml_noexplode) {
+printk(KERN_ERR "  ML no explode A\n");
+	}
+	else {
+printk(KERN_ERR "  ML explode A\n");
+		/*
+		 * Don't start sending this	packet unless at least half	of
+		 * the channels	are	free.  This	gives much better TCP
+		 * performance if we have a	lot	of channels.
+		 */
+		if (nfree == 0 || nfree	< navail / 2) {
+printk(KERN_ERR "  ML wait A\n");
+			return 0; /* can't take now, leave it in xmit_pending	*/
+
+		}
+	}
 
 	/* Do protocol field compression (XXX this should be optional) */
 	p =	skb->data;
@@ -1371,6 +1445,8 @@
 	totlen = len;
 	nbigger	= len %	nfree;
 
+printk(KERN_ERR "  ML len=%d, totlen=%d, nbigger=%d\n", len, totlen, nbigger);
+
 	/* skip	to the channel after the one we	last used
 	   and start at	that one */
 	list = &ppp->channels;
@@ -1381,10 +1457,12 @@
 			break;
 		}
 	}
+printk(KERN_ERR "  ML skip to channel=%d\n", i);
 
 	/* create a	fragment for each channel */
 	bits = B;
 	while (len	> 0) {
+printk(KERN_ERR "  ML while len=%d, i=%d\n", len, i);
 		list = list->next;
 		if (list ==	&ppp->channels)	{
 			i =	0;
@@ -1432,45 +1510,58 @@
 		*of the channel we are going to transmit on
 		*/
 		flen = len;
-		if (nfree > 0) {
-			if (pch->speed == 0) {
-				flen = totlen/nfree	;
-				if (nbigger > 0) {
-					flen++;
-					nbigger--;
-				}
-			} else {
-				flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
-					((totspeed*totfree)/pch->speed)) - hdrlen;
-				if (nbigger > 0) {
-					flen += ((totfree - nzero)*pch->speed)/totspeed;
-					nbigger -= ((totfree - nzero)*pch->speed)/
-							totspeed;
+printk(KERN_ERR "  ML fragmentlen=%d\n", flen);
+
+		if(ppp_ml_noexplode) {
+printk(KERN_ERR "  ML no explode B \n");
+			nfree--;
+		}
+		else {
+printk(KERN_ERR "  ML explode B \n");
+			if (nfree > 0) {
+				if (pch->speed == 0) {
+					flen = totlen/nfree	;
+					if (nbigger > 0) {
+						flen++;
+						nbigger--;
+					}
+				} else {
+					flen = (((totfree - nzero)*(totlen + hdrlen*totfree)) /
+						((totspeed*totfree)/pch->speed)) - hdrlen;
+					if (nbigger > 0) {
+						flen += ((totfree - nzero)*pch->speed)/totspeed;
+						nbigger -= ((totfree - nzero)*pch->speed)/
+								totspeed;
+					}
 				}
+				nfree--;
 			}
-			nfree--;
+			/*
+			 *check	if we are on the last channel or
+			 *we exceded the lenght	of the data	to
+			 *fragment
+			 */
+			if ((nfree <= 0) || (flen > len))
+				flen = len;
+
+printk(KERN_ERR "  ML new fragmentlen=%d\n", flen);
 		}
 
 		/*
-		 *check	if we are on the last channel or
-		 *we exceded the lenght	of the data	to
-		 *fragment
-		 */
-		if ((nfree <= 0) || (flen > len))
-			flen = len;
-		/*
 		 *it is not worth to tx on slow channels:
 		 *in that case from the resulting flen according to the
 		 *above formula will be equal or less than zero.
 		 *Skip the channel in this case
 		 */
 		if (flen <=	0) {
+printk(KERN_ERR "  ML fragmentlen is zero\n");
 			pch->avail = 2;
 			spin_unlock_bh(&pch->downl);
 			continue;
 		}
 
 		mtu	= pch->chan->mtu - hdrlen;
+printk(KERN_ERR "  ML mtu=%d (chan-mtu=%d)\n", mtu, pch->chan->mtu);
 		if (mtu	< 4)
 			mtu	= 4;
 		if (flen > mtu)
@@ -1502,6 +1593,7 @@
 		if (!skb_queue_empty(&pch->file.xq)	||
 			!chan->ops->start_xmit(chan, frag))
 			skb_queue_tail(&pch->file.xq, frag);
+printk(KERN_ERR "  ML sent packet with seq: %d\n", ppp->nxseq);
 		pch->had_frag =	1;
 		p += flen;
 		len	-= flen;
@@ -1510,6 +1602,7 @@
 		spin_unlock_bh(&pch->downl);
 	}
 	ppp->nxchan	= i;
+printk(KERN_ERR "  ML END SEND\n");
 
 	return 1;
 


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
  2010-03-26 17:04       ` [Patch] fix packet loss and massive ping spikes with PPP Alan Cox
@ 2010-03-31 10:03         ` Ben McKeegan
  -1 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-03-31 10:03 UTC (permalink / raw)
  To: netdev, linux-ppp
  Cc: Alan Cox, Alexander E. Patrakov, Richard Hartmann, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2775 bytes --]

>>> Making it runtime per link selectable would be nicer but thats a bit more
>>> work.
>> Doesn't it work already via echoing values to 
>> /sys/module/ppp/generic/parameters/ml_explode in the above code?
> 
> Thats runtime (and why I set 0600 in the permissions for the example) but
> not per link.
> 

I needed to do something similar a while back and I took a very 
different approach, which I think is more flexible.   Rather than 
implement a new round-robin scheduler I simply introduced a target 
minimum fragment size into the fragment size calculation, as a per 
bundle parameter that can be configured via a new ioctl.  This modifies 
the algorithm so that it tries to limit the number of fragments such 
that each fragment is at least the minimum size.  If the minimum size is 
greater than the packet size it will not be fragmented at all but will 
instead just get sent down the next available channel.

A pppd plugin generates the ioctl call allowing this to be tweaked per 
connection.  It is more flexible in that you can still have the larger 
packets fragmented if you wish.

We've used a variant of this patch on our ADSL LNS pool for a few years 
now with varying results.  We originally did it to save bandwidth as we 
have a per packet overhead and fragmenting tiny packets such as VoIP 
across a bundle of 4 lines made no sense at all.  We've experimented 
with higher minimum settings up to and above the link MTU, thus 
achieving the equivalent of Richard's patch.

In some cases this has improved performance, in others it makes it worse. 
It depends a lot on the lines and traffic patterns, and it is certainly 
not a change we would wish to have on by default.  Any solution going 
into the mainline kernel would need to be tunable per connection.  One of 
the issues seems to be with poor recovery from packet loss on low 
volume, highly delay sensitive traffic on large bundles of lines.  With 
Linux at both ends you are relying on received sequence numbers to 
detect loss.  When packets are being fragmented across all channels and 
a fragment is lost, the receiving system is able to spot the lost 
fragment fairly quickly.  Once you start sending some multilink frames 
down individual channels, it takes a lot longer for the receiver to 
notice the packet loss on an individual channel.  Until another fragment 
is successfully received on the lossy channel, the fragments of the 
incomplete frame sit in the queue clogging up the other channels (the 
receiver is attempting to preserve the original packet order and is 
still waiting for the lost fragment).

Original patch attached.   This almost certainly needs updating to take 
account of other more recent changes in the multilink algorithm but it may 
provide some inspiration.

Regards,
Ben.


[-- Attachment #2: mppp-min-frag-size.patch --]
[-- Type: text/x-diff, Size: 4255 bytes --]

diff -ubdr linux-2.6.16.16-l2tp/drivers/net/ppp_generic.c linux-2.6.16.16-l2tp-mppp/drivers/net/ppp_generic.c
--- linux-2.6.16.16-l2tp/drivers/net/ppp_generic.c	2006-05-11 02:56:24.000000000 +0100
+++ linux-2.6.16.16-l2tp-mppp/drivers/net/ppp_generic.c	2007-07-03 18:23:35.000000000 +0100
@@ -64,7 +64,7 @@
 
 #define MPHDRLEN	6	/* multilink protocol header length */
 #define MPHDRLEN_SSN	4	/* ditto with short sequence numbers */
-#define MIN_FRAG_SIZE	64
+#define MIN_FRAG_SIZE	256
 
 /*
  * An instance of /dev/ppp can be associated with either a ppp
@@ -120,6 +120,7 @@
 	unsigned long	last_recv;	/* jiffies when last pkt rcvd a0 */
 	struct net_device *dev;		/* network interface device a4 */
 #ifdef CONFIG_PPP_MULTILINK
+        int             minfragsize;    /* minimum size for a fragment */
 	int		nxchan;		/* next channel to send something on */
 	u32		nxseq;		/* next sequence number to send */
 	int		mrru;		/* MP: max reconst. receive unit */
@@ -767,6 +768,15 @@
 		ppp_recv_unlock(ppp);
 		err = 0;
 		break;
+
+	case PPPIOCSMINFRAG:
+	        if (get_user(val, p))
+	                break;
+	        ppp_recv_lock(ppp);
+	        ppp->minfragsize = val < 0 ? 0 : val;
+	        ppp_recv_unlock(ppp);
+	        err = 0;
+	        break;
 #endif /* CONFIG_PPP_MULTILINK */
 
 	default:
@@ -1254,7 +1264,7 @@
 	int len, fragsize;
 	int i, bits, hdrlen, mtu;
 	int flen;
-	int navail, nfree;
+	int navail, nfree, nfrag;
 	int nbigger;
 	unsigned char *p, *q;
 	struct list_head *list;
@@ -1285,7 +1295,7 @@
 	 * the channels are free.  This gives much better TCP
 	 * performance if we have a lot of channels.
 	 */
-	if (nfree == 0 || nfree < navail / 2)
+	if (nfree == 0 || (nfree < navail / 2 && ppp->minfragsize == 0))
 		return 0;	/* can't take now, leave it in xmit_pending */
 
 	/* Do protocol field compression (XXX this should be optional) */
@@ -1302,13 +1312,24 @@
 	 * how small they are (i.e. even 0 length) in order to minimize
 	 * the time that it will take to detect when a channel drops
 	 * a fragment.
+         * However, if ppp->minfragsize > 0 we try to avoid creating
+         * fragments smaller than ppp->minfragsize and thus do not
+         * always use all free channels
 	 */
+	if (ppp->minfragsize > 0) {
+	  nfrag= len / ppp->minfragsize;
+	  if (nfrag < 1)
+	        nfrag = 1;
+	  else if (nfrag > nfree)
+	        nfrag = nfree;
+	} else
+	        nfrag = nfree;
 	fragsize = len;
-	if (nfree > 1)
-		fragsize = DIV_ROUND_UP(fragsize, nfree);
+	if (nfrag > 1)
+	        fragsize = DIV_ROUND_UP(fragsize, nfrag);
 	/* nbigger channels get fragsize bytes, the rest get fragsize-1,
 	   except if nbigger==0, then they all get fragsize. */
-	nbigger = len % nfree;
+	nbigger = len % nfrag;
 
 	/* skip to the channel after the one we last used
 	   and start at that one */
@@ -1323,7 +1344,7 @@
 
 	/* create a fragment for each channel */
 	bits = B;
-	while (nfree > 0 || len > 0) {
+	while (len > 0 || (nfree > 0 && ppp->minfragsize == 0)) {
 		list = list->next;
 		if (list == &ppp->channels) {
 			i = 0;
@@ -1371,7 +1392,7 @@
 			mtu = 4;
 		if (flen > mtu)
 			flen = mtu;
-		if (flen == len && nfree == 0)
+		if (flen == len && (nfree == 0 || ppp->minfragsize != 0))
 			bits |= E;
 		frag = alloc_skb(flen + hdrlen + (flen == 0), GFP_ATOMIC);
 		if (frag == 0)
@@ -2435,6 +2456,7 @@
 	spin_lock_init(&ppp->rlock);
 	spin_lock_init(&ppp->wlock);
 #ifdef CONFIG_PPP_MULTILINK
+	ppp->minfragsize = MIN_FRAG_SIZE;
 	ppp->minseq = -1;
 	skb_queue_head_init(&ppp->mrq);
 #endif /* CONFIG_PPP_MULTILINK */
diff -ubdr linux-2.6.16.16-l2tp/include/linux/if_ppp.h linux-2.6.16.16-l2tp-mppp/include/linux/if_ppp.h
--- linux-2.6.16.16-l2tp/include/linux/if_ppp.h	2006-05-12 13:45:00.000000000 +0100
+++ linux-2.6.16.16-l2tp-mppp/include/linux/if_ppp.h	2007-07-03 18:15:27.000000000 +0100
@@ -162,6 +162,7 @@
 #define PPPIOCATTCHAN	_IOW('t', 56, int)	/* attach to ppp channel */
 #define PPPIOCGCHAN	_IOR('t', 55, int)	/* get ppp channel number */
 #define PPPIOCGL2TPSTATS _IOR('t', 54, struct pppol2tp_ioc_stats)
+#define PPPIOCSMINFRAG  _IOW('t', 53, int)  /* minimum fragment size */
 
 #define SIOCGPPPSTATS   (SIOCDEVPRIVATE + 0)
 #define SIOCGPPPVER     (SIOCDEVPRIVATE + 1)	/* NEVER change this!! */

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
@ 2010-03-31 10:03         ` Ben McKeegan
  0 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-03-31 10:03 UTC (permalink / raw)
  To: netdev, linux-ppp
  Cc: Alan Cox, Alexander E. Patrakov, Richard Hartmann, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2775 bytes --]

>>> Making it runtime per link selectable would be nicer but thats a bit more
>>> work.
>> Doesn't it work already via echoing values to 
>> /sys/module/ppp/generic/parameters/ml_explode in the above code?
> 
> Thats runtime (and why I set 0600 in the permissions for the example) but
> not per link.
> 

I needed to do something similar a while back and I took a very 
different approach, which I think is more flexible.   Rather than 
implement a new round-robin scheduler I simply introduced a target 
minimum fragment size into the fragment size calculation, as a per 
bundle parameter that can be configured via a new ioctl.  This modifies 
the algorithm so that it tries to limit the number of fragments such 
that each fragment is at least the minimum size.  If the minimum size is 
greater than the packet size it will not be fragmented at all but will 
instead just get sent down the next available channel.

A pppd plugin generates the ioctl call allowing this to be tweaked per 
connection.  It is more flexible in that you can still have the larger 
packets fragmented if you wish.

We've used a variant of this patch on our ADSL LNS pool for a few years 
now with varying results.  We originally did it to save bandwidth as we 
have a per packet overhead and fragmenting tiny packets such as VoIP 
across a bundle of 4 lines made no sense at all.  We've experimented 
with higher minimum settings up to and above the link MTU, thus 
achieving the equivalent of Richard's patch.

In some cases this has improved performance; in others it makes it worse. 
It depends a lot on the lines and traffic patterns, and it is certainly 
not a change we would wish to have on by default.  Any solution going 
into mainline kernel would need to be tunable per connection.  One of 
the issues seems to be with poor recovery from packet loss on low 
volume, highly delay sensitive traffic on large bundles of lines.  With 
Linux at both ends you are relying on received sequence numbers to 
detect loss.  When packets are being fragmented across all channels and 
a fragment is lost, the receiving system is able to spot the lost 
fragment fairly quickly.  Once you start sending some multilink frames 
down individual channels, it takes a lot longer for the receiver to 
notice the packet loss on an individual channel.  Until another fragment 
is successfully received on the lossy channel, the fragments of the 
incomplete frame sit in the queue clogging up the other channels (the 
receiver is attempting to preserve the original packet order and is 
still waiting for the lost fragment).

Original patch attached.   This almost certainly needs updating to take 
account of other more recent changes in the multilink algorithm, but it may 
provide some inspiration.

Regards,
Ben.


[-- Attachment #2: mppp-min-frag-size.patch --]
[-- Type: text/x-diff, Size: 4255 bytes --]

diff -ubdr linux-2.6.16.16-l2tp/drivers/net/ppp_generic.c linux-2.6.16.16-l2tp-mppp/drivers/net/ppp_generic.c
--- linux-2.6.16.16-l2tp/drivers/net/ppp_generic.c	2006-05-11 02:56:24.000000000 +0100
+++ linux-2.6.16.16-l2tp-mppp/drivers/net/ppp_generic.c	2007-07-03 18:23:35.000000000 +0100
@@ -64,7 +64,7 @@
 
 #define MPHDRLEN	6	/* multilink protocol header length */
 #define MPHDRLEN_SSN	4	/* ditto with short sequence numbers */
-#define MIN_FRAG_SIZE	64
+#define MIN_FRAG_SIZE	256
 
 /*
  * An instance of /dev/ppp can be associated with either a ppp
@@ -120,6 +120,7 @@
 	unsigned long	last_recv;	/* jiffies when last pkt rcvd a0 */
 	struct net_device *dev;		/* network interface device a4 */
 #ifdef CONFIG_PPP_MULTILINK
+        int             minfragsize;    /* minimum size for a fragment */
 	int		nxchan;		/* next channel to send something on */
 	u32		nxseq;		/* next sequence number to send */
 	int		mrru;		/* MP: max reconst. receive unit */
@@ -767,6 +768,15 @@
 		ppp_recv_unlock(ppp);
 		err = 0;
 		break;
+
+	case PPPIOCSMINFRAG:
+	        if (get_user(val, p))
+	                break;
+	        ppp_recv_lock(ppp);
+	        ppp->minfragsize = val < 0 ? 0 : val;
+	        ppp_recv_unlock(ppp);
+	        err = 0;
+	        break;
 #endif /* CONFIG_PPP_MULTILINK */
 
 	default:
@@ -1254,7 +1264,7 @@
 	int len, fragsize;
 	int i, bits, hdrlen, mtu;
 	int flen;
-	int navail, nfree;
+	int navail, nfree, nfrag;
 	int nbigger;
 	unsigned char *p, *q;
 	struct list_head *list;
@@ -1285,7 +1295,7 @@
 	 * the channels are free.  This gives much better TCP
 	 * performance if we have a lot of channels.
 	 */
-	if (nfree == 0 || nfree < navail / 2)
+	if (nfree == 0 || (nfree < navail / 2 && ppp->minfragsize == 0))
 		return 0;	/* can't take now, leave it in xmit_pending */
 
 	/* Do protocol field compression (XXX this should be optional) */
@@ -1302,13 +1312,24 @@
 	 * how small they are (i.e. even 0 length) in order to minimize
 	 * the time that it will take to detect when a channel drops
 	 * a fragment.
+         * However, if ppp->minfragsize > 0 we try to avoid creating
+         * fragments smaller than ppp->minfragsize and thus do not
+         * always use all free channels
 	 */
+	if (ppp->minfragsize > 0) {
+	  nfrag= len / ppp->minfragsize;
+	  if (nfrag < 1)
+	        nfrag = 1;
+	  else if (nfrag > nfree)
+	        nfrag = nfree;
+	} else
+	        nfrag = nfree;
 	fragsize = len;
-	if (nfree > 1)
-		fragsize = DIV_ROUND_UP(fragsize, nfree);
+	if (nfrag > 1)
+	        fragsize = DIV_ROUND_UP(fragsize, nfrag);
 	/* nbigger channels get fragsize bytes, the rest get fragsize-1,
 	   except if nbigger==0, then they all get fragsize. */
-	nbigger = len % nfree;
+	nbigger = len % nfrag;
 
 	/* skip to the channel after the one we last used
 	   and start at that one */
@@ -1323,7 +1344,7 @@
 
 	/* create a fragment for each channel */
 	bits = B;
-	while (nfree > 0 || len > 0) {
+	while (len > 0 || (nfree > 0 && ppp->minfragsize == 0)) {
 		list = list->next;
 		if (list == &ppp->channels) {
 			i = 0;
@@ -1371,7 +1392,7 @@
 			mtu = 4;
 		if (flen > mtu)
 			flen = mtu;
-		if (flen == len && nfree == 0)
+		if (flen == len && (nfree == 0 || ppp->minfragsize != 0))
 			bits |= E;
 		frag = alloc_skb(flen + hdrlen + (flen == 0), GFP_ATOMIC);
 		if (frag == 0)
@@ -2435,6 +2456,7 @@
 	spin_lock_init(&ppp->rlock);
 	spin_lock_init(&ppp->wlock);
 #ifdef CONFIG_PPP_MULTILINK
+	ppp->minfragsize = MIN_FRAG_SIZE;
 	ppp->minseq = -1;
 	skb_queue_head_init(&ppp->mrq);
 #endif /* CONFIG_PPP_MULTILINK */
diff -ubdr linux-2.6.16.16-l2tp/include/linux/if_ppp.h linux-2.6.16.16-l2tp-mppp/include/linux/if_ppp.h
--- linux-2.6.16.16-l2tp/include/linux/if_ppp.h	2006-05-12 13:45:00.000000000 +0100
+++ linux-2.6.16.16-l2tp-mppp/include/linux/if_ppp.h	2007-07-03 18:15:27.000000000 +0100
@@ -162,6 +162,7 @@
 #define PPPIOCATTCHAN	_IOW('t', 56, int)	/* attach to ppp channel */
 #define PPPIOCGCHAN	_IOR('t', 55, int)	/* get ppp channel number */
 #define PPPIOCGL2TPSTATS _IOR('t', 54, struct pppol2tp_ioc_stats)
+#define PPPIOCSMINFRAG  _IOW('t', 53, int)  /* minimum fragment size */
 
 #define SIOCGPPPSTATS   (SIOCDEVPRIVATE + 0)
 #define SIOCGPPPVER     (SIOCDEVPRIVATE + 1)	/* NEVER change this!! */

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-03-26 15:50 ` Richard Hartmann
  (?)
@ 2010-05-25  9:52   ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-05-25  9:52 UTC (permalink / raw)
  To: linux-kernel, netdev, linux-ppp, erik_list

Hi all,

unfortunately, I no longer have the thread [1] in which John and Erik
discussed this, so I am replying in my initial thread.

Unfortunately, our own patch is still in its initial state, even though
we tried to clean it up to the kernel's standards.

As you can see in [1], there is a real problem with multilink ppp. Thus,
I wanted to take the opportunity to ask for someone who is more familiar
with both C and the Linux Kernel than we are to have a go at getting
this code cleaned up and into mainline.


Any and all feedback appreciated,
Richard

[1] http://marc.info/?l=linux-ppp&m=127429724125841&w=2

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-03-26 15:50 ` Richard Hartmann
                   ` (3 preceding siblings ...)
  (?)
@ 2010-05-25 10:18 ` walter harms
  -1 siblings, 0 replies; 73+ messages in thread
From: walter harms @ 2010-05-25 10:18 UTC (permalink / raw)
  To: linux-ppp



Richard Hartmann schrieb:
> Hi all,
> 
> unfortunately, I lack the thread [1] in which John and Erik so I am
> replying in my initial thread.
> 
> Unfortunately, our own patch is still in its initial state even though
> we tried to clean it up to the Kernel's standard.
> 
> As you can see in [1], there is a real problem with multilink ppp. Thus,
> I wanted to take the opportunity to ask for someone who is more familiar
> with both C and the Linux Kernel than we are to have a go at getting
> this code cleaned up and into mainline.
> 
> 
> Any and all feedback appreciated,
> Richard
> 


Hi Richard,
i am not an expert on kernel programming but i took a quick look at the
patch and it looks reasonable.
by looking at the patch only:

ppp->rrsched never gets reset, i assume that happens somewhere else ?

can you move the whole block into a separate function ?
then it will be easy to remove the ifdef stuff from the core function.

#if multi_link
 function()
{

	do something
}
#else
function();
#endif

you get the idea ?

re,
 wh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP
  2010-03-26 15:50 ` Richard Hartmann
                   ` (4 preceding siblings ...)
  (?)
@ 2010-05-25 14:58 ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-05-25 14:58 UTC (permalink / raw)
  To: linux-ppp

Hi Walter,


> ppp->rrsched never gets resetted, i assume that is somewhere else ?

*cough* no *cough*


> can you move the whole block into a separate function ?
> then it will be easy to remove the ifdef stuff from the core function.

Did you look at the most current version? It can be found within this
thread or here [1].


Richard

[1] http://lkml.org/lkml/2010/3/31/87

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-03-26 15:50 ` Richard Hartmann
                   ` (5 preceding siblings ...)
  (?)
@ 2010-05-25 17:30 ` walter harms
  2010-05-26  8:47     ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
  -1 siblings, 1 reply; 73+ messages in thread
From: walter harms @ 2010-05-25 17:30 UTC (permalink / raw)
  To: linux-ppp



Richard Hartmann schrieb:
> Hi Walter,
> 
> 
>> ppp->rrsched never gets resetted, i assume that is somewhere else ?
> 
> *cough* no *cough*
> 
> 
>> can you move the whole block into a separate function ?
>> then it will be easy to remove the ifdef stuff from the core function.
> 
> Did you look at the most current version? It can be found within this
> thread or here [1].
> 
> 
> Richard
> 
> [1] http://lkml.org/lkml/2010/3/31/87
> 

Hi Richard,
that looks different. :)

here my questions:

from 	ppp_mp_roundrobin()

  if (ppp->rrsched % ppp->n_channels == i)

since both do not change in that while() loop you can calculate in advance
perhaps ppp->rrsched %= ppp->n_channels before the while ?
(that would reduce my bad feels about variables that only increments also :)

btw: you are doing  after loop() if(pch->chan == NULL) continue;
that means the else in the if below  if (pch->chan) should never be reached.
Or is it likely that some channel will be dropped (?) ?

so the code says:
 go to channel  (ppp->rrsched % ppp->n_channels)
 send packet pch->chan->ops->start_xmit(pch->chan, skb)


btw: this is intentional ? looks strange

	if(ppp_ml_noexplode) {
+	}
+	else {

just my 2 cents,
re,
 wh







^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-05-25 17:30 ` [Patch] fix packet loss and massive ping spikes with PPP multi-link walter harms
  2010-05-26  8:47     ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
@ 2010-05-26  8:47     ` Richard Hartmann
  0 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-05-26  8:47 UTC (permalink / raw)
  To: wharms, linux-kernel, netdev, linux-ppp; +Cc: erik_list

==It seems LKML & netdev were dropped from the To list, re-adding==

Hi Walter,


>  if (ppp->rrsched % ppp->n_channels == i)
>
> since both do not change in that while() loop you can calculate in advance
> perhaps ppp->rrsched %= ppp->n_channels before the while ?
> (that would reduce my bad feels about variables that only increments also :)

rrsched and i do change when appropriate. As they are used as a cheap
way to get round robin, rrsched is not even initialized. One can argue
that this should be done, but as it literally does not matter where the
value starts counting....


> btw: you are doing  after loop() if(pch->chan == NULL) continue;
> that means the else in the if below  if (pch->chan) should never be reached.
> Or is it likely that some channel will be dropped (?) ?

Channels could be dropped and we need to guard against that.


> btw: this is intentional ? looks strange
>
>        if(ppp_ml_noexplode) {
> +       }
> +       else {

Leftover from various printks for debugging reasons.


Thanks for your feedback,
Richard

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-05-26  8:47     ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
@ 2010-05-28  7:28       ` walter harms
  -1 siblings, 0 replies; 73+ messages in thread
From: walter harms @ 2010-05-28  7:28 UTC (permalink / raw)
  To: Richard Hartmann; +Cc: linux-kernel, netdev, linux-ppp, erik_list



Richard Hartmann schrieb:
> ==It seems LKML & netdev were dropped from the To list, re-adding==
> 
> Hi Walter,
> 
> 
>>  if (ppp->rrsched % ppp->n_channels == i)
>>
>> since both do not change in that while() loop you can calculate in advance
>> perhaps ppp->rrsched %= ppp->n_channels before the while ?
>> (that would reduce my bad feels about variables that only increments also :)
> 
> rrsched and i do change when appropriate. As they are used as a cheap
> way to get round robin, rrsched is not even initialized. One can argue
> that this should be done, but as it literally does not matter where the
> value starts counting....
> 

yep,
the problem is that you will trigger a warning "variable uninitialised".
And as a programmer you are trained to spot that kind of code.
In short, you violated the "rule of least surprise"; simply set it to 99
and add a comment that the value does not matter because it is actually a
random seed.
Basically the same reasoning applies to doing ppp->rrsched %= ppp->n_channels
outside the loop: 1. people and compilers are happy because they see the
variable is used; 2. no need to recalculate the modulo inside the loop
(never trust optimisers).

/*
perhaps rr_channel is a better name ? round-robin channel
that would require some changes but explain what it actually is
*/



> 
>> btw: you are doing  after loop() if(pch->chan == NULL) continue;
>> that means the else in the if below  if (pch->chan) should never be reached.
>> Or is it likely that some channel will be dropped (?) ?
> 
> Channels could be dropped and we need to guard against that.
> 

please add a comment about that. i can guarantee you someone will spot it
and remove either the pch->chan == NULL check or the else.



just my 2 cents,
wh


> 
>> btw: this is intentional ? looks strange
>>
>>        if(ppp_ml_noexplode) {
>> +       }
>> +       else {
> 
> Leftover from various printks for debugging reasons.
> 
> 
> Thanks for your feedback,
> Richard
> 

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
  2010-03-31 10:03         ` Ben McKeegan
@ 2010-05-29  2:16           ` Paul Mackerras
  -1 siblings, 0 replies; 73+ messages in thread
From: Paul Mackerras @ 2010-05-29  2:16 UTC (permalink / raw)
  To: Ben McKeegan
  Cc: netdev, linux-ppp, Alan Cox, Alexander E. Patrakov,
	Richard Hartmann, linux-kernel

On Wed, Mar 31, 2010 at 11:03:44AM +0100, Ben McKeegan wrote:

> I needed to do something similar a while back and I took a very
> different approach, which I think is more flexible.   Rather than
> implement a new round-robin scheduler I simply introduced a target
> minimum fragment size into the fragment size calculation, as a per
> bundle parameter that can be configured via a new ioctl.  This
> modifies the algorithm so that it tries to limit the number of
> fragments such that each fragment is at least the minimum size.  If
> the minimum size is greater than the packet size it will not be
> fragmented at all but will instead just get sent down the next
> available channel.
> 
> A pppd plugin generates the ioctl call allowing this to be tweaked
> per connection.  It is more flexible in that you can still have the
> larger packets fragmented if you wish.

I like this a lot better than the other proposed patch.  It adds less
code because it uses the fact that ppp_mp_explode() already has a
round-robin capability using the ppp->nxchan field, plus it provides a
way to control it per bundle via pppd.

If you fix up the indentation issues (2-space indent in some of the
added code -- if you're using emacs, set c-basic-offset to 8), I'll
ack it and hopefully DaveM will pick it up.

Paul.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-05-29  2:16           ` [Patch] fix packet loss and massive ping spikes with PPP Paul Mackerras
  (?)
@ 2010-05-29  9:06             ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-05-29  9:06 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: Ben McKeegan, netdev, linux-ppp, Alan Cox, Alexander E. Patrakov,
	linux-kernel

On Sat, May 29, 2010 at 04:16, Paul Mackerras <paulus@samba.org> wrote:

> I like this a lot better than the other proposed patch.

Not that it really matters, but we would be happy with either patch
as long as we can stop patching mainline :)


Richard

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-05-29  2:16           ` [Patch] fix packet loss and massive ping spikes with PPP Paul Mackerras
  (?)
@ 2010-05-31 13:39             ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-05-31 13:39 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: Ben McKeegan, netdev, linux-ppp, Alan Cox, Alexander E. Patrakov,
	linux-kernel

On Sat, May 29, 2010 at 04:16, Paul Mackerras <paulus@samba.org> wrote:

> If you fix up the indentation issues (2-space indent in some of the
> added code -- if you're using emacs, set c-basic-offset to 8), I'll
> ack it and hopefully DaveM will pick it up.

If no one submits it by tonight, I will probably send a cleaned-up
version of the alternative patch. Not that I want to steal anyone's
laurels, mind. But having it in mainline is better than not having
it in mainline.


Richard

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
  2010-05-29  2:16           ` [Patch] fix packet loss and massive ping spikes with PPP Paul Mackerras
@ 2010-05-31 16:20             ` Ben McKeegan
  -1 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-05-31 16:20 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: netdev, linux-ppp, Alan Cox, Alexander E. Patrakov,
	Richard Hartmann, linux-kernel

Paul Mackerras wrote:

>> I needed to do something similar a while back and I took a very
>> different approach, which I think is more flexible.   Rather than
>> implement a new round-robin scheduler I simply introduced a target
>> minimum fragment size into the fragment size calculation, as a per
>> bundle parameter that can be configured via a new ioctl.  This
>> modifies the algorithm so that it tries to limit the number of
>> fragments such that each fragment is at least the minimum size.  If
>> the minimum size is greater than the packet size it will not be
>> fragmented at all but will instead just get sent down the next
>> available channel.
>>
>> A pppd plugin generates the ioctl call allowing this to be tweaked
>> per connection.  It is more flexible in that you can still have the
>> larger packets fragmented if you wish.
> 
> I like this a lot better than the other proposed patch.  It adds less
> code because it uses the fact that ppp_mp_explode() already has a
> round-robin capability using the ppp->nxchan field, plus it provides a
> way to control it per bundle via pppd.
> 
> If you fix up the indentation issues (2-space indent in some of the
> added code -- if you're using emacs, set c-basic-offset to 8), I'll
> ack it and hopefully DaveM will pick it up.

Okay, I'll fix it up when I'm back at work Tuesday and submit (today is 
a UK public holiday) and also dig out and fix up the userspace code to 
go with it.

Ben.


^ permalink raw reply	[flat|nested] 73+ messages in thread


* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-05-29  2:16           ` [Patch] fix packet loss and massive ping spikes with PPP Paul Mackerras
  (?)
@ 2010-06-01 10:20             ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-06-01 10:20 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: Ben McKeegan, netdev, linux-ppp, Alan Cox, Alexander E. Patrakov,
	linux-kernel

On Sat, May 29, 2010 at 04:16, Paul Mackerras <paulus@samba.org> wrote:


> I'll
> ack it and hopefully DaveM will pick it up.

As 2.6.35-rc1 is out, does this mean that we are looking at 2.6.36 at the
earliest? Or could this make it in, as it is:

a) a bug fix
b) tested in the field
c) a small change


Thanks,
Richard

^ permalink raw reply	[flat|nested] 73+ messages in thread


* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-06-01 10:20             ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
@ 2010-06-01 11:18               ` Ben McKeegan
  -1 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-06-01 11:18 UTC (permalink / raw)
  To: Richard Hartmann
  Cc: Paul Mackerras, netdev, linux-ppp, Alan Cox,
	Alexander E. Patrakov, linux-kernel

Richard Hartmann wrote:

> As 2.6.35-rc1 is out, does this mean that we are looking at 2.6.36 at the
> earliest? Or could this make it in as it is
> 
> a) a bug fix

This isn't really a bug fix.  It's a behavioural change to work around 
poor quality/mismatched underlying PPP channels.

Regards,
Ben.

^ permalink raw reply	[flat|nested] 73+ messages in thread


* Re: [Patch] fix packet loss and massive ping spikes with PPP  multi-link
  2010-06-01 11:18               ` Ben McKeegan
  (?)
@ 2010-06-01 11:28                 ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-06-01 11:28 UTC (permalink / raw)
  To: Ben McKeegan
  Cc: Paul Mackerras, netdev, linux-ppp, Alan Cox,
	Alexander E. Patrakov, linux-kernel

On Tue, Jun 1, 2010 at 13:18, Ben McKeegan <ben@netservers.co.uk> wrote:

> This isn't really a bug fix.  It's a behavioural change to work around poor
> quality/mismatched underlying PPP channels.

Maybe not a bug in the Linux kernel itself, but certainly in the real world
that exists around Linux. Similar to how a change to a device driver that
is needed to work around broken hardware is a bug fix, imo.


Richard

^ permalink raw reply	[flat|nested] 73+ messages in thread


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
  2010-06-01 11:28                 ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
@ 2010-06-01 22:15                   ` David Miller
  -1 siblings, 0 replies; 73+ messages in thread
From: David Miller @ 2010-06-01 22:15 UTC (permalink / raw)
  To: richih.mailinglist
  Cc: ben, paulus, netdev, linux-ppp, alan, patrakov, linux-kernel

From: Richard Hartmann <richih.mailinglist@gmail.com>
Date: Tue, 1 Jun 2010 13:28:59 +0200

> Maybe not a bug in the Linux kernel itself, but certainly in the real world
> that exists around Linux. Similar to how a change to a device driver that
> is needed to work around broken hardware is a bug fix, imo.

It's not the same situation at all.

It is easier to fix misconfigured products that exist because of
software and configurations than it is to fix a physical piece of
hardware.

So you could work around it if you wanted to.

I definitely don't see this as -stable material, as a result.  We will
push it to net-next-2.6 and it will thus hit 2.6.36 as previously
mentioned.

^ permalink raw reply	[flat|nested] 73+ messages in thread


* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
  2010-05-31 16:20             ` Ben McKeegan
@ 2010-06-02 14:55               ` Ben McKeegan
  -1 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-06-02 14:55 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: netdev, linux-ppp, Alan Cox, Alexander E. Patrakov,
	Richard Hartmann, linux-kernel, gabriele.paoloni

Ben McKeegan wrote:
> Paul Mackerras wrote:
> 
>>> I needed to do something similar a while back and I took a very
>>> different approach, which I think is more flexible.   Rather than
>>> implement a new round-robin scheduler I simply introduced a target
>>> minimum fragment size into the fragment size calculation, as a per
>>> bundle parameter that can be configured via a new ioctl.  This
>>> modifies the algorithm so that it tries to limit the number of
>>> fragments such that each fragment is at least the minimum size.  If
>>> the minimum size is greater than the packet size it will not be
>>> fragmented at all but will instead just get sent down the next
>>> available channel.
>>>
>>> A pppd plugin generates the ioctl call allowing this to be tweaked
>>> per connection.  It is more flexible in that you can still have the
>>> larger packets fragmented if you wish.
>>
>> I like this a lot better than the other proposed patch.  It adds less
>> code because it uses the fact that ppp_mp_explode() already has a
>> round-robin capability using the ppp->nxchan field, plus it provides a
>> way to control it per bundle via pppd.
>>
>> If you fix up the indentation issues (2-space indent in some of the
>> added code -- if you're using emacs, set c-basic-offset to 8), I'll
>> ack it and hopefully DaveM will pick it up.
> 
> Okay, I'll fix it up when I'm back at work Tuesday and submit (today is 
> a UK public holiday) and also dig out and fix up the userspace code to 
> go with it.

Still working on this; updating the patch wasn't as trivial as I thought, 
as it clashed with Gabriele Paoloni's ppp_mp_explode redesign.  However, 
while looking at this code I believe I have found a bug which might have 
been contributing to the poor performance the OP was experiencing.  For 
the case where channel speeds are unknown and there are more than 2 
channels, it would miscalculate the fragment sizes so that they are not 
balanced across the channels.

Patch for the bug to follow immediately.

Regards,
Ben.

^ permalink raw reply	[flat|nested] 73+ messages in thread


* [PATCH] ppp_generic: fix multilink fragment sizes
  2010-06-02 14:55               ` Ben McKeegan
@ 2010-06-02 15:04                 ` Ben McKeegan
  -1 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-06-02 15:04 UTC (permalink / raw)
  To: davem
  Cc: ben, netdev, linux-kernel, gabriele.paoloni, alan, linux-ppp, paulus

Fix bug in multilink fragment size calculation introduced by
commit 9c705260feea6ae329bc6b6d5f6d2ef0227eda0a
"ppp: ppp_mp_explode() redesign"

Signed-off-by: Ben McKeegan <ben@netservers.co.uk>
---
 drivers/net/ppp_generic.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c
index 0db3894..8b36bfe 100644
--- a/drivers/net/ppp_generic.c
+++ b/drivers/net/ppp_generic.c
@@ -1416,7 +1416,8 @@ static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb)
 		flen = len;
 		if (nfree > 0) {
 			if (pch->speed == 0) {
-				flen = totlen/nfree;
+				if (nfree > 1)
+					flen = DIV_ROUND_UP(len, nfree);
 				if (nbigger > 0) {
 					flen++;
 					nbigger--;
-- 
1.5.6.5


^ permalink raw reply related	[flat|nested] 73+ messages in thread


* RE: [PATCH] ppp_generic: fix multilink fragment sizes
  2010-06-02 15:04                 ` Ben McKeegan
@ 2010-06-02 15:17                   ` Paoloni, Gabriele
  -1 siblings, 0 replies; 73+ messages in thread
From: Paoloni, Gabriele @ 2010-06-02 15:17 UTC (permalink / raw)
  To: Ben McKeegan, davem; +Cc: netdev, linux-kernel, alan, linux-ppp, paulus

The proposed patch looks wrong to me.

nbigger is already doing the job; I didn't use DIV_ROUND_UP because in general we don't always have to round up, otherwise we would exceed the total bandwidth.

Regards

Gabriele Paoloni

-----Original Message-----
From: Ben McKeegan [mailto:ben@netservers.co.uk] 
Sent: 02 June 2010 16:05
To: davem@davemloft.net
Cc: ben@netservers.co.uk; netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Paoloni, Gabriele; alan@lxorguk.ukuu.org.uk; linux-ppp@vger.kernel.org; paulus@samba.org
Subject: [PATCH] ppp_generic: fix multilink fragment sizes

Fix bug in multilink fragment size calculation introduced by
commit 9c705260feea6ae329bc6b6d5f6d2ef0227eda0a
"ppp: ppp_mp_explode() redesign"

Signed-off-by: Ben McKeegan <ben@netservers.co.uk>
---
 drivers/net/ppp_generic.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c
index 0db3894..8b36bfe 100644
--- a/drivers/net/ppp_generic.c
+++ b/drivers/net/ppp_generic.c
@@ -1416,7 +1416,8 @@ static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb)
 		flen = len;
 		if (nfree > 0) {
 			if (pch->speed == 0) {
-				flen = totlen/nfree;
+				if (nfree > 1)
+					flen = DIV_ROUND_UP(len, nfree);
 				if (nbigger > 0) {
 					flen++;
 					nbigger--;
-- 
1.5.6.5

--------------------------------------------------------------
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies.



^ permalink raw reply related	[flat|nested] 73+ messages in thread


* Re: [PATCH] ppp_generic: fix multilink fragment sizes
  2010-06-02 15:17                   ` Paoloni, Gabriele
@ 2010-06-02 15:31                     ` David Miller
  -1 siblings, 0 replies; 73+ messages in thread
From: David Miller @ 2010-06-02 15:31 UTC (permalink / raw)
  To: gabriele.paoloni; +Cc: ben, netdev, linux-kernel, alan, linux-ppp, paulus

From: "Paoloni, Gabriele" <gabriele.paoloni@intel.com>
Date: Wed, 2 Jun 2010 16:17:59 +0100

> The proposed patch looks wrong to me.

Can you please reply to postings properly?  Use the reply function
of your mail client and do not "top post."

Because of the way you just simply inlined the entirety of Ben's
patch posting, it ends up looking like a new patch being posted
on patchwork and this makes more work and analysis for me.

Thanks.

^ permalink raw reply	[flat|nested] 73+ messages in thread


* Re: [PATCH] ppp_generic: fix multilink fragment sizes
  2010-06-02 15:17                   ` Paoloni, Gabriele
@ 2010-06-02 15:55                     ` Ben McKeegan
  -1 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-06-02 15:55 UTC (permalink / raw)
  To: Paoloni, Gabriele; +Cc: davem, netdev, linux-kernel, alan, linux-ppp, paulus

Paoloni, Gabriele wrote:
> The proposed patch looks wrong to me.
> 
> nbigger is already doing the job; I didn't use DIV_ROUND_UP because in general we don't have always to roundup, otherwise we would exceed the total bandwidth.

I was basing this on the original code prior to your patch, which used 
DIV_ROUND_UP to get the fragment size.  Looking more closely I see your 
point, the original code was starting with the larger fragment size and 
decrementing rather than starting with the smaller size and incrementing 
as your code does, so that makes sense.


> 
>  		flen = len;
>  		if (nfree > 0) {
>  			if (pch->speed == 0) {
> -				flen = totlen/nfree;
> +				if (nfree > 1)
> +					flen = DIV_ROUND_UP(len, nfree);
>  				if (nbigger > 0) {
>  					flen++;
>  					nbigger--;

The important change here is the use of 'len' instead of 'totlen'. 
'nfree' and 'len' should decrease roughly proportionally with each 
iteration of the loop whereas 'totlen' remains unchanged.  Thus 
(totlen/nfree) gets bigger on each iteration whereas len/nfree should 
give roughly the same.  However, without rounding up here I'm not sure 
the logic is right either, since the side effect of nbigger is to make 
len decrease faster so it is not quite proportional to the decrease in 
nfree.  Is there a risk of ending up on the nfree == 1 iteration with 
flen == len - 1 and thus generating a superfluous extra 1 byte long 
fragment?  This would be a far worse situation than a slight imbalance 
in the size of the fragments.

Perhaps the solution is to go back to a precalculated fragment size for 
the pch->speed == 0 case as per original code?

Regards,
Ben.

^ permalink raw reply	[flat|nested] 73+ messages in thread


* RE: [PATCH] ppp_generic: fix multilink fragment sizes
  2010-06-02 15:55                     ` Ben McKeegan
@ 2010-06-03  8:41                       ` Paoloni, Gabriele
  -1 siblings, 0 replies; 73+ messages in thread
From: Paoloni, Gabriele @ 2010-06-03  8:41 UTC (permalink / raw)
  To: Ben McKeegan; +Cc: davem, netdev, linux-kernel, alan, linux-ppp, paulus

Hi

I agree with you about replacing totlen with len (the previous version was quite bad).
I don't think we need to round up anyway; nbigger is doing its job. Basically we are giving just one more byte to the first nbigger free channels, and for the rest the integer division will round down automatically.

For example, say that before transmitting you have 5 free channels and len is 83 bytes:

nbigger will be (ln 1284) 83 % 5 = 3

the frame will be split as follows: 17 - 17 - 17 - 16 - 16

For the first three iterations, the channel being transmitted on will get an extra byte (ln1427) and nbigger will be decreased by one (ln1428).

The only change I would make to the code is to replace totlen with len @ln1425

Now, if you agree, either you or I can submit a new patch.

Regards

Gabriele Paoloni   

>-----Original Message-----
>From: Ben McKeegan [mailto:ben@netservers.co.uk]
>Sent: 02 June 2010 16:56
>To: Paoloni, Gabriele
>Cc: davem@davemloft.net; netdev@vger.kernel.org; linux-
>kernel@vger.kernel.org; alan@lxorguk.ukuu.org.uk; linux-
>ppp@vger.kernel.org; paulus@samba.org
>Subject: Re: [PATCH] ppp_generic: fix multilink fragment sizes
>
>Paoloni, Gabriele wrote:
>> The proposed patch looks wrong to me.
>>
>> nbigger is already doing the job; I didn't use DIV_ROUND_UP because in
>general we don't have always to roundup, otherwise we would exceed the
>total bandwidth.
>
>I was basing this on the original code prior to your patch, which used
>DIV_ROUND_UP to get the fragment size.  Looking more closely I see your
>point, the original code was starting with the larger fragment size and
>decrementing rather than starting with the smaller size and incrementing
>as your code does, so that makes sense.
>
>
>>
>>  		flen = len;
>>  		if (nfree > 0) {
>>  			if (pch->speed == 0) {
>> -				flen = totlen/nfree;
>> +				if (nfree > 1)
>> +					flen = DIV_ROUND_UP(len, nfree);
>>  				if (nbigger > 0) {
>>  					flen++;
>>  					nbigger--;
>
>The important change here is the use of 'len' instead of 'totlen'.
>'nfree' and 'len' should decrease roughly proportionally with each
>iteration of the loop whereas 'totlen' remains unchanged.  Thus
>(totlen/nfree) gets bigger on each iteration whereas len/nfree should
>give roughly the same.  However, without rounding up here I'm not sure
>the logic is right either, since the side effect of nbigger is to make
>len decrease faster so it is not quite proportional to the decrease in
>nfree.  Is there a risk of ending up on the nfree == 1 iteration with
>flen == len - 1 and thus generating a superfluous extra 1 byte long
>fragment?  This would be a far worse situation than a slight imbalance
>in the size of the fragments.
>
>Perhaps the solution is to go back to a precalculated fragment size for
>the pch->speed == 0 case as per original code?
>
>Regards,
>Ben.
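
The two sizing schemes under discussion are easy to compare numerically. Below is a standalone Python sketch (an illustration, not the kernel code; only the pch->speed == 0 path is modelled, and the mtu/headroom capping in ppp_mp_explode() is ignored). The len/nfree floor scheme keeps fragments within one byte of each other, while the DIV_ROUND_UP variant quoted above can indeed collapse to an empty trailing fragment for small frames, as Ben suspects:

```python
def split_floor(length, channels):
    """The len/nfree scheme: integer division per iteration, with the
    first (length % channels) fragments getting one extra byte."""
    nfree = channels
    nbigger = length % channels
    remaining, frags = length, []
    while nfree > 0:
        flen = remaining // nfree        # 'len' instead of 'totlen'
        if nbigger > 0:                  # hand out the extra bytes first
            flen += 1
            nbigger -= 1
        frags.append(flen)
        remaining -= flen
        nfree -= 1
    return frags

def split_round_up(length, channels):
    """The quoted variant: DIV_ROUND_UP(len, nfree) while nfree > 1,
    keeping the same nbigger extra-byte logic."""
    nfree = channels
    nbigger = length % channels
    remaining, frags = length, []
    while nfree > 0:
        flen = remaining                 # flen = len
        if nfree > 1:
            flen = -(-remaining // nfree)    # DIV_ROUND_UP(len, nfree)
        if nbigger > 0:
            flen += 1
            nbigger -= 1
        frags.append(flen)
        remaining -= flen
        nfree -= 1
    return frags

print(split_floor(83, 5))     # [17, 17, 17, 16, 16] -- balanced
print(split_round_up(5, 4))   # [3, 1, 1, 0] -- empty trailing fragment
```

Sweeping frame lengths up to a few hundred bytes over up to ten channels, split_floor never produces fragments differing by more than one byte, which supports the argument that rounding up is unnecessary here.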
--------------------------------------------------------------
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies.



^ permalink raw reply	[flat|nested] 73+ messages in thread

* [PATCH] ppp_generic: fix multilink fragment sizes
  2010-06-03  8:41                       ` Paoloni, Gabriele
@ 2010-06-03  9:14                         ` Ben McKeegan
  -1 siblings, 0 replies; 73+ messages in thread
From: Ben McKeegan @ 2010-06-03  9:14 UTC (permalink / raw)
  To: davem
  Cc: ben, netdev, linux-kernel, gabriele.paoloni, alan, linux-ppp, paulus

Fix bug in multilink fragment size calculation introduced by
commit 9c705260feea6ae329bc6b6d5f6d2ef0227eda0a
"ppp: ppp_mp_explode() redesign"

Signed-off-by: Ben McKeegan <ben@netservers.co.uk>
---
 drivers/net/ppp_generic.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c
index 0db3894..c980f74 100644
--- a/drivers/net/ppp_generic.c
+++ b/drivers/net/ppp_generic.c
@@ -1416,7 +1416,7 @@ static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb)
 		flen = len;
 		if (nfree > 0) {
 			if (pch->speed == 0) {
-				flen = totlen/nfree;
+				flen = len/nfree;
 				if (nbigger > 0) {
 					flen++;
 					nbigger--;
-- 
1.5.6.5


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
  2010-06-02 14:55               ` Ben McKeegan
@ 2010-11-08 14:05                 ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-11-08 14:05 UTC (permalink / raw)
  To: Ben McKeegan
  Cc: Paul Mackerras, netdev, linux-ppp, Alan Cox,
	Alexander E. Patrakov, linux-kernel, gabriele.paoloni

On Wed, Jun 2, 2010 at 16:55, Ben McKeegan <ben@netservers.co.uk> wrote:

> Still working on this, updating the patch wasn't as trivial as I thought as
> it clashed with Gabriele Paoloni's ppp_mp_explode redesign.  However, while
> looking at this code I believe I have found a bug which might have been
> contributing to the poor performance the OP was experiencing.   For the case
> where channel speeds are unknown and there are more than 2 channels it would
> miscalculate the fragment sizes so they are not balanced on the channels.
>
> Patch for the bug to follow immediately.

Is there any update on this? It's been quite some time since you last updated
on this issue.


Thanks a lot,
Richard

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link
  2010-11-08 14:05                 ` Richard Hartmann
@ 2010-11-15 12:07                   ` Richard Hartmann
  -1 siblings, 0 replies; 73+ messages in thread
From: Richard Hartmann @ 2010-11-15 12:07 UTC (permalink / raw)
  To: Ben McKeegan
  Cc: Paul Mackerras, netdev, linux-ppp, Alan Cox,
	Alexander E. Patrakov, linux-kernel, gabriele.paoloni

On Mon, Nov 8, 2010 at 15:05, Richard Hartmann
<richih.mailinglist@gmail.com> wrote:

> Is there any update on this? It's been quite some time since you last updated
> on this issue.

As it's been a week without any reply and as I know how stuff can
drown in more important work & projects, I am tentatively poking again
:)


Richard

^ permalink raw reply	[flat|nested] 73+ messages in thread

end of thread, other threads:[~2010-11-15 12:12 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-03-26 15:50 [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-03-26 15:50 ` Richard Hartmann
2010-03-26 15:58 ` [Patch] fix packet loss and massive ping spikes with PPP Alan Cox
2010-03-26 16:02   ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Alan Cox
2010-03-26 16:33   ` Joe Perches
2010-03-26 16:33     ` [Patch] fix packet loss and massive ping spikes with PPP Joe Perches
2010-03-26 16:39   ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-03-26 16:39     ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-03-26 16:39     ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-03-26 16:59     ` David Miller
2010-03-26 16:59       ` [Patch] fix packet loss and massive ping spikes with PPP David Miller
2010-03-26 17:04       ` [Patch] fix packet loss and massive ping spikes with PPP multi-link David Miller
2010-03-26 17:04         ` [Patch] fix packet loss and massive ping spikes with PPP David Miller
2010-03-26 17:04     ` [Patch] fix packet loss and massive ping spikes with PPP multi-link James Carlson
2010-03-26 17:04       ` James Carlson
2010-03-26 16:59   ` Alexander E. Patrakov
2010-03-26 17:00     ` Alexander E. Patrakov
2010-03-26 17:04     ` Alan Cox
2010-03-26 17:04       ` [Patch] fix packet loss and massive ping spikes with PPP Alan Cox
2010-03-31 10:03       ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Ben McKeegan
2010-03-31 10:03         ` Ben McKeegan
2010-05-29  2:16         ` Paul Mackerras
2010-05-29  2:16           ` [Patch] fix packet loss and massive ping spikes with PPP Paul Mackerras
2010-05-29  9:06           ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-05-29  9:06             ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-05-29  9:06             ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-05-31 13:39           ` Richard Hartmann
2010-05-31 13:39             ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-05-31 13:39             ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-05-31 16:20           ` Ben McKeegan
2010-05-31 16:20             ` Ben McKeegan
2010-06-02 14:55             ` Ben McKeegan
2010-06-02 14:55               ` Ben McKeegan
2010-06-02 15:04               ` [PATCH] ppp_generic: fix multilink fragment sizes Ben McKeegan
2010-06-02 15:04                 ` Ben McKeegan
2010-06-02 15:17                 ` Paoloni, Gabriele
2010-06-02 15:17                   ` Paoloni, Gabriele
2010-06-02 15:31                   ` David Miller
2010-06-02 15:31                     ` David Miller
2010-06-02 15:55                   ` Ben McKeegan
2010-06-02 15:55                     ` Ben McKeegan
2010-06-03  8:41                     ` Paoloni, Gabriele
2010-06-03  8:41                       ` Paoloni, Gabriele
2010-06-03  9:14                       ` Ben McKeegan
2010-06-03  9:14                         ` Ben McKeegan
2010-11-08 14:05               ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-11-08 14:05                 ` Richard Hartmann
2010-11-15 12:07                 ` Richard Hartmann
2010-11-15 12:07                   ` Richard Hartmann
2010-06-01 10:20           ` Richard Hartmann
2010-06-01 10:20             ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-06-01 10:20             ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-06-01 11:18             ` Ben McKeegan
2010-06-01 11:18               ` Ben McKeegan
2010-06-01 11:28               ` Richard Hartmann
2010-06-01 11:28                 ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-06-01 11:28                 ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-06-01 22:15                 ` David Miller
2010-06-01 22:15                   ` [Patch] fix packet loss and massive ping spikes with PPP David Miller
2010-03-31  9:01 ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-03-31  9:01   ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-03-31  9:01   ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-05-25  9:52 ` Richard Hartmann
2010-05-25  9:52   ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-05-25  9:52   ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-05-25 10:18 ` walter harms
2010-05-25 14:58 ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-05-25 17:30 ` [Patch] fix packet loss and massive ping spikes with PPP multi-link walter harms
2010-05-26  8:47   ` Richard Hartmann
2010-05-26  8:47     ` [Patch] fix packet loss and massive ping spikes with PPP Richard Hartmann
2010-05-26  8:47     ` [Patch] fix packet loss and massive ping spikes with PPP multi-link Richard Hartmann
2010-05-28  7:28     ` walter harms
2010-05-28  7:28       ` walter harms
