* UDP stress-testing
@ 2016-04-07 20:19 Peter Kietzmann
  2016-04-07 20:41 ` Alexander Aring
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Kietzmann @ 2016-04-07 20:19 UTC (permalink / raw)
  To: linux-wpan

Dear list,

first of all let me say that I'm new to this list. So if I'm completely 
wrong in my concern please excuse the noise and guide me to the right 
place. If you can :-)!

For some experiments I'm trying to send large numbers of UDP packets 
with large payloads as fast as possible from a RasPi equipped with the 
Openlabs transceiver. With another RasPi+transceiver I'm sniffing the 
traffic. It turns out that only the first x packets are sent out 
correctly; after that, the outgoing packets come out irregularly. The 
number of correctly sent packets depends on the UDP payload size, and it 
looks like the problem occurs after ~30-35 kB (gross) in total have been 
transmitted (fragmentation overhead included). I already increased the 
send socket memory to a reasonably high value, without success, but I 
still suspect some buffer problem. Do you have a hint which knob to 
adjust?

BTW: Introducing a delay after each sent packet fixes the problem. 
But I'd like to do stress-testing...

Best
Peter

-- 
Peter Kietzmann

Hamburg University of Applied Sciences
Dept. Informatik, Internet Technologies Group
Berliner Tor 7, 20099 Hamburg, Germany
Fon: +49-40-42875-8426
Web: http://www.haw-hamburg.de/inet


* Re: UDP stress-testing
  2016-04-07 20:19 UDP stress-testing Peter Kietzmann
@ 2016-04-07 20:41 ` Alexander Aring
  2016-04-08  7:22   ` Peter Kietzmann
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander Aring @ 2016-04-07 20:41 UTC (permalink / raw)
  To: Peter Kietzmann; +Cc: linux-wpan

On Thu, Apr 07, 2016 at 10:19:35PM +0200, Peter Kietzmann wrote:
...

1.

You cannot be sure that a monitor interface shows all traffic that is
on the air. At the very least there are hardware limitations, which
begin at "IRQ raised" and end at "framebuffer read".

2.

Try enabling the ACK request bit. It is disabled by default because you
need to know what you are doing when you enable it. Make sure all nodes
support ACK handling before enabling it.

You can do that with:

iwpan dev $WPAN_DEV set ackreq_default 1

3.

Change tx queue setting:

ip link set txqueuelen 1000 dev $WPAN_DEV



Please reply whether that helped; otherwise we can look for other
tweaks. :-)

- Alex


* Re: UDP stress-testing
  2016-04-07 20:41 ` Alexander Aring
@ 2016-04-08  7:22   ` Peter Kietzmann
  2016-04-08 12:58     ` Alexander Aring
  2016-04-13  0:00     ` Michael Richardson
  0 siblings, 2 replies; 10+ messages in thread
From: Peter Kietzmann @ 2016-04-08  7:22 UTC (permalink / raw)
  Cc: linux-wpan

Hi Alex,

thanks for your quick reply. It helped a lot! See some comments inline.

On 07.04.2016 at 22:41, Alexander Aring wrote:
> On Thu, Apr 07, 2016 at 10:19:35PM +0200, Peter Kietzmann wrote:
>> ...
>
> 1.
>
> You cannot be sure that a monitor interface shows all traffic that is
> on the air. At the very least there are hardware limitations, which
> begin at "IRQ raised" and end at "framebuffer read".

I totally agree. But here it was quite obvious something was going on 
after some time.

>
> 2.
>
> Try enabling the ACK request bit. It is disabled by default because you
> need to know what you are doing when you enable it. Make sure all nodes
> support ACK handling before enabling it.
>
> You can do that with:
>
> iwpan dev $WPAN_DEV set ackreq_default 1

Found out that my sender Pi doesn't support this option. Guess I should 
update it...

>
> 3.
>
> Change tx queue setting:
>
> ip link set txqueuelen 1000 dev $WPAN_DEV

That was the parameter I was looking for. Increasing this queue it is :-) !

>
>
>
> Please reply whether that helped; otherwise we can look for other
> tweaks. :-)
>
> - Alex
>

Again, thanks for your quick help!

Cheers
Peter

-- 
Peter Kietzmann

Hamburg University of Applied Sciences
Dept. Informatik, Internet Technologies Group
Berliner Tor 7, 20099 Hamburg, Germany
Fon: +49-40-42875-8426
Web: http://www.haw-hamburg.de/inet


* Re: UDP stress-testing
  2016-04-08  7:22   ` Peter Kietzmann
@ 2016-04-08 12:58     ` Alexander Aring
  2016-04-13  0:00     ` Michael Richardson
  1 sibling, 0 replies; 10+ messages in thread
From: Alexander Aring @ 2016-04-08 12:58 UTC (permalink / raw)
  To: Peter Kietzmann; +Cc: linux-wpan

On Fri, Apr 08, 2016 at 09:22:29AM +0200, Peter Kietzmann wrote:
...
> >
> >3.
> >
> >Change tx queue setting:
> >
> >ip link set txqueuelen 1000 dev $WPAN_DEV
> 
> That was the parameter I was looking for. Increasing this queue it is :-) !
> 

Maybe we should increase the default value for txqueuelen. I know there
is some science to calculating a good value from bandwidth, payload,
etc.

I know Ethernet changed its default to 1000 because Gigabit Ethernet is
common nowadays; before that it was 100.

Our default value is 300 (I don't know why; Alan increased it one day),
see commit e937f583ec3a40cccd480b40d8c6d54751781587.

It was 10 before, and 10 is really too small...
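
As a rough back-of-envelope (my numbers, not measured): at 250 kbit/s a
full 127-byte 802.15.4 frame takes about (127 * 8) / 250000 = ~4 ms of
airtime, so a queue of 300 frames can buffer over a second of traffic,
while the old default of 10 drains in ~40 ms.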

- Alex


* Re: UDP stress-testing
  2016-04-08  7:22   ` Peter Kietzmann
  2016-04-08 12:58     ` Alexander Aring
@ 2016-04-13  0:00     ` Michael Richardson
  2016-04-13  8:28       ` Alexander Aring
  1 sibling, 1 reply; 10+ messages in thread
From: Michael Richardson @ 2016-04-13  0:00 UTC (permalink / raw)
  To: linux-wpan, Peter Kietzmann



Peter Kietzmann <peter.kietzmann@haw-hamburg.de> wrote:
    >> Change tx queue setting:
    >>
    >> ip link set txqueuelen 1000 dev $WPAN_DEV

    > That was the parameter I was looking for. Increasing this queue it is
    > :-) !

Assuming that you aren't sending packets faster than your effective bit
rate, increasing the txqueuelen just increases the latency.

A queue length of 3 or 4 ought to be enough to keep the transmitter busy.
Setting it to 1000 just causes bufferbloat.


--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        | network architect  [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [




* Re: UDP stress-testing
  2016-04-13  0:00     ` Michael Richardson
@ 2016-04-13  8:28       ` Alexander Aring
  2016-04-13 12:50         ` Michael Richardson
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander Aring @ 2016-04-13  8:28 UTC (permalink / raw)
  To: Michael Richardson; +Cc: linux-wpan, Peter Kietzmann

Hi Michael,

On Tue, Apr 12, 2016 at 08:00:58PM -0400, Michael Richardson wrote:
> 
> Peter Kietzmann <peter.kietzmann@haw-hamburg.de> wrote:
>     >> Change tx queue setting:
>     >>
>     >> ip link set txqueuelen 1000 dev $WPAN_DEV
> 
>     > That was the parameter I was looking for. Increasing this queue it is
>     > :-) !
> 
> Assuming that you aren't sending packets faster than your effective bit
> rate, increasing the txqueuelen just increases the latency.
> 
> A queue length of 3 or 4 ought to be enough to keep the transmitter busy.
> Setting it to 1000 just causes bufferbloat.
> 

Okay, I think we need to talk about this.

_All_ currently supported transceivers have only _one_ framebuffer on
the hardware side.

I would agree that 3-4 frames should keep the transmitter busy, because
there is only one framebuffer, 250 kbit/s (even slower on sub-1-GHz),
and 3-4 skb's held in the background.
But the first question for me now is:

What really happens if the queue is full and we call dev_queue_xmit,
e.g. at [0]?

---

btw:
The error handling of dev_queue_xmit looks a little confused [1]. After
grepping the kernel source, I see that some subsystems completely
ignore the return value of dev_queue_xmit. Maybe we should think about
"correct" handling of its return value.
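
For illustration only, a hypothetical caller that actually checks the
return value (the stats update is made up):

	int ret = dev_queue_xmit(skb);

	/* dev_queue_xmit consumes the skb in all cases; NET_XMIT_DROP
	 * and NET_XMIT_CN come from the qdisc, negative values are
	 * errors.
	 */
	if (ret != NET_XMIT_SUCCESS)
		dev->stats.tx_dropped++; /* hypothetical bookkeeping */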

---

After some digging inside the code: the selected qdisc decides what
happens when the queue is full.

On most systems the default qdisc is pfifo (though I remember that
systemd changed the default to fq_codel).

The pfifo implementation is at [2]; you can see that when the queue is
full it will drop one skb (the oldest in the queue) and insert the new
one.
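
From memory, the head-drop style enqueue is roughly the following (see
[2] for the real code):

	static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc *sch)
	{
		if (likely(skb_queue_len(&sch->q) < sch->limit))
			return qdisc_enqueue_tail(skb, sch);

		/* queue full: drop the oldest skb (at the head) to
		 * make room for the new one at the tail
		 */
		__qdisc_queue_drop_head(sch, &sch->q);
		qdisc_qstats_drop(sch);
		qdisc_enqueue_tail(skb, sch);

		return NET_XMIT_CN;
	}

Note that the dropped skb may be a 6LoWPAN fragment whose siblings are
still in the queue.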

Now with fragmentation and pfifo:

 - Queue length: 4
 - ~Payload per 802.15.4 frame: 88 bytes

Maximum payload 88 * 4 = 352 bytes.

Fragmentation works as follows:

IPv6 packet comes in ->
  Split into fragments ->
    dev_queue_xmit ...
    dev_queue_xmit ...
    dev_queue_xmit ...
    (Very fast, no delay or anything. Will queue into the wpan interface.)
IPv6 packet comes in ->
 ...

This fills up the queue, and with a payload above 352 bytes it produces
invalid fragments, because pfifo will drop frames that are part of a
whole fragmented packet.

Maybe we need a different strategy for queueing fragments into the wpan
interface, or a different default qdisc for the wpan interface.

I am not an expert, but adding a delay has the same effect as
increasing tx_queue_len.

Maybe we first need to figure out the "typical, practical" parameters
we are working with.

I know that most IoT MCUs don't support IPv6 fragmentation yet, so an
IPv6 payload above 1280 bytes is not typical; people hit such payloads
e.g. during discovery (the .well-known/core stuff) in CoAP.

I don't know which goal is more important here: "avoid latency" or
"avoid invalid fragments (because we are not fast enough to send all of
them)".

What I do know is that when we drop one fragment, we could also drop
every fragment in the queue that belongs to the same fragmented 6LoWPAN
packet; it seems this is not handled currently.

- Alex

[0] http://lxr.free-electrons.com/source/net/ieee802154/6lowpan/tx.c#L138
[1] http://lxr.free-electrons.com/source/net/core/dev.c#L3247
[2] http://lxr.free-electrons.com/source/net/sched/sch_fifo.c#L38



* Re: UDP stress-testing
  2016-04-13  8:28       ` Alexander Aring
@ 2016-04-13 12:50         ` Michael Richardson
  2016-04-13 20:34           ` Alexander Aring
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Richardson @ 2016-04-13 12:50 UTC (permalink / raw)
  To: Alexander Aring; +Cc: linux-wpan, Peter Kietzmann



Alexander Aring <alex.aring@gmail.com> wrote:
    > _All_ currently supported transceivers have only _one_ framebuffer
    > on the hardware side.

    > I would agree that 3-4 frames should keep the transmitter busy,
    > because there is only one framebuffer, 250 kbit/s (even slower on
    > sub-1-GHz), and 3-4 skb's held in the background.

If we are counting frames rather than packets, then I agree that 3-4 is
too low; we should accommodate 2-3 full-sized IPv6 packets' worth of
fragments: 1280 / 88 ≈ 14 fragments, and 14 * 3 = 42.
So I suggest somewhere between 32 and 48 as a good default.

Well, we really ought to use BQL (Byte Queue Limits) to get the right number!
        https://lwn.net/Articles/469652/

    > On most systems the default qdisc is pfifo (though I remember that
    > systemd changed the default to fq_codel).

yes, fq_codel is often the default now.

    > This fills up the queue, and with a payload above 352 bytes it
    > produces invalid fragments, because pfifo will drop frames that are
    > part of a whole fragmented packet.

I had assumed that the fragmentation happened after the qdisc.

    > What I do know is that when we drop one fragment, we could also
    > drop every fragment in the queue that belongs to the same
    > fragmented 6LoWPAN packet; it seems this is not handled currently.

This is very important, and it is why I had assumed that fragmentation
happened afterwards.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        | network architect  [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [




* Re: UDP stress-testing
  2016-04-13 12:50         ` Michael Richardson
@ 2016-04-13 20:34           ` Alexander Aring
  2016-04-14  8:53             ` Peter Kietzmann
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander Aring @ 2016-04-13 20:34 UTC (permalink / raw)
  To: Michael Richardson; +Cc: linux-wpan, Peter Kietzmann

Hi Michael,

On Wed, Apr 13, 2016 at 08:50:22AM -0400, Michael Richardson wrote:
> 
> Alexander Aring <alex.aring@gmail.com> wrote:
>     > _All_ currently supported transceivers have only _one_ framebuffer
>     > on the hardware side.
> 
>     > I would agree that 3-4 frames should keep the transmitter busy,
>     > because there is only one framebuffer, 250 kbit/s (even slower on
>     > sub-1-GHz), and 3-4 skb's held in the background.
> 
> If we are counting frames rather than packets, then I agree that 3-4 is
> too low; we should accommodate 2-3 full-sized IPv6 packets' worth of
> fragments: 1280 / 88 ≈ 14 fragments, and 14 * 3 = 42.
> So I suggest somewhere between 32 and 48 as a good default.
> 
> Well, we really ought to use BQL (Byte Queue Limits) to get the right number!
>         https://lwn.net/Articles/469652/
> 

OK, I looked at one example of adding BQL support to an Ethernet driver
[0]. For SoftMAC we have some helper functions for the flow-label stuff
[1].

Maybe that is a point to start from, to see what happens. :-)
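
As far as I understand it, the usual BQL pattern looks roughly like
this (hypothetical sketch, not driver code; see [0] for a real
conversion):

	/* in the xmit path, after handing the frame to the hardware: */
	netdev_sent_queue(dev, skb->len);

	/* in the tx-done path (for at86rf2xx that would be the TRX_END
	 * interrupt), report what actually left the device:
	 */
	netdev_completed_queue(dev, 1, len);

BQL then limits how many bytes the qdisc may push into the driver at
once, which keeps the standing queue short without a magic txqueuelen.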

>     > On most systems the default qdisc is pfifo (though I remember
>     > that systemd changed the default to fq_codel).
> 
> yes, fq_codel is often the default now.
> 
>     > This fills up the queue, and with a payload above 352 bytes it
>     > produces invalid fragments, because pfifo will drop frames that
>     > are part of a whole fragmented packet.
> 
> I had assumed that the fragmentation happened after the qdisc.
> 
>     > What I do know is that when we drop one fragment, we could also
>     > drop every fragment in the queue that belongs to the same
>     > fragmented 6LoWPAN packet; it seems this is not handled currently.
> 
> This is very important, and it is why I had assumed that fragmentation
> happened afterwards.
> 

We could implement the following:

1. Split the packet into fragments.
2. While splitting, don't call dev_queue_xmit directly.
   Instead keep a small private queue where we first collect all fragments.
3. Once all fragments are inside that queue, check whether the qdisc has
   enough free slots to store all of them, e.g.
    if (dev->tx_queue_len - qdisc_qlen(dev->qdisc) >= $FRAGMENTS)
    then
       for_each_skb_fragment
          dev_queue_xmit...
    else
       drop;

Something like that, but maybe I should first take a deeper look at
IPv6 fragmentation. I think it handles such a case somehow, too; if
not, IPv6 should handle it as well.

It should be a similar issue there and we can hopefully grab some code
from there.
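
A rough C sketch of step 3 (untested; lowpan_xmit_fragments and the
fragment list are made-up names for illustration, and the free-slot
check is racy against concurrent senders):

	static int lowpan_xmit_fragments(struct net_device *dev,
					 struct sk_buff_head *frags)
	{
		struct sk_buff *skb;

		/* all-or-nothing: only hand the burst to the qdisc if
		 * every fragment fits, otherwise drop the whole packet
		 * instead of an arbitrary subset
		 */
		if (dev->tx_queue_len - qdisc_qlen(dev->qdisc) <
		    skb_queue_len(frags)) {
			__skb_queue_purge(frags);
			return NET_XMIT_DROP;
		}

		while ((skb = __skb_dequeue(frags)))
			dev_queue_xmit(skb);

		return NET_XMIT_SUCCESS;
	}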

- Alex

[0] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=4a89ba04ecc6377696e4e26c1abc1cb5764decb9
[1] http://lxr.free-electrons.com/source/net/mac802154/util.c#L22


* Re: UDP stress-testing
  2016-04-13 20:34           ` Alexander Aring
@ 2016-04-14  8:53             ` Peter Kietzmann
  2016-04-14 14:59               ` Alexander Aring
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Kietzmann @ 2016-04-14  8:53 UTC (permalink / raw)
  To: Alexander Aring, Michael Richardson; +Cc: linux-wpan

Hi,

thanks for giving this topic some background! As I was the one who 
triggered this discussion, I just wanted to give you my 
Linux-network-newbie opinion.

IMO "avoiding invalid fragments" has higher priority than "avoiding 
latency" for the reason that dropping one fragment means dropping all 
fragments of an IPv6 packet. Processing 1280B IPv6 packets leads to 14 
6LoWPAN fragments which we'd have processed for nothing when dropping 
one of them fragments. Therefore I like Alex's proposal about queuing 
6Lowpan fragments and checking space in the actual xmit queue. However, 
I currently don't know much about IPv6 fragmentation but probably one 
thing to be careful about is interaction between 6lo and IPv6 
fragmentation and possible amplification effects which could affect the 
needed txqueue size.

A simple solution for my test cases would have been an easy way to check 
the status of the transmit queue.

Best regards and thanks again for your feedback
Peter


On 13.04.2016 at 22:34, Alexander Aring wrote:
> ...

-- 
Peter Kietzmann

Hamburg University of Applied Sciences
Dept. Informatik, Internet Technologies Group
Berliner Tor 7, 20099 Hamburg, Germany
Fon: +49-40-42875-8426
Web: http://www.haw-hamburg.de/inet


* Re: UDP stress-testing
  2016-04-14  8:53             ` Peter Kietzmann
@ 2016-04-14 14:59               ` Alexander Aring
  0 siblings, 0 replies; 10+ messages in thread
From: Alexander Aring @ 2016-04-14 14:59 UTC (permalink / raw)
  To: Peter Kietzmann; +Cc: Michael Richardson, linux-wpan

Hi,

On Thu, Apr 14, 2016 at 10:53:02AM +0200, Peter Kietzmann wrote:
> Hi,
> 
> thanks for giving this topic some background! As I was the one who
> triggered this discussion, I just wanted to give you my
> Linux-network-newbie opinion.
> 
> IMO "avoiding invalid fragments" has higher priority than "avoiding
> latency", because dropping one fragment means dropping all fragments of
> an IPv6 packet. Processing 1280 B IPv6 packets leads to 14 6LoWPAN
> fragments, which we would have processed for nothing when dropping one
> of them. Therefore I like Alex's proposal of queueing 6LoWPAN fragments
> and checking for space in the actual xmit queue. However, I currently
> don't know much about IPv6 fragmentation, but one thing to be careful
> about is probably the interaction between 6lo and IPv6 fragmentation
> and possible amplification effects, which could affect the needed
> txqueue size.
> 

I think for dealing with the fragments correctly inside the queue we
need to use "skb_shinfo(skb)->frag_list".

A kfree_skb will then take care of releasing all the other skb's used
for the fragments, and I hope this will deal with every other skb
inside the tx queue.
I need to dig more into whether using frag_list really fixes this issue.
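
A hypothetical sketch of the chaining (names made up; the idea works
because kfree_skb of the head walks and frees frag_list):

	static void lowpan_chain_frags(struct sk_buff *head,
				       struct sk_buff_head *frags)
	{
		struct sk_buff **next = &skb_shinfo(head)->frag_list;
		struct sk_buff *skb;

		/* link the remaining fragments behind the head skb so
		 * they share its fate on free
		 */
		while ((skb = __skb_dequeue(frags))) {
			*next = skb;
			next = &skb->next;
		}
		*next = NULL;
	}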

In the case of IPv6 fragmentation above the wpan interface queue (the
lowpan interface doesn't have any tx queue; dev_queue_xmit directly
calls the lowpan interface's xmit), it's another issue and not an easy
one, because there we have fragmentation (IPv6) on top of fragmentation
(6LoWPAN), and notifying the correct IPv6 fragments about fragments
dropped on the wpan interface sounds more complicated.

I would first try to fix 6LoWPAN fragmentation, which is more likely to
occur than IPv6 fragmentation over 6LoWPAN.

- Alex

