* AF_XDP Side Of Project Breaking With XDP-Native
@ 2020-05-22 15:22 Christian Deacon
  2020-05-22 15:51 ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Christian Deacon @ 2020-05-22 15:22 UTC (permalink / raw)
  To: xdp-newbies

Hey everyone,

I have a VPS hosted by Vultr that appears to support XDP-native. I'm 
trying to get an XDP project loaded onto this VPS and while the XDP 
program itself works fine with XDP-native, the AF_XDP side of the 
project breaks. Loading everything in SKB/XDP-generic mode doesn't 
result in the AF_XDP side breaking.

Initially, I thought this may be due to the project using an outdated 
LibBPF submodule and outdated AF_XDP code. Therefore, I tried creating a 
test AF_XDP project using the latest code from XDP-Tutorial:

https://github.com/gamemann/AF_XDP-Test

When loading the test XDP program using SKB/XDP-generic mode, all 
traffic goes over RX queue #0. However, when XDP-native is loaded, all 
traffic goes over RX queue #1. When XDP-native is loaded, I can still 
attach the AF_XDP sockets to RX queues #0 and #1 (RX queue #0 sees no 
traffic, though).

The problem I'm having is that after the first load, I cannot reattach the 
AF_XDP socket to RX queue #1. I receive the error "Device or resource 
busy". Here's an image showing this:

https://g.gflclan.com/2764-05-22-2020-BLkiLcUW.png

I have to reboot the VPS if I want to reattach the AF_XDP socket to 
queue #1.

I believe I'm cleaning up the AF_XDP socket(s) correctly here:

https://github.com/gamemann/AF_XDP-Test/blob/master/src/afxdp_user.c#L428

https://github.com/gamemann/AF_XDP-Test/blob/master/src/afxdp_user.c#L307

Initially, I only cleaned up the interface on line 307. However, I've 
been trying to add more cleanup code to see if it makes any difference.
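
For reference, the cleanup roughly follows the pattern below. This is only 
a minimal sketch using libbpf's xsk helpers (as in the XDP-Tutorial code my 
test is based on), not the exact code from the repository; `xsk`, `umem`, 
`umem_area`, `ifindex`, and `xdp_flags` all come from the setup code:

```
#include <stdlib.h>

#include <bpf/libbpf.h>
#include <bpf/xsk.h>

/* Minimal cleanup sketch: tear down the socket, then its UMEM, then
 * detach the XDP program so the RX queue is fully released on exit. */
static void cleanup_afxdp(struct xsk_socket *xsk, struct xsk_umem *umem,
                          void *umem_area, int ifindex, __u32 xdp_flags)
{
    /* The socket must go before the UMEM it is bound to. */
    if (xsk)
        xsk_socket__delete(xsk);

    if (umem)
        xsk_umem__delete(umem);

    /* Free the buffer area that backed the UMEM. */
    free(umem_area);

    /* An fd of -1 detaches whatever XDP program is attached. */
    bpf_set_link_xdp_fd(ifindex, -1, xdp_flags);
}
```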

I've tried kernels `5.4.0-21-generic`, `5.4.0-26-generic`, and 
`5.6.14-050614-generic` (current). The VPS is also running on Ubuntu 
20.04 LTS.

I'm honestly not sure what I'm doing wrong here. I'm new to AF_XDP. 
Therefore, I do apologize if I'm missing something obvious.

If I'm not doing anything wrong here, is it possible there's a bug with 
the NIC's driver? Unfortunately, I'm not sure which driver the cluster's 
NIC is using. If my code is fine, I will try reaching out to our hosting 
provider to see if I can get this information. If that turns out to be the 
case, I'd suspect a bug in the NIC driver's cleanup code.

Here's the output from `ethtool -l ens3`:

```
root@SEAV21:~/AF_XDP-Test# ethtool -l ens3
Channel parameters for ens3:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       1
```

One other question I have is whether anyone knows of a way to get the exact 
RX queue count. As of right now, I create the same number of AF_XDP 
sockets as cores. However, servers sometimes have fewer RX queues than CPUs.
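
The closest thing I've found so far is querying the channel counts the same 
way `ethtool -l` does (the ETHTOOL_GCHANNELS ioctl), roughly like the sketch 
below, but I'm not sure whether that's the intended approach:

```
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Rough sketch: read the current channel counts like `ethtool -l` does. */
static int get_rx_queue_count(const char *ifname)
{
    struct ethtool_channels ch = { .cmd = ETHTOOL_GCHANNELS };
    struct ifreq ifr;
    int fd, ret;

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&ch;

    ret = ioctl(fd, SIOCETHTOOL, &ifr);
    close(fd);
    if (ret)
        return -1;

    /* virtio_net only reports combined channels, so fall back to those. */
    return ch.rx_count ? ch.rx_count : ch.combined_count;
}
```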

If you need additional information, please let me know.

Any help is highly appreciated and thank you for your time!


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-22 15:22 AF_XDP Side Of Project Breaking With XDP-Native Christian Deacon
@ 2020-05-22 15:51 ` Jesper Dangaard Brouer
  2020-05-22 16:12   ` Christian Deacon
  0 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2020-05-22 15:51 UTC (permalink / raw)
  To: Christian Deacon; +Cc: brouer, xdp-newbies

On Fri, 22 May 2020 10:22:10 -0500
Christian Deacon <gamemann@gflclan.com> wrote:

> If I'm not doing anything wrong here, is it possible there's a bug with 
> the NIC's driver? Unfortunately, I'm not sure which driver the cluster's 
> NIC is using. 

Please run:
 ethtool -i ens3

And provide the output, as it will tell you which NIC driver you are using.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-22 15:51 ` Jesper Dangaard Brouer
@ 2020-05-22 16:12   ` Christian Deacon
  2020-05-24 17:35     ` David Ahern
  0 siblings, 1 reply; 9+ messages in thread
From: Christian Deacon @ 2020-05-22 16:12 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: xdp-newbies

Hey Jesper,


I apologize for not providing that information before. The driver is 
`virtio_net`. Unfortunately, I'm not sure what the NIC driver on the 
cluster is. Once my program's code is confirmed to be correct, I will 
try reaching out to our hosting provider to see if they can provide this 
information, in case the NIC's driver is the suspected cause of this issue.

```

root@SEAV21:~/AF_XDP-Test# ethtool -i ens3
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
```


Thank you!


On 5/22/2020 10:51 AM, Jesper Dangaard Brouer wrote:
> On Fri, 22 May 2020 10:22:10 -0500
> Christian Deacon <gamemann@gflclan.com> wrote:
>
>> If I'm not doing anything wrong here, is it possible there's a bug with
>> the NIC's driver? Unfortunately, I'm not sure which driver the cluster's
>> NIC is using.
> Please run:
>   ethtool -i ens3
>
> And provide the output, as it will tell you which NIC driver you are using.
>


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-22 16:12   ` Christian Deacon
@ 2020-05-24 17:35     ` David Ahern
  2020-05-24 18:13       ` Christian Deacon
  0 siblings, 1 reply; 9+ messages in thread
From: David Ahern @ 2020-05-24 17:35 UTC (permalink / raw)
  To: Christian Deacon, Jesper Dangaard Brouer; +Cc: xdp-newbies

On 5/22/20 10:12 AM, Christian Deacon wrote:
> Hey Jesper,
> 
> 
> I apologize for not providing that information before. The driver is
> `virtio_net`. Unfortunately, I'm not sure what the NIC driver on the
> cluster is. Once my program's code is confirmed to be correct, I will
> try reaching out to our hosting provider to see if they can provide this
> information, in case the NIC's driver is the suspected cause of this issue.
> 
> ```
> 
> root@SEAV21:~/AF_XDP-Test# ethtool -i ens3
> driver: virtio_net
> version: 1.0.0
> firmware-version:
> expansion-rom-version:
> bus-info: 0000:00:03.0
> supports-statistics: yes
> supports-test: no
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: no
> ```
> 
> 

Is this a 4-cpu VM or 8 cpu VM?

A previous response had:

root@SEAV21:~/AF_XDP-Test# ethtool -l ens3
Channel parameters for ens3:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       1

The 8 for pre-set max says the nic has 8 queues. If it is a 4-vcpu vm,
then try

ethtool -L ens3 combined 4

which leaves 4 for xdp. If it is an 8 cpu VM I believe you are out of
luck given current requirements.


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-24 17:35     ` David Ahern
@ 2020-05-24 18:13       ` Christian Deacon
  2020-05-24 18:58         ` David Ahern
  0 siblings, 1 reply; 9+ messages in thread
From: Christian Deacon @ 2020-05-24 18:13 UTC (permalink / raw)
  To: David Ahern, Jesper Dangaard Brouer; +Cc: xdp-newbies

Hey David,


Thank you for your response!


The VM only has one CPU right now. It's possible the cluster has 8 RX 
queues I'd imagine, but I don't have that information sadly. I executed 
the same command on another VM I have with two CPUs (not being used for 
the XDP-native testing):


```

root@Test:~# ethtool -l ens3
Channel parameters for ens3:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       2
```


I did receive this from my hosting provider when asking which NIC driver 
they use:


```

Hello,

Thanks for your inquiry.

We do not pass through the host node's NIC to your VPS.

Because of that, it's not relevant what NIC driver we are using on our 
host nodes.

Thanks for your patience.

```


To my understanding, if the NIC isn't offloading packets directly to our 
VPS, wouldn't this defeat the purpose of using XDP-native over 
XDP-generic/SKB mode for performance in our case? I was under the 
assumption that was the point of XDP-native. If so, I'm not sure why the 
XDP program itself loads and runs fine with XDP-native while only the 
AF_XDP side has issues.


I will admit I've been wondering what the difference is between 
`XDP_FLAGS_DRV_MODE` (XDP-native) and `XDP_FLAGS_HW_MODE` since I 
thought XDP-native was offloading packets from the NIC.


Thank you for the help!


On 5/24/2020 12:35 PM, David Ahern wrote:
> On 5/22/20 10:12 AM, Christian Deacon wrote:
>> Hey Jesper,
>>
>>
>> I apologize for not providing that information before. The driver is
>> `virtio_net`. Unfortunately, I'm not sure what the NIC driver on the
>> cluster is. Once my program's code is confirmed to be correct, I will
>> try reaching out to our hosting provider to see if they can provide this
>> information, in case the NIC's driver is the suspected cause of this issue.
>>
>> ```
>>
>> root@SEAV21:~/AF_XDP-Test# ethtool -i ens3
>> driver: virtio_net
>> version: 1.0.0
>> firmware-version:
>> expansion-rom-version:
>> bus-info: 0000:00:03.0
>> supports-statistics: yes
>> supports-test: no
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: no
>> ```
>>
>>
> Is this a 4-cpu VM or 8 cpu VM?
>
> A previous response had:
>
> root@SEAV21:~/AF_XDP-Test# ethtool -l ens3
> Channel parameters for ens3:
> Pre-set maximums:
> RX:             0
> TX:             0
> Other:          0
> Combined:       8
> Current hardware settings:
> RX:             0
> TX:             0
> Other:          0
> Combined:       1
>
> The 8 for pre-set max says the nic has 8 queues. If it is a 4-vcpu vm,
> then try
>
> ethtool -L ens3 combined 4
>
> which leaves 4 for xdp. If it is an 8 cpu VM I believe you are out of
> luck given current requirements.


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-24 18:13       ` Christian Deacon
@ 2020-05-24 18:58         ` David Ahern
  2020-05-24 19:27           ` Christian Deacon
  0 siblings, 1 reply; 9+ messages in thread
From: David Ahern @ 2020-05-24 18:58 UTC (permalink / raw)
  To: Christian Deacon, Jesper Dangaard Brouer; +Cc: xdp-newbies

On 5/24/20 12:13 PM, Christian Deacon wrote:
> Hey David,
> 
> 
> Thank you for your response!
> 
> 
> The VM only has one CPU right now. It's possible the cluster has 8 RX
> queues I'd imagine, but I don't have that information sadly. I executed
> the same command on another VM I have with two CPUs (not being used for
> the XDP-native testing):
> 
> 
> ```
> 
> root@Test:~# ethtool -l ens3
> Channel parameters for ens3:
> Pre-set maximums:
> RX:             0
> TX:             0
> Other:          0
> Combined:       8
> Current hardware settings:
> RX:             0
> TX:             0
> Other:          0
> Combined:       2
> ```

That's odd that they give you 8 queues for a 1 cpu VM. This is Vultr? I
may have to spin up a VM there and check it out.

> 
> 
> I did receive this from my hosting provider when asking which NIC driver
> they use:

...
> 
> 

I agree with the provider - the hardware NICs are not relevant to the VM.

> To my understanding, if the NIC isn't offloading packets directly to our
> VPS, wouldn't this defeat the purpose of using XDP-native over
> XDP-generic/SKB mode for performance in our case? I was under the
> assumption that was the point of XDP-native. If so, I'm not sure why the
> XDP program itself loads and runs fine with XDP-native while only the
> AF_XDP side has issues.

The host is essentially the network to your VM / VPS. What data
structure it uses is not relevant to what you want to do inside the VM.
Right now there are a lot of missing features for the host OS to rely
solely on XDP frames.

Inside the VM kernel, efficiency of XDP depends on what you are trying
to do.

A 1 or 2-cpu VM with 8 queues meets the resource requirement for XDP
programs; I am not familiar with the details on AF_XDP to know if some
kind of support is missing inside the virtio driver.

> 
> 
> I will admit I've been wondering what the difference is between
> `XDP_FLAGS_DRV_MODE` (XDP-native) and `XDP_FLAGS_HW_MODE` since I
> thought XDP-native was offloading packets from the NIC.

H/W mode means the program is pushed down to the hardware. I believe
only netronome's nic currently does offload. Some folks have discussed
offloading programs for the virtio NIC, but that does not work today.
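
For illustration, the mode is chosen purely by the flag you pass when
attaching the program; a rough sketch with libbpf (prog_fd and ifindex come
from however you loaded the program):

        #include <bpf/libbpf.h>
        #include <linux/if_link.h>

        /* rough sketch - the mode is just the flag at attach time */
        static int attach_xdp(int ifindex, int prog_fd)
        {
                /* XDP_FLAGS_SKB_MODE: generic, works everywhere, slowest
                 * XDP_FLAGS_DRV_MODE: native, needs driver support
                 *                     (virtio_net has it), program still
                 *                     runs in the VM's kernel
                 * XDP_FLAGS_HW_MODE:  offload, program runs on the NIC */
                return bpf_set_link_xdp_fd(ifindex, prog_fd,
                                           XDP_FLAGS_DRV_MODE);
        }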


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-24 18:58         ` David Ahern
@ 2020-05-24 19:27           ` Christian Deacon
  2020-05-24 20:23             ` David Ahern
  0 siblings, 1 reply; 9+ messages in thread
From: Christian Deacon @ 2020-05-24 19:27 UTC (permalink / raw)
  To: David Ahern, Jesper Dangaard Brouer; +Cc: xdp-newbies

Hey David,


Thank you for your response and the information! That cleared a lot of 
things up for me!


Yes, this is with Vultr.


As of right now, the packet processing software I'm using forwards 
traffic to another server via XDP_TX. It also drops any traffic via 
XDP_DROP that doesn't match our filters (these filters aren't included 
in the open-source project linked below). Do you know if there would be 
any real performance advantage using XDP-native over XDP-generic in our 
case with the `virtio_net` driver for XDP_TX and XDP_DROP actions? We're 
currently battling (D)DoS attacks. Therefore, I'm trying to do 
everything I can to drop these packets as fast as possible.
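
For a rough idea of the kind of logic involved, here's a minimal sketch 
(not the actual Compressor filter; the allowed port below is only a 
placeholder):

```
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_drop_filter(struct xdp_md *ctx)
{
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end || iph->protocol != IPPROTO_UDP)
        return XDP_PASS;

    struct udphdr *udph = (void *)iph + iph->ihl * 4;
    if ((void *)(udph + 1) > data_end)
        return XDP_PASS;

    /* Drop anything that doesn't match the allowed port (placeholder). */
    if (udph->dest != bpf_htons(27015))
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```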


If you would like to inspect the source code for this project, here's a 
link to the GitHub repository:


https://github.com/Dreae/compressor


I'm also working on a bigger open-source project with a friend that'll 
drop traffic based off of filtering rules with XDP (it'll be version two 
of the project I linked above) and we plan to use it on VMs with the 
`virtio_net` driver. Therefore, it'll be useful to know if XDP-native 
will provide a performance advantage over XDP-generic when dropping packets.


Thank you for all your help so far. I appreciate it!


On 5/24/2020 1:58 PM, David Ahern wrote:
> On 5/24/20 12:13 PM, Christian Deacon wrote:
>> Hey David,
>>
>>
>> Thank you for your response!
>>
>>
>> The VM only has one CPU right now. It's possible the cluster has 8 RX
>> queues I'd imagine, but I don't have that information sadly. I executed
>> the same command on another VM I have with two CPUs (not being used for
>> the XDP-native testing):
>>
>>
>> ```
>>
>> root@Test:~# ethtool -l ens3
>> Channel parameters for ens3:
>> Pre-set maximums:
>> RX:             0
>> TX:             0
>> Other:          0
>> Combined:       8
>> Current hardware settings:
>> RX:             0
>> TX:             0
>> Other:          0
>> Combined:       2
>> ```
> That's odd that they give you 8 queues for a 1 cpu VM. This is Vultr? I
> may have to spin up a VM there and check it out.
>
>>
>> I did receive this from my hosting provider when asking which NIC driver
>> they use:
> ...
>>
> I agree with the provider - the hardware NICs are not relevant to the VM.
>
>> To my understanding, if the NIC isn't offloading packets directly to our
>> VPS, wouldn't this defeat the purpose of using XDP-native over
>> XDP-generic/SKB mode for performance in our case? I was under the
>> assumption that was the point of XDP-native. If so, I'm not sure why the
>> XDP program itself loads and runs fine with XDP-native while only the
>> AF_XDP side has issues.
> The host is essentially the network to your VM / VPS. What data
> structure it uses is not relevant to what you want to do inside the VM.
> Right now there are a lot of missing features for the host OS to rely
> solely on XDP frames.
>
> Inside the VM kernel, efficiency of XDP depends on what you are trying
> to do.
>
> A 1 or 2-cpu VM with 8 queues meets the resource requirement for XDP
> programs; I am not familiar with the details on AF_XDP to know if some
> kind of support is missing inside the virtio driver.
>
>>
>> I will admit I've been wondering what the difference is between
>> `XDP_FLAGS_DRV_MODE` (XDP-native) and `XDP_FLAGS_HW_MODE` since I
>> thought XDP-native was offloading packets from the NIC.
> H/W mode means the program is pushed down to the hardware. I believe
> only netronome's nic currently does offload. Some folks have discussed
> offloading programs for the virtio NIC, but that does not work today.


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-24 19:27           ` Christian Deacon
@ 2020-05-24 20:23             ` David Ahern
  2020-05-24 21:25               ` Christian Deacon
  0 siblings, 1 reply; 9+ messages in thread
From: David Ahern @ 2020-05-24 20:23 UTC (permalink / raw)
  To: Christian Deacon; +Cc: Jesper Dangaard Brouer, xdp-newbies

On 5/24/20 1:27 PM, Christian Deacon wrote:
> As of right now, the packet processing software I'm using forwards
> traffic to another server via XDP_TX. It also drops any traffic via
> XDP_DROP that doesn't match our filters (these filters aren't included
> in the open-source project linked below). Do you know if there would be
> any real performance advantage using XDP-native over XDP-generic in our
> case with the `virtio_net` driver for XDP_TX and XDP_DROP actions? We're
> currently battling (D)DoS attacks. Therefore, I'm trying to do
> everything I can to drop these packets as fast as possible.

native will be much faster than generic.

> 
> 
> If you would like to inspect the source code for this project, here's a
> link to the GitHub repository:
> 
> 
> https://github.com/Dreae/compressor
> 
> 
> I'm also working on a bigger open-source project with a friend that'll
> drop traffic based off of filtering rules with XDP (it'll be version two
> of the project I linked above) and we plan to use it on VMs with the
> `virtio_net` driver. Therefore, it'll be useful to know if XDP-native
> will provide a performance advantage over XDP-generic when dropping
> packets.
> 

Looking at:
https://github.com/Dreae/compressor/blob/master/src/compressor_filter_kern.c

A packet parser would simplify that code a lot - and make it more
readable. For example:

https://github.com/dsahern/bpf-progs/blob/master/ksrc/flow.c
https://github.com/dsahern/bpf-progs/blob/master/ksrc/flow.h

It is modeled to a huge degree after the kernel's flow dissector. It
needs to be extended to handle IPIP, but that is straightforward. The
flow struct can also be expanded to save the various header locations. You
don't care about IPv6, so you could wrap the v6 code in #ifdef CONFIG
options to compile it out.

I have an acl program that uses it, but I'm making too many changes to it
right now to make it public. Example use of the flow parser:

        void *data_end = (void *)(long)ctx->data_end;
        void *data = (void *)(long)ctx->data;
        struct ethhdr *eth = data;
        struct flow fl = {};
        void *nh = eth + 1;
        u16 h_proto;
        int rc;

        if (nh > data_end)
                return true;

        h_proto = eth->h_proto;
	/* vlan handling here if relevant */

        rc = parse_pkt(&fl, h_proto, nh, data_end, 0);
        if (rc)
                // you might just want DROP here
                return rc > 0 ? XDP_PASS : XDP_DROP;

        ...
        make decisions based on L3 address family (AF_INET), L4 protocol
, etc


* Re: AF_XDP Side Of Project Breaking With XDP-Native
  2020-05-24 20:23             ` David Ahern
@ 2020-05-24 21:25               ` Christian Deacon
  0 siblings, 0 replies; 9+ messages in thread
From: Christian Deacon @ 2020-05-24 21:25 UTC (permalink / raw)
  To: David Ahern; +Cc: Jesper Dangaard Brouer, xdp-newbies

Hey David,


Thank you for this!


I will be looking into implementing the packet parser in my current XDP 
projects to simplify the code :)


In regard to the AF_XDP issue, now that I know it should work with the 
`virtio_net` driver under XDP-native, I'm not sure why I keep receiving a 
"Device or resource busy" error after the first XDP attach. I also have 
a second issue with Compressor, AF_XDP, and XDP-native, which is that the 
AF_XDP program isn't sending packets back to the client via TX. It works 
fine with XDP-generic, though.


https://github.com/Dreae/compressor/blob/master/src/compressor_cache_user.c#L283


I discovered that `rcvd` returns 0 when XDP-native is enabled, but 
returns a number higher than 0 when using XDP-generic. I'd imagine this 
is due to outdated AF_XDP code, though. I'll continue digging into that 
issue after the first one (the "Device or resource busy" error) is 
resolved.
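
For context, `rcvd` comes straight from the peek on the RX ring, roughly 
like this (a sketch using libbpf's xsk helpers, not the exact Compressor 
code):

```
#include <poll.h>

#include <bpf/xsk.h>

/* Sketch of the receive path `rcvd` comes from; xsk and rx are the AF_XDP
 * socket and its RX ring from setup. */
static void rx_once(struct xsk_socket *xsk, struct xsk_ring_cons *rx)
{
    __u32 idx_rx = 0;
    unsigned int rcvd;

    rcvd = xsk_ring_cons__peek(rx, 64, &idx_rx);
    if (!rcvd) {
        /* With XDP-native this stays at 0 for me even though the XDP
         * program itself sees the traffic. */
        struct pollfd pfd = { .fd = xsk_socket__fd(xsk), .events = POLLIN };
        poll(&pfd, 1, 1000);
        return;
    }

    /* ... process/TX the rcvd descriptors starting at idx_rx ... */

    xsk_ring_cons__release(rx, rcvd);
}
```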


Thank you again for all the help!


On 5/24/2020 3:23 PM, David Ahern wrote:
> On 5/24/20 1:27 PM, Christian Deacon wrote:
>> As of right now, the packet processing software I'm using forwards
>> traffic to another server via XDP_TX. It also drops any traffic via
>> XDP_DROP that doesn't match our filters (these filters aren't included
>> in the open-source project linked below). Do you know if there would be
>> any real performance advantage using XDP-native over XDP-generic in our
>> case with the `virtio_net` driver for XDP_TX and XDP_DROP actions? We're
>> currently battling (D)DoS attacks. Therefore, I'm trying to do
>> everything I can to drop these packets as fast as possible.
> native will be much faster than generic.
>
>>
>> If you would like to inspect the source code for this project, here's a
>> link to the GitHub repository:
>>
>>
>> https://github.com/Dreae/compressor
>>
>>
>> I'm also working on a bigger open-source project with a friend that'll
>> drop traffic based off of filtering rules with XDP (it'll be version two
>> of the project I linked above) and we plan to use it on VMs with the
>> `virtio_net` driver. Therefore, it'll be useful to know if XDP-native
>> will provide a performance advantage over XDP-generic when dropping
>> packets.
>>
> Looking at:
> https://github.com/Dreae/compressor/blob/master/src/compressor_filter_kern.c
>
> A packet parser would simplify that code a lot - and make it more
> readable. For example:
>
> https://github.com/dsahern/bpf-progs/blob/master/ksrc/flow.c
> https://github.com/dsahern/bpf-progs/blob/master/ksrc/flow.h
>
> It is modeled to a huge degree after the kernel's flow dissector. It
> needs to be extended to handle IPIP, but that is straightforward. The
> flow struct can also be expanded to save the various header locations. You
> don't care about IPv6, so you could wrap the v6 code in #ifdef CONFIG
> options to compile it out.
>
> I have an acl program that uses it, but I'm making too many changes to it
> right now to make it public. Example use of the flow parser:
>
>          void *data_end = (void *)(long)ctx->data_end;
>          void *data = (void *)(long)ctx->data;
>          struct ethhdr *eth = data;
>          struct flow fl = {};
>          void *nh = eth + 1;
>          u16 h_proto;
>          int rc;
>
>          if (nh > data_end)
>                  return true;
>
>          h_proto = eth->h_proto;
> 	/* vlan handling here if relevant */
>
>          rc = parse_pkt(&fl, h_proto, nh, data_end, 0);
>          if (rc)
>                  // you might just want DROP here
>                  return rc > 0 ? XDP_PASS : XDP_DROP;
>
>          ...
>          make decisions based on L3 address family (AF_INET), L4 protocol
> , etc


end of thread, other threads:[~2020-05-24 21:25 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-22 15:22 AF_XDP Side Of Project Breaking With XDP-Native Christian Deacon
2020-05-22 15:51 ` Jesper Dangaard Brouer
2020-05-22 16:12   ` Christian Deacon
2020-05-24 17:35     ` David Ahern
2020-05-24 18:13       ` Christian Deacon
2020-05-24 18:58         ` David Ahern
2020-05-24 19:27           ` Christian Deacon
2020-05-24 20:23             ` David Ahern
2020-05-24 21:25               ` Christian Deacon
