netdev.vger.kernel.org archive mirror
* [VXLAN] [MLX5] Lost traffic and issues
@ 2020-02-28 15:02 Ian Kumlien
  2020-03-02 19:10 ` Saeed Mahameed
  0 siblings, 1 reply; 5+ messages in thread
From: Ian Kumlien @ 2020-02-28 15:02 UTC (permalink / raw)
  To: Linux Kernel Network Developers; +Cc: Saeed Mahameed, Leon Romanovsky, kliteyn

Hi,

Including netdev - to see if someone else has a clue.

We have a few machines in a cloud, and when upgrading from 4.16.7 to
5.4.15 we ran into unexpected and intermittent problems.
(I have tested 5.5.6 and the problems persist.)

What we saw, using several monitoring points, was that traffic
disappeared after the last point where we could still see it:
tcpdumping on "bond0".

We had tcpdump running on:
1, The DHCP nodes (local tap interfaces)
2, The router instances on the L3 node
3, The local node where the VM runs (tap, bridge and eventually the tap
interface dumping the VXLAN traffic)
4, Port mirroring on the 100gbit switch, to see what ended up on the
physical wire.

What we can see is that only two steps of the four-step DHCP handshake
work; the fourth step gets dropped "on the nic".

We can see it go out on bond0, VLAN-tagged and inside a VXLAN packet -
however, the switch never sees it.
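
For reference, the captures were roughly along these lines (the tap and
bridge names below are made-up examples; bond0 is real, and 4789 assumes
the default VXLAN port):

tcpdump -ni tap1234abcd 'udp port 67 or udp port 68'   # VM-facing tap, DHCP
tcpdump -ni brq1234abcd 'udp port 67 or udp port 68'   # bridge on the compute node
tcpdump -eni bond0 'udp port 4789'                     # VXLAN-encapsulated traffic leaving the host
tcpdump -eni bond0 'vlan and udp port 4789'            # same, matching the tagged VLAN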

There have been a few mlx5 changes wrt VXLAN that could be culprits,
but it's really hard to judge.

dmesg |grep mlx
[    2.231399] mlx5_core 0000:0b:00.0: firmware version: 16.26.1040
[    2.912595] mlx5_core 0000:0b:00.0: Rate limit: 127 rates are
supported, range: 0Mbps to 97656Mbps
[    2.935012] mlx5_core 0000:0b:00.0: Port module event: module 0,
Cable plugged
[    2.949528] mlx5_core 0000:0b:00.1: firmware version: 16.26.1040
[    3.638647] mlx5_core 0000:0b:00.1: Rate limit: 127 rates are
supported, range: 0Mbps to 97656Mbps
[    3.661206] mlx5_core 0000:0b:00.1: Port module event: module 1,
Cable plugged
[    3.675562] mlx5_core 0000:0b:00.0: MLX5E: StrdRq(1) RqSz(8)
StrdSz(64) RxCqeCmprss(0)
[    3.846149] mlx5_core 0000:0b:00.1: MLX5E: StrdRq(1) RqSz(8)
StrdSz(64) RxCqeCmprss(0)
[    4.021738] mlx5_core 0000:0b:00.0 enp11s0f0: renamed from eth0
[    4.021962] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0

I have tried turning all offloads off, but the problem still persists -
it's really weird that only some packets seem to be affected.
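
(In case it matters, "all offloads off" was roughly the following - the
exact feature names vary between kernels and NICs, so check "ethtool -k"
for what your interface actually exposes:)

for f in rx tx tso gso gro lro rxvlan txvlan rxhash \
         tx-udp_tnl-segmentation tx-udp_tnl-csum-segmentation; do
    ethtool -K enp11s0f0 "$f" off
done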

To be clear, the bond0 interface is 2*100gbit, using 802.3ad (LACP)
with layer2+3 hashing.
This seems to be offloaded into the nic (can that be turned off?), and
messages about modifying the "lag map" were quite frequent until we did
a firmware upgrade - even with upgraded firmware they continued, but to
a lesser extent.
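
(For completeness, the bond is set up roughly like this - iproute2
syntax as an example, the real config comes from our deployment tooling,
and the second port name enp11s0f1 is assumed:)

ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer2+3
ip link set enp11s0f0 down; ip link set enp11s0f0 master bond0
ip link set enp11s0f1 down; ip link set enp11s0f1 master bond0
ip link set bond0 up
cat /proc/net/bonding/bond0   # confirms the negotiated LACP state and hash policy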

With 5.5.7 approaching, we would want a path forward to handle this...


* Re: [VXLAN] [MLX5] Lost traffic and issues
  2020-02-28 15:02 [VXLAN] [MLX5] Lost traffic and issues Ian Kumlien
@ 2020-03-02 19:10 ` Saeed Mahameed
  2020-03-02 22:45   ` Ian Kumlien
  0 siblings, 1 reply; 5+ messages in thread
From: Saeed Mahameed @ 2020-03-02 19:10 UTC (permalink / raw)
  To: Roi Dayan, ian.kumlien, netdev; +Cc: Yevgeny Kliteynik, Leon Romanovsky

On Fri, 2020-02-28 at 16:02 +0100, Ian Kumlien wrote:
> Hi,
> 
> Including netdev - to see if someone else has a clue.
> 
> We have a few machines in a cloud, and when upgrading from 4.16.7 to
> 5.4.15 we ran into unexpected and intermittent problems.
> (I have tested 5.5.6 and the problems persist.)
> 
> What we saw, using several monitoring points, was that traffic
> disappeared after the last point where we could still see it:
> tcpdumping on "bond0".
> 
> We had tcpdump running on:
> 1, The DHCP nodes (local tap interfaces)
> 2, The router instances on the L3 node
> 3, The local node where the VM runs (tap, bridge and eventually the tap
> interface dumping the VXLAN traffic)
> 4, Port mirroring on the 100gbit switch, to see what ended up on the
> physical wire.
> 
> What we can see is that only two steps of the four-step DHCP handshake
> work; the fourth step gets dropped "on the nic".
> 
> We can see it go out on bond0, VLAN-tagged and inside a VXLAN packet -
> however, the switch never sees it.
> 

Hi, 

Have you seen the packets actually going out on one of the mlx5 100gbit
legs?

> There have been a few mlx5 changes wrt VXLAN that could be culprits,
> but it's really hard to judge.
> 
> dmesg |grep mlx
> [    2.231399] mlx5_core 0000:0b:00.0: firmware version: 16.26.1040
> [    2.912595] mlx5_core 0000:0b:00.0: Rate limit: 127 rates are
> supported, range: 0Mbps to 97656Mbps
> [    2.935012] mlx5_core 0000:0b:00.0: Port module event: module 0,
> Cable plugged
> [    2.949528] mlx5_core 0000:0b:00.1: firmware version: 16.26.1040
> [    3.638647] mlx5_core 0000:0b:00.1: Rate limit: 127 rates are
> supported, range: 0Mbps to 97656Mbps
> [    3.661206] mlx5_core 0000:0b:00.1: Port module event: module 1,
> Cable plugged
> [    3.675562] mlx5_core 0000:0b:00.0: MLX5E: StrdRq(1) RqSz(8)
> StrdSz(64) RxCqeCmprss(0)
> [    3.846149] mlx5_core 0000:0b:00.1: MLX5E: StrdRq(1) RqSz(8)
> StrdSz(64) RxCqeCmprss(0)
> [    4.021738] mlx5_core 0000:0b:00.0 enp11s0f0: renamed from eth0
> [    4.021962] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
> 
> I have tried turning all offloads off, but the problem still persists -
> it's really weird that only some packets seem to be affected.
> 
> To be clear, the bond0 interface is 2*100gbit, using 802.3ad (LACP)
> with layer2+3 hashing.
> This seems to be offloaded into the nic (can that be turned off?), and
> messages about modifying the "lag map" were quite frequent until we did
> a firmware upgrade - even with upgraded firmware they continued, but to
> a lesser extent.
> 
> With 5.5.7 approaching, we would want a path forward to handle
> this...


What type of mlx5 configuration do you have (native PV virtualization?
SR-IOV? Legacy mode or switchdev mode?)
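
Something like the following should tell (using the PCI address from
your dmesg, and assuming devlink is available):

devlink dev eswitch show pci/0000:0b:00.0
cat /sys/class/net/enp11s0f0/device/sriov_numvfs   # 0 means no VFs configured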

The only change that I could think of is the lag multi-path support we
added. Roi, can you please take a look at this?

Thanks,
Saeed.





* Re: [VXLAN] [MLX5] Lost traffic and issues
  2020-03-02 19:10 ` Saeed Mahameed
@ 2020-03-02 22:45   ` Ian Kumlien
  2020-03-03 10:23     ` Ian Kumlien
  0 siblings, 1 reply; 5+ messages in thread
From: Ian Kumlien @ 2020-03-02 22:45 UTC (permalink / raw)
  To: Saeed Mahameed; +Cc: Roi Dayan, netdev, Yevgeny Kliteynik, Leon Romanovsky

On Mon, Mar 2, 2020 at 8:10 PM Saeed Mahameed <saeedm@mellanox.com> wrote:
> On Fri, 2020-02-28 at 16:02 +0100, Ian Kumlien wrote:
> > Hi,
> >
> > Including netdev - to see if someone else has a clue.
> >
> > We have a few machines in a cloud, and when upgrading from 4.16.7 to
> > 5.4.15 we ran into unexpected and intermittent problems.
> > (I have tested 5.5.6 and the problems persist.)
> >
> > What we saw, using several monitoring points, was that traffic
> > disappeared after the last point where we could still see it:
> > tcpdumping on "bond0".
> >
> > We had tcpdump running on:
> > 1, The DHCP nodes (local tap interfaces)
> > 2, The router instances on the L3 node
> > 3, The local node where the VM runs (tap, bridge and eventually the tap
> > interface dumping the VXLAN traffic)
> > 4, Port mirroring on the 100gbit switch, to see what ended up on the
> > physical wire.
> >
> > What we can see is that only two steps of the four-step DHCP handshake
> > work; the fourth step gets dropped "on the nic".
> >
> > We can see it go out on bond0, VLAN-tagged and inside a VXLAN packet -
> > however, the switch never sees it.
>
> Hi,
>
> Have you seen the packets actually going out on one of the mlx5 100gbit
> legs?

We disabled the bond and made traffic go out only one interface so we
could snoop it (sorry, sometimes you forget things when writing them
down).

And no, to our knowledge the traffic that was lost never reached the
"wire".

> > There have been a few mlx5 changes wrt VXLAN that could be culprits,
> > but it's really hard to judge.
> >
> > dmesg |grep mlx
> > [    2.231399] mlx5_core 0000:0b:00.0: firmware version: 16.26.1040
> > [    2.912595] mlx5_core 0000:0b:00.0: Rate limit: 127 rates are
> > supported, range: 0Mbps to 97656Mbps
> > [    2.935012] mlx5_core 0000:0b:00.0: Port module event: module 0,
> > Cable plugged
> > [    2.949528] mlx5_core 0000:0b:00.1: firmware version: 16.26.1040
> > [    3.638647] mlx5_core 0000:0b:00.1: Rate limit: 127 rates are
> > supported, range: 0Mbps to 97656Mbps
> > [    3.661206] mlx5_core 0000:0b:00.1: Port module event: module 1,
> > Cable plugged
> > [    3.675562] mlx5_core 0000:0b:00.0: MLX5E: StrdRq(1) RqSz(8)
> > StrdSz(64) RxCqeCmprss(0)
> > [    3.846149] mlx5_core 0000:0b:00.1: MLX5E: StrdRq(1) RqSz(8)
> > StrdSz(64) RxCqeCmprss(0)
> > [    4.021738] mlx5_core 0000:0b:00.0 enp11s0f0: renamed from eth0
> > [    4.021962] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
> >
> > I have tried turning all offloads off, but the problem still persists -
> > it's really weird that only some packets seem to be affected.
> >
> > To be clear, the bond0 interface is 2*100gbit, using 802.3ad (LACP)
> > with layer2+3 hashing.
> > This seems to be offloaded into the nic (can that be turned off?), and
> > messages about modifying the "lag map" were quite frequent until we did
> > a firmware upgrade - even with upgraded firmware they continued, but to
> > a lesser extent.
> >
> > With 5.5.7 approaching, we would want a path forward to handle
> > this...
>
>
> What type of mlx5 configuration do you have (native PV virtualization?
> SR-IOV? Legacy mode or switchdev mode?)

We have:
tap -> bridge -> ovs -> bond (one legged) -switch-fabric-> <other-end>

So a pretty standard openstack setup

> The only change that I could think of is the lag multi-path support we
> added. Roi, can you please take a look at this?

I'm also trying to get a setup working where I could try reverting
changes, but so far we've only had this problem with mlx5_core...
Also, the intermittent yet consistent patterns are really weird...
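
If I manage to free up a node, my rough plan for narrowing it down would
be a bisect limited to the suspect driver (assuming a mainline checkout,
with v4.16 and v5.4 as approximations of the kernels we actually ran):

git bisect start -- drivers/net/ethernet/mellanox/mlx5
git bisect good v4.16
git bisect bad v5.4
# build, boot, test VXLAN/DHCP, then mark each step:
git bisect good   # or: git bisect bad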

All traffic seems fine, except vxlan traffic :/

(The problem is that the machines that have the issue are in production
with 8x NVIDIA V100 cards... Kinda hard to justify taking them "offline" ;))


* Re: [VXLAN] [MLX5] Lost traffic and issues
  2020-03-02 22:45   ` Ian Kumlien
@ 2020-03-03 10:23     ` Ian Kumlien
  2020-03-04  9:47       ` Ian Kumlien
  0 siblings, 1 reply; 5+ messages in thread
From: Ian Kumlien @ 2020-03-03 10:23 UTC (permalink / raw)
  To: Saeed Mahameed; +Cc: Roi Dayan, netdev, Yevgeny Kliteynik, Leon Romanovsky

On Mon, Mar 2, 2020 at 11:45 PM Ian Kumlien <ian.kumlien@gmail.com> wrote:
>
> On Mon, Mar 2, 2020 at 8:10 PM Saeed Mahameed <saeedm@mellanox.com> wrote:

[... 8< ...]

> > What type of mlx5 configuration do you have (native PV virtualization?
> > SR-IOV? Legacy mode or switchdev mode?)
>
> We have:
> tap -> bridge -> ovs -> bond (one legged) -switch-fabric-> <other-end>
>
> So a pretty standard openstack setup

Oh, the L3 nodes are also MLX5s (50gbit), and they also report the "lag
map" messages:

[   37.389366] mlx5_core 0000:04:00.0 ens1f0: S-tagged traffic will be
dropped while C-tag vlan stripping is enabled
[77126.178520] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[77131.485189] mlx5_core 0000:04:00.0 ens1f0: Link down
[77337.033686] mlx5_core 0000:04:00.0 ens1f0: Link up
[77344.338901] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[78098.028670] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[78103.479494] mlx5_core 0000:04:00.0 ens1f0: Link down
[78310.028518] mlx5_core 0000:04:00.0 ens1f0: Link up
[78317.797155] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[78504.893590] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[78511.277529] mlx5_core 0000:04:00.0 ens1f0: Link down
[78714.526539] mlx5_core 0000:04:00.0 ens1f0: Link up
[78720.422078] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[78720.838063] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[78727.226433] mlx5_core 0000:04:00.0 ens1f0: Link down
[78929.575826] mlx5_core 0000:04:00.0 ens1f0: Link up
[78935.422600] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[79330.519516] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
[79330.831447] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[79336.073520] mlx5_core 0000:04:00.1 ens1f1: Link down
[79336.279519] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
[79541.272469] mlx5_core 0000:04:00.1 ens1f1: Link up
[79546.664008] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[82107.461831] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
[82113.859238] mlx5_core 0000:04:00.1 ens1f1: Link down
[82320.458475] mlx5_core 0000:04:00.1 ens1f1: Link up
[82327.774289] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[82490.950671] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
[82497.307348] mlx5_core 0000:04:00.1 ens1f1: Link down
[82705.956583] mlx5_core 0000:04:00.1 ens1f1: Link up
[82714.055134] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[83100.804620] mlx5_core 0000:04:00.0 ens1f0: Link down
[83100.860943] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[83319.953296] mlx5_core 0000:04:00.0 ens1f0: Link up
[83327.984559] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[83924.600444] mlx5_core 0000:04:00.0 ens1f0: Link down
[83924.656321] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[84312.648630] mlx5_core 0000:04:00.0 ens1f0: Link up
[84319.571326] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[84946.495374] mlx5_core 0000:04:00.1 ens1f1: Link down
[84946.588637] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
[84946.692596] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[84949.188628] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
[85363.543475] mlx5_core 0000:04:00.1 ens1f1: Link up
[85371.093484] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
[624051.460733] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
[624053.644769] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
[624053.674747] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2

Sorry, it's been a long couple of weeks ;)

> > The only change that I could think of is the lag multi-path support we
> > added. Roi, can you please take a look at this?
>
> I'm also trying to get a setup working where I could try reverting
> changes, but so far we've only had this problem with mlx5_core...
> Also, the intermittent yet consistent patterns are really weird...
> 
> All traffic seems fine, except vxlan traffic :/
> 
> (The problem is that the machines that have the issue are in production
> with 8x NVIDIA V100 cards... Kinda hard to justify taking them "offline" ;))


* Re: [VXLAN] [MLX5] Lost traffic and issues
  2020-03-03 10:23     ` Ian Kumlien
@ 2020-03-04  9:47       ` Ian Kumlien
  0 siblings, 0 replies; 5+ messages in thread
From: Ian Kumlien @ 2020-03-04  9:47 UTC (permalink / raw)
  To: Saeed Mahameed; +Cc: Roi Dayan, netdev, Yevgeny Kliteynik, Leon Romanovsky

On Tue, Mar 3, 2020 at 11:23 AM Ian Kumlien <ian.kumlien@gmail.com> wrote:
> On Mon, Mar 2, 2020 at 11:45 PM Ian Kumlien <ian.kumlien@gmail.com> wrote:
> > On Mon, Mar 2, 2020 at 8:10 PM Saeed Mahameed <saeedm@mellanox.com> wrote:
>
> [... 8< ...]
>
> > > What type of mlx5 configuration do you have (native PV virtualization?
> > > SR-IOV? Legacy mode or switchdev mode?)
> >
> > We have:
> > tap -> bridge -> ovs -> bond (one legged) -switch-fabric-> <other-end>
> >
> > So a pretty standard openstack setup
>
> Oh, the L3 nodes are also MLX5s (50gbit), and they also report the "lag
> map" messages:
>
> [   37.389366] mlx5_core 0000:04:00.0 ens1f0: S-tagged traffic will be
> dropped while C-tag vlan stripping is enabled
> [77126.178520] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [77131.485189] mlx5_core 0000:04:00.0 ens1f0: Link down
> [77337.033686] mlx5_core 0000:04:00.0 ens1f0: Link up
> [77344.338901] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [78098.028670] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [78103.479494] mlx5_core 0000:04:00.0 ens1f0: Link down
> [78310.028518] mlx5_core 0000:04:00.0 ens1f0: Link up
> [78317.797155] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [78504.893590] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [78511.277529] mlx5_core 0000:04:00.0 ens1f0: Link down
> [78714.526539] mlx5_core 0000:04:00.0 ens1f0: Link up
> [78720.422078] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [78720.838063] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [78727.226433] mlx5_core 0000:04:00.0 ens1f0: Link down
> [78929.575826] mlx5_core 0000:04:00.0 ens1f0: Link up
> [78935.422600] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [79330.519516] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
> [79330.831447] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [79336.073520] mlx5_core 0000:04:00.1 ens1f1: Link down
> [79336.279519] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
> [79541.272469] mlx5_core 0000:04:00.1 ens1f1: Link up
> [79546.664008] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [82107.461831] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
> [82113.859238] mlx5_core 0000:04:00.1 ens1f1: Link down
> [82320.458475] mlx5_core 0000:04:00.1 ens1f1: Link up
> [82327.774289] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [82490.950671] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
> [82497.307348] mlx5_core 0000:04:00.1 ens1f1: Link down
> [82705.956583] mlx5_core 0000:04:00.1 ens1f1: Link up
> [82714.055134] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [83100.804620] mlx5_core 0000:04:00.0 ens1f0: Link down
> [83100.860943] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [83319.953296] mlx5_core 0000:04:00.0 ens1f0: Link up
> [83327.984559] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [83924.600444] mlx5_core 0000:04:00.0 ens1f0: Link down
> [83924.656321] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [84312.648630] mlx5_core 0000:04:00.0 ens1f0: Link up
> [84319.571326] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [84946.495374] mlx5_core 0000:04:00.1 ens1f1: Link down
> [84946.588637] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
> [84946.692596] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [84949.188628] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
> [85363.543475] mlx5_core 0000:04:00.1 ens1f1: Link up
> [85371.093484] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
> [624051.460733] mlx5_core 0000:04:00.0: modify lag map port 1:2 port 2:2
> [624053.644769] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:1
> [624053.674747] mlx5_core 0000:04:00.0: modify lag map port 1:1 port 2:2
>
> Sorry, it's been a long couple of weeks ;)

I made them one-legged but it doesn't seem to help

Someone also posted this:
https://marc.info/?l=linux-netdev&m=158330796503347&w=2

While I don't use IPVS, I do use VXLAN - and if checksums are
incorrectly tagged, could the nic be dropping the packets?
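
For reference, the kind of checks that could confirm or rule that out
(the feature and counter names below are examples - use whatever the
driver actually exposes):

ethtool -k enp11s0f0 | grep -iE 'udp_tnl|csum'
ethtool -K enp11s0f0 tx-udp_tnl-csum-segmentation off tx-udp_tnl-segmentation off
ethtool -S enp11s0f0 | grep -iE 'drop|err|csum'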

> > > The only change that I could think of is the lag multi-path support we
> > > added. Roi, can you please take a look at this?
> >
> > I'm also trying to get a setup working where I could try reverting
> > changes, but so far we've only had this problem with mlx5_core...
> > Also, the intermittent yet consistent patterns are really weird...
> >
> > All traffic seems fine, except vxlan traffic :/
> >
> > (The problem is that the machines that have the issue are in production
> > with 8x NVIDIA V100 cards... Kinda hard to justify taking them "offline" ;))


