mptcp.lists.linux.dev archive mirror
* [PATCH mptcp-net v2] mptcp: ensure snd_una is properly initialized on connect
@ 2024-04-15 14:05 Paolo Abeni
  2024-04-15 20:48 ` Christoph Paasch
  2024-04-15 21:20 ` MPTCP CI
  0 siblings, 2 replies; 3+ messages in thread
From: Paolo Abeni @ 2024-04-15 14:05 UTC (permalink / raw)
  To: MPTCP Upstream; +Cc: Christoph Paasch

Christoph reported a splat hinting at a corrupted snd_una:

WARNING: CPU: 1 PID: 38 at net/mptcp/protocol.c:1005 __mptcp_clean_una+0x4b3/0x620 net/mptcp/protocol.c:1005
Modules linked in:
CPU: 1 PID: 38 Comm: kworker/1:1 Not tainted 6.9.0-rc1-gbbeac67456c9 #59
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
Workqueue: events mptcp_worker
RIP: 0010:__mptcp_clean_una+0x4b3/0x620 net/mptcp/protocol.c:1005
Code: be 06 01 00 00 bf 06 01 00 00 e8 a8 12 e7 fe e9 00 fe ff ff e8
	8e 1a e7 fe 0f b7 ab 3e 02 00 00 e9 d3 fd ff ff e8 7d 1a e7 fe
	<0f> 0b 4c 8b bb e0 05 00 00 e9 74 fc ff ff e8 6a 1a e7 fe 0f 0b e9
RSP: 0018:ffffc9000013fd48 EFLAGS: 00010293
RAX: 0000000000000000 RBX: ffff8881029bd280 RCX: ffffffff82382fe4
RDX: ffff8881003cbd00 RSI: ffffffff823833c3 RDI: 0000000000000001
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: fefefefefefefeff R12: ffff888138ba8000
R13: 0000000000000106 R14: ffff8881029bd908 R15: ffff888126560000
FS:  0000000000000000(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f604a5dae38 CR3: 0000000101dac002 CR4: 0000000000170ef0
Call Trace:
 <TASK>
 __mptcp_clean_una_wakeup net/mptcp/protocol.c:1055 [inline]
 mptcp_clean_una_wakeup net/mptcp/protocol.c:1062 [inline]
 __mptcp_retrans+0x7f/0x7e0 net/mptcp/protocol.c:2615
 mptcp_worker+0x434/0x740 net/mptcp/protocol.c:2767
 process_one_work+0x1e0/0x560 kernel/workqueue.c:3254
 process_scheduled_works kernel/workqueue.c:3335 [inline]
 worker_thread+0x3c7/0x640 kernel/workqueue.c:3416
 kthread+0x121/0x170 kernel/kthread.c:388
 ret_from_fork+0x44/0x50 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
 </TASK>

When fallback to TCP happens early on a client socket and the mptcp
worker (dumbly) attempts MPTCP-level re-injection, snd_una is not yet
initialized, and the worker tries to use it to clean up the send
buffer.

Address the issue by always initializing snd_una (and wnd_end) for
active sockets at connection-established time.

Reported-by: Christoph Paasch <cpaasch@apple.com>
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/485
Fixes: 8fd738049ac3 ("mptcp: fallback in case of simultaneous connect")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v1 -> v2:
 - moved the initialization to established time (Mat)
 - dropped debug code that wrongly landed in v1
 - added Fixes tag

@Christoph: could you please also add the following chunk, for
debugging purposes?

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3e1b15d76442..46fd84697495 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1002,8 +1002,12 @@ static void __mptcp_clean_una(struct sock *sk)

                if (unlikely(dfrag == msk->first_pending)) {
                        /* in recovery mode can see ack after the current snd head */
-                       if (WARN_ON_ONCE(!msk->recovery))
+                       if (WARN_ON_ONCE(!msk->recovery)) {
+                               pr_err("snd_una %llx dfrag end seq %llx frag len %d bytes sent %lld acked %lld",
+                                       msk->snd_una, dfrag->data_seq + dfrag->data_len, dfrag->data_len,
+                                       msk->bytes_sent, msk->bytes_acked);
                                break;
+                       }

                        WRITE_ONCE(msk->first_pending, mptcp_send_next(sk));
                }
---
 net/mptcp/subflow.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index b94d1dca1094..25004473e7ef 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -475,12 +475,12 @@ static void mptcp_propagate_state(struct sock *sk, struct sock *ssk,
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
 	mptcp_data_lock(sk);
+	WRITE_ONCE(msk->snd_una, subflow->idsn + 1);
+	WRITE_ONCE(msk->wnd_end, subflow->idsn + 1 + tcp_sk(ssk)->snd_wnd);
 	if (mp_opt) {
 		/* Options are available only in the non fallback cases
 		 * avoid updating rx path fields otherwise
 		 */
-		WRITE_ONCE(msk->snd_una, subflow->idsn + 1);
-		WRITE_ONCE(msk->wnd_end, subflow->idsn + 1 + tcp_sk(ssk)->snd_wnd);
 		subflow_set_remote_key(msk, subflow, mp_opt);
 	}
 
-- 
2.43.2



* Re: [PATCH mptcp-net v2] mptcp: ensure snd_una is properly initialized on connect
  2024-04-15 14:05 [PATCH mptcp-net v2] mptcp: ensure snd_una is properly initialized on connect Paolo Abeni
@ 2024-04-15 20:48 ` Christoph Paasch
  2024-04-15 21:20 ` MPTCP CI
  1 sibling, 0 replies; 3+ messages in thread
From: Christoph Paasch @ 2024-04-15 20:48 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: MPTCP Upstream

Hello Paolo,

> On Apr 15, 2024, at 7:05 AM, Paolo Abeni <pabeni@redhat.com> wrote:
> 
> Christoph reported a splat hinting at a corrupted snd_una:
> 
> WARNING: CPU: 1 PID: 38 at net/mptcp/protocol.c:1005 __mptcp_clean_una+0x4b3/0x620 net/mptcp/protocol.c:1005
> Modules linked in:
> CPU: 1 PID: 38 Comm: kworker/1:1 Not tainted 6.9.0-rc1-gbbeac67456c9 #59
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
> Workqueue: events mptcp_worker
> RIP: 0010:__mptcp_clean_una+0x4b3/0x620 net/mptcp/protocol.c:1005
> Code: be 06 01 00 00 bf 06 01 00 00 e8 a8 12 e7 fe e9 00 fe ff ff e8
> 8e 1a e7 fe 0f b7 ab 3e 02 00 00 e9 d3 fd ff ff e8 7d 1a e7 fe
> <0f> 0b 4c 8b bb e0 05 00 00 e9 74 fc ff ff e8 6a 1a e7 fe 0f 0b e9
> RSP: 0018:ffffc9000013fd48 EFLAGS: 00010293
> RAX: 0000000000000000 RBX: ffff8881029bd280 RCX: ffffffff82382fe4
> RDX: ffff8881003cbd00 RSI: ffffffff823833c3 RDI: 0000000000000001
> RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
> R10: 0000000000000000 R11: fefefefefefefeff R12: ffff888138ba8000
> R13: 0000000000000106 R14: ffff8881029bd908 R15: ffff888126560000
> FS:  0000000000000000(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f604a5dae38 CR3: 0000000101dac002 CR4: 0000000000170ef0
> Call Trace:
> <TASK>
> __mptcp_clean_una_wakeup net/mptcp/protocol.c:1055 [inline]
> mptcp_clean_una_wakeup net/mptcp/protocol.c:1062 [inline]
> __mptcp_retrans+0x7f/0x7e0 net/mptcp/protocol.c:2615
> mptcp_worker+0x434/0x740 net/mptcp/protocol.c:2767
> process_one_work+0x1e0/0x560 kernel/workqueue.c:3254
> process_scheduled_works kernel/workqueue.c:3335 [inline]
> worker_thread+0x3c7/0x640 kernel/workqueue.c:3416
> kthread+0x121/0x170 kernel/kthread.c:388
> ret_from_fork+0x44/0x50 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
> </TASK>
> 
> When fallback to TCP happens early on a client socket and the mptcp
> worker (dumbly) attempts MPTCP-level re-injection, snd_una is not yet
> initialized, and the worker tries to use it to clean up the send
> buffer.
> 
> Address the issue by always initializing snd_una (and wnd_end) for
> active sockets at connection-established time.
> 
> Reported-by: Christoph Paasch <cpaasch@apple.com>
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/485
> Fixes: 8fd738049ac3 ("mptcp: fallback in case of simultaneous connect")
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> v1 -> v2:
> - moved the initialization to established time (Mat)
> - dropped debug code that wrongly landed in v1
> - added Fixes tag
> 
> @Christoph: could you please also add the following chunk, for
> debugging purposes?

The issue still reproduces:

[   20.153874] ------------[ cut here ]------------
[   20.153882] MPTCP: snd_una 0 dfrag end seq d9b32cd3cf5bc831 frag len 32728 bytes sent 3763720 acked 2759812858423158191
[   20.154574] ------------[ cut here ]------------
[   20.154498] WARNING: CPU: 1 PID: 50 at net/mptcp/protocol.c:1005 __mptcp_clean_una+0x10ba/0x13d0
[   20.156057] WARNING: CPU: 0 PID: 7 at net/mptcp/protocol.c:1025 __mptcp_clean_una+0x12dd/0x13d0
[   20.156774] Modules linked in:
[   20.157988] Modules linked in:
[   20.159157] CPU: 1 PID: 50 Comm: kworker/1:1 Not tainted 6.9.0-rc2-g14566b183225-dirty #623
[   20.159623]
[   20.160079] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
[   20.160089] Workqueue: events mptcp_worker
[   20.161349] CPU: 0 PID: 7 Comm: kworker/0:0 Not tainted 6.9.0-rc2-g14566b183225-dirty #623
[   20.161544]
[   20.162779] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
[   20.163265] RIP: 0010:__mptcp_clean_una+0x10ba/0x13d0
[   20.164450] Workqueue: events mptcp_worker
[   20.164661] Code: e8 4b 99 97 fd e9 60 fa ff ff 44 89 f1 80 e1 07 38 c1 0f 8c 67 fa ff ff 4c 89 f7 e8 40 98 97 fd e9 5a fa ff ff e8 b6 ab 4a fd <0f> 0b 48 8b 44 24 70 42 80 3c 30 00 74 0a 48 8b 7c 24 50 e8 ae 98
[   20.165896]
[   20.166517] RSP: 0018:ffffc9000029f900 EFLAGS: 00010293
[   20.167092] RIP: 0010:__mptcp_clean_una+0x12dd/0x13d0
[   20.169307]
[   20.169311] RAX: ffffffff83fbfaba RBX: 1ffff11021ff9003 RCX: ffff888100803f00
[   20.169537] Code: fc ff df e9 95 f3 ff ff 89 d9 80 e1 07 80 c1 03 38 c1 0f 8c d0 f3 ff ff 48 89 df e8 7d 96 97 fd e9 c3 f3 ff ff e8 93 a9 4a fd <0f> 0b eb 07 e8 8a a9 4a fd 0f 0b 49 be 00 00 00 00 00 fc ff df 4c
[   20.170162] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[   20.170905] RSP: 0018:ffffc90000077900 EFLAGS: 00010293
[   20.171135] RBP: ffffc9000029fa30 R08: ffffea00043ff037 R09: 1ffffd400087fe06
[   20.172144]
[   20.174498] R10: dffffc0000000000 R11: fffff9400087fe07 R12: 1ffff11021ff9002
[   20.175515] RAX: ffffffff83fbfcdd RBX: 0000000000000000 RCX: ffff8881001e5e80
[   20.176170] R13: ffff88810ffc8000 R14: dffffc0000000000 R15: ffff88810fa88698
[   20.177152] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[   20.177372] FS:  0000000000000000(0000) GS:ffff88811b300000(0000) knlGS:0000000000000000
[   20.178393] RBP: ffffc90000077a30 R08: ffffc90000077647 R09: 1ffff9200000eec8
[   20.179273] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   20.180272] R10: dffffc0000000000 R11: fffff5200000eec9 R12: 1ffff110207660b1
[   20.181222] CR2: 00000000203c6000 CR3: 0000000103804004 CR4: 0000000000170ef0
[   20.182340] R13: ffff888103b30588 R14: ffff88810fbf8000 R15: 264cd32c30a4b7a7
[   20.183232] Call Trace:
[   20.184059] FS:  0000000000000000(0000) GS:ffff88811b200000(0000) knlGS:0000000000000000
[   20.184949]  <TASK>
[   20.185950] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   20.186842]  ? __warn+0x161/0x3e0
[   20.187222] CR2: 00000000203c6000 CR3: 00000001038ba005 CR4: 0000000000170ef0
[   20.188212]  ? __mptcp_clean_una+0x10ba/0x13d0
[   20.188566] Call Trace:
[   20.189286]  ? report_bug+0x473/0x660
[   20.189782]  <TASK>
[   20.190613]  ? __mptcp_clean_una+0x10ba/0x13d0
[   20.191246]  ? __warn+0x161/0x3e0
[   20.191595]  ? handle_bug+0x3d/0x80
[   20.192121]  ? __mptcp_clean_una+0x12dd/0x13d0
[   20.192396]  ? exc_invalid_op+0x1a/0x50
[   20.193022]  ? report_bug+0x473/0x660
[   20.193432]  ? asm_exc_invalid_op+0x1a/0x20
[   20.193942]  ? __mptcp_clean_una+0x12dd/0x13d0
[   20.194533]  ? __mptcp_clean_una+0x10ba/0x13d0
[   20.195097]  ? handle_bug+0x3d/0x80
[   20.195575]  ? __mptcp_clean_una+0x10ba/0x13d0
[   20.196171]  ? exc_invalid_op+0x1a/0x50
[   20.196724]  ? __pfx___mptcp_clean_una+0x10/0x10
[   20.197358]  ? asm_exc_invalid_op+0x1a/0x20
[   20.197799]  ? preempt_count_add+0x93/0x190
[   20.198444]  ? __mptcp_clean_una+0x12dd/0x13d0
[   20.198989]  ? __vmap_pages_range_noflush+0xc0e/0xc30
[   20.199668]  ? __mptcp_clean_una+0x12dd/0x13d0
[   20.200177]  __mptcp_retrans+0x105/0x12b0
[   20.200797]  ? __enqueue_entity+0x25e/0x4f0
[   20.201332]  ? __local_bh_enable_ip+0x58/0x80
[   20.202042]  ? __pfx___mptcp_clean_una+0x10/0x10
[   20.202605]  ? lock_sock_nested+0x258/0x320
[   20.203188]  __mptcp_retrans+0x105/0x12b0
[   20.203750]  ? mptcp_check_data_fin+0x2e7/0x840
[   20.204396]  ? __local_bh_enable_ip+0x58/0x80
[   20.204949]  ? __pfx___mptcp_retrans+0x10/0x10
[   20.205548]  ? lock_sock_nested+0x258/0x320
[   20.206037]  ? mm_cid_get+0xb5/0x700
[   20.206706]  ? mptcp_check_data_fin+0x2e7/0x840
[   20.207222]  mptcp_worker+0xe00/0x1790
[   20.207863]  ? __pfx___mptcp_retrans+0x10/0x10
[   20.208421]  ? __pfx_mptcp_worker+0x10/0x10
[   20.208940]  ? update_load_avg+0x50f/0x1910
[   20.209629]  ? kick_pool+0x16b/0x5d0
[   20.210163]  mptcp_worker+0xe00/0x1790
[   20.210711]  ? read_word_at_a_time+0xe/0x20
[   20.211310]  ? __pfx_mptcp_worker+0x10/0x10
[   20.211866]  ? sized_strscpy+0x8c/0x220
[   20.212388]  ? kick_pool+0x16b/0x5d0
[   20.212866]  process_scheduled_works+0x74e/0xfb0
[   20.213502]  ? read_word_at_a_time+0xe/0x20
[   20.214012]  worker_thread+0xb79/0x11d0
[   20.214563]  ? sized_strscpy+0x8c/0x220
[   20.215018]  kthread+0x298/0x330
[   20.215673]  process_scheduled_works+0x74e/0xfb0
[   20.216233]  ? __pfx_worker_thread+0x10/0x10
[   20.216829]  worker_thread+0xb79/0x11d0
[   20.217341]  ? __pfx_kthread+0x10/0x10
[   20.217820]  kthread+0x298/0x330
[   20.218401]  ret_from_fork+0x4e/0x80
[   20.219020]  ? __pfx_worker_thread+0x10/0x10
[   20.219515]  ? __pfx_kthread+0x10/0x10
[   20.220049]  ? __pfx_kthread+0x10/0x10
[   20.220445]  ret_from_fork_asm+0x1a/0x30
[   20.220952]  ret_from_fork+0x4e/0x80
[   20.221513]  </TASK>
[   20.222051]  ? __pfx_kthread+0x10/0x10
[   20.222507] ---[ end trace 0000000000000000 ]---
[   20.223069]  ret_from_fork_asm+0x1a/0x30
[   20.223501] MPTCP: snd_una 0 dfrag end seq 817baffcc5438272 frag len 32728 bytes sent 3763720 acked 9116499520509078382
[   20.223854]  </TASK>
[   20.227837] ---[ end trace 0000000000000000 ]---
[   20.426084] MPTCP: snd_una 0 dfrag end seq 817baffcc5438272 frag len 32728 bytes sent 3763720 acked 9116499520509078382
[   20.433870] MPTCP: snd_una 0 dfrag end seq d9b32cd3cf5bc831 frag len 32728 bytes sent 3763720 acked 2759812858423158191
[   20.633862] MPTCP: snd_una 0 dfrag end seq 817baffcc5438272 frag len 32728 bytes sent 3763720 acked 9116499520509078382
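
FWIW, feeding the values above into a stand-alone sketch of the 64-bit
sequence helpers illustrates why the fragment looks fully acked while
snd_una is still 0. This is illustrative only, not kernel code: it
assumes the usual before64()/after64() shape from net/mptcp/protocol.h
and that __mptcp_clean_una() stops walking the rtx queue only once a
fragment ends after snd_una.

/* seq_demo.c -- illustrative only; build: gcc -O2 -o seq_demo seq_demo.c */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* assumed to match the helpers in net/mptcp/protocol.h */
static bool before64(uint64_t seq1, uint64_t seq2)
{
	return (int64_t)(seq1 - seq2) < 0;
}
#define after64(seq2, seq1)	before64(seq1, seq2)

int main(void)
{
	uint64_t snd_una = 0;				/* never initialized */
	uint64_t end_seq = 0xd9b32cd3cf5bc831ULL;	/* dfrag end seq above */

	/* when this is false the fragment is treated as fully acked,
	 * which is what lets the !recovery WARN_ON_ONCE() trigger
	 */
	printf("after64(end_seq, snd_una) = %d\n", after64(end_seq, snd_una));
	return 0;
}

It prints 0 here: as long as snd_una stays 0 while the data sequences
come from a random 64-bit IDSN, the wrapped signed distance makes the
cleanup loop walk past fragments it should not touch.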



Christoph


> 
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 3e1b15d76442..46fd84697495 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -1002,8 +1002,12 @@ static void __mptcp_clean_una(struct sock *sk)
> 
>                if (unlikely(dfrag == msk->first_pending)) {
>                        /* in recovery mode can see ack after the current snd head */
> -                       if (WARN_ON_ONCE(!msk->recovery))
> +                       if (WARN_ON_ONCE(!msk->recovery)) {
> +                               pr_err("snd_una %llx dfrag end seq %llx frag len %d bytes sent %lld acked %lld",
> +                                       msk->snd_una, dfrag->data_seq + dfrag->data_len, dfrag->data_len,
> +                                       msk->bytes_sent, msk->bytes_acked);
>                                break;
> +                       }
> 
>                        WRITE_ONCE(msk->first_pending, mptcp_send_next(sk));
>                }
> ---
> net/mptcp/subflow.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
> index b94d1dca1094..25004473e7ef 100644
> --- a/net/mptcp/subflow.c
> +++ b/net/mptcp/subflow.c
> @@ -475,12 +475,12 @@ static void mptcp_propagate_state(struct sock *sk, struct sock *ssk,
>  	struct mptcp_sock *msk = mptcp_sk(sk);
> 
>  	mptcp_data_lock(sk);
> +	WRITE_ONCE(msk->snd_una, subflow->idsn + 1);
> +	WRITE_ONCE(msk->wnd_end, subflow->idsn + 1 + tcp_sk(ssk)->snd_wnd);
>  	if (mp_opt) {
>  		/* Options are available only in the non fallback cases
>  		 * avoid updating rx path fields otherwise
>  		 */
> -		WRITE_ONCE(msk->snd_una, subflow->idsn + 1);
> -		WRITE_ONCE(msk->wnd_end, subflow->idsn + 1 + tcp_sk(ssk)->snd_wnd);
>  		subflow_set_remote_key(msk, subflow, mp_opt);
>  	}
> 
> -- 
> 2.43.2
> 
> 



* Re: [PATCH mptcp-net v2] mptcp: ensure snd_una is properly initialized on connect
  2024-04-15 14:05 [PATCH mptcp-net v2] mptcp: ensure snd_una is properly initialized on connect Paolo Abeni
  2024-04-15 20:48 ` Christoph Paasch
@ 2024-04-15 21:20 ` MPTCP CI
  1 sibling, 0 replies; 3+ messages in thread
From: MPTCP CI @ 2024-04-15 21:20 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: mptcp

Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/8695857032

Initiator: Matthieu Baerts (NGI0)
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/d56bd52a5ffe
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=844661


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)

