* kmemleak not play well with low memory situation
@ 2018-12-30 5:25 Qian Cai
2019-01-02 16:08 ` [PATCH] kmemleak: survive in a low-memory situation Qian Cai
0 siblings, 1 reply; 10+ messages in thread
From: Qian Cai @ 2018-12-30 5:25 UTC (permalink / raw)
To: Catalin Marinas, Linux-MM
When in a low-memory situation with swapping, kmemleak quickly disables
itself because it cannot allocate a kmemleak_object structure. If
kmemleak could hold on a bit longer, the system would soon trigger the
OOM killer to free up memory. This is a bit tricky to solve because in
__alloc_pages_slowpath(), even with __GFP_NOFAIL set, the allocation
fails when __GFP_DIRECT_RECLAIM is missing:
	/* Caller is not willing to reclaim, we can't balance anything */
	if (!can_direct_reclaim)
		goto nopage;
...
nopage:
	if (WARN_ON_ONCE(!can_direct_reclaim))
		goto fail;
Adding __GFP_DIRECT_RECLAIM to the kmemleak_alloc() path instead
triggers endless warnings from slab_pre_alloc_hook():
might_sleep_if(gfpflags_allow_blocking(flags));
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH] kmemleak: survive in a low-memory situation
2018-12-30 5:25 kmemleak not play well with low memory situation Qian Cai
@ 2019-01-02 16:08 ` Qian Cai
2019-01-02 16:59 ` Catalin Marinas
0 siblings, 1 reply; 10+ messages in thread
From: Qian Cai @ 2019-01-02 16:08 UTC (permalink / raw)
To: catalin.marinas, akpm, cl, penberg, rientjes, iamjoonsoo.kim
Cc: linux-mm, linux-kernel, Qian Cai
Kmemleak could quickly fail to allocate an object structure and then
disable itself in a low-memory situation. For example, when running an
mmap() workload that triggers swapping and the OOM killer [1].

First, it unnecessarily attempts to allocate kmemleak metadata even
though the object returned by kmem_cache_alloc() is NULL. For example,
alloc_io
bio_alloc_bioset
mempool_alloc
mempool_alloc_slab
kmem_cache_alloc
slab_alloc_node
__slab_alloc <-- could return NULL
slab_post_alloc_hook
kmemleak_alloc_recursive
Second, the kmemleak metadata allocation could fail even though the
tracked object's allocation succeeded. Hence, as a last-ditch effort,
kmemleak could still try a direct reclaim if it is not executing in an
atomic context (spinlock, irq-handler, etc.), or a high-priority
GFP_ATOMIC allocation if it is.
[1]
https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/mem/oom/oom01.c
Signed-off-by: Qian Cai <cai@lca.pw>
---
mm/kmemleak.c | 10 ++++++++++
mm/slab.h | 17 +++++++++--------
2 files changed, 19 insertions(+), 8 deletions(-)
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index f9d9dc250428..9e1aa3b7df75 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
struct rb_node **link, *rb_parent;
object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
+#ifdef CONFIG_PREEMPT_COUNT
+ if (!object) {
+ /* last-ditch effort in a low-memory situation */
+ if (irqs_disabled() || is_idle_task(current) || in_atomic())
+ gfp = GFP_ATOMIC;
+ else
+ gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
+ object = kmem_cache_alloc(object_cache, gfp);
+ }
+#endif
if (!object) {
pr_warn("Cannot allocate a kmemleak_object structure\n");
kmemleak_disable();
diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9..51a9a942cc56 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -435,15 +435,16 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
{
size_t i;
- flags &= gfp_allowed_mask;
- for (i = 0; i < size; i++) {
- void *object = p[i];
-
- kmemleak_alloc_recursive(object, s->object_size, 1,
- s->flags, flags);
- p[i] = kasan_slab_alloc(s, object, flags);
+ if (*p) {
+ flags &= gfp_allowed_mask;
+ for (i = 0; i < size; i++) {
+ void *object = p[i];
+
+ kmemleak_alloc_recursive(object, s->object_size, 1,
+ s->flags, flags);
+ p[i] = kasan_slab_alloc(s, object, flags);
+ }
}
-
if (memcg_kmem_enabled())
memcg_kmem_put_cache(s);
}
--
2.17.2 (Apple Git-113)
* Re: [PATCH] kmemleak: survive in a low-memory situation
2019-01-02 16:08 ` [PATCH] kmemleak: survive in a low-memory situation Qian Cai
@ 2019-01-02 16:59 ` Catalin Marinas
2019-01-02 18:06 ` [PATCH v2] " Qian Cai
0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2019-01-02 16:59 UTC (permalink / raw)
To: Qian Cai
Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, linux-kernel
Hi Qian,
On Wed, Jan 02, 2019 at 11:08:49AM -0500, Qian Cai wrote:
> Kmemleak could quickly fail to allocate an object structure and then
> disable itself in a low-memory situation. For example, when running an
> mmap() workload that triggers swapping and the OOM killer [1].
>
> First, it unnecessarily attempts to allocate kmemleak metadata even
> though the object returned by kmem_cache_alloc() is NULL. For example,
>
> alloc_io
> bio_alloc_bioset
> mempool_alloc
> mempool_alloc_slab
> kmem_cache_alloc
> slab_alloc_node
> __slab_alloc <-- could return NULL
> slab_post_alloc_hook
> kmemleak_alloc_recursive
kmemleak_alloc() only continues with the kmemleak_object allocation if
the given pointer is not NULL.
> diff --git a/mm/slab.h b/mm/slab.h
> index 4190c24ef0e9..51a9a942cc56 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -435,15 +435,16 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> {
> size_t i;
>
> - flags &= gfp_allowed_mask;
> - for (i = 0; i < size; i++) {
> - void *object = p[i];
> -
> - kmemleak_alloc_recursive(object, s->object_size, 1,
> - s->flags, flags);
> - p[i] = kasan_slab_alloc(s, object, flags);
> + if (*p) {
> + flags &= gfp_allowed_mask;
> + for (i = 0; i < size; i++) {
> + void *object = p[i];
> +
> + kmemleak_alloc_recursive(object, s->object_size, 1,
> + s->flags, flags);
> + p[i] = kasan_slab_alloc(s, object, flags);
> + }
> }
This is not necessary for kmemleak.
--
Catalin
* [PATCH v2] kmemleak: survive in a low-memory situation
2019-01-02 16:59 ` Catalin Marinas
@ 2019-01-02 18:06 ` Qian Cai
2019-01-03 9:32 ` Michal Hocko
0 siblings, 1 reply; 10+ messages in thread
From: Qian Cai @ 2019-01-02 18:06 UTC (permalink / raw)
To: catalin.marinas
Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm,
linux-kernel, Qian Cai
Kmemleak could quickly fail to allocate an object structure and then
disable itself in a low-memory situation. For example, when running an
mmap() workload that triggers swapping and the OOM killer [1].

The kmemleak metadata allocation could fail even though the tracked
object's allocation succeeded. Hence, as a last-ditch effort, kmemleak
could still try a direct reclaim if it is not executing in an atomic
context (spinlock, irq-handler, etc.), or a high-priority GFP_ATOMIC
allocation if it is.
[1]
https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/mem/oom/oom01.c
Signed-off-by: Qian Cai <cai@lca.pw>
---
v2: remove the needless checking for NULL objects in slab_post_alloc_hook()
pointed out by Catalin.
mm/kmemleak.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index f9d9dc250428..9e1aa3b7df75 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
struct rb_node **link, *rb_parent;
object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
+#ifdef CONFIG_PREEMPT_COUNT
+ if (!object) {
+ /* last-ditch effort in a low-memory situation */
+ if (irqs_disabled() || is_idle_task(current) || in_atomic())
+ gfp = GFP_ATOMIC;
+ else
+ gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
+ object = kmem_cache_alloc(object_cache, gfp);
+ }
+#endif
if (!object) {
pr_warn("Cannot allocate a kmemleak_object structure\n");
kmemleak_disable();
--
2.17.2 (Apple Git-113)
* Re: [PATCH v2] kmemleak: survive in a low-memory situation
2019-01-02 18:06 ` [PATCH v2] " Qian Cai
@ 2019-01-03 9:32 ` Michal Hocko
2019-01-03 16:51 ` Qian Cai
0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2019-01-03 9:32 UTC (permalink / raw)
To: Qian Cai
Cc: catalin.marinas, akpm, cl, penberg, rientjes, iamjoonsoo.kim,
linux-mm, linux-kernel
On Wed 02-01-19 13:06:19, Qian Cai wrote:
[...]
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index f9d9dc250428..9e1aa3b7df75 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> struct rb_node **link, *rb_parent;
>
> object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> +#ifdef CONFIG_PREEMPT_COUNT
> + if (!object) {
> + /* last-ditch effort in a low-memory situation */
> + if (irqs_disabled() || is_idle_task(current) || in_atomic())
> + gfp = GFP_ATOMIC;
> + else
> + gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
> + object = kmem_cache_alloc(object_cache, gfp);
> + }
> +#endif
I do not get it. How can this possibly help when gfp_kmemleak_mask()
adds __GFP_NOFAIL modifier to the given gfp mask? Or is this not the
case anymore in some tree?
--
Michal Hocko
SUSE Labs
* Re: [PATCH v2] kmemleak: survive in a low-memory situation
2019-01-03 9:32 ` Michal Hocko
@ 2019-01-03 16:51 ` Qian Cai
2019-01-03 17:07 ` Michal Hocko
0 siblings, 1 reply; 10+ messages in thread
From: Qian Cai @ 2019-01-03 16:51 UTC (permalink / raw)
To: Michal Hocko
Cc: catalin.marinas, akpm, cl, penberg, rientjes, iamjoonsoo.kim,
linux-mm, linux-kernel
On 1/3/19 4:32 AM, Michal Hocko wrote:
> On Wed 02-01-19 13:06:19, Qian Cai wrote:
> [...]
>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>> index f9d9dc250428..9e1aa3b7df75 100644
>> --- a/mm/kmemleak.c
>> +++ b/mm/kmemleak.c
>> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
>> struct rb_node **link, *rb_parent;
>>
>> object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
>> +#ifdef CONFIG_PREEMPT_COUNT
>> + if (!object) {
>> + /* last-ditch effort in a low-memory situation */
>> + if (irqs_disabled() || is_idle_task(current) || in_atomic())
>> + gfp = GFP_ATOMIC;
>> + else
>> + gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
>> + object = kmem_cache_alloc(object_cache, gfp);
>> + }
>> +#endif
>
> I do not get it. How can this possibly help when gfp_kmemleak_mask()
> adds __GFP_NOFAIL modifier to the given gfp mask? Or is this not the
> case anymore in some tree?
Well, an allocation with __GFP_NOFAIL can still fail easily without
__GFP_DIRECT_RECLAIM in a low-memory situation.
__alloc_pages_slowpath():
/* Caller is not willing to reclaim, we can't balance anything */
if (!can_direct_reclaim)
goto nopage;
nopage:
/*
* All existing users of the __GFP_NOFAIL are blockable, so
* warn of any new users that actually require GFP_NOWAIT
*/
if (WARN_ON_ONCE(!can_direct_reclaim))
goto fail;
* Re: [PATCH v2] kmemleak: survive in a low-memory situation
2019-01-03 16:51 ` Qian Cai
@ 2019-01-03 17:07 ` Michal Hocko
2019-01-07 10:43 ` Catalin Marinas
0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2019-01-03 17:07 UTC (permalink / raw)
To: Qian Cai
Cc: catalin.marinas, akpm, cl, penberg, rientjes, iamjoonsoo.kim,
linux-mm, linux-kernel
On Thu 03-01-19 11:51:57, Qian Cai wrote:
> On 1/3/19 4:32 AM, Michal Hocko wrote:
> > On Wed 02-01-19 13:06:19, Qian Cai wrote:
> > [...]
> >> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> >> index f9d9dc250428..9e1aa3b7df75 100644
> >> --- a/mm/kmemleak.c
> >> +++ b/mm/kmemleak.c
> >> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> >> struct rb_node **link, *rb_parent;
> >>
> >> object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> >> +#ifdef CONFIG_PREEMPT_COUNT
> >> + if (!object) {
> >> + /* last-ditch effort in a low-memory situation */
> >> + if (irqs_disabled() || is_idle_task(current) || in_atomic())
> >> + gfp = GFP_ATOMIC;
> >> + else
> >> + gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
> >> + object = kmem_cache_alloc(object_cache, gfp);
> >> + }
> >> +#endif
> >
> > I do not get it. How can this possibly help when gfp_kmemleak_mask()
> > adds __GFP_NOFAIL modifier to the given gfp mask? Or is this not the
> > case anymore in some tree?
>
> Well, __GFP_NOFAIL can still fail easily without __GFP_DIRECT_RECLAIM in a
> low-memory situation.
OK, I guess I understand now. So the issue is that a (general) atomic
allocation will pass its gfp mask down to kmemleak, and you are
trying/hoping that, if the allocation is not from an atomic context,
you can fortify it by using a sleepable allocation for the kmemleak
metadata, or otherwise give it access to the memory reserves for atomic
allocations.

I think this is still fragile because most atomic allocations are there
for a good reason. As I've said earlier, the current implementation,
which abuses __GFP_NOFAIL, is far from great, and we have discussed
some alternatives. Not sure what came out of it.
I will not object to this workaround but I strongly believe that
kmemleak should rethink the metadata allocation strategy to be really
robust.
--
Michal Hocko
SUSE Labs
* Re: [PATCH v2] kmemleak: survive in a low-memory situation
2019-01-03 17:07 ` Michal Hocko
@ 2019-01-07 10:43 ` Catalin Marinas
2019-01-08 2:06 ` Qian Cai
0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2019-01-07 10:43 UTC (permalink / raw)
To: Michal Hocko
Cc: Qian Cai, akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm,
linux-kernel
On Thu, Jan 03, 2019 at 06:07:35PM +0100, Michal Hocko wrote:
> > > On Wed 02-01-19 13:06:19, Qian Cai wrote:
> > > [...]
> > >> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> > >> index f9d9dc250428..9e1aa3b7df75 100644
> > >> --- a/mm/kmemleak.c
> > >> +++ b/mm/kmemleak.c
> > >> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> > >> struct rb_node **link, *rb_parent;
> > >>
> > >> object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > >> +#ifdef CONFIG_PREEMPT_COUNT
> > >> + if (!object) {
> > >> + /* last-ditch effort in a low-memory situation */
> > >> + if (irqs_disabled() || is_idle_task(current) || in_atomic())
> > >> + gfp = GFP_ATOMIC;
> > >> + else
> > >> + gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
> > >> + object = kmem_cache_alloc(object_cache, gfp);
> > >> + }
> > >> +#endif
[...]
> I will not object to this workaround but I strongly believe that
> kmemleak should rethink the metadata allocation strategy to be really
> robust.
This would be nice indeed and it was discussed last year. I just haven't
got around to trying anything yet:
https://marc.info/?l=linux-mm&m=152812489819532
--
Catalin
* Re: [PATCH v2] kmemleak: survive in a low-memory situation
2019-01-07 10:43 ` Catalin Marinas
@ 2019-01-08 2:06 ` Qian Cai
2019-01-08 3:49 ` Qian Cai
0 siblings, 1 reply; 10+ messages in thread
From: Qian Cai @ 2019-01-08 2:06 UTC (permalink / raw)
To: Catalin Marinas, Michal Hocko
Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, linux-kernel
On 1/7/19 5:43 AM, Catalin Marinas wrote:
> On Thu, Jan 03, 2019 at 06:07:35PM +0100, Michal Hocko wrote:
>>>> On Wed 02-01-19 13:06:19, Qian Cai wrote:
>>>> [...]
>>>>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>>>>> index f9d9dc250428..9e1aa3b7df75 100644
>>>>> --- a/mm/kmemleak.c
>>>>> +++ b/mm/kmemleak.c
>>>>> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
>>>>> struct rb_node **link, *rb_parent;
>>>>>
>>>>> object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
>>>>> +#ifdef CONFIG_PREEMPT_COUNT
>>>>> + if (!object) {
>>>>> + /* last-ditch effort in a low-memory situation */
>>>>> + if (irqs_disabled() || is_idle_task(current) || in_atomic())
>>>>> + gfp = GFP_ATOMIC;
>>>>> + else
>>>>> + gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
>>>>> + object = kmem_cache_alloc(object_cache, gfp);
>>>>> + }
>>>>> +#endif
> [...]
>> I will not object to this workaround but I strongly believe that
>> kmemleak should rethink the metadata allocation strategy to be really
>> robust.
>
> This would be nice indeed and it was discussed last year. I just haven't
> got around to trying anything yet:
>
> https://marc.info/?l=linux-mm&m=152812489819532
>
It could be helpful to apply this 10-line patch first, if it has no
fundamental issue, as it has survived roughly 50 runs of the LTP oom*
workloads without a single kmemleak allocation failure.

Of course, if someone is going to embed the kmemleak metadata into the
slab objects themselves soon, this workaround is not needed.
* Re: [PATCH v2] kmemleak: survive in a low-memory situation
2019-01-08 2:06 ` Qian Cai
@ 2019-01-08 3:49 ` Qian Cai
0 siblings, 0 replies; 10+ messages in thread
From: Qian Cai @ 2019-01-08 3:49 UTC (permalink / raw)
To: Catalin Marinas, Michal Hocko
Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, linux-kernel
On 1/7/19 9:06 PM, Qian Cai wrote:
>
>
> On 1/7/19 5:43 AM, Catalin Marinas wrote:
>> On Thu, Jan 03, 2019 at 06:07:35PM +0100, Michal Hocko wrote:
>>>>> On Wed 02-01-19 13:06:19, Qian Cai wrote:
>>>>> [...]
>>>>>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>>>>>> index f9d9dc250428..9e1aa3b7df75 100644
>>>>>> --- a/mm/kmemleak.c
>>>>>> +++ b/mm/kmemleak.c
>>>>>> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
>>>>>> struct rb_node **link, *rb_parent;
>>>>>>
>>>>>> object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
>>>>>> +#ifdef CONFIG_PREEMPT_COUNT
>>>>>> + if (!object) {
>>>>>> + /* last-ditch effort in a low-memory situation */
>>>>>> + if (irqs_disabled() || is_idle_task(current) || in_atomic())
>>>>>> + gfp = GFP_ATOMIC;
>>>>>> + else
>>>>>> + gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
>>>>>> + object = kmem_cache_alloc(object_cache, gfp);
>>>>>> + }
>>>>>> +#endif
>> [...]
>>> I will not object to this workaround but I strongly believe that
>>> kmemleak should rethink the metadata allocation strategy to be really
>>> robust.
>>
>> This would be nice indeed and it was discussed last year. I just haven't
>> got around to trying anything yet:
>>
>> https://marc.info/?l=linux-mm&m=152812489819532
>>
>
> It could be helpful to apply this 10-line patch first, if it has no
> fundamental issue, as it has survived roughly 50 runs of the LTP oom*
> workloads without a single kmemleak allocation failure.
>
> Of course, if someone is going to embed the kmemleak metadata into the
> slab objects themselves soon, this workaround is not needed.
>
Well, it is really hard to tell whether eventually redesigning kmemleak
to embed the metadata into slab objects would by itself survive the LTP
oom* workloads, because it would seemingly still use separate metadata
for non-slab objects, where the kmemleak allocation could fail just as
it does now and disable itself again.
end of thread, other threads:[~2019-01-08 3:49 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-30 5:25 kmemleak not play well with low memory situation Qian Cai
2019-01-02 16:08 ` [PATCH] kmemleak: survive in a low-memory situation Qian Cai
2019-01-02 16:59 ` Catalin Marinas
2019-01-02 18:06 ` [PATCH v2] " Qian Cai
2019-01-03 9:32 ` Michal Hocko
2019-01-03 16:51 ` Qian Cai
2019-01-03 17:07 ` Michal Hocko
2019-01-07 10:43 ` Catalin Marinas
2019-01-08 2:06 ` Qian Cai
2019-01-08 3:49 ` Qian Cai