From: Waiman Long <llong@redhat.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Vlastimil Babka <vbabka@suse.cz>, Roman Gushchin <guro@fb.com>,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, Shakeel Butt <shakeelb@google.com>,
	Muchun Song <songmuchun@bytedance.com>,
	Alex Shi <alex.shi@linux.alibaba.com>,
	Chris Down <chris@chrisdown.name>,
	Yafang Shao <laoar.shao@gmail.com>,
	Wei Yang <richard.weiyang@gmail.com>,
	Masayoshi Mizuma <msys.mizuma@gmail.com>,
	Xing Zhengjun <zhengjun.xing@linux.intel.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH v4 2/5] mm/memcg: Cache vmstat data in percpu memcg_stock_pcp
Date: Mon, 19 Apr 2021 19:42:07 -0400	[thread overview]
Message-ID: <09ea1749-8978-091b-7727-d86f8e6c49cc@redhat.com> (raw)
In-Reply-To: <YH2yA1oZoyQoMhAH@cmpxchg.org>

On 4/19/21 12:38 PM, Johannes Weiner wrote:
> On Sun, Apr 18, 2021 at 08:00:29PM -0400, Waiman Long wrote:
>> Before the new slab memory controller with per-object byte charging,
>> charging and vmstat data updates happened only when new slab pages
>> were allocated or freed. Now they are done with every kmem_cache_alloc()
>> and kmem_cache_free(). This causes additional overhead for workloads
>> that generate a lot of alloc and free calls.
>>
>> The memcg_stock_pcp is used to cache byte charges for a specific
>> obj_cgroup to reduce that overhead. To reduce it further, this patch
>> caches the vmstat data in the memcg_stock_pcp structure as well,
>> until a page size worth of updates accumulates or other cached data
>> change. Caching the vmstat data in the per-cpu stock replaces two
>> writes to non-hot cachelines (for the memcg-specific as well as the
>> memcg-lruvec-specific vmstat data) with a single write to a hot local
>> stock cacheline.
>>
>> On a 2-socket Cascade Lake server with instrumentation enabled and this
>> patch applied, about 20% (634400 out of 3243830) of the calls to
>> mod_objcg_state() led to an actual call to __mod_objcg_state() after
>> initial boot. During a parallel kernel build, the figure was about 17%
>> (24329265 out of 142512465). So caching the vmstat data reduces the
>> number of calls to __mod_objcg_state() by more than 80%.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> Reviewed-by: Shakeel Butt <shakeelb@google.com>
>> ---
>>   mm/memcontrol.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++---
>>   1 file changed, 61 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index dc9032f28f2e..693453f95d99 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -2213,7 +2213,10 @@ struct memcg_stock_pcp {
>>   
>>   #ifdef CONFIG_MEMCG_KMEM
>>   	struct obj_cgroup *cached_objcg;
>> +	struct pglist_data *cached_pgdat;
>>   	unsigned int nr_bytes;
>> +	int vmstat_idx;
>> +	int vmstat_bytes;
>>   #endif
>>   
>>   	struct work_struct work;
>> @@ -3150,8 +3153,9 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>>   	css_put(&memcg->css);
>>   }
>>   
>> -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>> -		     enum node_stat_item idx, int nr)
>> +static inline void __mod_objcg_state(struct obj_cgroup *objcg,
>> +				     struct pglist_data *pgdat,
>> +				     enum node_stat_item idx, int nr)
> This naming is dangerous, as the __mod_foo naming scheme we use
> everywhere else suggests it's the same function as mod_foo() just with
> preemption/irqs disabled.
>
I will change its name to something like mod_objcg_mlstate() to indicate
that it is a different function. It is hard to come up with a good name
that isn't too long, though.
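
For reference, the renamed helper would keep the current body unchanged,
something like this (just a sketch; the name is still tentative):

	static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
					     struct pglist_data *pgdat,
					     enum node_stat_item idx, int nr)
	{
		struct mem_cgroup *memcg;
		struct lruvec *lruvec;

		/* Update only the memcg/lruvec vmstat counters, no charging */
		rcu_read_lock();
		memcg = obj_cgroup_memcg(objcg);
		lruvec = mem_cgroup_lruvec(memcg, pgdat);
		__mod_memcg_lruvec_state(lruvec, idx, nr);
		rcu_read_unlock();
	}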


>> @@ -3159,10 +3163,53 @@ void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>>   	rcu_read_lock();
>>   	memcg = obj_cgroup_memcg(objcg);
>>   	lruvec = mem_cgroup_lruvec(memcg, pgdat);
>> -	mod_memcg_lruvec_state(lruvec, idx, nr);
>> +	__mod_memcg_lruvec_state(lruvec, idx, nr);
>>   	rcu_read_unlock();
>>   }
>>   
>> +void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>> +		     enum node_stat_item idx, int nr)
>> +{
>> +	struct memcg_stock_pcp *stock;
>> +	unsigned long flags;
>> +
>> +	local_irq_save(flags);
>> +	stock = this_cpu_ptr(&memcg_stock);
>> +
>> +	/*
>> +	 * Save vmstat data in stock and skip vmstat array update unless
>> +	 * accumulating over a page of vmstat data or when pgdat or idx
>> +	 * changes.
>> +	 */
>> +	if (stock->cached_objcg != objcg) {
>> +		/* Output the current data as is */
> When you get here with the wrong objcg and hit the cold path, it's
> usually immediately followed by an uncharge -> refill_obj_stock() that
> will then flush and reset cached_objcg.
>
> Instead of doing two cold paths, why not flush the old objcg right
> away and set the new so that refill_obj_stock() can use the fast path?

That is a good idea. Will do that.
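
Roughly along these lines at the top of mod_objcg_state() (just a sketch
of the idea; details may change when I respin):

	stock = this_cpu_ptr(&memcg_stock);
	if (stock->cached_objcg != objcg) {
		/* Flush the old objcg's cached data and switch to the new one */
		drain_obj_stock(stock);
		obj_cgroup_get(objcg);
		stock->cached_objcg = objcg;
	}

so that the refill_obj_stock() that typically follows on the uncharge
path hits the fast path.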


>
>> +	} else if (!stock->vmstat_bytes) {
>> +		/* Save the current data */
>> +		stock->vmstat_bytes = nr;
>> +		stock->vmstat_idx = idx;
>> +		stock->cached_pgdat = pgdat;
>> +		nr = 0;
>> +	} else if ((stock->cached_pgdat != pgdat) ||
>> +		   (stock->vmstat_idx != idx)) {
>> +		/* Output the cached data & save the current data */
>> +		swap(nr, stock->vmstat_bytes);
>> +		swap(idx, stock->vmstat_idx);
>> +		swap(pgdat, stock->cached_pgdat);
> Is this optimization worth doing?
>
> You later split vmstat_bytes and idx doesn't change anymore.

I am going to merge patches 2 and 4 to avoid this confusion.


>
> How often does the pgdat change? This is a per-cpu cache after all,
> and the numa node a given cpu allocates from tends to not change that
> often. Even with interleaving mode, which I think is pretty rare, the
> interleaving happens at the slab/page level, not the object level, and
> the cache isn't bigger than a page anyway.

Testing on a 2-socket system indicated that pgdat changes roughly
10-20% of the time, so it does happen, especially on the kfree() path,
I think. I tried caching vmstat updates for the local node only, but
that gave more misses. So for now I am just going to flush out the
existing data and switch to the new pgdat.
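
In other words, replace the swap()s with something like the following
(a sketch only, using the tentatively renamed flush helper from above):

	} else if ((stock->cached_pgdat != pgdat) ||
		   (stock->vmstat_idx != idx)) {
		/* Write out the cached data & start caching the new pgdat/idx */
		mod_objcg_mlstate(objcg, stock->cached_pgdat,
				  stock->vmstat_idx, stock->vmstat_bytes);
		stock->vmstat_bytes = nr;
		stock->vmstat_idx = idx;
		stock->cached_pgdat = pgdat;
		nr = 0;
	}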


>
>> +	} else {
>> +		stock->vmstat_bytes += nr;
>> +		if (abs(stock->vmstat_bytes) > PAGE_SIZE) {
>> +			nr = stock->vmstat_bytes;
>> +			stock->vmstat_bytes = 0;
>> +		} else {
>> +			nr = 0;
>> +		}
> ..and this is the regular overflow handling done by the objcg and
> memcg charge stock as well.
>
> How about this?
>
> 	if (stock->cached_objcg != objcg ||
> 	    stock->cached_pgdat != pgdat ||
> 	    stock->vmstat_idx != idx) {
> 		drain_obj_stock(stock);
> 		obj_cgroup_get(objcg);
> 		stock->cached_objcg = objcg;
> 		stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
> 		stock->vmstat_idx = idx;
> 	}
> 	stock->vmstat_bytes += nr_bytes;
>
> 	if (abs(stock->vmstat_bytes) > PAGE_SIZE)
> 		drain_obj_stock(stock);
>
> (Maybe we could be clever, here since the charge and stat caches are
> the same size: don't flush an oversized charge cache from
> refill_obj_stock in the charge path, but leave it to the
> mod_objcg_state() that follows; likewise don't flush an undersized
> vmstat stock from mod_objcg_state() in the uncharge path, but leave it
> to the refill_obj_stock() that follows. Could get a bit complicated...)

If you look at patch 5, I am trying to avoid calling drain_obj_stock()
unless the objcg changes. I am going to do the same here.
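
The rough shape I have in mind (a sketch only; flush_obj_vmstat() is a
hypothetical helper that writes out just the cached vmstat delta without
draining the cached charge):

	if (stock->cached_objcg != objcg) {
		/* Only a different objcg forces a full drain */
		drain_obj_stock(stock);
		obj_cgroup_get(objcg);
		stock->cached_objcg = objcg;
	} else if (stock->cached_pgdat != pgdat || stock->vmstat_idx != idx) {
		/* Write out the cached vmstat data, but keep the charge */
		flush_obj_vmstat(stock);
	}
	stock->cached_pgdat = pgdat;
	stock->vmstat_idx = idx;
	stock->vmstat_bytes += nr;
	if (abs(stock->vmstat_bytes) > PAGE_SIZE)
		flush_obj_vmstat(stock);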

Cheers,
Longman


Thread overview: 49+ messages

2021-04-19  0:00 [PATCH v4 0/5] mm/memcg: Reduce kmemcache memory accounting overhead Waiman Long
2021-04-19  0:00 ` [PATCH v4 1/5] mm/memcg: Move mod_objcg_state() to memcontrol.c Waiman Long
2021-04-19 15:14   ` Johannes Weiner
2021-04-19 15:21     ` Waiman Long
2021-04-19 16:18       ` Waiman Long
2021-04-19 17:13         ` Johannes Weiner
2021-04-19 17:19           ` Waiman Long
2021-04-19 17:26             ` Waiman Long
2021-04-19 21:11               ` Johannes Weiner
2021-04-19 21:24                 ` Waiman Long
2021-04-20  8:05                 ` Michal Hocko
2021-04-19 15:24   ` Shakeel Butt
2021-04-19  0:00 ` [PATCH v4 2/5] mm/memcg: Cache vmstat data in percpu memcg_stock_pcp Waiman Long
2021-04-19 16:38   ` Johannes Weiner
2021-04-19 23:42     ` Waiman Long [this message]
2021-04-19  0:00 ` [PATCH v4 3/5] mm/memcg: Optimize user context object stock access Waiman Long
2021-04-19  0:00 ` [PATCH v4 4/5] mm/memcg: Save both reclaimable & unreclaimable bytes in object stock Waiman Long
2021-04-19 16:55   ` Johannes Weiner
2021-04-20 19:09     ` Waiman Long
2021-04-19  0:00 ` [PATCH v4 5/5] mm/memcg: Improve refill_obj_stock() performance Waiman Long
2021-04-19  6:06   ` [External] " Muchun Song
2021-04-19 15:00     ` Shakeel Butt
2021-04-19 15:19       ` Waiman Long
2021-04-19 15:56     ` Waiman Long