From: Muchun Song <songmuchun@bytedance.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: Tejun Heo <tj@kernel.org>, Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Shakeel Butt <shakeelb@google.com>, Roman Gushchin <guro@fb.com>,
	Yang Shi <shy828301@gmail.com>, Alex Shi <alexs@kernel.org>,
	Wei Yang <richard.weiyang@gmail.com>,
	Vlastimil Babka <vbabka@suse.cz>, Jens Axboe <axboe@kernel.dk>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	David Hildenbrand <david@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	apopple@nvidia.com, Minchan Kim <minchan@kernel.org>,
	Miaohe Lin <linmiaohe@huawei.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Cgroups <cgroups@vger.kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	kernel-team@android.com
Subject: Re: [External] [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
Date: Sat, 10 Jul 2021 19:08:17 +0800	[thread overview]
Message-ID: <CAMZfGtUqMKnMKDqY7wP+29U-fSxqsOv9OHnaZxQSsOtKrBQYfQ@mail.gmail.com> (raw)
In-Reply-To: <20210710003626.3549282-2-surenb@google.com>

On Sat, Jul 10, 2021 at 8:36 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
> to perform the mem_cgroup_disabled() static key check inline before
> calling the main body of the function. This minimizes the memcg overhead
> in the pagefault and exit_mmap paths when memcgs are disabled using the
> cgroup_disable=memory command-line option.
> This change results in ~0.4% overhead reduction when running the PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configurations on an 8-core ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

But some nits below.
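
Side note on where the win comes from: mem_cgroup_disabled() should boil
down to a static-branch test. Roughly (from memory of
include/linux/memcontrol.h, so take this as a sketch rather than a quote):

	static inline bool mem_cgroup_disabled(void)
	{
		/* cgroup_subsys_enabled() is a static_branch_likely() test */
		return !cgroup_subsys_enabled(memory_cgrp_subsys);
	}

With that check inlined into the callers, the disabled case becomes a single
patched jump in the fault path instead of a call into memcontrol.o.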

> ---
>  include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
>  mm/memcontrol.c            | 29 ++++++++++-------------------
>  2 files changed, 35 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index bfe5c486f4ad..39fa88051a42 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
>                 page_counter_read(&memcg->memory);
>  }
>
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +                       gfp_t gfp_mask);
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +                                   gfp_t gfp_mask)
> +{
> +       if (mem_cgroup_disabled())
> +               return 0;
> +       return __mem_cgroup_charge(page, mm, gfp_mask);
> +}
> +
>  int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
>                                   gfp_t gfp, swp_entry_t entry);
>  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
>
> -void mem_cgroup_uncharge(struct page *page);
> -void mem_cgroup_uncharge_list(struct list_head *page_list);
> +void __mem_cgroup_uncharge(struct page *page);
> +static inline void mem_cgroup_uncharge(struct page *page)
> +{
> +       if (mem_cgroup_disabled())
> +               return;
> +       __mem_cgroup_uncharge(page);
> +}
> +
> +void __mem_cgroup_uncharge_list(struct list_head *page_list);
> +static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
> +{
> +       if (mem_cgroup_disabled())
> +               return;
> +       __mem_cgroup_uncharge_list(page_list);
> +}
>
>  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index a228cd51c4bd..cdaf7003b43d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
>                         atomic_long_read(&parent->memory.children_low_usage)));
>  }
>
> -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> -                              gfp_t gfp)
> +static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
>  {
>         unsigned int nr_pages = thp_nr_pages(page);
>         int ret;
> @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>  }
>
>  /**
> - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
>   * @page: page to charge
>   * @mm: mm context of the victim
>   * @gfp_mask: reclaim mode
> @@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
>   *
>   * Returns 0 on success. Otherwise, an error code is returned.
>   */
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +                       gfp_t gfp_mask)
>  {
>         struct mem_cgroup *memcg;
>         int ret;
>
> -       if (mem_cgroup_disabled())
> -               return 0;
> -
>         memcg = get_mem_cgroup_from_mm(mm);
> -       ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> +       ret = charge_memcg(page, memcg, gfp_mask);
>         css_put(&memcg->css);
>
>         return ret;
> @@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
>                 memcg = get_mem_cgroup_from_mm(mm);
>         rcu_read_unlock();
>
> -       ret = __mem_cgroup_charge(page, memcg, gfp);
> +       ret = charge_memcg(page, memcg, gfp);
>
>         css_put(&memcg->css);
>         return ret;
> @@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  }
>
>  /**
> - * mem_cgroup_uncharge - uncharge a page
> + * __mem_cgroup_uncharge - uncharge a page
>   * @page: page to uncharge
>   *
>   * Uncharge a page previously charged with mem_cgroup_charge().

The comment here also needs to be updated.

mem_cgroup_charge() -> __mem_cgroup_charge()
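
i.e. the kernel-doc here would then read roughly like this (a sketch matching
the rename in this patch):

	/**
	 * __mem_cgroup_uncharge - uncharge a page
	 * @page: page to uncharge
	 *
	 * Uncharge a page previously charged with __mem_cgroup_charge().
	 */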

>   */
> -void mem_cgroup_uncharge(struct page *page)
> +void __mem_cgroup_uncharge(struct page *page)
>  {
>         struct uncharge_gather ug;
>
> -       if (mem_cgroup_disabled())
> -               return;
> -
>         /* Don't touch page->lru of any random page, pre-check: */
>         if (!page_memcg(page))
>                 return;
> @@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
>  }
>
>  /**
> - * mem_cgroup_uncharge_list - uncharge a list of page
> + * __mem_cgroup_uncharge_list - uncharge a list of page
>   * @page_list: list of pages to uncharge
>   *
>   * Uncharge a list of pages previously charged with
>   * mem_cgroup_charge().

Should be __mem_cgroup_charge().

Thanks.

>   */
> -void mem_cgroup_uncharge_list(struct list_head *page_list)
> +void __mem_cgroup_uncharge_list(struct list_head *page_list)
>  {
>         struct uncharge_gather ug;
>         struct page *page;
>
> -       if (mem_cgroup_disabled())
> -               return;
> -
>         uncharge_gather_clear(&ug);
>         list_for_each_entry(page, page_list, lru)
>                 uncharge_page(page, &ug);
> --
> 2.32.0.93.g670b81a890-goog
>

Thread overview: 50+ messages
2021-07-10  0:36 [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
2021-07-10  0:36 ` [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
2021-07-10 11:08   ` [External] " Muchun Song [this message]
2021-07-13  1:12     ` Suren Baghdasaryan
2021-07-12  7:15   ` Michal Hocko
2021-07-12 15:55     ` Suren Baghdasaryan
2021-07-18 16:55   ` Matthew Wilcox
2021-07-18 21:25     ` Suren Baghdasaryan
2021-07-18 21:29       ` Matthew Wilcox
2021-07-18 21:32         ` Suren Baghdasaryan
2021-07-10  0:36 ` [PATCH v3 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
2021-07-10 11:19   ` [External] " Muchun Song
2021-07-12  7:17   ` Michal Hocko
2021-07-12 15:57     ` Suren Baghdasaryan
2021-07-10  1:52 ` [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Miaohe Lin
2021-07-10  2:40   ` Suren Baghdasaryan
2021-07-10  3:37     ` Miaohe Lin
2021-07-10 10:54 ` [External] " Muchun Song
2021-07-12  7:11 ` Michal Hocko
2021-07-12 15:55   ` Suren Baghdasaryan
