From: Mel Gorman <mgorman@techsingularity.net>
To: Jérôme Glisse <jglisse@redhat.com>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	David Nellans <dnellans@nvidia.com>,
	Evgeny Baskakov <ebaskakov@nvidia.com>,
	Mark Hairgrove <mhairgrove@nvidia.com>,
	Sherry Cheung <SCheung@nvidia.com>,
	Subhash Gutti <sgutti@nvidia.com>
Subject: Re: [HMM 10/16] mm/hmm/mirror: mirror process address space on device with HMM helpers
Date: Sun, 19 Mar 2017 20:09:32 +0000
Message-ID: <20170319200932.GH2774@techsingularity.net>
In-Reply-To: <1489680335-6594-11-git-send-email-jglisse@redhat.com>

On Thu, Mar 16, 2017 at 12:05:29PM -0400, Jérôme Glisse wrote:
> This is useful for NVidia GPU >= Pascal, Mellanox IB >= mlx5 and more
> hardware in the future.
> 

Insert boilerplate comment about an in-kernel user being ready here.

> +#if IS_ENABLED(CONFIG_HMM_MIRROR)
> <SNIP>
> + */
> +struct hmm_mirror_ops {
> +	/* update() - update virtual address range of memory
> +	 *
> +	 * @mirror: pointer to struct hmm_mirror
> +	 * @update: update's type (turn read only, unmap, ...)
> +	 * @start: virtual start address of the range to update
> +	 * @end: virtual end address of the range to update
> +	 *
> +	 * This callback is call when the CPU page table is updated, the device
> +	 * driver must update device page table accordingly to update's action.
> +	 *
> +	 * Device driver callback must wait until the device has fully updated
> +	 * its view for the range. Note we plan to make this asynchronous in
> +	 * later patches, so that multiple devices can schedule update to their
> +	 * page tables, and once all device have schedule the update then we
> +	 * wait for them to propagate.

That sort of future TODO comment may be more appropriate in the
changelog. It doesn't help understand what update is for.

"update" is also unnecessarily vague. sync_cpu_device_pagetables may be
unnecessarily long but at least it's a hint about what's going on. It's
also not super clear from the comment but I assume this is called via a
mmu notifier. If so, that should be mentioned here.
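
For illustration, something along these lines is roughly what I have in
mind. The prototype and the comment are only my guess at the intent, not
a tested interface:

struct hmm_mirror_ops {
	/*
	 * sync_cpu_device_pagetables() - propagate a CPU page table
	 * change to the device page table.
	 *
	 * Called from the mmu notifier path when the CPU page table
	 * for [start, end) is updated. The driver must not return
	 * until the device view of that range is consistent with the
	 * CPU page table.
	 */
	void (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
					   enum hmm_update action,
					   unsigned long start,
					   unsigned long end);
};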

> +int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm);
> +int hmm_mirror_register_locked(struct hmm_mirror *mirror,
> +			       struct mm_struct *mm);

Just a note to say this looks very dangerous and it's not clear why an
unlocked version should ever be used. It's the HMM mirror list that is
being protected but to external callers, that type is opaque so how can
they ever safely use the unlocked version?

Recommend getting rid of it unless there are really great reasons why an
external user of this API should be able to poke at HMM internals.

> diff --git a/mm/Kconfig b/mm/Kconfig
> index fe8ad24..8ae7600 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -293,6 +293,21 @@ config HMM
>  	bool
>  	depends on MMU
>  
> +config HMM_MIRROR
> +	bool "HMM mirror CPU page table into a device page table"
> +	select HMM
> +	select MMU_NOTIFIER
> +	help
> +	  HMM mirror is a set of helpers to mirror CPU page table into a device
> +	  page table. There is two side, first keep both page table synchronize

Word smithing, "there are two sides". There are a number of places in
earlier patches where there are minor typos that are worth searching
for. This one really stuck out though.

>  /*
>   * struct hmm - HMM per mm struct
>   *
>   * @mm: mm struct this HMM struct is bound to
> + * @lock: lock protecting mirrors list
> + * @mirrors: list of mirrors for this mm
> + * @wait_queue: wait queue
> + * @sequence: we track updates to the CPU page table with a sequence number
> + * @mmu_notifier: mmu notifier to track updates to CPU page table
> + * @notifier_count: number of currently active notifiers
>   */
>  struct hmm {
>  	struct mm_struct	*mm;
> +	spinlock_t		lock;
> +	struct list_head	mirrors;
> +	atomic_t		sequence;
> +	wait_queue_head_t	wait_queue;
> +	struct mmu_notifier	mmu_notifier;
> +	atomic_t		notifier_count;
>  };

Minor nit, but notifier_count might be better named nr_notifiers because it
is not at all clear what the difference between the sequence count and the
notifier count is.

Also do a pahole check on this struct. There is a potential for holes with
the embedded structs and I also think that having the notifier_count and
sequence on separate cache lines just unnecessarily increases the cache
footprint. They are generally updated together so you might as well take
one cache miss to cover them both.
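
As a rough sketch of the grouping I mean -- field order only, I have not
run pahole on this myself:

struct hmm {
	struct mm_struct	*mm;
	struct mmu_notifier	mmu_notifier;
	spinlock_t		lock;
	struct list_head	mirrors;
	/*
	 * Updated together on every invalidation, keep them adjacent
	 * so they share a cache line.
	 */
	atomic_t		notifier_count;
	atomic_t		sequence;
	wait_queue_head_t	wait_queue;
};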

>  
>  /*
> @@ -47,6 +60,12 @@ static struct hmm *hmm_register(struct mm_struct *mm)
>  		hmm = kmalloc(sizeof(*hmm), GFP_KERNEL);
>  		if (!hmm)
>  			return NULL;
> +		init_waitqueue_head(&hmm->wait_queue);
> +		atomic_set(&hmm->notifier_count, 0);
> +		INIT_LIST_HEAD(&hmm->mirrors);
> +		atomic_set(&hmm->sequence, 0);
> +		hmm->mmu_notifier.ops = NULL;
> +		spin_lock_init(&hmm->lock);
>  		hmm->mm = mm;
>  
>  		spin_lock(&mm->page_table_lock);
> @@ -79,3 +98,170 @@ void hmm_mm_destroy(struct mm_struct *mm)
>  	spin_unlock(&mm->page_table_lock);
>  	kfree(hmm);
>  }
> +
> +
> +#if IS_ENABLED(CONFIG_HMM_MIRROR)
> +static void hmm_invalidate_range(struct hmm *hmm,
> +				 enum hmm_update action,
> +				 unsigned long start,
> +				 unsigned long end)
> +{
> +	struct hmm_mirror *mirror;
> +
> +	/*
> +	 * Mirror being added or removed is a rare event so list traversal isn't
> +	 * protected by a lock, we rely on simple rules. All list modification
> +	 * are done using list_add_rcu() and list_del_rcu() under a spinlock to
> +	 * protect from concurrent addition or removal but not traversal.
> +	 *
> +	 * Because hmm_mirror_unregister() waits for all running invalidation to
> +	 * complete (and thus all list traversals to finish), none of the mirror
> +	 * structs can be freed from under us while traversing the list and thus
> +	 * it is safe to dereference their list pointer even if they were just
> +	 * removed.
> +	 */
> +	list_for_each_entry (mirror, &hmm->mirrors, list)
> +		mirror->ops->update(mirror, action, start, end);
> +}

Double check this very carefully because I believe it's wrong. If the
update side is protected by a spin lock then the traversal must be done
using list_for_each_entry_rcu with the rcu read lock held. I see no
indication from this context that you have the necessary protection in
place.
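
If RCU really is the intended protection for the traversal then it needs
to look something like the following, which in turn assumes the driver
update callback is safe to call under rcu_read_lock(), i.e. it never
sleeps:

	rcu_read_lock();
	list_for_each_entry_rcu(mirror, &hmm->mirrors, list)
		mirror->ops->update(mirror, action, start, end);
	rcu_read_unlock();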

> +
> +static void hmm_invalidate_page(struct mmu_notifier *mn,
> +				struct mm_struct *mm,
> +				unsigned long addr)
> +{
> +	unsigned long start = addr & PAGE_MASK;
> +	unsigned long end = start + PAGE_SIZE;
> +	struct hmm *hmm = mm->hmm;
> +
> +	VM_BUG_ON(!hmm);
> +
> +	atomic_inc(&hmm->notifier_count);
> +	atomic_inc(&hmm->sequence);
> +	hmm_invalidate_range(mm->hmm, HMM_UPDATE_INVALIDATE, start, end);
> +	atomic_dec(&hmm->notifier_count);
> +	wake_up(&hmm->wait_queue);
> +}
> +

The wait queue made me search further and I see it's only necessary for an
unregister but you end up with a bunch of atomic operations as a result
of it and a lot of wakeup checks that are almost never useful. That is a
real shame in itself but I'm not convinced it's right either.

It's not clear why both notifier count and sequence are needed considering
that nothing special is done with the sequence value. The comment says
updates are tracked with it but not why or what happens when that sequence
counter wraps. I strongly suspect it can be removed.

Most importantly, with the lack of RCU locking in the invalidate_range
function, it's not clear at all how this is safe. The unregister function
says it's to keep the traversal safe but it does the list modification
before the wait, so nothing is really protected.

You either need to keep the locking simple here or get the RCU details right.
If it's important that unregister returns with no invalidation still running
on the mirror being unregistered, then that can be achieved with a
synchronize_rcu() after the list update, assuming that the invalidations
take the rcu read lock properly.
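
In other words, something like this on the unregister side -- a sketch
only, and it assumes the traversal above runs under rcu_read_lock():

	spin_lock(&hmm->lock);
	list_del_rcu(&mirror->list);
	spin_unlock(&hmm->lock);

	/*
	 * Wait for any invalidation currently walking the mirror list
	 * to finish before the caller is allowed to free the mirror.
	 */
	synchronize_rcu();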
