From: Peter Zijlstra <peterz@infradead.org>
To: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Marc Zyngier <maz@kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Qais Yousef <qais.yousef@arm.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Quentin Perret <qperret@google.com>, Tejun Heo <tj@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	kernel-team@android.com, Oleg Nesterov <oleg@redhat.com>
Subject: Re: [RFC][PATCH] freezer,sched: Rewrite core freezer logic
Date: Thu, 3 Jun 2021 12:35:22 +0200	[thread overview]
Message-ID: <YLiwahWvnnkeL+vc@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20210602125452.GG30593@willie-the-truck>

On Wed, Jun 02, 2021 at 01:54:53PM +0100, Will Deacon wrote:

> There's also Documentation/power/freezing-of-tasks.rst to update. I'm not

Since it's .rst, the only update I'm willing to do is delete it
outright.

> sure if fs/proc/array.c should be updated to display frozen tasks; I
> couldn't see how that was useful, but thought I'd mention it anyway.

Yeah, I considered it too, but I figured that if we're all frozen
there's no one left to observe us being frozen, so I didn't bother.

> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 2982cfab1ae9..bfadc1dbcf24 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -95,7 +95,12 @@ struct task_group;
> >  #define TASK_WAKING			0x0200
> >  #define TASK_NOLOAD			0x0400
> >  #define TASK_NEW			0x0800
> > -#define TASK_STATE_MAX			0x1000
> > +#define TASK_FREEZABLE			0x1000
> > +#define __TASK_FREEZABLE_UNSAFE		0x2000
> 
> Given that this is only needed to avoid lockdep checks, maybe we should avoid
> allocating the bit if lockdep is not enabled? Otherwise, people might start
> to use it for other things.

Something like

#define __TASK_FREEZABLE_UNSAFE			(0x2000 * IS_ENABLED(CONFIG_LOCKDEP))

?

> > +#define TASK_FROZEN			0x4000
> > +#define TASK_STATE_MAX			0x8000
> > +
> > +#define TASK_FREEZABLE_UNSAFE		(TASK_FREEZABLE | __TASK_FREEZABLE_UNSAFE)
> 
> We probably want to preserve the "DO NOT ADD ANY NEW CALLERS OF THIS STATE"
> comment for the unsafe stuff.

Done.
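
For concreteness, with both of the above folded in, the block would read
something like this (sketch only; exact placement and wording of the
preserved comment still to be decided):

#define TASK_FREEZABLE			0x1000
#define __TASK_FREEZABLE_UNSAFE		(0x2000 * IS_ENABLED(CONFIG_LOCKDEP))
#define TASK_FROZEN			0x4000
#define TASK_STATE_MAX			0x8000

/*
 * DO NOT ADD ANY NEW CALLERS OF THIS STATE
 */
#define TASK_FREEZABLE_UNSAFE		(TASK_FREEZABLE | __TASK_FREEZABLE_UNSAFE)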

> > +/* Recursion relies on tail-call optimization to not blow away the stack */
> > +static bool __frozen(struct task_struct *p)
> > +{
> > +	if (p->state == TASK_FROZEN)
> > +		return true;
> 
> READ_ONCE()?

task_struct::state is volatile -- for now. I've got other patches to
deal with that.
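
Once those land, this would presumably become an explicit load; sketch only:

	/* sketch: after ->state loses its volatile qualifier */
	if (READ_ONCE(p->state) == TASK_FROZEN)
		return true;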

> > +
> > +	/*
> > +	 * If stuck in TRACED, and the ptracer is FROZEN, we're frozen too.
> > +	 */
> > +	if (task_is_traced(p))
> > +		return frozen(rcu_dereference(p->parent));
> > +
> > +	/*
> > +	 * If stuck in STOPPED and the parent is FROZEN, we're frozen too.
> > +	 */
> > +	if (task_is_stopped(p))
> > +		return frozen(rcu_dereference(p->real_parent));
> 
> This looks convincing, but I really can't tell if we're missing anything.

Yeah, Oleg would be the one to tell us I suppose.

> > +static bool __freeze_task(struct task_struct *p)
> > +{
> > +	unsigned long flags;
> > +	unsigned int state;
> > +	bool frozen = false;
> > +
> > +	raw_spin_lock_irqsave(&p->pi_lock, flags);
> > +	state = READ_ONCE(p->state);
> > +	if (state & TASK_FREEZABLE) {
> > +		/*
> > +		 * Only TASK_NORMAL can be augmented with TASK_FREEZABLE,
> > +		 * since they can suffer spurious wakeups.
> > +		 */
> > +		WARN_ON_ONCE(!(state & TASK_NORMAL));
> > +
> > +#ifdef CONFIG_LOCKDEP
> > +		/*
> > +		 * It's dangerous to freeze with locks held; there be dragons there.
> > +		 */
> > +		if (!(state & __TASK_FREEZABLE_UNSAFE))
> > +			WARN_ON_ONCE(debug_locks && p->lockdep_depth);
> > +#endif
> > +
> > +		p->state = TASK_FROZEN;
> > +		frozen = true;
> > +	}
> > +	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
> > +
> > +	return frozen;
> > +}
> > +
> >  /**
> >   * freeze_task - send a freeze request to given task
> >   * @p: task to send the request to
> > @@ -116,20 +173,8 @@ bool freeze_task(struct task_struct *p)
> >  {
> >  	unsigned long flags;
> >  
> >  	spin_lock_irqsave(&freezer_lock, flags);
> > +	if (!freezing(p) || frozen(p) || __freeze_task(p)) {
> >  		spin_unlock_irqrestore(&freezer_lock, flags);
> >  		return false;
> >  	}
> 
> I've been trying to figure out how this serialises with ttwu(), given that
> frozen(p) will go and read p->state. I suppose it works out because only the
> freezer can wake up tasks from the FROZEN state, but it feels a bit brittle.

p->pi_lock; both ttwu() and __freeze_task() (which is essentially a
variant of set_special_state()) take ->pi_lock. I'll put in a comment.
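
Roughly along these lines, right above the lock acquisition in
__freeze_task() (the wording is only a sketch of what that comment might say):

	/*
	 * __freeze_task() is essentially a variant of set_special_state():
	 * it takes ->pi_lock, which ttwu() also holds while it inspects
	 * and modifies ->state, so the transition to TASK_FROZEN cannot
	 * race with a concurrent wakeup.
	 */
	raw_spin_lock_irqsave(&p->pi_lock, flags);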

> > @@ -137,7 +182,7 @@ bool freeze_task(struct task_struct *p)
> >  	if (!(p->flags & PF_KTHREAD))
> >  		fake_signal_wake_up(p);
> >  	else
> > -		wake_up_state(p, TASK_INTERRUPTIBLE);
> > +		wake_up_state(p, TASK_INTERRUPTIBLE); // TASK_NORMAL ?!?
> >  
> >  	spin_unlock_irqrestore(&freezer_lock, flags);
> >  	return true;
> > @@ -148,8 +193,8 @@ void __thaw_task(struct task_struct *p)
> >  	unsigned long flags;
> >  
> >  	spin_lock_irqsave(&freezer_lock, flags);
> > -	if (frozen(p))
> > -		wake_up_process(p);
> > +	WARN_ON_ONCE(freezing(p));
> > +	wake_up_state(p, TASK_FROZEN | TASK_NORMAL);
> 
> Why do we need TASK_NORMAL here?

It's a left-over from hacking, but I left it in because anything
TASK_NORMAL should be able to deal with spurious wakeups, something
try_to_freeze() now also relies on.
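
That is, any TASK_FREEZABLE wait is expected to be written as the usual
recheck-after-wakeup loop anyway; generic sketch (not the actual
try_to_freeze() path, and 'condition' stands for whatever is being waited on):

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
		if (condition)
			break;
		schedule();	/* can return spuriously, e.g. after a thaw */
	}
	__set_current_state(TASK_RUNNING);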


Thread overview: 114+ messages

2021-05-25 15:14 [PATCH v7 00/22] Add support for 32-bit tasks on asymmetric AArch32 systems Will Deacon
2021-05-25 15:14 ` [PATCH v7 01/22] sched: Favour predetermined active CPU as migration destination Will Deacon
2021-05-26 11:14   ` Valentin Schneider
2021-05-26 12:32     ` Peter Zijlstra
2021-05-26 12:36       ` Valentin Schneider
2021-05-26 16:03     ` Will Deacon
2021-05-26 17:46       ` Valentin Schneider
2021-05-25 15:14 ` [PATCH v7 02/22] arm64: cpuinfo: Split AArch32 registers out into a separate struct Will Deacon
2021-05-25 15:14 ` [PATCH v7 03/22] arm64: Allow mismatched 32-bit EL0 support Will Deacon
2021-05-25 15:14 ` [PATCH v7 04/22] KVM: arm64: Kill 32-bit vCPUs on systems with mismatched " Will Deacon
2021-05-25 15:14 ` [PATCH v7 05/22] arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs Will Deacon
2021-05-25 15:14 ` [PATCH v7 06/22] arm64: Advertise CPUs capable of running 32-bit applications in sysfs Will Deacon
2021-05-25 15:14 ` [PATCH v7 07/22] sched: Introduce task_cpu_possible_mask() to limit fallback rq selection Will Deacon
2021-05-25 15:14 ` [PATCH v7 08/22] cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1 Will Deacon
2021-05-26 15:02   ` Peter Zijlstra
2021-05-26 16:07     ` Will Deacon
2021-05-25 15:14 ` [PATCH v7 09/22] cpuset: Honour task_cpu_possible_mask() in guarantee_online_cpus() Will Deacon
2021-05-25 15:14 ` [PATCH v7 10/22] sched: Reject CPU affinity changes based on task_cpu_possible_mask() Will Deacon
2021-05-26 15:15   ` Peter Zijlstra
2021-05-26 16:12     ` Will Deacon
2021-05-26 17:56       ` Peter Zijlstra
2021-05-26 18:59         ` Will Deacon
2021-05-25 15:14 ` [PATCH v7 11/22] sched: Introduce task_struct::user_cpus_ptr to track requested affinity Will Deacon
2021-05-25 15:14 ` [PATCH v7 12/22] sched: Split the guts of sched_setaffinity() into a helper function Will Deacon
2021-05-25 15:14 ` [PATCH v7 13/22] sched: Allow task CPU affinity to be restricted on asymmetric systems Will Deacon
2021-05-26 16:20   ` Peter Zijlstra
2021-05-26 16:35     ` Will Deacon
2021-05-26 16:30   ` Peter Zijlstra
2021-05-26 17:02     ` Will Deacon
2021-05-27  7:56       ` Peter Zijlstra
2021-05-25 15:14 ` [PATCH v7 14/22] sched: Introduce task_cpus_dl_admissible() to check proposed affinity Will Deacon
2021-05-25 15:14 ` [PATCH v7 15/22] freezer: Add frozen_or_skipped() helper function Will Deacon
2021-05-25 15:14 ` [PATCH v7 16/22] sched: Defer wakeup in ttwu() for unschedulable frozen tasks Will Deacon
2021-05-27 14:10   ` Peter Zijlstra
2021-05-27 14:31     ` Peter Zijlstra
2021-05-27 14:44       ` Will Deacon
2021-05-27 14:55         ` Peter Zijlstra
2021-05-27 14:50       ` Peter Zijlstra
2021-05-28 10:49       ` Peter Zijlstra
2021-05-27 14:36     ` Will Deacon
2021-06-01  8:21   ` [RFC][PATCH] freezer,sched: Rewrite core freezer logic Peter Zijlstra
2021-06-01 11:27     ` Peter Zijlstra
2021-06-02 12:54       ` Will Deacon
2021-06-03 10:35         ` Peter Zijlstra [this message]
2021-06-03 10:58           ` Will Deacon
2021-06-03 11:26             ` Peter Zijlstra
2021-06-03 11:36               ` Will Deacon
2021-05-25 15:14 ` [PATCH v7 17/22] arm64: Implement task_cpu_possible_mask() Will Deacon
2021-05-25 15:14 ` [PATCH v7 18/22] arm64: exec: Adjust affinity for compat tasks with mismatched 32-bit EL0 Will Deacon
2021-05-25 15:14 ` [PATCH v7 19/22] arm64: Prevent offlining first CPU with 32-bit EL0 on mismatched system Will Deacon
2021-05-25 15:14 ` [PATCH v7 20/22] arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0 Will Deacon
2021-05-25 15:14 ` [PATCH v7 21/22] arm64: Remove logic to kill 32-bit tasks on 64-bit-only cores Will Deacon
2021-05-25 15:14 ` [PATCH v7 22/22] Documentation: arm64: describe asymmetric 32-bit support Will Deacon
2021-05-25 17:13   ` Marc Zyngier
2021-05-25 17:27     ` Will Deacon
2021-05-25 18:11       ` Marc Zyngier
2021-05-26 16:00         ` Will Deacon
