From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, "Paul E. McKenney" <paulmck@kernel.org>,
	Andy Lutomirski <luto@kernel.org>,
	Alexandre Chartre <alexandre.chartre@oracle.com>,
	Frederic Weisbecker <frederic@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Petr Mladek <pmladek@suse.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Joel Fernandes <joel@joelfernandes.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Brian Gerst <brgerst@gmail.com>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Will Deacon <will@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Wei Liu <wei.liu@kernel.org>,
	Michael Kelley <mikelley@microsoft.com>,
	Jason Chen CJ <jason.cj.chen@intel.com>,
	Zhao Yakui <yakui.zhao@intel.com>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>
Subject: [patch V6 12/37] x86/entry: Provide idtentry_entry/exit_cond_rcu()
Date: Sat, 16 May 2020 01:45:59 +0200	[thread overview]
Message-ID: <20200515235125.628629605@linutronix.de> (raw)
In-Reply-To: <20200515234547.710474468@linutronix.de>


The pagefault handler cannot use the regular idtentry_enter() because that
invokes rcu_irq_enter() if the pagefault was caused in the kernel. Not a
problem per se, but kernel side page faults can schedule, which is not
possible without invoking rcu_irq_exit() first.

Adding rcu_irq_exit() and a matching rcu_irq_enter() into the actual
pagefault handling code would be possible, but not pretty either.

Provide idtentry_enter/exit_cond_rcu() which invoke rcu_irq_enter() and
rcu_irq_exit() only when RCU is not watching. The conditional RCU handling
is what keeps kernel side page faults schedulable. For the RCU idle case it
is not a correctness issue: A kernel page fault which hits an RCU idle
region can neither schedule nor is it likely to survive anyway, but
avoiding RCU warnings or RCU side effects at least increases the chance of
useful debug output.

The functions are also useful for implementing lightweight reschedule IPI
and KVM posted interrupt IPI entry handling later in this series.
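
For illustration, the intended call pairing looks roughly like the sketch
below. The handler name and body are placeholders invented for this
example; the actual conversion of the pagefault handler to this scheme
happens in a later patch of the series.

    /* Illustrative sketch only, not part of this patch */
    __visible noinstr void sketch_exc_handler(struct pt_regs *regs)
    {
            bool rcu_exit = idtentry_enter_cond_rcu(regs);

            instrumentation_begin();
            /*
             * Real handler work goes here. A kernel mode entry may
             * schedule because RCU was left untouched when it was
             * already watching.
             */
            instrumentation_end();

            idtentry_exit_cond_rcu(regs, rcu_exit);
    }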

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 34caf3849632..72588f1a45a2 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -515,6 +515,36 @@ SYSCALL_DEFINE0(ni_syscall)
 	return -ENOSYS;
 }
 
+static __always_inline bool __idtentry_enter(struct pt_regs *regs,
+					     bool cond_rcu)
+{
+	if (user_mode(regs)) {
+		enter_from_user_mode();
+	} else {
+		if (!cond_rcu || !__rcu_is_watching()) {
+			/*
+			 * If RCU is not watching then the same careful
+			 * sequence vs. lockdep and tracing is required.
+			 */
+			lockdep_hardirqs_off(CALLER_ADDR0);
+			rcu_irq_enter();
+			instrumentation_begin();
+			trace_hardirqs_off_prepare();
+			instrumentation_end();
+			return true;
+		} else {
+			/*
+			 * If RCU is watching then the combo function
+			 * can be used.
+			 */
+			instrumentation_begin();
+			trace_hardirqs_off();
+			instrumentation_end();
+		}
+	}
+	return false;
+}
+
 /**
  * idtentry_enter - Handle state tracking on idtentry
  * @regs:	Pointer to pt_regs of interrupted context
@@ -532,19 +562,60 @@ SYSCALL_DEFINE0(ni_syscall)
  */
 void noinstr idtentry_enter(struct pt_regs *regs)
 {
-	if (user_mode(regs)) {
-		enter_from_user_mode();
-	} else {
-		lockdep_hardirqs_off(CALLER_ADDR0);
-		rcu_irq_enter();
-		instrumentation_begin();
-		trace_hardirqs_off_prepare();
-		instrumentation_end();
-	}
+	__idtentry_enter(regs, false);
+}
+
+/**
+ * idtentry_enter_cond_rcu - Handle state tracking on idtentry with conditional
+ *			     RCU handling
+ * @regs:	Pointer to pt_regs of interrupted context
+ *
+ * Invokes:
+ *  - lockdep irqflag state tracking as low level ASM entry disabled
+ *    interrupts.
+ *
+ *  - Context tracking if the exception hit user mode.
+ *
+ *  - The hardirq tracer to keep the state consistent as low level ASM
+ *    entry disabled interrupts.
+ *
+ * For kernel mode entries the conditional RCU handling is useful for two
+ * purposes:
+ *
+ * 1) Pagefaults: Kernel code can fault and sleep, e.g. on exec. This code
+ *    is not in an RCU idle section. If rcu_irq_enter() would be invoked
+ *    then nothing would invoke rcu_irq_exit() before scheduling.
+ *
+ *   If the kernel faults in a RCU idle section then all bets are off
+ *   anyway but at least avoiding a subsequent issue vs. RCU is helpful for
+ *   debugging.
+ *
+ * 2) Scheduler IPI: To avoid the overhead of a regular idtentry vs. RCU
+ *    and irq_enter() the IPI can be made lightweight if the tracepoints
+ *    are not enabled. While the IPI functionality itself does not require
+ *    RCU (folding preempt count) it still calls out into instrumentable
+ *    functions, e.g. ack_APIC_irq(). The scheduler IPI can hit RCU idle
+ *    sections, so RCU needs to be adjusted. For the fast path case, e.g.
+ *    KVM kicking a vCPU out of guest mode this can be avoided because the
+ *    IPI is handled after KVM reestablished kernel context including RCU.
+ *
+ * For user mode entries enter_from_user_mode() must be invoked to
+ * establish the proper context for NOHZ_FULL. Otherwise scheduling on exit
+ * would not be possible.
+ *
+ * Returns: True if RCU has been adjusted on a kernel entry
+ *	    False otherwise
+ *
+ * The return value must be fed into the rcu_exit argument of
+ * idtentry_exit_cond_rcu().
+ */
+bool noinstr idtentry_enter_cond_rcu(struct pt_regs *regs)
+{
+	return __idtentry_enter(regs, true);
 }
 
 static __always_inline void __idtentry_exit(struct pt_regs *regs,
-					    bool preempt_hcall)
+					    bool preempt_hcall, bool rcu_exit)
 {
 	lockdep_assert_irqs_disabled();
 
@@ -570,7 +641,12 @@ static __always_inline void __idtentry_exit(struct pt_regs *regs,
 				if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 					WARN_ON_ONCE(!on_thread_stack());
 				instrumentation_begin();
-				rcu_irq_exit_preempt();
+				/*
+				 * Conditional for idtentry_exit_cond_rcu(),
+				 * unconditional for all other users.
+				 */
+				if (rcu_exit)
+					rcu_irq_exit_preempt();
 				if (need_resched())
 					preempt_schedule_irq();
 				/* Covers both tracing and lockdep */
@@ -602,11 +678,22 @@ static __always_inline void __idtentry_exit(struct pt_regs *regs,
 		trace_hardirqs_on_prepare();
 		lockdep_hardirqs_on_prepare(CALLER_ADDR0);
 		instrumentation_end();
-		rcu_irq_exit();
+		/*
+		 * Conditional for idtentry_exit_cond_rcu(), unconditional
+		 * for all other users.
+		 */
+		if (rcu_exit)
+			rcu_irq_exit();
 		lockdep_hardirqs_on(CALLER_ADDR0);
 	} else {
-		/* IRQ flags state is correct already. Just tell RCU */
-		rcu_irq_exit();
+		/*
+		 * IRQ flags state is correct already. Just tell RCU.
+		 *
+		 * Conditional for idtentry_exit_cond_rcu(), unconditional
+		 * for all other users.
+		 */
+		if (rcu_exit)
+			rcu_irq_exit();
 	}
 }
 
@@ -627,7 +714,28 @@ static __always_inline void __idtentry_exit(struct pt_regs *regs,
  */
 void noinstr idtentry_exit(struct pt_regs *regs)
 {
-	__idtentry_exit(regs, false);
+	__idtentry_exit(regs, false, true);
+}
+
+/**
+ * idtentry_exit_cond_rcu - Handle return from exception with conditional RCU
+ *			    handling
+ * @regs:	Pointer to pt_regs (exception entry regs)
+ * @rcu_exit:	Invoke rcu_irq_exit() if true
+ *
+ * Depending on the return target (kernel/user) this runs the necessary
+ * preemption and work checks if possible and required and returns to
+ * the caller with interrupts disabled and no further work pending.
+ *
+ * This is the last action before returning to the low level ASM code which
+ * just needs to return to the appropriate context.
+ *
+ * Counterpart to idtentry_enter_cond_rcu(). The return value of the entry
+ * function must be fed into the @rcu_exit argument.
+ */
+void noinstr idtentry_exit_cond_rcu(struct pt_regs *regs, bool rcu_exit)
+{
+	__idtentry_exit(regs, false, rcu_exit);
 }
 
 #ifdef CONFIG_XEN_PV
@@ -659,11 +767,11 @@ __visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
 	set_irq_regs(old_regs);
 
 	if (IS_ENABLED(CONFIG_PREEMPTION)) {
-		__idtentry_exit(regs, false);
+		__idtentry_exit(regs, false, true);
 	} else {
 		bool inhcall = __this_cpu_read(xen_in_preemptible_hcall);
 
-		__idtentry_exit(regs, inhcall && need_resched());
+		__idtentry_exit(regs, inhcall && need_resched(), true);
 	}
 }
 #endif /* CONFIG_XEN_PV */
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index fac73bb3577f..ccd572fd6583 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -10,6 +10,9 @@
 void idtentry_enter(struct pt_regs *regs);
 void idtentry_exit(struct pt_regs *regs);
 
+bool idtentry_enter_cond_rcu(struct pt_regs *regs);
+void idtentry_exit_cond_rcu(struct pt_regs *regs, bool rcu_exit);
+
 /**
  * DECLARE_IDTENTRY - Declare functions for simple IDT entry points
  *		      No error code pushed by hardware

