From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Andy Lutomirski <luto@amacapital.net>
Cc: Chris Metcalf <cmetcalf@mellanox.com>,
	Gilad Ben Yossef <giladb@mellanox.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ingo Molnar <mingo@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Rik van Riel <riel@redhat.com>, Tejun Heo <tj@kernel.org>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Christoph Lameter <cl@linux.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Francis Giraldeau <francis.giraldeau@gmail.com>,
	Andi Kleen <andi@firstfloor.org>, Arnd Bergmann <arnd@arndb.de>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: task isolation discussion at Linux Plumbers
Date: Wed, 9 Nov 2016 09:38:08 -0800
Message-ID: <20161109173808.GJ4127@linux.vnet.ibm.com>
In-Reply-To: <CALCETrWjMrdJPHSjA64rJpVCaEtZCr-ZeE08sKP1-rbvw-4+tQ@mail.gmail.com>

On Wed, Nov 09, 2016 at 03:14:35AM -0800, Andy Lutomirski wrote:
> On Tue, Nov 8, 2016 at 5:40 PM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:

Thank you for the review and comments!

> > commit 49961e272333ac720ac4ccbaba45521bfea259ae
> > Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Date:   Tue Nov 8 14:25:21 2016 -0800
> >
> >     rcu: Maintain special bits at bottom of ->dynticks counter
> >
> >     Currently, IPIs are used to force other CPUs to invalidate their TLBs
> >     in response to a kernel virtual-memory mapping change.  This works, but
> >     degrades both battery lifetime (for idle CPUs) and real-time response
> >     (for nohz_full CPUs), and in addition results in unnecessary IPIs due to
> >     the fact that CPUs executing in usermode are unaffected by stale kernel
> >     mappings.  It would be better to cause a CPU executing in usermode to
> >     wait until it is entering kernel mode to
> 
> missing words here?

Just a few, added more.  ;-)

> >     This commit therefore reserves a bit at the bottom of the ->dynticks
> >     counter, which is checked upon exit from extended quiescent states.  If it
> >     is set, it is cleared and then a new rcu_dynticks_special_exit() macro
> >     is invoked, which, if not supplied, is an empty single-pass do-while loop.
> >     If this bottom bit is set on -entry- to an extended quiescent state,
> >     then a WARN_ON_ONCE() triggers.
> >
> >     This bottom bit may be set using a new rcu_dynticks_special_set()
> >     function, which returns true if the bit was set, or false if the CPU
> >     turned out to not be in an extended quiescent state.  Please note that
> >     this function refuses to set the bit for a non-nohz_full CPU when that
> >     CPU is executing in usermode because usermode execution is tracked by
> >     RCU as a dyntick-idle extended quiescent state only for nohz_full CPUs.
> 
> I'm inclined to suggest s/dynticks/eqs/ in the public API.  To me,
> "dynticks" is a feature, whereas "eqs" means "extended quiescent
> state" and means something concrete about the CPU state

OK, I have changed rcu_dynticks_special_exit() to rcu_eqs_special_exit().
I also changed rcu_dynticks_special_set() to rcu_eqs_special_set().

I left rcu_dynticks_snap() as is because it is internal to RCU.  (External
only for the benefit of kernel/rcu/tree_trace.c.)

Any others?

Current state of patch below.

> >     Reported-by: Andy Lutomirski <luto@amacapital.net>
> >     Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >
> > diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
> > index 4f9b2fa2173d..130d911e4ba1 100644
> > --- a/include/linux/rcutiny.h
> > +++ b/include/linux/rcutiny.h
> > @@ -33,6 +33,11 @@ static inline int rcu_dynticks_snap(struct rcu_dynticks *rdtp)
> >         return 0;
> >  }
> >
> > +static inline bool rcu_dynticks_special_set(int cpu)
> > +{
> > +       return false;  /* Never flag non-existent other CPUs! */
> > +}
> > +
> >  static inline unsigned long get_state_synchronize_rcu(void)
> >  {
> >         return 0;
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index dbf20b058f48..8de83830e86b 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -279,23 +279,36 @@ static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
> >  };
> >
> >  /*
> > + * Steal a bit from the bottom of ->dynticks for idle entry/exit
> > + * control.  Initially this is for TLB flushing.
> > + */
> > +#define RCU_DYNTICK_CTRL_MASK 0x1
> > +#define RCU_DYNTICK_CTRL_CTR  (RCU_DYNTICK_CTRL_MASK + 1)
> > +#ifndef rcu_dynticks_special_exit
> > +#define rcu_dynticks_special_exit() do { } while (0)
> > +#endif
> > +
> 
> >  /*
> > @@ -305,17 +318,21 @@ static void rcu_dynticks_eqs_enter(void)
> >  static void rcu_dynticks_eqs_exit(void)
> >  {
> >         struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
> > +       int seq;
> >
> >         /*
> > -        * CPUs seeing atomic_inc() must see prior idle sojourns,
> > +        * CPUs seeing atomic_inc_return() must see prior idle sojourns,
> >          * and we also must force ordering with the next RCU read-side
> >          * critical section.
> >          */
> > -       smp_mb__before_atomic(); /* See above. */
> > -       atomic_inc(&rdtp->dynticks);
> > -       smp_mb__after_atomic(); /* See above. */
> > +       seq = atomic_inc_return(&rdtp->dynticks);
> >         WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
> > -                    !(atomic_read(&rdtp->dynticks) & 0x1));
> > +                    !(seq & RCU_DYNTICK_CTRL_CTR));
> > +       if (seq & RCU_DYNTICK_CTRL_MASK) {
> > +               atomic_and(~RCU_DYNTICK_CTRL_MASK, &rdtp->dynticks);
> > +               smp_mb__after_atomic(); /* Clear bits before acting on them */
> > +               rcu_dynticks_special_exit();
> 
> I think this needs to be reversed for NMI safety: do the callback and
> then clear the bits.

OK.  Ah, the race that I was worried about can't happen because
rdtp->dynticks gets incremented before the call to
rcu_dynticks_special_exit().

Good catch, fixed.

And the other thing I forgot is that I cannot clear the bottom bits if
this is an NMI handler.  But now I cannot construct a case where this
is a problem.  The only way this could matter is if an NMI is taken in
an extended quiescent state.  In that case, the code flushes and clears
the bit, and any later remote-flush request to this CPU will set the
bit again.  And any races between the NMI handler and the other CPU look
the same as between IRQ handlers and process entry.

Yes, this one needs some formal verification, doesn't it?

In the meantime, if you can reproduce the race that led us to believe
that NMI handlers should not clear the bottom bits, please let me know.
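
For concreteness, here is a minimal sketch of the flush-then-clear
ordering as it now stands; do_deferred_flush() is a stand-in for
whatever rcu_eqs_special_exit() ends up mapping to, so this is
illustration only, not the patch itself:

	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdtp->dynticks);
	if (seq & RCU_DYNTICK_CTRL_MASK) {
		do_deferred_flush();	 /* Act on the request first... */
		smp_mb__before_atomic(); /* ...order flush before clear... */
		atomic_and(~RCU_DYNTICK_CTRL_MASK, &rdtp->dynticks);
		/*
		 * ...so a requester racing with the clear sees this CPU
		 * out of its extended quiescent state and falls back to
		 * an IPI: a duplicate flush is possible, a lost flush
		 * is not.
		 */
	}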

> > +/*
> > + * Set the special (bottom) bit of the specified CPU so that it
> > + * will take special action (such as flushing its TLB) on the
> > + * next exit from an extended quiescent state.  Returns true if
> > + * the bit was successfully set, or false if the CPU was not in
> > + * an extended quiescent state.
> > + */
> > +bool rcu_dynticks_special_set(int cpu)
> > +{
> > +       int old;
> > +       int new;
> > +       struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
> > +
> > +       do {
> > +               old = atomic_read(&rdtp->dynticks);
> > +               if (old & RCU_DYNTICK_CTRL_CTR)
> > +                       return false;
> > +               new = old | ~RCU_DYNTICK_CTRL_MASK;
> 
> Shouldn't this be old | RCU_DYNTICK_CTRL_MASK?

Indeed it should!  (What -was- I thinking?)  Fixed.

> > +       } while (atomic_cmpxchg(&rdtp->dynticks, old, new) != old);
> > +       return true;
> >  }

Thank you again, please see update below.

							Thanx, Paul

------------------------------------------------------------------------

commit 7bbb80d5f612e7f0ffc826b11d292a3616150b34
Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Date:   Tue Nov 8 14:25:21 2016 -0800

    rcu: Maintain special bits at bottom of ->dynticks counter
    
    Currently, IPIs are used to force other CPUs to invalidate their TLBs
    in response to a kernel virtual-memory mapping change.  This works, but
    degrades both battery lifetime (for idle CPUs) and real-time response
    (for nohz_full CPUs), and in addition results in unnecessary IPIs because
    CPUs executing in usermode are unaffected by stale kernel mappings.
    It would be better to cause a CPU executing in usermode to wait until
    it next enters kernel mode to do the flush, first to avoid interrupting
    usermode tasks and second to handle multiple flush requests with a
    single flush in the case of a long-running user task.
    
    This commit therefore reserves a bit at the bottom of the ->dynticks
    counter, which is checked upon exit from extended quiescent states.
    If it is set, it is cleared and then a new rcu_eqs_special_exit() macro is
    invoked, which, if not supplied, is an empty single-pass do-while loop.
    If this bottom bit is set on -entry- to an extended quiescent state,
    then a WARN_ON_ONCE() triggers.
    
    This bottom bit may be set using a new rcu_eqs_special_set() function,
    which returns true if the bit was set, or false if the CPU turned
    out to not be in an extended quiescent state.  Please note that this
    function refuses to set the bit for a non-nohz_full CPU when that CPU
    is executing in usermode because usermode execution is tracked by RCU
    as a dyntick-idle extended quiescent state only for nohz_full CPUs.
    
    Reported-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
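
Schematically, the ->dynticks layout this creates is as below; an
illustration of the patch's intent, not part of the patch itself:

	/*
	 * ->dynticks layout (schematic):
	 *
	 *	bit 0      RCU_DYNTICK_CTRL_MASK: deferred-action request
	 *	bits 1-31  counter, advanced in units of RCU_DYNTICK_CTRL_CTR
	 *
	 * RCU_DYNTICK_CTRL_CTR bit clear:  CPU is in an extended quiescent
	 *	state, so the request bit may be set remotely in place of
	 *	an IPI.
	 * RCU_DYNTICK_CTRL_CTR bit set:  CPU is not in an EQS, so
	 *	rcu_eqs_special_set() returns false and the caller must IPI.
	 */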

diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 4f9b2fa2173d..7232d199a81c 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -33,6 +33,11 @@ static inline int rcu_dynticks_snap(struct rcu_dynticks *rdtp)
 	return 0;
 }
 
+static inline bool rcu_eqs_special_set(int cpu)
+{
+	return false;  /* Never flag non-existent other CPUs! */
+}
+
 static inline unsigned long get_state_synchronize_rcu(void)
 {
 	return 0;
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index dbf20b058f48..342c8ee402d6 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -279,23 +279,36 @@ static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
 };
 
 /*
+ * Steal a bit from the bottom of ->dynticks for idle entry/exit
+ * control.  Initially this is for TLB flushing.
+ */
+#define RCU_DYNTICK_CTRL_MASK 0x1
+#define RCU_DYNTICK_CTRL_CTR  (RCU_DYNTICK_CTRL_MASK + 1)
+#ifndef rcu_eqs_special_exit
+#define rcu_eqs_special_exit() do { } while (0)
+#endif
+
+/*
  * Record entry into an extended quiescent state.  This is only to be
  * called when not already in an extended quiescent state.
  */
 static void rcu_dynticks_eqs_enter(void)
 {
 	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
+	int seq;
 
 	/*
-	 * CPUs seeing atomic_inc() must see prior RCU read-side critical
-	 * sections, and we also must force ordering with the next idle
-	 * sojourn.
+	 * CPUs seeing atomic_add_return() must see prior RCU read-side
+	 * critical sections, and we also must force ordering with the
+	 * next idle sojourn.
 	 */
-	smp_mb__before_atomic(); /* See above. */
-	atomic_inc(&rdtp->dynticks);
-	smp_mb__after_atomic(); /* See above. */
+	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdtp->dynticks);
+	/* Better be in an extended quiescent state! */
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
+		     (seq & RCU_DYNTICK_CTRL_CTR));
+	/* Better not have special action (TLB flush) pending! */
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     atomic_read(&rdtp->dynticks) & 0x1);
+		     (seq & RCU_DYNTICK_CTRL_MASK));
 }
 
 /*
@@ -305,17 +318,22 @@ static void rcu_dynticks_eqs_enter(void)
 static void rcu_dynticks_eqs_exit(void)
 {
 	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
+	int seq;
 
 	/*
-	 * CPUs seeing atomic_inc() must see prior idle sojourns,
+	 * CPUs seeing atomic_add_return() must see prior idle sojourns,
 	 * and we also must force ordering with the next RCU read-side
 	 * critical section.
 	 */
-	smp_mb__before_atomic(); /* See above. */
-	atomic_inc(&rdtp->dynticks);
-	smp_mb__after_atomic(); /* See above. */
+	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdtp->dynticks);
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     !(atomic_read(&rdtp->dynticks) & 0x1));
+		     !(seq & RCU_DYNTICK_CTRL_CTR));
+	if (seq & RCU_DYNTICK_CTRL_MASK) {
+		rcu_eqs_special_exit();
+		/* Prefer duplicate flushes to losing a flush. */
+		smp_mb__before_atomic(); /* NMI safety. */
+		atomic_and(~RCU_DYNTICK_CTRL_MASK, &rdtp->dynticks);
+	}
 }
 
 /*
@@ -326,7 +344,7 @@ int rcu_dynticks_snap(struct rcu_dynticks *rdtp)
 {
 	int snap = atomic_add_return(0, &rdtp->dynticks);
 
-	return snap;
+	return snap & ~RCU_DYNTICK_CTRL_MASK;
 }
 
 /*
@@ -335,7 +353,7 @@ int rcu_dynticks_snap(struct rcu_dynticks *rdtp)
  */
 static bool rcu_dynticks_in_eqs(int snap)
 {
-	return !(snap & 0x1);
+	return !(snap & RCU_DYNTICK_CTRL_CTR);
 }
 
 /*
@@ -355,10 +373,33 @@ static bool rcu_dynticks_in_eqs_since(struct rcu_dynticks *rdtp, int snap)
 static void rcu_dynticks_momentary_idle(void)
 {
 	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
-	int special = atomic_add_return(2, &rdtp->dynticks);
+	int special = atomic_add_return(2 * RCU_DYNTICK_CTRL_CTR,
+					&rdtp->dynticks);
 
 	/* It is illegal to call this from idle state. */
-	WARN_ON_ONCE(!(special & 0x1));
+	WARN_ON_ONCE(!(special & RCU_DYNTICK_CTRL_CTR));
+}
+
+/*
+ * Set the special (bottom) bit of the specified CPU so that it
+ * will take special action (such as flushing its TLB) on the
+ * next exit from an extended quiescent state.  Returns true if
+ * the bit was successfully set, or false if the CPU was not in
+ * an extended quiescent state.
+ */
+bool rcu_eqs_special_set(int cpu)
+{
+	int old;
+	int new;
+	struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+	do {
+		old = atomic_read(&rdtp->dynticks);
+		if (old & RCU_DYNTICK_CTRL_CTR)
+			return false;
+		new = old | RCU_DYNTICK_CTRL_MASK;
+	} while (atomic_cmpxchg(&rdtp->dynticks, old, new) != old);
+	return true;
 }
 
 DEFINE_PER_CPU_SHARED_ALIGNED(unsigned long, rcu_qs_ctr);
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 3b953dcf6afc..7dcdd59d894c 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -596,6 +596,7 @@ extern struct rcu_state rcu_preempt_state;
 #endif /* #ifdef CONFIG_PREEMPT_RCU */
 
 int rcu_dynticks_snap(struct rcu_dynticks *rdtp);
+bool rcu_eqs_special_set(int cpu);
 
 #ifdef CONFIG_RCU_BOOST
 DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
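
As a usage sketch (hypothetical caller, illustration only): an
architecture wanting deferred TLB shootdowns would point
rcu_eqs_special_exit() at its local flush and try
rcu_eqs_special_set() before falling back to an IPI:

	/* Hypothetical arch hook, run on the next EQS exit. */
	#define rcu_eqs_special_exit() local_flush_tlb_all()

	/* Hypothetical IPI fallback and remote-flush path. */
	static void ipi_flush(void *unused)
	{
		local_flush_tlb_all();
	}

	static void flush_tlb_cpu(int cpu)
	{
		if (rcu_eqs_special_set(cpu))
			return;	/* In EQS; flush deferred to EQS exit. */
		smp_call_function_single(cpu, ipi_flush, NULL, 1);
	}

Here flush_tlb_cpu(), ipi_flush(), and the local_flush_tlb_all()
target are stand-ins; the real wiring is what this thread is
working out.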
