From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ben Maurer <bmaurer@fb.com>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	David Goldblatt <davidgoldblatt@fb.com>, Qi Wang <qiwang@fb.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Paul Turner <pjt@google.com>, Andrew Hunter <ahh@google.com>,
	Andy Lutomirski <luto@amacapital.net>,
	Dave Watson <davejwatson@fb.com>,
	Josh Triplett <josh@joshtriplett.org>,
	Will Deacon <will.deacon@arm.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Andi Kleen <andi@firstfloor.org>, Chris Lameter <cl@linux.com>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Russell King <linux@arm.linux.org.uk>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Michael Kerrisk <mtk.manpages@gmail.com>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Linux API <linux-api@vger.kernel.org>
Subject: Re: [RFC PATCH v9 for 4.15 01/14] Restartable sequences system call
Date: Fri, 13 Oct 2017 13:54:18 -0700	[thread overview]
Message-ID: <20171013205418.GM3521@linux.vnet.ibm.com> (raw)
In-Reply-To: <CA+55aFzPBES0JOYuZhuNM7NKN+G9ytZQT2daueFPw0j9HGpdGQ@mail.gmail.com>

On Fri, Oct 13, 2017 at 11:30:29AM -0700, Linus Torvalds wrote:
> On Fri, Oct 13, 2017 at 2:35 AM, Ben Maurer <bmaurer@fb.com> wrote:
> >
> > I'm really excited to hear that you're open to this patch set and totally understand the desire for some more numbers.
> 
> So the patch-set actually looks very reasonable today. I looked
> through it (ok, I wasn't cc'd on the ppc-only patches so I didn't look
> at those, but I don't think they are likely objectionable either), and
> everything looked fine from a patch standpoint.
> 
> But it's not _just_ numbers for real loads I'm looking for, it's
> actually an _existence proof_ for a real load too. I'd like to know
> that the suggested interface _really_ works in practice too for all
> the expected users.
> 
> In particular, it's easy to make test-cases to show basic
> functionality, but that does not necessarily show that the interface
> then works in "real life".
> 
> For example, if this is supposed to work for a malloc library, it's
> important that people show that yes, this can really work in a
> *LIBRARY*.
> 
> That sounds so obvious and stupid that you might go "What do you
> mean?", but for things to work for libraries, they have to work
> together with *other* users, and with *independent* users.
> 
> For example, say that you're some runtime that wants to use the percpu
> thing for percpu counters - because you want to avoid cache ping-pong,
> and you want to avoid per-thread allocation overhead (or per-thread
> scaling for just summing up the counters) when you have potentially
> tens of thousands of threads.
> 
> Now, how does this runtime work *together* with
> 
>  - CPU hotplug adding new CPUs while you are running (and after you
> allocated your percpu areas)
> 
>  - libraries and system admins that limit - or extend - you to a
> certain set of CPUs
> 
>  - another library (like the malloc library) that wants to use the
> same interface for its percpu allocation queues.
> 
> maybe all of this "just works", but I really want to see an existence
> proof.  Not just a "dedicated use of the interface for one benchmark".
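
To make the per-CPU counter case concrete, below is a minimal user-space
sketch of the data layout being described, assuming nothing beyond
sched_getcpu(), sysconf(), and C11 atomics; it deliberately does not use
the rseq ABI from this series, whose critical sections would let the
fast-path increment be a plain non-atomic store that the kernel restarts
on preemption or migration.  All names here are illustrative, not part of
any proposed interface.

#define _GNU_SOURCE
#include <sched.h>      /* sched_getcpu() */
#include <stdatomic.h>
#include <stdlib.h>
#include <unistd.h>

/* One counter slot per possible CPU, padded to its own cache line. */
struct counter_slot {
	_Atomic long count;
} __attribute__((aligned(64)));

struct percpu_counter {
	struct counter_slot *slots;
	long ncpus;
};

static int percpu_counter_init(struct percpu_counter *c)
{
	/* Size for all configured CPUs, so hotplug-added CPUs still fit. */
	c->ncpus = sysconf(_SC_NPROCESSORS_CONF);
	if (c->ncpus < 1)
		return -1;
	c->slots = calloc((size_t)c->ncpus, sizeof(*c->slots));
	return c->slots ? 0 : -1;
}

static void percpu_counter_inc(struct percpu_counter *c)
{
	int cpu = sched_getcpu();

	if (cpu < 0 || cpu >= c->ncpus)
		cpu = 0;        /* fallback slot */
	/*
	 * The thread can migrate between sched_getcpu() and the add, so
	 * the update must remain atomic here.  A restartable sequence
	 * would instead abort and retry on migration, allowing a plain
	 * non-atomic increment on the fast path.
	 */
	atomic_fetch_add_explicit(&c->slots[cpu].count, 1,
				  memory_order_relaxed);
}

static long percpu_counter_sum(struct percpu_counter *c)
{
	long sum = 0;

	for (long i = 0; i < c->ncpus; i++)
		sum += atomic_load_explicit(&c->slots[i].count,
					    memory_order_relaxed);
	return sum;
}

Sizing the array by _SC_NPROCESSORS_CONF rather than the online CPU count
is one (simplistic) answer to the hotplug and affinity questions above.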
> 
> So yes, I want to see numbers, but I really want to see something much
> more fundamental. I want to feel like there is a good reason to
> believe that the interface really is sufficient and that it really
> does work, even when a single thread may have multiple *different*
> uses for this. Statistics, memory allocation queues, RCU, per-cpu
> locking, yadda yadda. All these things may want to use this, but they
> want to use it *together*, and without you having to write special
> code where every user needs to know about every other user statically.
> 
> Can you load two different *dynamic* libraries that each independently
> uses this thing for their own use, without having to be built together
> for each other?
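
On the dynamic-library question, the point is that independently built
users only need the kernel-provided CPU index; they do not have to know
about each other's per-CPU data.  Here is a similarly hedged sketch of
the malloc-style per-CPU allocation queue, again using sched_getcpu()
and a per-CPU mutex as a stand-in for what rseq would turn into a
restartable, lock-free push/pop (all names illustrative only):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdlib.h>
#include <unistd.h>

struct freelist_node {
	struct freelist_node *next;
};

/* Per-CPU free list: one head (and stand-in lock) per possible CPU. */
struct percpu_freelist {
	struct percpu_freelist_cpu {
		pthread_mutex_t lock;  /* stand-in for an rseq critical section */
		struct freelist_node *head;
	} *cpu;
	long ncpus;
};

static int percpu_freelist_init(struct percpu_freelist *fl)
{
	fl->ncpus = sysconf(_SC_NPROCESSORS_CONF);
	if (fl->ncpus < 1)
		return -1;
	fl->cpu = calloc((size_t)fl->ncpus, sizeof(*fl->cpu));
	if (!fl->cpu)
		return -1;
	for (long i = 0; i < fl->ncpus; i++)
		pthread_mutex_init(&fl->cpu[i].lock, NULL);
	return 0;
}

static long current_cpu(const struct percpu_freelist *fl)
{
	int cpu = sched_getcpu();

	return (cpu >= 0 && cpu < fl->ncpus) ? cpu : 0;
}

static void percpu_freelist_push(struct percpu_freelist *fl,
				 struct freelist_node *node)
{
	long cpu = current_cpu(fl);

	pthread_mutex_lock(&fl->cpu[cpu].lock);
	node->next = fl->cpu[cpu].head;
	fl->cpu[cpu].head = node;
	pthread_mutex_unlock(&fl->cpu[cpu].lock);
}

static struct freelist_node *percpu_freelist_pop(struct percpu_freelist *fl)
{
	struct freelist_node *node = NULL;
	long start = current_cpu(fl);

	/* Fall back to other CPUs' lists, e.g. after an affinity change. */
	for (long i = 0; i < fl->ncpus && !node; i++) {
		long cpu = (start + i) % fl->ncpus;

		pthread_mutex_lock(&fl->cpu[cpu].lock);
		node = fl->cpu[cpu].head;
		if (node)
			fl->cpu[cpu].head = node->next;
		pthread_mutex_unlock(&fl->cpu[cpu].lock);
	}
	return node;
}

A counter library and an allocator library built this way share nothing
but the CPU index, which is the coexistence property being asked about.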
> 
> >> A "increment percpu value" simply isn't relevant.
> >
> > While I understand it seems trivial, my experience has been that this type of operation can actually be important in many server workloads.
> 
> Oh, I'm not saying that it's not relevant to have high-performance
> statistics gathering using percpu data structures. Of _course_ that is
> important, we do that very much in the kernel itself.
> 
> But a benchmark that does nothing else really isn't relevant.  If the
> *only* thing somebody uses this for is statistics, it's simply not
> good enough.
> 
> 
> >> Because without real-world uses, it's not obvious that there won't be
> >> somebody who goes "oh, this isn't quite enough for us, the semantics
> >> are subtly incompatible with our real-world use case".
> >
> > Is your concern mainly this question (is this patchset a good way to
> > bring per-cpu algorithms to userspace)? I'm hoping that given the
> > variety of ways that per-cpu data structures are used in the kernel
> > the concerns around this patch set are mainly around what approach we
> > should take rather than if per-cpu algorithms are a good idea at all.
> > If this is your main concern perhaps our focus should be around
> > demonstrating that a number of useful per-cpu algorithms can be
> > implemented using restartable sequences.
> 
> The important thing for me is that it should demonstrate that you can
> have users co-exist, and that the interface is sufficient for that.
> 
> So I do want to see "just numbers" in the sense that I would want to
> see that people have actually written code that takes advantage of the
> percpu nature to do real things (like an allocator). But more than
> that, I want to see *use*.
> 
> > Ultimately I'm worried there's a chicken and egg problem here.
> 
> This patch-set has been around for *years* in some form. It's improved
> over the years, but the basic approaches are not new.
> 
> Honestly, if people still don't have any actual user-level code that
> really _uses_ this, I'm not interested in merging it.
> 
> There's no chicken-and-egg here. Anybody who wants to push this
> patch-set needs to write the user level code to validate that the
> patch-set makes sense. That's not chicken-and-egg, that's just
> "without the user-space code, the kernel code has never been tested,
> validated or used".
> 
> And if nobody can be bothered to write the user-level code and test
> this patch-series, then obviously it's not important enough for the
> kernel to merge it.

My guess is that it will take some time, probably measured in months,
to carry out this level of integration and testing.  But agreed, it
is needed -- as you know, I recently removed some code from RCU that
was requested but then never used.  Not fun.  Even worse would be a
case where the requested code was half-used in an inefficient way, yet
precluded improvements.  And we actually had that problem some years
back with the userspace-accessible ring-buffer code.

So if it would help the people doing the testing, I would be happy to
maintain an out-of-tree repository for this series.  That way, if the
testing showed that kernel-code changes were required, these changes
could be easily made without worrying about backwards compatibility
(you don't get the backwards-compatibility guarantee until the code
hits mainline).  This repository would be similar in some ways to the
-rt tree; however, given the small size of this patchset, I cannot
justify a separate git tree.  My thought is to provide (for example)
v4.14-rc4-rseq tags within my -rcu tree.

If there are problems with this approach, or if someone has a better idea,
please let me know.

							Thanx, Paul

Thread overview: 113+ messages
2017-10-12 23:03 [RFC PATCH v9 for 4.15 00/14] Restartable sequences and CPU op vector system calls Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH v9 for 4.15 01/14] Restartable sequences system call Mathieu Desnoyers
2017-10-13  0:36   ` Linus Torvalds
2017-10-13  9:35     ` Ben Maurer
2017-10-13 18:30       ` Linus Torvalds
2017-10-13 20:54         ` Paul E. McKenney [this message]
2017-10-13 21:05           ` Linus Torvalds
2017-10-13 21:21             ` Paul E. McKenney
2017-10-13 21:36             ` Mathieu Desnoyers
2017-10-16 16:04               ` Carlos O'Donell
2017-10-16 16:46                 ` Andi Kleen
2017-10-16 22:17                   ` Mathieu Desnoyers
2017-10-17 16:19                     ` Ben Maurer
2017-10-17 16:33                       ` Mathieu Desnoyers
2017-10-17 16:41                         ` Ben Maurer
2017-10-17 17:48                           ` Mathieu Desnoyers
2017-10-18  6:22                       ` Greg KH
2017-10-18 16:28                         ` Mathieu Desnoyers
2017-10-14  3:01         ` Andi Kleen
2017-10-14  4:05           ` Linus Torvalds
2017-10-14 11:37             ` Mathieu Desnoyers
2017-10-13 12:50   ` Florian Weimer
2017-10-13 13:40     ` Mathieu Desnoyers
2017-10-13 13:56       ` Florian Weimer
2017-10-13 14:27         ` Mathieu Desnoyers
2017-10-13 17:24           ` Andy Lutomirski
2017-10-13 17:53             ` Florian Weimer
2017-10-13 18:17               ` Andy Lutomirski
2017-10-14 11:53                 ` Mathieu Desnoyers
2017-10-18 16:41   ` Ben Maurer
2017-10-18 18:11     ` Mathieu Desnoyers
2017-10-19 11:35       ` Mathieu Desnoyers
2017-10-19 17:01         ` Florian Weimer
2017-10-23 17:30       ` Ben Maurer
2017-10-23 20:44         ` Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 02/14] tracing: instrument restartable sequences Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 03/14] Restartable sequences: ARM 32 architecture support Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 04/14] Restartable sequences: wire up ARM 32 system call Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 05/14] Restartable sequences: x86 32/64 architecture support Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 06/14] Restartable sequences: wire up x86 32/64 system call Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 07/14] Restartable sequences: powerpc architecture support Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 08/14] Restartable sequences: Wire up powerpc system call Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 09/14] Provide cpu_opv " Mathieu Desnoyers
2017-10-13 13:57   ` Alan Cox
2017-10-13 14:50     ` Mathieu Desnoyers
2017-10-14 14:22       ` Mathieu Desnoyers
2017-10-13 17:20   ` Andy Lutomirski
2017-10-14  2:50   ` Andi Kleen
2017-10-14 13:35     ` Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 10/14] cpu_opv: Wire up x86 32/64 " Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 11/14] cpu_opv: Wire up powerpc " Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 12/14] cpu_opv: Wire up ARM32 " Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 13/14] cpu_opv: Implement selftests Mathieu Desnoyers
2017-10-12 23:03 ` [RFC PATCH for 4.15 14/14] Restartable sequences: Provide self-tests Mathieu Desnoyers
2017-10-16  2:51   ` Michael Ellerman
2017-10-16 14:23     ` Mathieu Desnoyers
2017-10-17 10:38       ` Michael Ellerman
2017-10-17 13:50         ` Mathieu Desnoyers
2017-10-16 18:50     ` Mathieu Desnoyers
2017-10-17 10:36       ` Michael Ellerman
2017-10-17 13:50         ` Mathieu Desnoyers
2017-10-18  5:45           ` Michael Ellerman
2017-10-16  3:00   ` Michael Ellerman
2017-10-16  3:48     ` Boqun Feng
2017-10-16 11:48       ` Michael Ellerman
