From: "Jan H. Schönherr" <jschoenh@amazon.de>
To: Ingo Molnar <mingo@redhat.com>, Peter Zijlstra <peterz@infradead.org>
Cc: "Jan H. Schönherr" <jschoenh@amazon.de>, linux-kernel@vger.kernel.org
Subject: [RFC 25/60] cosched: Prepare scheduling domain topology for coscheduling
Date: Fri,  7 Sep 2018 23:40:12 +0200
Message-ID: <20180907214047.26914-26-jschoenh@amazon.de>
In-Reply-To: <20180907214047.26914-1-jschoenh@amazon.de>

The ability to coschedule is closely coupled to the scheduling domain
topology: all CPUs within a scheduling domain will context switch
simultaneously. In other words, each scheduling domain also defines
a synchronization domain.

That means that we should have a wider selection of scheduling domains
than just the typical core, socket, and system distinction. Otherwise,
it won't be possible to, e.g., coschedule only a subset of a processor.
While synchronization domains based on hardware boundaries are exactly
right for most resource contention or security related use cases of
coscheduling, they are a limiting factor for coscheduling use cases
around parallel programs.

On the other hand, it means that all CPUs need to have the same view
of which groups of CPUs form scheduling domains, as synchronization
domains have to be global by definition.

Introduce a function that post-processes the scheduling domain
topology just before the scheduling domains are generated. Use this
opportunity to get rid of overlapping (i.e., non-global) scheduling
domains and to introduce additional levels of scheduling domains to
keep a more reasonable fan-out.
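
For illustration (a hypothetical example, assuming the usual NUMA
topology levels): on a four-node system, the overlapping NUMA levels
that span only subsets of nodes are dropped, and only the final,
system-wide level is kept, so that every CPU ends up with an identical
set of synchronization domains.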

Couple the splitting of scheduling domains to a command line argument,
which is disabled by default. The splitting is not NUMA-aware and may
generate sub-optimal splits at that level when there are four or more
NUMA nodes. Doing this properly at that level would require a graph
partitioning algorithm operating on the NUMA distance matrix.
Also, as mentioned before, not everyone needs the finer granularity.
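
As an example of the finer granularity (a hypothetical, homogeneous
socket with eight SMT-2 cores, i.e., a 2-CPU SMT level below a 16-CPU
MC level; the numbers follow from calc_agglomeration_factor() below
and are purely illustrative), booting with

    cosched_split_domains

inserts two extra levels between SMT and MC: 16/2 = 8 is greater than
3 and even, so pairs of SMT groups are agglomerated into a 4-CPU
level; 16/4 = 4 then yields an 8-CPU level; 16/8 = 2 is not greater
than 3, so the process stops. The resulting fan-out is 2/4/8/16
instead of 2/16.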

Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
---
 kernel/sched/core.c    |   1 +
 kernel/sched/cosched.c | 259 +++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h   |   2 +
 3 files changed, 262 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a235b6041cb5..cc801f84bf97 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5867,6 +5867,7 @@ int sched_cpu_dying(unsigned int cpu)
 void __init sched_init_smp(void)
 {
 	sched_init_numa();
+	cosched_init_topology();
 
 	/*
 	 * There's no userspace yet to cause hotplug operations; hence all the
diff --git a/kernel/sched/cosched.c b/kernel/sched/cosched.c
index 03ba86676b90..7a793aa93114 100644
--- a/kernel/sched/cosched.c
+++ b/kernel/sched/cosched.c
@@ -6,6 +6,8 @@
  * Author: Jan H. Schönherr <jschoenh@amazon.de>
  */
 
+#include <linux/moduleparam.h>
+
 #include "sched.h"
 
 static int mask_to_node(const struct cpumask *span)
@@ -92,3 +94,260 @@ void cosched_init_bottom(void)
 		init_sdrq(&root_task_group, sdrq, NULL, NULL, data);
 	}
 }
+
+static bool __read_mostly cosched_split_domains;
+
+static int __init cosched_split_domains_setup(char *str)
+{
+	cosched_split_domains = true;
+	return 0;
+}
+
+early_param("cosched_split_domains", cosched_split_domains_setup);
+
+struct sd_sdrqmask_level {
+	int groups;
+	struct cpumask **masks;
+};
+
+static struct sd_sdrqmask_level *sd_sdrqmasks;
+
+static const struct cpumask *
+sd_cpu_mask(struct sched_domain_topology_level *tl, int cpu)
+{
+	return get_cpu_mask(cpu);
+}
+
+static const struct cpumask *
+sd_sdrqmask(struct sched_domain_topology_level *tl, int cpu)
+{
+	int i, nr = tl->level;
+
+	for (i = 0; i < sd_sdrqmasks[nr].groups; i++) {
+		if (cpumask_test_cpu(cpu, sd_sdrqmasks[nr].masks[i]))
+			return sd_sdrqmasks[nr].masks[i];
+	}
+
+	WARN(1, "CPU%d not in any group for level %d", cpu, nr);
+	return get_cpu_mask(cpu);
+}
+
+static int calc_agglomeration_factor(int upperweight, int lowerweight)
+{
+	int factor;
+
+	/* Only split domains if actually requested. */
+	if (!cosched_split_domains)
+		return 1;
+
+	/* Determine branching factor */
+	if (upperweight % lowerweight) {
+		pr_info("Non-homogeneous topology?! Not restructuring! (%d, %d)",
+			upperweight, lowerweight);
+		return 1;
+	}
+
+	factor = upperweight / lowerweight;
+	WARN_ON_ONCE(factor <= 0);
+
+	/* Determine number of lower groups to agglomerate */
+	if (factor <= 3)
+		return 1;
+	if (factor % 2 == 0)
+		return 2;
+	if (factor % 3 == 0)
+		return 3;
+
+	pr_info("Cannot find a suitable agglomeration. Not restructuring! (%d, %d)",
+		upperweight, lowerweight);
+	return 1;
+}
+
+/*
+ * Construct the needed masks for an intermediate level.
+ *
+ * There is the level above us, and the level below us.
+ *
+ * Both levels consist of disjoint groups, while the lower level contains
+ * multiple groups for each group of the higher level. The branching factor
+ * must be > 3, otherwise splitting is not useful.
+ *
+ * Thus, to facilitate a bottom up approach, that can later also add more
+ * than one level between two existing levels, we always group two (or
+ * three if possible) lower groups together, given that they are in
+ * the same upper group.
+ *
+ * To get deterministic results, we go through groups from left to right,
+ * ordered by their lowest numbered cpu.
+ *
+ * FIXME: This does not consider distances of NUMA nodes. They may end up
+ *        in non-optimal groups.
+ *
+ * FIXME: This only does the right thing for homogeneous topologies,
+ *        where additionally all CPUs are online.
+ */
+static int create_sdrqmask(struct sd_sdrqmask_level *current_level,
+			   struct sched_domain_topology_level *upper,
+			   struct sched_domain_topology_level *lower)
+{
+	int lowerweight, agg;
+	int i, g = 0;
+	const struct cpumask *next;
+	cpumask_var_t remaining_system, remaining_upper;
+
+	/* Determine number of lower groups to agglomerate */
+	lowerweight = cpumask_weight(lower->mask(lower, 0));
+	agg = calc_agglomeration_factor(cpumask_weight(upper->mask(upper, 0)),
+					lowerweight);
+	WARN_ON_ONCE(agg <= 1);
+
+	/* Determine number of agglomerated groups across the system */
+	current_level->groups = cpumask_weight(cpu_online_mask)
+				/ (agg * lowerweight);
+
+	/* Allocate memory for new masks and tmp vars */
+	current_level->masks = kcalloc(current_level->groups, sizeof(void *),
+				       GFP_KERNEL);
+	if (!current_level->masks)
+		return -ENOMEM;
+	if (!zalloc_cpumask_var(&remaining_system, GFP_KERNEL))
+		return -ENOMEM;
+	if (!zalloc_cpumask_var(&remaining_upper, GFP_KERNEL))
+		return -ENOMEM;
+
+	/* Go through groups in upper level, creating all agglomerated masks */
+	cpumask_copy(remaining_system, cpu_online_mask);
+
+	/* While there is an unprocessed upper group */
+	while (!cpumask_empty(remaining_system)) {
+		/* Get that group */
+		next = upper->mask(upper, cpumask_first(remaining_system));
+		cpumask_andnot(remaining_system, remaining_system, next);
+
+		cpumask_copy(remaining_upper, next);
+
+		/* While there are unprocessed lower groups */
+		while (!cpumask_empty(remaining_upper)) {
+			struct cpumask *mask = kzalloc(cpumask_size(),
+						       GFP_KERNEL);
+
+			if (!mask)
+				return -ENOMEM;
+
+			if (WARN_ON_ONCE(g == current_level->groups))
+				return -EINVAL;
+			current_level->masks[g] = mask;
+			g++;
+
+			/* Create agglomerated mask */
+			for (i = 0; i < agg; i++) {
+				WARN_ON_ONCE(cpumask_empty(remaining_upper));
+
+				next = lower->mask(lower, cpumask_first(remaining_upper));
+				cpumask_andnot(remaining_upper, remaining_upper, next);
+
+				cpumask_or(mask, mask, next);
+			}
+		}
+	}
+
+	if (WARN_ON_ONCE(g != current_level->groups))
+		return -EINVAL;
+
+	free_cpumask_var(remaining_system);
+	free_cpumask_var(remaining_upper);
+
+	return 0;
+}
+
+void cosched_init_topology(void)
+{
+	struct sched_domain_topology_level *sched_domain_topology = get_sched_topology();
+	struct sched_domain_topology_level *tl;
+	int i, agg, orig_level, levels = 0, extra_levels = 0;
+	int span, prev_span = 1;
+
+	/* Only one CPU in the system, we are finished here */
+	if (cpumask_weight(cpu_possible_mask) == 1)
+		return;
+
+	/* Determine number of additional levels */
+	for (tl = sched_domain_topology; tl->mask; tl++) {
+		/* Skip overlap levels, except for the last one */
+		if (tl->flags & SDTL_OVERLAP && tl[1].mask)
+			continue;
+
+		levels++;
+
+		/* FIXME: this assumes a homogeneous topology */
+		span = cpumask_weight(tl->mask(tl, 0));
+		for (;;) {
+			agg = calc_agglomeration_factor(span, prev_span);
+			if (agg <= 1)
+				break;
+			levels++;
+			extra_levels++;
+			prev_span *= agg;
+		}
+		prev_span = span;
+	}
+
+	/* Allocate memory for all levels plus terminators on both ends */
+	tl = kcalloc((levels + 2), sizeof(*tl), GFP_KERNEL);
+	sd_sdrqmasks = kcalloc(extra_levels, sizeof(*sd_sdrqmasks), GFP_KERNEL);
+	if (!tl || !sd_sdrqmasks)
+		return;
+
+	/* Fill in start terminator and forget about it */
+	tl->mask = sd_cpu_mask;
+	tl++;
+
+	/* Copy existing levels and add new ones */
+	prev_span = 1;
+	orig_level = 0;
+	extra_levels = 0;
+	for (i = 0; i < levels; i++) {
+		BUG_ON(!sched_domain_topology[orig_level].mask);
+
+		/* Skip overlap levels, except for the last one */
+		while (sched_domain_topology[orig_level].flags & SDTL_OVERLAP &&
+		       sched_domain_topology[orig_level + 1].mask) {
+			orig_level++;
+		}
+
+		/* Copy existing */
+		tl[i] = sched_domain_topology[orig_level];
+
+		/* Check if we must add a level */
+		/* FIXME: this assumes a homogeneous topology */
+		span = cpumask_weight(tl[i].mask(&tl[i], 0));
+		agg = calc_agglomeration_factor(span, prev_span);
+		if (agg <= 1) {
+			orig_level++;
+			prev_span = span;
+			continue;
+		}
+
+		/*
+		 * For the new level, we take the same setting as the level
+		 * above us (the one we already copied). We just give it
+		 * different set of masks.
+		 */
+		if (create_sdrqmask(&sd_sdrqmasks[extra_levels],
+				    &sched_domain_topology[orig_level],
+				    &tl[i - 1]))
+			return;
+
+		tl[i].mask = sd_sdrqmask;
+		tl[i].level = extra_levels;
+		tl[i].flags &= ~SDTL_OVERLAP;
+		tl[i].simple_mask = NULL;
+
+		extra_levels++;
+		prev_span = cpumask_weight(tl[i].mask(&tl[i], 0));
+	}
+	BUG_ON(sched_domain_topology[orig_level].mask);
+
+	/* Make permanent */
+	set_sched_topology(tl);
+}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 21b7c6cf8b87..ed9c526b74ee 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1131,8 +1131,10 @@ static inline struct cfs_rq *taskgroup_next_cfsrq(struct task_group *tg,
 
 #ifdef CONFIG_COSCHEDULING
 void cosched_init_bottom(void);
+void cosched_init_topology(void);
 #else /* !CONFIG_COSCHEDULING */
 static inline void cosched_init_bottom(void) { }
+static inline void cosched_init_topology(void) { }
 #endif /* !CONFIG_COSCHEDULING */
 
 #ifdef CONFIG_SCHED_SMT
-- 
2.9.3.1.gcba166c.dirty

