From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org,
	x86@kernel.org, Linus Torvalds <torvalds@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Josh Poimboeuf <jpoimboe@kernel.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Johannes Wikner <kwikner@ethz.ch>,
	Alyssa Milburn <alyssa.milburn@linux.intel.com>,
	Jann Horn <jannh@google.com>, "H.J. Lu" <hjl.tools@gmail.com>,
	Joao Moreira <joao.moreira@intel.com>,
	Joseph Nuzman <joseph.nuzman@intel.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Juergen Gross <jgross@suse.com>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	K Prateek Nayak <kprateek.nayak@amd.com>,
	Eric Dumazet <edumazet@google.com>
Subject: [PATCH v2 33/59] objtool: Fix find_{symbol,func}_containing()
Date: Fri, 02 Sep 2022 15:06:58 +0200
Message-ID: <20220902130949.789826745@infradead.org>
In-Reply-To: 20220902130625.217071627@infradead.org

From: Peter Zijlstra <peterz@infradead.org>

The current find_{symbol,func}_containing() functions are broken in
the face of overlapping symbols, which is exactly the case needed for
the new IBT/ENDBR suppression.

Import interval_tree_generic.h into the tools tree and convert the
symbol tree to an interval tree to support proper range stabbing
queries.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 tools/include/linux/interval_tree_generic.h |  187 ++++++++++++++++++++++++++++
 tools/objtool/elf.c                         |   93 +++++--------
 tools/objtool/include/objtool/elf.h         |    3 
 3 files changed, 229 insertions(+), 54 deletions(-)
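
Not part of the patch itself, but for reviewers here is a minimal
standalone sketch of the convention the new __sym_start()/__sym_last()
helpers encode: a symbol maps to the closed interval
[offset, offset + len - 1], and a lookup for a single offset is a
stabbing query that must report every overlapping symbol. The struct,
names and sample offsets below are made up for illustration; the real
code answers the same query through the interval tree imported below.

#include <stdio.h>
#include <stdbool.h>

struct sym { const char *name; unsigned long offset, len; };

/* closed interval, mirroring __sym_start()/__sym_last() in the patch */
static unsigned long sym_start(const struct sym *s) { return s->offset; }
static unsigned long sym_last(const struct sym *s)  { return s->offset + s->len - 1; }

/* a symbol contains @addr iff sym_start(s) <= addr && addr <= sym_last(s) */
static bool sym_contains(const struct sym *s, unsigned long addr)
{
	return sym_start(s) <= addr && addr <= sym_last(s);
}

int main(void)
{
	/* two overlapping symbols, e.g. an STT_FUNC symbol and a smaller
	   symbol nested inside its range (hypothetical values) */
	struct sym syms[] = {
		{ "outer_func", 0x100, 0x80 },
		{ "inner_sym",  0x120, 0x10 },
	};
	unsigned long addr = 0x128;

	/* a linear scan here; the patch gets the same set of matches via
	   __sym_for_each() on the interval tree */
	for (unsigned int i = 0; i < sizeof(syms) / sizeof(syms[0]); i++) {
		if (sym_contains(&syms[i], addr))
			printf("%s contains %#lx\n", syms[i].name, addr);
	}
	return 0;
}

Both symbols are reported for 0x128, which is what the old
rb_for_each()-based lookups could not guarantee once symbols overlap.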

--- /dev/null
+++ b/tools/include/linux/interval_tree_generic.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+  Interval Trees
+  (C) 2012  Michel Lespinasse <walken@google.com>
+
+
+  include/linux/interval_tree_generic.h
+*/
+
+#include <linux/rbtree_augmented.h>
+
+/*
+ * Template for implementing interval trees
+ *
+ * ITSTRUCT:   struct type of the interval tree nodes
+ * ITRB:       name of struct rb_node field within ITSTRUCT
+ * ITTYPE:     type of the interval endpoints
+ * ITSUBTREE:  name of ITTYPE field within ITSTRUCT holding last-in-subtree
+ * ITSTART(n): start endpoint of ITSTRUCT node n
+ * ITLAST(n):  last endpoint of ITSTRUCT node n
+ * ITSTATIC:   'static' or empty
+ * ITPREFIX:   prefix to use for the inline tree definitions
+ *
+ * Note - before using this, please consider if generic version
+ * (interval_tree.h) would work for you...
+ */
+
+#define INTERVAL_TREE_DEFINE(ITSTRUCT, ITRB, ITTYPE, ITSUBTREE,		      \
+			     ITSTART, ITLAST, ITSTATIC, ITPREFIX)	      \
+									      \
+/* Callbacks for augmented rbtree insert and remove */			      \
+									      \
+RB_DECLARE_CALLBACKS_MAX(static, ITPREFIX ## _augment,			      \
+			 ITSTRUCT, ITRB, ITTYPE, ITSUBTREE, ITLAST)	      \
+									      \
+/* Insert / remove interval nodes from the tree */			      \
+									      \
+ITSTATIC void ITPREFIX ## _insert(ITSTRUCT *node,			      \
+				  struct rb_root_cached *root)	 	      \
+{									      \
+	struct rb_node **link = &root->rb_root.rb_node, *rb_parent = NULL;    \
+	ITTYPE start = ITSTART(node), last = ITLAST(node);		      \
+	ITSTRUCT *parent;						      \
+	bool leftmost = true;						      \
+									      \
+	while (*link) {							      \
+		rb_parent = *link;					      \
+		parent = rb_entry(rb_parent, ITSTRUCT, ITRB);		      \
+		if (parent->ITSUBTREE < last)				      \
+			parent->ITSUBTREE = last;			      \
+		if (start < ITSTART(parent))				      \
+			link = &parent->ITRB.rb_left;			      \
+		else {							      \
+			link = &parent->ITRB.rb_right;			      \
+			leftmost = false;				      \
+		}							      \
+	}								      \
+									      \
+	node->ITSUBTREE = last;						      \
+	rb_link_node(&node->ITRB, rb_parent, link);			      \
+	rb_insert_augmented_cached(&node->ITRB, root,			      \
+				   leftmost, &ITPREFIX ## _augment);	      \
+}									      \
+									      \
+ITSTATIC void ITPREFIX ## _remove(ITSTRUCT *node,			      \
+				  struct rb_root_cached *root)		      \
+{									      \
+	rb_erase_augmented_cached(&node->ITRB, root, &ITPREFIX ## _augment);  \
+}									      \
+									      \
+/*									      \
+ * Iterate over intervals intersecting [start;last]			      \
+ *									      \
+ * Note that a node's interval intersects [start;last] iff:		      \
+ *   Cond1: ITSTART(node) <= last					      \
+ * and									      \
+ *   Cond2: start <= ITLAST(node)					      \
+ */									      \
+									      \
+static ITSTRUCT *							      \
+ITPREFIX ## _subtree_search(ITSTRUCT *node, ITTYPE start, ITTYPE last)	      \
+{									      \
+	while (true) {							      \
+		/*							      \
+		 * Loop invariant: start <= node->ITSUBTREE		      \
+		 * (Cond2 is satisfied by one of the subtree nodes)	      \
+		 */							      \
+		if (node->ITRB.rb_left) {				      \
+			ITSTRUCT *left = rb_entry(node->ITRB.rb_left,	      \
+						  ITSTRUCT, ITRB);	      \
+			if (start <= left->ITSUBTREE) {			      \
+				/*					      \
+				 * Some nodes in left subtree satisfy Cond2.  \
+				 * Iterate to find the leftmost such node N.  \
+				 * If it also satisfies Cond1, that's the     \
+				 * match we are looking for. Otherwise, there \
+				 * is no matching interval as nodes to the    \
+				 * right of N can't satisfy Cond1 either.     \
+				 */					      \
+				node = left;				      \
+				continue;				      \
+			}						      \
+		}							      \
+		if (ITSTART(node) <= last) {		/* Cond1 */	      \
+			if (start <= ITLAST(node))	/* Cond2 */	      \
+				return node;	/* node is leftmost match */  \
+			if (node->ITRB.rb_right) {			      \
+				node = rb_entry(node->ITRB.rb_right,	      \
+						ITSTRUCT, ITRB);	      \
+				if (start <= node->ITSUBTREE)		      \
+					continue;			      \
+			}						      \
+		}							      \
+		return NULL;	/* No match */				      \
+	}								      \
+}									      \
+									      \
+ITSTATIC ITSTRUCT *							      \
+ITPREFIX ## _iter_first(struct rb_root_cached *root,			      \
+			ITTYPE start, ITTYPE last)			      \
+{									      \
+	ITSTRUCT *node, *leftmost;					      \
+									      \
+	if (!root->rb_root.rb_node)					      \
+		return NULL;						      \
+									      \
+	/*								      \
+	 * Fastpath range intersection/overlap between A: [a0, a1] and	      \
+	 * B: [b0, b1] is given by:					      \
+	 *								      \
+	 *         a0 <= b1 && b0 <= a1					      \
+	 *								      \
+	 *  ... where A holds the lock range and B holds the smallest	      \
+	 * 'start' and largest 'last' in the tree. For the later, we	      \
+	 * rely on the root node, which by augmented interval tree	      \
+	 * property, holds the largest value in its last-in-subtree.	      \
+	 * This allows mitigating some of the tree walk overhead for	      \
+	 * for non-intersecting ranges, maintained and consulted in O(1).     \
+	 */								      \
+	node = rb_entry(root->rb_root.rb_node, ITSTRUCT, ITRB);		      \
+	if (node->ITSUBTREE < start)					      \
+		return NULL;						      \
+									      \
+	leftmost = rb_entry(root->rb_leftmost, ITSTRUCT, ITRB);		      \
+	if (ITSTART(leftmost) > last)					      \
+		return NULL;						      \
+									      \
+	return ITPREFIX ## _subtree_search(node, start, last);		      \
+}									      \
+									      \
+ITSTATIC ITSTRUCT *							      \
+ITPREFIX ## _iter_next(ITSTRUCT *node, ITTYPE start, ITTYPE last)	      \
+{									      \
+	struct rb_node *rb = node->ITRB.rb_right, *prev;		      \
+									      \
+	while (true) {							      \
+		/*							      \
+		 * Loop invariants:					      \
+		 *   Cond1: ITSTART(node) <= last			      \
+		 *   rb == node->ITRB.rb_right				      \
+		 *							      \
+		 * First, search right subtree if suitable		      \
+		 */							      \
+		if (rb) {						      \
+			ITSTRUCT *right = rb_entry(rb, ITSTRUCT, ITRB);	      \
+			if (start <= right->ITSUBTREE)			      \
+				return ITPREFIX ## _subtree_search(right,     \
+								start, last); \
+		}							      \
+									      \
+		/* Move up the tree until we come from a node's left child */ \
+		do {							      \
+			rb = rb_parent(&node->ITRB);			      \
+			if (!rb)					      \
+				return NULL;				      \
+			prev = &node->ITRB;				      \
+			node = rb_entry(rb, ITSTRUCT, ITRB);		      \
+			rb = node->ITRB.rb_right;			      \
+		} while (prev == rb);					      \
+									      \
+		/* Check if the node intersects [start;last] */		      \
+		if (last < ITSTART(node))		/* !Cond1 */	      \
+			return NULL;					      \
+		else if (start <= ITLAST(node))		/* Cond2 */	      \
+			return node;					      \
+	}								      \
+}
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -16,6 +16,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <errno.h>
+#include <linux/interval_tree_generic.h>
 #include <objtool/builtin.h>
 
 #include <objtool/elf.h>
@@ -50,38 +51,22 @@ static inline u32 str_hash(const char *s
 	__elf_table(name); \
 })
 
-static bool symbol_to_offset(struct rb_node *a, const struct rb_node *b)
+static inline unsigned long __sym_start(struct symbol *s)
 {
-	struct symbol *sa = rb_entry(a, struct symbol, node);
-	struct symbol *sb = rb_entry(b, struct symbol, node);
-
-	if (sa->offset < sb->offset)
-		return true;
-	if (sa->offset > sb->offset)
-		return false;
-
-	if (sa->len < sb->len)
-		return true;
-	if (sa->len > sb->len)
-		return false;
-
-	sa->alias = sb;
-
-	return false;
+	return s->offset;
 }
 
-static int symbol_by_offset(const void *key, const struct rb_node *node)
+static inline unsigned long __sym_last(struct symbol *s)
 {
-	const struct symbol *s = rb_entry(node, struct symbol, node);
-	const unsigned long *o = key;
+	return s->offset + s->len - 1;
+}
 
-	if (*o < s->offset)
-		return -1;
-	if (*o >= s->offset + s->len)
-		return 1;
+INTERVAL_TREE_DEFINE(struct symbol, node, unsigned long, __subtree_last,
+		     __sym_start, __sym_last, static, __sym)
 
-	return 0;
-}
+#define __sym_for_each(_iter, _tree, _start, _end)			\
+	for (_iter = __sym_iter_first((_tree), (_start), (_end));	\
+	     _iter; _iter = __sym_iter_next(_iter, (_start), (_end)))
 
 struct symbol_hole {
 	unsigned long key;
@@ -147,13 +132,12 @@ static struct symbol *find_symbol_by_ind
 
 struct symbol *find_symbol_by_offset(struct section *sec, unsigned long offset)
 {
-	struct rb_node *node;
-
-	rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
-		struct symbol *s = rb_entry(node, struct symbol, node);
+	struct rb_root_cached *tree = (struct rb_root_cached *)&sec->symbol_tree;
+	struct symbol *iter;
 
-		if (s->offset == offset && s->type != STT_SECTION)
-			return s;
+	__sym_for_each(iter, tree, offset, offset) {
+		if (iter->offset == offset && iter->type != STT_SECTION)
+			return iter;
 	}
 
 	return NULL;
@@ -161,13 +145,12 @@ struct symbol *find_symbol_by_offset(str
 
 struct symbol *find_func_by_offset(struct section *sec, unsigned long offset)
 {
-	struct rb_node *node;
+	struct rb_root_cached *tree = (struct rb_root_cached *)&sec->symbol_tree;
+	struct symbol *iter;
 
-	rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
-		struct symbol *s = rb_entry(node, struct symbol, node);
-
-		if (s->offset == offset && s->type == STT_FUNC)
-			return s;
+	__sym_for_each(iter, tree, offset, offset) {
+		if (iter->offset == offset && iter->type == STT_FUNC)
+			return iter;
 	}
 
 	return NULL;
@@ -175,13 +158,12 @@ struct symbol *find_func_by_offset(struc
 
 struct symbol *find_symbol_containing(const struct section *sec, unsigned long offset)
 {
-	struct rb_node *node;
-
-	rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
-		struct symbol *s = rb_entry(node, struct symbol, node);
+	struct rb_root_cached *tree = (struct rb_root_cached *)&sec->symbol_tree;
+	struct symbol *iter;
 
-		if (s->type != STT_SECTION)
-			return s;
+	__sym_for_each(iter, tree, offset, offset) {
+		if (iter->type != STT_SECTION)
+			return iter;
 	}
 
 	return NULL;
@@ -202,7 +184,7 @@ int find_symbol_hole_containing(const st
 	/*
 	 * Find the rightmost symbol for which @offset is after it.
 	 */
-	n = rb_find(&hole, &sec->symbol_tree, symbol_hole_by_offset);
+	n = rb_find(&hole, &sec->symbol_tree.rb_root, symbol_hole_by_offset);
 
 	/* found a symbol that contains @offset */
 	if (n)
@@ -224,13 +206,12 @@ int find_symbol_hole_containing(const st
 
 struct symbol *find_func_containing(struct section *sec, unsigned long offset)
 {
-	struct rb_node *node;
-
-	rb_for_each(node, &offset, &sec->symbol_tree, symbol_by_offset) {
-		struct symbol *s = rb_entry(node, struct symbol, node);
+	struct rb_root_cached *tree = (struct rb_root_cached *)&sec->symbol_tree;
+	struct symbol *iter;
 
-		if (s->type == STT_FUNC)
-			return s;
+	__sym_for_each(iter, tree, offset, offset) {
+		if (iter->type == STT_FUNC)
+			return iter;
 	}
 
 	return NULL;
@@ -373,6 +354,7 @@ static void elf_add_symbol(struct elf *e
 {
 	struct list_head *entry;
 	struct rb_node *pnode;
+	struct symbol *iter;
 
 	INIT_LIST_HEAD(&sym->pv_target);
 	sym->alias = sym;
@@ -386,7 +368,12 @@ static void elf_add_symbol(struct elf *e
 	sym->offset = sym->sym.st_value;
 	sym->len = sym->sym.st_size;
 
-	rb_add(&sym->node, &sym->sec->symbol_tree, symbol_to_offset);
+	__sym_for_each(iter, &sym->sec->symbol_tree, sym->offset, sym->offset) {
+		if (iter->offset == sym->offset && iter->type == sym->type)
+			iter->alias = sym;
+	}
+
+	__sym_insert(sym, &sym->sec->symbol_tree);
 	pnode = rb_prev(&sym->node);
 	if (pnode)
 		entry = &rb_entry(pnode, struct symbol, node)->list;
@@ -401,7 +388,7 @@ static void elf_add_symbol(struct elf *e
 	 * can exist within a function, confusing the sorting.
 	 */
 	if (!sym->len)
-		rb_erase(&sym->node, &sym->sec->symbol_tree);
+		__sym_remove(sym, &sym->sec->symbol_tree);
 }
 
 static int read_symbols(struct elf *elf)
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -30,7 +30,7 @@ struct section {
 	struct hlist_node hash;
 	struct hlist_node name_hash;
 	GElf_Shdr sh;
-	struct rb_root symbol_tree;
+	struct rb_root_cached symbol_tree;
 	struct list_head symbol_list;
 	struct list_head reloc_list;
 	struct section *base, *reloc;
@@ -53,6 +53,7 @@ struct symbol {
 	unsigned char bind, type;
 	unsigned long offset;
 	unsigned int len;
+	unsigned long __subtree_last;
 	struct symbol *pfunc, *cfunc, *alias;
 	u8 uaccess_safe      : 1;
 	u8 static_call_tramp : 1;



Thread overview: 81+ messages
2022-09-02 13:06 [PATCH v2 00/59] x86/retbleed: Call depth tracking mitigation Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 01/59] x86/paravirt: Ensure proper alignment Peter Zijlstra
2022-09-02 16:05   ` Juergen Gross
2022-09-02 13:06 ` [PATCH v2 02/59] x86/cpu: Remove segment load from switch_to_new_gdt() Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 03/59] x86/cpu: Get rid of redundant switch_to_new_gdt() invocations Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 04/59] x86/cpu: Re-enable stackprotector Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 05/59] x86/modules: Set VM_FLUSH_RESET_PERMS in module_alloc() Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 06/59] x86/vdso: Ensure all kernel code is seen by objtool Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 07/59] x86: Sanitize linker script Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 08/59] x86/build: Ensure proper function alignment Peter Zijlstra
2022-09-02 16:51   ` Linus Torvalds
2022-09-02 17:32     ` Peter Zijlstra
2022-09-02 18:08       ` Linus Torvalds
2022-09-05 10:04         ` Peter Zijlstra
2022-09-12 14:09           ` Linus Torvalds
2022-09-12 19:44             ` Peter Zijlstra
2022-09-13  8:08               ` Peter Zijlstra
2022-09-13 13:08                 ` Linus Torvalds
2022-09-05  2:09   ` David Laight
2022-09-02 13:06 ` [PATCH v2 09/59] x86/asm: " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 10/59] x86/error_inject: Align function properly Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 11/59] x86/paravirt: Properly align PV functions Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 12/59] x86/entry: Align SYM_CODE_START() variants Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 13/59] crypto: x86/camellia: Remove redundant alignments Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 14/59] crypto: x86/cast5: " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 15/59] crypto: x86/crct10dif-pcl: " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 16/59] crypto: x86/serpent: " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 17/59] crypto: x86/sha1: Remove custom alignments Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 18/59] crypto: x86/sha256: " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 19/59] crypto: x86/sm[34]: Remove redundant alignments Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 20/59] crypto: twofish: " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 21/59] crypto: x86/poly1305: Remove custom function alignment Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 22/59] x86: Put hot per CPU variables into a struct Peter Zijlstra
2022-09-02 18:02   ` Jann Horn
2022-09-15 11:22     ` Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 23/59] x86/percpu: Move preempt_count next to current_task Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 24/59] x86/percpu: Move cpu_number " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 25/59] x86/percpu: Move current_top_of_stack " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 26/59] x86/percpu: Move irq_stack variables " Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 27/59] x86/softirq: Move softirq pending next to current task Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 28/59] objtool: Allow !PC relative relocations Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 29/59] objtool: Track init section Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 30/59] objtool: Add .call_sites section Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 31/59] objtool: Add --hacks=skylake Peter Zijlstra
2022-09-02 13:06 ` [PATCH v2 32/59] objtool: Allow STT_NOTYPE -> STT_FUNC+0 tail-calls Peter Zijlstra
2022-09-02 13:06 ` Peter Zijlstra [this message]
2022-09-02 13:06 ` [PATCH v2 34/59] objtool: Allow symbol range comparisons for IBT/ENDBR Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 35/59] x86/entry: Make sync_regs() invocation a tail call Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 36/59] ftrace: Add HAVE_DYNAMIC_FTRACE_NO_PATCHABLE Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 37/59] x86/putuser: Provide room for padding Peter Zijlstra
2022-09-02 16:43   ` Linus Torvalds
2022-09-02 17:03     ` Peter Zijlstra
2022-09-02 20:24       ` Peter Zijlstra
2022-09-02 21:46         ` Linus Torvalds
2022-09-03 17:26           ` Linus Torvalds
2022-09-05  7:16             ` Peter Zijlstra
2022-09-05 11:26               ` Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 38/59] x86/Kconfig: Add CONFIG_CALL_THUNKS Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 39/59] x86/Kconfig: Introduce function padding Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 40/59] x86/retbleed: Add X86_FEATURE_CALL_DEPTH Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 41/59] x86/alternatives: Provide text_poke_copy_locked() Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 42/59] x86/entry: Make some entry symbols global Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 43/59] x86/paravirt: Make struct paravirt_call_site unconditionally available Peter Zijlstra
2022-09-02 16:09   ` Juergen Gross
2022-09-02 13:07 ` [PATCH v2 44/59] x86/callthunks: Add call patching for call depth tracking Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 45/59] x86/modules: Add call patching Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 46/59] x86/returnthunk: Allow different return thunks Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 47/59] x86/asm: Provide ALTERNATIVE_3 Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 48/59] x86/retbleed: Add SKL return thunk Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 49/59] x86/retpoline: Add SKL retthunk retpolines Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 50/59] x86/retbleed: Add SKL call thunk Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 51/59] x86/calldepth: Add ret/call counting for debug Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 52/59] static_call: Add call depth tracking support Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 53/59] kallsyms: Take callthunks into account Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 54/59] x86/orc: Make it callthunk aware Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 55/59] x86/bpf: Emit call depth accounting if required Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 56/59] x86/ftrace: Remove ftrace_epilogue() Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 57/59] x86/ftrace: Rebalance RSB Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 58/59] x86/ftrace: Make it call depth tracking aware Peter Zijlstra
2022-09-02 13:07 ` [PATCH v2 59/59] x86/retbleed: Add call depth tracking mitigation Peter Zijlstra
2022-09-16  9:35 ` [PATCH v2 00/59] x86/retbleed: Call " Mel Gorman
