From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, "Paul E. McKenney" <paulmck@kernel.org>,
	Andy Lutomirski <luto@kernel.org>,
	Alexandre Chartre <alexandre.chartre@oracle.com>,
	Frederic Weisbecker <frederic@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Petr Mladek <pmladek@suse.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Joel Fernandes <joel@joelfernandes.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Brian Gerst <brgerst@gmail.com>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Will Deacon <will@kernel.org>
Subject: [patch V4 part 1 12/36] x86/kvm: Sanitize kvm_async_pf_task_wait()
Date: Tue, 05 May 2020 15:16:14 +0200	[thread overview]
Message-ID: <20200505134059.262701431@linutronix.de> (raw)
In-Reply-To: <20200505131602.633487962@linutronix.de>

From: Thomas Gleixner <tglx@linutronix.de>

While working on the entry consolidation I stumbled over the KVM async page
fault handler and kvm_async_pf_task_wait() in particular. It took me a
while to realize that the rcu_irq_enter()/exit() invocations sprinkled
randomly around it are just cargo cult programming. Several patches
"fixed" RCU splats by curing the symptoms without noticing that the code
is flawed from a design perspective.

The main problem is that this async injection is not based on a proper
handshake mechanism and only respects the minimal requirement, i.e. that
the guest is not in a state where it has interrupts disabled.

Aside from that, the actual code is a convoluted one-fits-it-all Swiss
army knife. It is invoked from different places with different RCU
constraints:

  1) Host side:

     vcpu_enter_guest()
       kvm_x86_ops->handle_exit()
         kvm_handle_page_fault()
           kvm_async_pf_task_wait()

     The invocation happens from fully preemptible context.

  2) Guest side:

     The async page fault interrupted:

         a) user space

         b) preemptible kernel code which is not in a RCU read side
            critical section

         c) non-preemptible kernel code or a RCU read side critical
            section, or kernel code with CONFIG_PREEMPTION=n, which makes
            it impossible to differentiate between #2b and #2c.

RCU is watching in these cases:

  #1  The vCPU exited and current is definitely not the idle task

  #2a The #PF entry code on the guest went through enter_from_user_mode()
      which reactivates RCU

  #2b There is no preemptible, interrupts-enabled code in the kernel
      which can run with RCU looking away. (The idle task is always
      non-preemptible.)

I.e. all schedulable states (#1, #2a, #2b) do not need any of this RCU
voodoo at all.

In #2c RCU is possibly not watching, but as that state cannot schedule
anyway there is no point in worrying about it; all that is required is to
invoke rcu_irq_enter() before running that code. This can be optimized,
but that will be done as an extra step in the course of the entry code
consolidation work.
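
To make this concrete, the selection logic added by the patch below boils
down to the following (condensed; the comments are editorial and not part
of the patch):

     /*
      * Guest side dispatch. The host side (#1) invokes the schedule
      * based wait directly.
      */
     if (IS_ENABLED(CONFIG_PREEMPTION))
             /* #2a/#2b: no preemption disabled, no rcu_read_lock() held */
             can_schedule = preempt_count() + rcu_preempt_depth() == 0;
     else
             /* #2b and #2c cannot be told apart, only trust user mode */
             can_schedule = usermode;

     if (can_schedule) {
             kvm_async_pf_task_wait_schedule(token);
     } else {
             /* #2c: RCU is possibly not watching, tell it before halting */
             rcu_irq_enter();
             kvm_async_pf_task_wait_halt(token);
             rcu_irq_exit();
     }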

So the proper solution for this is to:

  - Split kvm_async_pf_task_wait() into schedule and halt based waiting
    interfaces which share the enqueueing code.

  - Add comments (a condensed form of this changelog) to spare others the
    wasted time and pain of reverse engineering all of this with the help
    of incomprehensible changelogs and code history.

  - Invoke kvm_async_pf_task_wait_schedule() from kvm_handle_page_fault(),
    user mode and schedulable kernel side async page faults (#1, #2a, #2b)

  - Invoke kvm_async_pf_task_wait_halt() for the non-schedulable kernel
    case (#2c).

    For this case also remove the rcu_irq_exit()/enter() pair around the
    halt as it is just a pointless exercise:

       - vCPUs can VMEXIT at any random point and can be scheduled out for
         an arbitrary amount of time by the host; this is no different
         except that the exit is voluntarily triggered via halt.

       - The interrupted context could have RCU watching already. So the
         rcu_irq_exit() before the halt gains nothing aside from confusing
         the reader. Claiming that this might prevent RCU stalls is just
         an illusion. The halt based wait boils down to the plain loop
         sketched below.
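
A condensed form of that loop, taken from kvm_async_pf_task_wait_halt()
in the patch below (the comments are editorial, not part of the patch):

     for (;;) {
             if (hlist_unhashed(&n.link))
                     break;          /* wakeup was already delivered */
             /*
              * native_safe_halt() reenables interrupts; the reschedule
              * IPI sent by apf_task_wake_one(), or any other interrupt,
              * terminates the halt.
              */
             native_safe_halt();
             local_irq_disable();
     }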

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V2: Panic if async #PF is injected into an interrupt disabled region.
---
 arch/x86/include/asm/kvm_para.h |    4 
 arch/x86/kernel/kvm.c           |  201 ++++++++++++++++++++++++++++------------
 arch/x86/kvm/mmu/mmu.c          |    2 
 3 files changed, 144 insertions(+), 63 deletions(-)

--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -88,7 +88,7 @@ static inline long kvm_hypercall4(unsign
 bool kvm_para_available(void);
 unsigned int kvm_arch_para_features(void);
 unsigned int kvm_arch_para_hints(void);
-void kvm_async_pf_task_wait(u32 token, int interrupt_kernel);
+void kvm_async_pf_task_wait_schedule(u32 token);
 void kvm_async_pf_task_wake(u32 token);
 u32 kvm_read_and_reset_pf_reason(void);
 void kvm_disable_steal_time(void);
@@ -113,7 +113,7 @@ static inline void kvm_spinlock_init(voi
 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
 
 #else /* CONFIG_KVM_GUEST */
-#define kvm_async_pf_task_wait(T, I) do {} while(0)
+#define kvm_async_pf_task_wait_schedule(T) do {} while(0)
 #define kvm_async_pf_task_wake(T) do {} while(0)
 
 static inline bool kvm_para_available(void)
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -75,7 +75,7 @@ struct kvm_task_sleep_node {
 	struct swait_queue_head wq;
 	u32 token;
 	int cpu;
-	bool halted;
+	bool use_halt;
 };
 
 static struct kvm_task_sleep_head {
@@ -98,75 +98,145 @@ static struct kvm_task_sleep_node *_find
 	return NULL;
 }
 
-/*
- * @interrupt_kernel: Is this called from a routine which interrupts the kernel
- * 		      (other than user space)?
- */
-void kvm_async_pf_task_wait(u32 token, int interrupt_kernel)
+static bool kvm_async_pf_queue_task(u32 token, bool use_halt,
+				    struct kvm_task_sleep_node *n)
 {
 	u32 key = hash_32(token, KVM_TASK_SLEEP_HASHBITS);
 	struct kvm_task_sleep_head *b = &async_pf_sleepers[key];
-	struct kvm_task_sleep_node n, *e;
-	DECLARE_SWAITQUEUE(wait);
-
-	rcu_irq_enter();
+	struct kvm_task_sleep_node *e;
 
 	raw_spin_lock(&b->lock);
 	e = _find_apf_task(b, token);
 	if (e) {
 		/* dummy entry exist -> wake up was delivered ahead of PF */
 		hlist_del(&e->link);
-		kfree(e);
 		raw_spin_unlock(&b->lock);
+		kfree(e);
+		return false;
+	}
 
-		rcu_irq_exit();
+	n->token = token;
+	n->cpu = smp_processor_id();
+	n->use_halt = use_halt;
+	init_swait_queue_head(&n->wq);
+	hlist_add_head(&n->link, &b->list);
+	raw_spin_unlock(&b->lock);
+	return true;
+}
+
+/*
+ * kvm_async_pf_task_wait_schedule - Wait for pagefault to be handled
+ * @token:	Token to identify the sleep node entry
+ *
+ * Invoked from the async pagefault handling code or from the VM exit page
+ * fault handler. In both cases RCU is watching.
+ */
+void kvm_async_pf_task_wait_schedule(u32 token)
+{
+	struct kvm_task_sleep_node n;
+	DECLARE_SWAITQUEUE(wait);
+
+	lockdep_assert_irqs_disabled();
+
+	if (!kvm_async_pf_queue_task(token, false, &n))
 		return;
+
+	for (;;) {
+		prepare_to_swait_exclusive(&n.wq, &wait, TASK_UNINTERRUPTIBLE);
+		if (hlist_unhashed(&n.link))
+			break;
+
+		local_irq_enable();
+		schedule();
+		local_irq_disable();
 	}
+	finish_swait(&n.wq, &wait);
+}
+EXPORT_SYMBOL_GPL(kvm_async_pf_task_wait_schedule);
 
-	n.token = token;
-	n.cpu = smp_processor_id();
-	n.halted = is_idle_task(current) ||
-		   (IS_ENABLED(CONFIG_PREEMPT_COUNT)
-		    ? preempt_count() > 1 || rcu_preempt_depth()
-		    : interrupt_kernel);
-	init_swait_queue_head(&n.wq);
-	hlist_add_head(&n.link, &b->list);
-	raw_spin_unlock(&b->lock);
+/*
+ * Invoked from the async page fault handler.
+ */
+static void kvm_async_pf_task_wait_halt(u32 token)
+{
+	struct kvm_task_sleep_node n;
+
+	if (!kvm_async_pf_queue_task(token, true, &n))
+		return;
 
 	for (;;) {
-		if (!n.halted)
-			prepare_to_swait_exclusive(&n.wq, &wait, TASK_UNINTERRUPTIBLE);
 		if (hlist_unhashed(&n.link))
 			break;
+		/*
+		 * No point in doing anything about RCU here. Any RCU read
+		 * side critical section or RCU watching section can be
+		 * interrupted by VMEXITs and the host is free to keep the
+		 * vCPU scheduled out as long as it sees fit. This is not
+		 * any different just because of the halt induced voluntary
+		 * VMEXIT.
+		 *
+		 * Also the async page fault could have interrupted any RCU
+		 * watching context, so invoking rcu_irq_exit()/enter()
+		 * around this is not gaining anything.
+		 */
+		native_safe_halt();
+		local_irq_disable();
+	}
+}
 
-		rcu_irq_exit();
+/* Invoked from the async page fault handler */
+static void kvm_async_pf_task_wait(u32 token, bool usermode)
+{
+	bool can_schedule;
 
-		if (!n.halted) {
-			local_irq_enable();
-			schedule();
-			local_irq_disable();
-		} else {
-			/*
-			 * We cannot reschedule. So halt.
-			 */
-			native_safe_halt();
-			local_irq_disable();
-		}
+	/*
+	 * No need to check whether interrupts were disabled because the
+	 * host will (hopefully) only inject an async page fault into
+	 * interrupt enabled regions.
+	 *
+	 * If CONFIG_PREEMPTION is enabled then check whether the code
+	 * which triggered the page fault is preemptible. This covers user
+	 * mode as well because preempt_count() is obviously 0 there.
+	 *
+	 * The check for rcu_preempt_depth() is also required because
+	 * voluntary scheduling inside a rcu read locked section is not
+	 * allowed.
+	 *
+	 * The idle task is already covered by this because idle always
+	 * has a preempt count > 0.
+	 *
+	 * If CONFIG_PREEMPTION is disabled only allow scheduling when
+	 * coming from user mode as there is no indication whether the
+	 * context which triggered the page fault could schedule or not.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPTION))
+		can_schedule = preempt_count() + rcu_preempt_depth() == 0;
+	else
+		can_schedule = usermode;
 
+	/*
+	 * If the kernel context is allowed to schedule then RCU is
+	 * watching because no preemptible code in the kernel is inside RCU
+	 * idle state. So it can be treated like user mode. User mode is
+	 * safe because the #PF entry invoked enter_from_user_mode().
+	 *
+	 * For the non schedulable case invoke rcu_irq_enter() for
+	 * now. This will be moved out to the pagefault entry code later
+	 * and only invoked when really needed.
+	 */
+	if (can_schedule) {
+		kvm_async_pf_task_wait_schedule(token);
+	} else {
 		rcu_irq_enter();
+		kvm_async_pf_task_wait_halt(token);
+		rcu_irq_exit();
 	}
-	if (!n.halted)
-		finish_swait(&n.wq, &wait);
-
-	rcu_irq_exit();
-	return;
 }
-EXPORT_SYMBOL_GPL(kvm_async_pf_task_wait);
 
 static void apf_task_wake_one(struct kvm_task_sleep_node *n)
 {
 	hlist_del_init(&n->link);
-	if (n->halted)
+	if (n->use_halt)
 		smp_send_reschedule(n->cpu);
 	else if (swq_has_sleeper(&n->wq))
 		swake_up_one(&n->wq);
@@ -177,12 +247,13 @@ static void apf_task_wake_all(void)
 	int i;
 
 	for (i = 0; i < KVM_TASK_SLEEP_HASHSIZE; i++) {
-		struct hlist_node *p, *next;
 		struct kvm_task_sleep_head *b = &async_pf_sleepers[i];
+		struct kvm_task_sleep_node *n;
+		struct hlist_node *p, *next;
+
 		raw_spin_lock(&b->lock);
 		hlist_for_each_safe(p, next, &b->list) {
-			struct kvm_task_sleep_node *n =
-				hlist_entry(p, typeof(*n), link);
+			n = hlist_entry(p, typeof(*n), link);
 			if (n->cpu == smp_processor_id())
 				apf_task_wake_one(n);
 		}
@@ -223,8 +294,9 @@ void kvm_async_pf_task_wake(u32 token)
 		n->cpu = smp_processor_id();
 		init_swait_queue_head(&n->wq);
 		hlist_add_head(&n->link, &b->list);
-	} else
+	} else {
 		apf_task_wake_one(n);
+	}
 	raw_spin_unlock(&b->lock);
 	return;
 }
@@ -246,23 +318,33 @@ NOKPROBE_SYMBOL(kvm_read_and_reset_pf_re
 
 bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token)
 {
-	/*
-	 * If we get a page fault right here, the pf_reason seems likely
-	 * to be clobbered.  Bummer.
-	 */
-	switch (kvm_read_and_reset_pf_reason()) {
+	u32 reason = kvm_read_and_reset_pf_reason();
+
+	switch (reason) {
+	case KVM_PV_REASON_PAGE_NOT_PRESENT:
+	case KVM_PV_REASON_PAGE_READY:
+		break;
 	default:
 		return false;
-	case KVM_PV_REASON_PAGE_NOT_PRESENT:
+	}
+
+	/*
+	 * If the host managed to inject an async #PF into an interrupt
+	 * disabled region, then die hard as this is not going to end well
+	 * and the host side is seriously broken.
+	 */
+	if (unlikely(!(regs->flags & X86_EFLAGS_IF)))
+		panic("Host injected async #PF in interrupt disabled region\n");
+
+	if (reason == KVM_PV_REASON_PAGE_NOT_PRESENT) {
 		/* page is swapped out by the host. */
-		kvm_async_pf_task_wait(token, !user_mode(regs));
-		return true;
-	case KVM_PV_REASON_PAGE_READY:
+		kvm_async_pf_task_wait(token, user_mode(regs));
+	} else {
 		rcu_irq_enter();
 		kvm_async_pf_task_wake(token);
 		rcu_irq_exit();
-		return true;
 	}
+	return true;
 }
 NOKPROBE_SYMBOL(__kvm_handle_async_pf);
 
@@ -326,12 +408,12 @@ static void kvm_guest_cpu_init(void)
 
 		wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
 		__this_cpu_write(apf_reason.enabled, 1);
-		printk(KERN_INFO"KVM setup async PF for cpu %d\n",
-		       smp_processor_id());
+		pr_info("KVM setup async PF for cpu %d\n", smp_processor_id());
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) {
 		unsigned long pa;
+
 		/* Size alignment is implied but just to make it explicit. */
 		BUILD_BUG_ON(__alignof__(kvm_apic_eoi) < 4);
 		__this_cpu_write(kvm_apic_eoi, 0);
@@ -352,8 +434,7 @@ static void kvm_pv_disable_apf(void)
 	wrmsrl(MSR_KVM_ASYNC_PF_EN, 0);
 	__this_cpu_write(apf_reason.enabled, 0);
 
-	printk(KERN_INFO"Unregister pv shared memory for cpu %d\n",
-	       smp_processor_id());
+	pr_info("Unregister pv shared memory for cpu %d\n", smp_processor_id());
 }
 
 static void kvm_pv_guest_cpu_reboot(void *unused)
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4198,7 +4198,7 @@ int kvm_handle_page_fault(struct kvm_vcp
 	case KVM_PV_REASON_PAGE_NOT_PRESENT:
 		vcpu->arch.apf.host_apf_reason = 0;
 		local_irq_disable();
-		kvm_async_pf_task_wait(fault_address, 0);
+		kvm_async_pf_task_wait_schedule(fault_address);
 		local_irq_enable();
 		break;
 	case KVM_PV_REASON_PAGE_READY:

