From: "tip-bot for Peter Zijlstra (Intel)" <tipbot@zytor.com>
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, daniel@numascale.com,
	scott.norton@hp.com, riel@redhat.com, doug.hatch@hp.com,
	hpa@zytor.com, oleg@redhat.com, peterz@infradead.org,
	raghavendra.kt@linux.vnet.ibm.com, konrad.wilk@oracle.com,
	torvalds@linux-foundation.org, mingo@kernel.org,
	akpm@linux-foundation.org, tglx@linutronix.de,
	david.vrabel@citrix.com, boris.ostrovsky@oracle.com,
	Waiman.Long@hp.com, paolo.bonzini@gmail.com,
	paulmck@linux.vnet.ibm.com
Subject: [tip:locking/core] locking/pvqspinlock, x86: Implement the paravirt qspinlock call patching
Date: Fri, 8 May 2015 06:27:59 -0700
Message-ID: <tip-f233f7f1581e78fd9b4023f2e7d8c1ed89020cc9@git.kernel.org>
In-Reply-To: <1429901803-29771-10-git-send-email-Waiman.Long@hp.com>

Commit-ID:  f233f7f1581e78fd9b4023f2e7d8c1ed89020cc9
Gitweb:     http://git.kernel.org/tip/f233f7f1581e78fd9b4023f2e7d8c1ed89020cc9
Author:     Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate: Fri, 24 Apr 2015 14:56:38 -0400
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 8 May 2015 12:37:09 +0200

locking/pvqspinlock, x86: Implement the paravirt qspinlock call patching

We use the regular paravirt call patching to switch between:

  native_queued_spin_lock_slowpath() <-> __pv_queued_spin_lock_slowpath()
  native_queued_spin_unlock()        <-> __pv_queued_spin_unlock()
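
As a rough sketch (not part of the patch; it ignores the patch-site
bookkeeping the real PVOP_VCALL2() wrapper also emits), the
indirection amounts to a function pointer in pv_lock_ops that boot
code can repoint and that the patching machinery can later rewrite at
each call site:

  /*
   * Sketch only: roughly what the PVOP_VCALL2() wrapper added
   * below boils down to.
   */
  static __always_inline void
  pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
  {
  	/* Starts out as native_queued_spin_lock_slowpath(); a
  	 * paravirt guest repoints it at
  	 * __pv_queued_spin_lock_slowpath() during early boot.
  	 */
  	pv_lock_ops.queued_spin_lock_slowpath(lock, val);
  }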

We use a callee-saved call for the unlock function, which reduces the
i-cache footprint and allows 'inlining' of SPIN_UNLOCK functions
again.
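
A hypothetical sketch of the thunk shape on x86-64 (the real one is
generated by PV_CALLEE_SAVE_REGS_THUNK): everything caller-clobbered
except the return register is preserved inside the thunk, so the call
site itself needs no register spills around it:

  /* Hypothetical thunk body, x86-64; sketch only. */
  asm(".pushsection .text;"
      "__raw_callee_save___pv_queued_spin_unlock:"
      "push %rcx; push %rdx; push %rsi; push %rdi;"
      "push %r8;  push %r9;  push %r10; push %r11;"
      "call __pv_queued_spin_unlock;"
      "pop  %r11; pop  %r10; pop  %r9;  pop  %r8;"
      "pop  %rdi; pop  %rsi; pop  %rdx; pop  %rcx;"
      "ret;"
      ".popsection");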

We further optimize the unlock path by patching the direct call into a
"movb $0,%arg1" instruction when the native unlock code is in use.
This makes the unlock code almost as fast as the !PARAVIRT case.
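
Concretely (an illustration, not code from this patch), a native boot
rewrites each patchable unlock call site in place into the single
store that native_queued_spin_unlock() performs:

  before patching:  call *pv_lock_ops.queued_spin_unlock.func
  after  patching:  movb $0, (%rdi)	/* x86-64; 32-bit uses (%eax) */

so on bare metal the unlock reduces to one byte store and the paravirt
indirection vanishes.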

This significantly lowers the overhead of having
CONFIG_PARAVIRT_SPINLOCKS enabled, even for native code.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Daniel J Blueman <daniel@numascale.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1429901803-29771-10-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/Kconfig                          |  2 +-
 arch/x86/include/asm/paravirt.h           | 29 ++++++++++++++++++++++++++++-
 arch/x86/include/asm/paravirt_types.h     | 10 ++++++++++
 arch/x86/include/asm/qspinlock.h          | 25 ++++++++++++++++++++++++-
 arch/x86/include/asm/qspinlock_paravirt.h |  6 ++++++
 arch/x86/kernel/paravirt-spinlocks.c      | 24 +++++++++++++++++++++++-
 arch/x86/kernel/paravirt_patch_32.c       | 22 ++++++++++++++++++----
 arch/x86/kernel/paravirt_patch_64.c       | 22 ++++++++++++++++++----
 8 files changed, 128 insertions(+), 12 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 90b1b54..50ec043 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -667,7 +667,7 @@ config PARAVIRT_DEBUG
 config PARAVIRT_SPINLOCKS
 	bool "Paravirtualization layer for spinlocks"
 	depends on PARAVIRT && SMP
-	select UNINLINE_SPIN_UNLOCK
+	select UNINLINE_SPIN_UNLOCK if !QUEUED_SPINLOCK
 	---help---
 	  Paravirtualized spinlocks allow a pvops backend to replace the
 	  spinlock implementation with something virtualization-friendly
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 8957810..266c353 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -712,6 +712,31 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
+#ifdef CONFIG_QUEUED_SPINLOCK
+
+static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
+							u32 val)
+{
+	PVOP_VCALL2(pv_lock_ops.queued_spin_lock_slowpath, lock, val);
+}
+
+static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
+{
+	PVOP_VCALLEE1(pv_lock_ops.queued_spin_unlock, lock);
+}
+
+static __always_inline void pv_wait(u8 *ptr, u8 val)
+{
+	PVOP_VCALL2(pv_lock_ops.wait, ptr, val);
+}
+
+static __always_inline void pv_kick(int cpu)
+{
+	PVOP_VCALL1(pv_lock_ops.kick, cpu);
+}
+
+#else /* !CONFIG_QUEUED_SPINLOCK */
+
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
@@ -724,7 +749,9 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
-#endif
+#endif /* CONFIG_QUEUED_SPINLOCK */
+
+#endif /* SMP && PARAVIRT_SPINLOCKS */
 
 #ifdef CONFIG_X86_32
 #define PV_SAVE_REGS "pushl %ecx; pushl %edx;"
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index f7b0b5c..76cd684 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -333,9 +333,19 @@ struct arch_spinlock;
 typedef u16 __ticket_t;
 #endif
 
+struct qspinlock;
+
 struct pv_lock_ops {
+#ifdef CONFIG_QUEUED_SPINLOCK
+	void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
+	struct paravirt_callee_save queued_spin_unlock;
+
+	void (*wait)(u8 *ptr, u8 val);
+	void (*kick)(int cpu);
+#else /* !CONFIG_QUEUED_SPINLOCK */
 	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+#endif /* !CONFIG_QUEUED_SPINLOCK */
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index f079b70..9d51fae 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -3,6 +3,7 @@
 
 #include <asm/cpufeature.h>
 #include <asm-generic/qspinlock_types.h>
+#include <asm/paravirt.h>
 
 #define	queued_spin_unlock queued_spin_unlock
 /**
@@ -11,11 +12,33 @@
  *
  * A smp_store_release() on the least-significant byte.
  */
-static inline void queued_spin_unlock(struct qspinlock *lock)
+static inline void native_queued_spin_unlock(struct qspinlock *lock)
 {
 	smp_store_release((u8 *)lock, 0);
 }
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_init_lock_hash(void);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+
+static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	pv_queued_spin_lock_slowpath(lock, val);
+}
+
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+	pv_queued_spin_unlock(lock);
+}
+#else
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+	native_queued_spin_unlock(lock);
+}
+#endif
+
 #define virt_queued_spin_lock virt_queued_spin_lock
 
 static inline bool virt_queued_spin_lock(struct qspinlock *lock)
diff --git a/arch/x86/include/asm/qspinlock_paravirt.h b/arch/x86/include/asm/qspinlock_paravirt.h
new file mode 100644
index 0000000..b002e71
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock_paravirt.h
@@ -0,0 +1,6 @@
+#ifndef __ASM_QSPINLOCK_PARAVIRT_H
+#define __ASM_QSPINLOCK_PARAVIRT_H
+
+PV_CALLEE_SAVE_REGS_THUNK(__pv_queued_spin_unlock);
+
+#endif
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index bbb6c73..a33f1eb 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -8,11 +8,33 @@
 
 #include <asm/paravirt.h>
 
+#ifdef CONFIG_QUEUED_SPINLOCK
+__visible void __native_queued_spin_unlock(struct qspinlock *lock)
+{
+	native_queued_spin_unlock(lock);
+}
+
+PV_CALLEE_SAVE_REGS_THUNK(__native_queued_spin_unlock);
+
+bool pv_is_native_spin_unlock(void)
+{
+	return pv_lock_ops.queued_spin_unlock.func ==
+		__raw_callee_save___native_queued_spin_unlock;
+}
+#endif
+
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUED_SPINLOCK
+	.queued_spin_lock_slowpath = native_queued_spin_lock_slowpath,
+	.queued_spin_unlock = PV_CALLEE_SAVE(__native_queued_spin_unlock),
+	.wait = paravirt_nop,
+	.kick = paravirt_nop,
+#else /* !CONFIG_QUEUED_SPINLOCK */
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
-#endif
+#endif /* !CONFIG_QUEUED_SPINLOCK */
+#endif /* SMP */
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
diff --git a/arch/x86/kernel/paravirt_patch_32.c b/arch/x86/kernel/paravirt_patch_32.c
index d9f32e6..e1b0136 100644
--- a/arch/x86/kernel/paravirt_patch_32.c
+++ b/arch/x86/kernel/paravirt_patch_32.c
@@ -12,6 +12,10 @@ DEF_NATIVE(pv_mmu_ops, read_cr3, "mov %cr3, %eax");
 DEF_NATIVE(pv_cpu_ops, clts, "clts");
 DEF_NATIVE(pv_cpu_ops, read_tsc, "rdtsc");
 
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUED_SPINLOCK)
+DEF_NATIVE(pv_lock_ops, queued_spin_unlock, "movb $0, (%eax)");
+#endif
+
 unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len)
 {
 	/* arg in %eax, return in %eax */
@@ -24,6 +28,8 @@ unsigned paravirt_patch_ident_64(void *insnbuf, unsigned len)
 	return 0;
 }
 
+extern bool pv_is_native_spin_unlock(void);
+
 unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 		      unsigned long addr, unsigned len)
 {
@@ -47,14 +53,22 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 		PATCH_SITE(pv_mmu_ops, write_cr3);
 		PATCH_SITE(pv_cpu_ops, clts);
 		PATCH_SITE(pv_cpu_ops, read_tsc);
-
-	patch_site:
-		ret = paravirt_patch_insns(ibuf, len, start, end);
-		break;
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUED_SPINLOCK)
+		case PARAVIRT_PATCH(pv_lock_ops.queued_spin_unlock):
+			if (pv_is_native_spin_unlock()) {
+				start = start_pv_lock_ops_queued_spin_unlock;
+				end   = end_pv_lock_ops_queued_spin_unlock;
+				goto patch_site;
+			}
+#endif
 
 	default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;
+
+patch_site:
+		ret = paravirt_patch_insns(ibuf, len, start, end);
+		break;
 	}
 #undef PATCH_SITE
 	return ret;
diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
index a1da673..e0fb41c 100644
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -21,6 +21,10 @@ DEF_NATIVE(pv_cpu_ops, swapgs, "swapgs");
 DEF_NATIVE(, mov32, "mov %edi, %eax");
 DEF_NATIVE(, mov64, "mov %rdi, %rax");
 
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUED_SPINLOCK)
+DEF_NATIVE(pv_lock_ops, queued_spin_unlock, "movb $0, (%rdi)");
+#endif
+
 unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len)
 {
 	return paravirt_patch_insns(insnbuf, len,
@@ -33,6 +37,8 @@ unsigned paravirt_patch_ident_64(void *insnbuf, unsigned len)
 				    start__mov64, end__mov64);
 }
 
+extern bool pv_is_native_spin_unlock(void);
+
 unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 		      unsigned long addr, unsigned len)
 {
@@ -59,14 +65,22 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 		PATCH_SITE(pv_cpu_ops, clts);
 		PATCH_SITE(pv_mmu_ops, flush_tlb_single);
 		PATCH_SITE(pv_cpu_ops, wbinvd);
-
-	patch_site:
-		ret = paravirt_patch_insns(ibuf, len, start, end);
-		break;
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUED_SPINLOCK)
+		case PARAVIRT_PATCH(pv_lock_ops.queued_spin_unlock):
+			if (pv_is_native_spin_unlock()) {
+				start = start_pv_lock_ops_queued_spin_unlock;
+				end   = end_pv_lock_ops_queued_spin_unlock;
+				goto patch_site;
+			}
+#endif
 
 	default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;
+
+patch_site:
+		ret = paravirt_patch_insns(ibuf, len, start, end);
+		break;
 	}
 #undef PATCH_SITE
 	return ret;

Thread overview: 90+ messages
2015-04-24 18:56 [PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support Waiman Long
2015-04-24 18:56 ` [PATCH v16 01/14] qspinlock: A simple generic 4-byte queue spinlock Waiman Long
2015-05-08 13:25   ` [tip:locking/core] locking/qspinlock: Introduce a simple generic 4-byte queued spinlock tip-bot for Waiman Long
2015-04-24 18:56 ` [PATCH v16 02/14] qspinlock, x86: Enable x86-64 to use queue spinlock Waiman Long
2015-05-08 13:25   ` [tip:locking/core] locking/qspinlock, x86: Enable x86-64 to use queued spinlocks tip-bot for Waiman Long
2015-04-24 18:56 ` [PATCH v16 03/14] qspinlock: Add pending bit Waiman Long
2015-05-08 13:26   ` [tip:locking/core] locking/qspinlock: " tip-bot for Peter Zijlstra (Intel)
2015-04-24 18:56 ` [PATCH v16 04/14] qspinlock: Extract out code snippets for the next patch Waiman Long
2015-05-08 13:26   ` [tip:locking/core] locking/qspinlock: " tip-bot for Waiman Long
2015-04-24 18:56 ` [PATCH v16 05/14] qspinlock: Optimize for smaller NR_CPUS Waiman Long
2015-05-08 13:26   ` [tip:locking/core] locking/qspinlock: " tip-bot for Peter Zijlstra (Intel)
2015-04-24 18:56 ` [PATCH v16 06/14] qspinlock: Use a simple write to grab the lock Waiman Long
2015-05-08 13:27   ` [tip:locking/core] locking/qspinlock: " tip-bot for Waiman Long
2015-04-24 18:56 ` [PATCH v16 07/14] qspinlock: Revert to test-and-set on hypervisors Waiman Long
2015-05-08 13:27   ` [tip:locking/core] locking/qspinlock: " tip-bot for Peter Zijlstra (Intel)
2015-04-24 18:56 ` [PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock Waiman Long
2015-05-04 14:20   ` Peter Zijlstra
2015-05-04 17:15     ` Waiman Long
2015-05-08 13:27   ` [tip:locking/core] locking/pvqspinlock: " tip-bot for Waiman Long
2015-04-24 18:56 ` [PATCH v16 09/14] pvqspinlock, x86: Implement the paravirt qspinlock call patching Waiman Long
2015-05-08 13:27   ` tip-bot for Peter Zijlstra (Intel) [this message]
2015-05-30  4:09     ` [tip:locking/core] locking/pvqspinlock, " Sasha Levin
2015-05-31 18:29       ` Waiman Long
2015-04-24 18:56 ` [PATCH v16 10/14] pvqspinlock, x86: Enable PV qspinlock for KVM Waiman Long
2015-05-08 13:28   ` [tip:locking/core] locking/pvqspinlock, " tip-bot for Waiman Long
2015-04-24 18:56 ` [PATCH v16 11/14] pvqspinlock, x86: Enable PV qspinlock for Xen Waiman Long
2015-05-08 13:28   ` [tip:locking/core] locking/pvqspinlock, " tip-bot for David Vrabel
2015-04-24 18:56 ` [PATCH v16 12/14] pvqspinlock: Only kick CPU at unlock time Waiman Long
2015-04-24 18:56 ` [PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg Waiman Long
2015-04-29 18:11   ` Peter Zijlstra
2015-04-29 18:27     ` Linus Torvalds
2015-04-30 18:56       ` Waiman Long
2015-04-30 18:49     ` Waiman Long
2015-05-04 14:05       ` Peter Zijlstra
2015-05-04 17:18         ` Waiman Long
2015-04-24 18:56 ` [PATCH v16 14/14] pvqspinlock: Collect slowpath lock statistics Waiman Long
