From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
To: Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>, Borislav Petkov <bp@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Brian Gerst <brgerst@gmail.com>,
	Chris Metcalf <cmetcalf@mellanox.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Liang Z Li <liang.z.li@intel.com>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Huang Rui <ray.huang@amd.com>, Jiri Slaby <jslaby@suse.cz>,
	Jonathan Corbet <corbet@lwn.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paul Gortmaker <paul.gortmaker@windriver.com>,
	Vlastimil Babka <vbabka@suse.cz>, Chen Yucong <slaoub@gmail.com>,
	"Ravi V. Shankar" <ravi.v.shankar@intel.com>,
	Shuah Khan <shuah@kernel.org>,
	linux-kernel@vger.kernel.org, x86@kernel.org,
	ricardo.neri@intel.com,
	Ricardo Neri <ricardo.neri-calderon@linux.intel.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: [PATCH v8 01/28] x86/mm: Relocate page fault error codes to traps.h
Date: Fri, 18 Aug 2017 17:27:42 -0700
Message-ID: <20170819002809.111312-2-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20170819002809.111312-1-ricardo.neri-calderon@linux.intel.com>

Up to this point, only fault.c used the definitions of the page fault error
codes. Thus, it made sense to keep them within that file. However, other
parts of the code will need these definitions too. For instance, the
User-Mode Instruction Prevention emulation code will use them to emulate
a page fault when it is unable to successfully copy the results of the
emulated instructions to user space.

While relocating the error code enumeration, prefix the constants with
X86_ to make them consistent with the rest of the definitions in traps.h,
and update all code that uses the enumeration accordingly. No functional
changes.
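
As a rough sketch of that use case (the function name and exact signal
plumbing below are illustrative only, not part of this series), the
emulation code could inject such a page fault with the relocated
definitions along these lines:

    #include <linux/sched.h>	/* current, struct task_struct */
    #include <linux/signal.h>	/* siginfo_t, force_sig_info() */
    #include <linux/string.h>	/* memset() */
    #include <asm/traps.h>	/* X86_PF_*, X86_TRAP_PF */

    /*
     * Illustrative sketch: deliver SIGSEGV as if a user-mode write to
     * 'addr' had faulted, filling in thread.error_code/trap_nr the
     * same way fault.c does before signaling.
     */
    static void emulate_page_fault(void __user *addr)
    {
    	struct task_struct *tsk = current;
    	siginfo_t info;

    	tsk->thread.cr2        = (unsigned long)addr;
    	tsk->thread.error_code = X86_PF_USER | X86_PF_WRITE;
    	tsk->thread.trap_nr    = X86_TRAP_PF;

    	memset(&info, 0, sizeof(info));
    	info.si_signo = SIGSEGV;
    	info.si_code  = SEGV_MAPERR;
    	info.si_addr  = addr;
    	force_sig_info(SIGSEGV, &info, tsk);
    }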

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: x86@kernel.org
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
---
 arch/x86/include/asm/traps.h | 18 +++++++++
 arch/x86/mm/fault.c          | 88 +++++++++++++++++---------------------------
 2 files changed, 52 insertions(+), 54 deletions(-)

diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h
index 01fd0a7f48cd..4a2e5852eacc 100644
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -148,4 +148,22 @@ enum {
 	X86_TRAP_IRET = 32,	/* 32, IRET Exception */
 };
 
+/*
+ * Page fault error code bits:
+ *
+ *   bit 0 ==	 0: no page found	1: protection fault
+ *   bit 1 ==	 0: read access		1: write access
+ *   bit 2 ==	 0: kernel-mode access	1: user-mode access
+ *   bit 3 ==				1: use of reserved bit detected
+ *   bit 4 ==				1: fault was an instruction fetch
+ *   bit 5 ==				1: protection keys block access
+ */
+enum x86_pf_error_code {
+	X86_PF_PROT	=		1 << 0,
+	X86_PF_WRITE	=		1 << 1,
+	X86_PF_USER	=		1 << 2,
+	X86_PF_RSVD	=		1 << 3,
+	X86_PF_INSTR	=		1 << 4,
+	X86_PF_PK	=		1 << 5,
+};
 #endif /* _ASM_X86_TRAPS_H */
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 2a1fa10c6a98..dc87badd16e9 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -29,26 +29,6 @@
 #include <asm/trace/exceptions.h>
 
 /*
- * Page fault error code bits:
- *
- *   bit 0 ==	 0: no page found	1: protection fault
- *   bit 1 ==	 0: read access		1: write access
- *   bit 2 ==	 0: kernel-mode access	1: user-mode access
- *   bit 3 ==				1: use of reserved bit detected
- *   bit 4 ==				1: fault was an instruction fetch
- *   bit 5 ==				1: protection keys block access
- */
-enum x86_pf_error_code {
-
-	PF_PROT		=		1 << 0,
-	PF_WRITE	=		1 << 1,
-	PF_USER		=		1 << 2,
-	PF_RSVD		=		1 << 3,
-	PF_INSTR	=		1 << 4,
-	PF_PK		=		1 << 5,
-};
-
-/*
  * Returns 0 if mmiotrace is disabled, or if the fault is not
  * handled by mmiotrace:
  */
@@ -149,7 +129,7 @@ is_prefetch(struct pt_regs *regs, unsigned long error_code, unsigned long addr)
 	 * If it was a exec (instruction fetch) fault on NX page, then
 	 * do not ignore the fault:
 	 */
-	if (error_code & PF_INSTR)
+	if (error_code & X86_PF_INSTR)
 		return 0;
 
 	instr = (void *)convert_ip_to_linear(current, regs);
@@ -179,7 +159,7 @@ is_prefetch(struct pt_regs *regs, unsigned long error_code, unsigned long addr)
  * siginfo so userspace can discover which protection key was set
  * on the PTE.
  *
- * If we get here, we know that the hardware signaled a PF_PK
+ * If we get here, we know that the hardware signaled a X86_PF_PK
  * fault and that there was a VMA once we got in the fault
  * handler.  It does *not* guarantee that the VMA we find here
  * was the one that we faulted on.
@@ -205,7 +185,7 @@ static void fill_sig_info_pkey(int si_code, siginfo_t *info,
 	/*
 	 * force_sig_info_fault() is called from a number of
 	 * contexts, some of which have a VMA and some of which
-	 * do not.  The PF_PK handing happens after we have a
+	 * do not.  The X86_PF_PK handing happens after we have a
 	 * valid VMA, so we should never reach this without a
 	 * valid VMA.
 	 */
@@ -695,7 +675,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code,
 	if (!oops_may_print())
 		return;
 
-	if (error_code & PF_INSTR) {
+	if (error_code & X86_PF_INSTR) {
 		unsigned int level;
 		pgd_t *pgd;
 		pte_t *pte;
@@ -779,7 +759,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 		 */
 		if (current->thread.sig_on_uaccess_err && signal) {
 			tsk->thread.trap_nr = X86_TRAP_PF;
-			tsk->thread.error_code = error_code | PF_USER;
+			tsk->thread.error_code = error_code | X86_PF_USER;
 			tsk->thread.cr2 = address;
 
 			/* XXX: hwpoison faults will set the wrong code. */
@@ -899,7 +879,7 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 	struct task_struct *tsk = current;
 
 	/* User mode accesses just cause a SIGSEGV */
-	if (error_code & PF_USER) {
+	if (error_code & X86_PF_USER) {
 		/*
 		 * It's possible to have interrupts off here:
 		 */
@@ -920,7 +900,7 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 		 * Instruction fetch faults in the vsyscall page might need
 		 * emulation.
 		 */
-		if (unlikely((error_code & PF_INSTR) &&
+		if (unlikely((error_code & X86_PF_INSTR) &&
 			     ((address & ~0xfff) == VSYSCALL_ADDR))) {
 			if (emulate_vsyscall(regs, address))
 				return;
@@ -933,7 +913,7 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 		 * are always protection faults.
 		 */
 		if (address >= TASK_SIZE_MAX)
-			error_code |= PF_PROT;
+			error_code |= X86_PF_PROT;
 
 		if (likely(show_unhandled_signals))
 			show_signal_msg(regs, error_code, address, tsk);
@@ -989,11 +969,11 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 
 	if (!boot_cpu_has(X86_FEATURE_OSPKE))
 		return false;
-	if (error_code & PF_PK)
+	if (error_code & X86_PF_PK)
 		return true;
 	/* this checks permission keys on the VMA: */
-	if (!arch_vma_access_permitted(vma, (error_code & PF_WRITE),
-				(error_code & PF_INSTR), foreign))
+	if (!arch_vma_access_permitted(vma, (error_code & X86_PF_WRITE),
+				       (error_code & X86_PF_INSTR), foreign))
 		return true;
 	return false;
 }
@@ -1021,7 +1001,7 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
 	int code = BUS_ADRERR;
 
 	/* Kernel mode? Handle exceptions or die: */
-	if (!(error_code & PF_USER)) {
+	if (!(error_code & X86_PF_USER)) {
 		no_context(regs, error_code, address, SIGBUS, BUS_ADRERR);
 		return;
 	}
@@ -1050,14 +1030,14 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 	       unsigned long address, struct vm_area_struct *vma,
 	       unsigned int fault)
 {
-	if (fatal_signal_pending(current) && !(error_code & PF_USER)) {
+	if (fatal_signal_pending(current) && !(error_code & X86_PF_USER)) {
 		no_context(regs, error_code, address, 0, 0);
 		return;
 	}
 
 	if (fault & VM_FAULT_OOM) {
 		/* Kernel mode? Handle exceptions or die: */
-		if (!(error_code & PF_USER)) {
+		if (!(error_code & X86_PF_USER)) {
 			no_context(regs, error_code, address,
 				   SIGSEGV, SEGV_MAPERR);
 			return;
@@ -1082,16 +1062,16 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 
 static int spurious_fault_check(unsigned long error_code, pte_t *pte)
 {
-	if ((error_code & PF_WRITE) && !pte_write(*pte))
+	if ((error_code & X86_PF_WRITE) && !pte_write(*pte))
 		return 0;
 
-	if ((error_code & PF_INSTR) && !pte_exec(*pte))
+	if ((error_code & X86_PF_INSTR) && !pte_exec(*pte))
 		return 0;
 	/*
 	 * Note: We do not do lazy flushing on protection key
-	 * changes, so no spurious fault will ever set PF_PK.
+	 * changes, so no spurious fault will ever set X86_PF_PK.
 	 */
-	if ((error_code & PF_PK))
+	if ((error_code & X86_PF_PK))
 		return 1;
 
 	return 1;
@@ -1137,8 +1117,8 @@ spurious_fault(unsigned long error_code, unsigned long address)
 	 * change, so user accesses are not expected to cause spurious
 	 * faults.
 	 */
-	if (error_code != (PF_WRITE | PF_PROT)
-	    && error_code != (PF_INSTR | PF_PROT))
+	if (error_code != (X86_PF_WRITE | X86_PF_PROT) &&
+	    error_code != (X86_PF_INSTR | X86_PF_PROT))
 		return 0;
 
 	pgd = init_mm.pgd + pgd_index(address);
@@ -1198,19 +1178,19 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
 	 * always an unconditional error and can never result in
 	 * a follow-up action to resolve the fault, like a COW.
 	 */
-	if (error_code & PF_PK)
+	if (error_code & X86_PF_PK)
 		return 1;
 
 	/*
 	 * Make sure to check the VMA so that we do not perform
-	 * faults just to hit a PF_PK as soon as we fill in a
+	 * faults just to hit a X86_PF_PK as soon as we fill in a
 	 * page.
 	 */
-	if (!arch_vma_access_permitted(vma, (error_code & PF_WRITE),
-				(error_code & PF_INSTR), foreign))
+	if (!arch_vma_access_permitted(vma, (error_code & X86_PF_WRITE),
+				       (error_code & X86_PF_INSTR), foreign))
 		return 1;
 
-	if (error_code & PF_WRITE) {
+	if (error_code & X86_PF_WRITE) {
 		/* write, present and write, not present: */
 		if (unlikely(!(vma->vm_flags & VM_WRITE)))
 			return 1;
@@ -1218,7 +1198,7 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
 	}
 
 	/* read, present: */
-	if (unlikely(error_code & PF_PROT))
+	if (unlikely(error_code & X86_PF_PROT))
 		return 1;
 
 	/* read, not present: */
@@ -1241,7 +1221,7 @@ static inline bool smap_violation(int error_code, struct pt_regs *regs)
 	if (!static_cpu_has(X86_FEATURE_SMAP))
 		return false;
 
-	if (error_code & PF_USER)
+	if (error_code & X86_PF_USER)
 		return false;
 
 	if (!user_mode(regs) && (regs->flags & X86_EFLAGS_AC))
@@ -1297,7 +1277,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	 * protection error (error_code & 9) == 0.
 	 */
 	if (unlikely(fault_in_kernel_space(address))) {
-		if (!(error_code & (PF_RSVD | PF_USER | PF_PROT))) {
+		if (!(error_code & (X86_PF_RSVD | X86_PF_USER | X86_PF_PROT))) {
 			if (vmalloc_fault(address) >= 0)
 				return;
 
@@ -1325,7 +1305,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	if (unlikely(kprobes_fault(regs)))
 		return;
 
-	if (unlikely(error_code & PF_RSVD))
+	if (unlikely(error_code & X86_PF_RSVD))
 		pgtable_bad(regs, error_code, address);
 
 	if (unlikely(smap_violation(error_code, regs))) {
@@ -1351,7 +1331,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	 */
 	if (user_mode(regs)) {
 		local_irq_enable();
-		error_code |= PF_USER;
+		error_code |= X86_PF_USER;
 		flags |= FAULT_FLAG_USER;
 	} else {
 		if (regs->flags & X86_EFLAGS_IF)
@@ -1360,9 +1340,9 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 
-	if (error_code & PF_WRITE)
+	if (error_code & X86_PF_WRITE)
 		flags |= FAULT_FLAG_WRITE;
-	if (error_code & PF_INSTR)
+	if (error_code & X86_PF_INSTR)
 		flags |= FAULT_FLAG_INSTRUCTION;
 
 	/*
@@ -1382,7 +1362,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	 * space check, thus avoiding the deadlock:
 	 */
 	if (unlikely(!down_read_trylock(&mm->mmap_sem))) {
-		if ((error_code & PF_USER) == 0 &&
+		if (!(error_code & X86_PF_USER) &&
 		    !search_exception_tables(regs->ip)) {
 			bad_area_nosemaphore(regs, error_code, address, NULL);
 			return;
@@ -1409,7 +1389,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		bad_area(regs, error_code, address);
 		return;
 	}
-	if (error_code & PF_USER) {
+	if (error_code & X86_PF_USER) {
 		/*
 		 * Accessing the stack below %sp is always a bug.
 		 * The large cushion allows instructions like enter
-- 
2.13.0
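
For reference, the bit layout documented in the new traps.h comment can be
decoded with plain bit tests. Below is a minimal, self-contained user-space
sketch (the constants are copied from the enum added above rather than
pulled from kernel headers, and the decode() helper is hypothetical):

    #include <stdio.h>

    /* Copied from the enum added to asm/traps.h by this patch. */
    #define X86_PF_PROT	(1 << 0)
    #define X86_PF_WRITE	(1 << 1)
    #define X86_PF_USER	(1 << 2)
    #define X86_PF_RSVD	(1 << 3)
    #define X86_PF_INSTR	(1 << 4)
    #define X86_PF_PK	(1 << 5)

    static void decode(unsigned long error_code)
    {
    	printf("%#lx: %s-mode %s, %s%s%s\n", error_code,
    	       (error_code & X86_PF_USER)  ? "user" : "kernel",
    	       (error_code & X86_PF_INSTR) ? "instruction fetch" :
    	       (error_code & X86_PF_WRITE) ? "write" : "read",
    	       (error_code & X86_PF_PROT)  ? "protection fault"
    					   : "page not present",
    	       (error_code & X86_PF_RSVD)  ? ", reserved bit set" : "",
    	       (error_code & X86_PF_PK)    ? ", blocked by protection key" : "");
    }

    int main(void)
    {
    	decode(X86_PF_USER | X86_PF_WRITE);	/* user write to missing page */
    	decode(X86_PF_PROT | X86_PF_INSTR);	/* kernel NX violation */
    	return 0;
    }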
