From: tip-bot for Thomas Garnier <tipbot@zytor.com>
To: linux-tip-commits@vger.kernel.org
Cc: guangrong.xiao@linux.intel.com, toshi.kani@hpe.com,
	dan.j.williams@intel.com, akpm@linux-foundation.org,
	jpoimboe@redhat.com, corbet@lwn.net, brgerst@gmail.com,
	torvalds@linux-foundation.org, alpopov@ptsecurity.com,
	dyoung@redhat.com, bp@alien8.de, luto@kernel.org,
	thgarnie@google.com, tglx@linutronix.de, peterz@infradead.org,
	kuleshovmail@gmail.com, jroedel@suse.de,
	kirill.shutemov@linux.intel.com, boris.ostrovsky@oracle.com,
	msalter@redhat.com, bp@suse.de, matt@codeblueprint.co.uk,
	JBeulich@suse.com, schwidefsky@de.ibm.com, sds@tycho.nsa.gov,
	hpa@zytor.com, linux-kernel@vger.kernel.org,
	borntraeger@de.ibm.com, bhe@redhat.com, mingo@kernel.org,
	keescook@chromium.org, dvyukov@google.com, dvlasenk@redhat.com,
	aneesh.kumar@linux.vnet.ibm.com, yinghai@kernel.org,
	lv.zheng@intel.com, dave.hansen@linux.intel.com, jgross@suse.com
Subject: [tip:x86/boot] x86/mm: Update physical mapping variable names
Date: Fri, 8 Jul 2016 13:34:12 -0700
Message-ID: <tip-59b3d0206d74a700069e49160e8194b2ca93b703@git.kernel.org>
In-Reply-To: <1466556426-32664-3-git-send-email-keescook@chromium.org>

Commit-ID:  59b3d0206d74a700069e49160e8194b2ca93b703
Gitweb:     http://git.kernel.org/tip/59b3d0206d74a700069e49160e8194b2ca93b703
Author:     Thomas Garnier <thgarnie@google.com>
AuthorDate: Tue, 21 Jun 2016 17:46:59 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 8 Jul 2016 17:33:46 +0200

x86/mm: Update physical mapping variable names

Change the variable names in kernel_physical_mapping_init() and related
functions to correctly reflect physical and virtual memory addresses.
Also add comments on each function to describe usage and alignment
constraints.
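
To illustrate the convention (an editorial sketch, not part of the patch):
physical addresses stay in paddr* variables throughout the walk, and the
kernel's direct-mapping helpers convert between the two address spaces
only at the top level:

	unsigned long vaddr = (unsigned long)__va(paddr_start); /* phys -> virt */
	unsigned long paddr = __pa(vaddr);                      /* virt -> phys */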

Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
Cc: Alexander Popov <alpopov@ptsecurity.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Lv Zheng <lv.zheng@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-doc@vger.kernel.org
Link: http://lkml.kernel.org/r/1466556426-32664-3-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/mm/init_64.c | 162 ++++++++++++++++++++++++++++++--------------------
 1 file changed, 96 insertions(+), 66 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bce2e5d..6714712 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -328,22 +328,30 @@ void __init cleanup_highmap(void)
 	}
 }
 
+/*
+ * Create PTE level page table mapping for physical addresses.
+ * It returns the last physical address mapped.
+ */
 static unsigned long __meminit
-phys_pte_init(pte_t *pte_page, unsigned long addr, unsigned long end,
+phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 	      pgprot_t prot)
 {
-	unsigned long pages = 0, next;
-	unsigned long last_map_addr = end;
+	unsigned long pages = 0, paddr_next;
+	unsigned long paddr_last = paddr_end;
+	pte_t *pte;
 	int i;
 
-	pte_t *pte = pte_page + pte_index(addr);
+	pte = pte_page + pte_index(paddr);
+	i = pte_index(paddr);
 
-	for (i = pte_index(addr); i < PTRS_PER_PTE; i++, addr = next, pte++) {
-		next = (addr & PAGE_MASK) + PAGE_SIZE;
-		if (addr >= end) {
+	for (; i < PTRS_PER_PTE; i++, paddr = paddr_next, pte++) {
+		paddr_next = (paddr & PAGE_MASK) + PAGE_SIZE;
+		if (paddr >= paddr_end) {
 			if (!after_bootmem &&
-			    !e820_any_mapped(addr & PAGE_MASK, next, E820_RAM) &&
-			    !e820_any_mapped(addr & PAGE_MASK, next, E820_RESERVED_KERN))
+			    !e820_any_mapped(paddr & PAGE_MASK, paddr_next,
+					     E820_RAM) &&
+			    !e820_any_mapped(paddr & PAGE_MASK, paddr_next,
+					     E820_RESERVED_KERN))
 				set_pte(pte, __pte(0));
 			continue;
 		}
@@ -361,37 +369,44 @@ phys_pte_init(pte_t *pte_page, unsigned long addr, unsigned long end,
 		}
 
 		if (0)
-			printk("   pte=%p addr=%lx pte=%016lx\n",
-			       pte, addr, pfn_pte(addr >> PAGE_SHIFT, PAGE_KERNEL).pte);
+			pr_info("   pte=%p addr=%lx pte=%016lx\n", pte, paddr,
+				pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL).pte);
 		pages++;
-		set_pte(pte, pfn_pte(addr >> PAGE_SHIFT, prot));
-		last_map_addr = (addr & PAGE_MASK) + PAGE_SIZE;
+		set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
+		paddr_last = (paddr & PAGE_MASK) + PAGE_SIZE;
 	}
 
 	update_page_count(PG_LEVEL_4K, pages);
 
-	return last_map_addr;
+	return paddr_last;
 }
 
+/*
+ * Create PMD level page table mapping for physical addresses. The virtual
+ * and physical address have to be aligned at this level.
+ * It returns the last physical address mapped.
+ */
 static unsigned long __meminit
-phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
+phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot)
 {
-	unsigned long pages = 0, next;
-	unsigned long last_map_addr = end;
+	unsigned long pages = 0, paddr_next;
+	unsigned long paddr_last = paddr_end;
 
-	int i = pmd_index(address);
+	int i = pmd_index(paddr);
 
-	for (; i < PTRS_PER_PMD; i++, address = next) {
-		pmd_t *pmd = pmd_page + pmd_index(address);
+	for (; i < PTRS_PER_PMD; i++, paddr = paddr_next) {
+		pmd_t *pmd = pmd_page + pmd_index(paddr);
 		pte_t *pte;
 		pgprot_t new_prot = prot;
 
-		next = (address & PMD_MASK) + PMD_SIZE;
-		if (address >= end) {
+		paddr_next = (paddr & PMD_MASK) + PMD_SIZE;
+		if (paddr >= paddr_end) {
 			if (!after_bootmem &&
-			    !e820_any_mapped(address & PMD_MASK, next, E820_RAM) &&
-			    !e820_any_mapped(address & PMD_MASK, next, E820_RESERVED_KERN))
+			    !e820_any_mapped(paddr & PMD_MASK, paddr_next,
+					     E820_RAM) &&
+			    !e820_any_mapped(paddr & PMD_MASK, paddr_next,
+					     E820_RESERVED_KERN))
 				set_pmd(pmd, __pmd(0));
 			continue;
 		}
@@ -400,8 +415,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
 			if (!pmd_large(*pmd)) {
 				spin_lock(&init_mm.page_table_lock);
 				pte = (pte_t *)pmd_page_vaddr(*pmd);
-				last_map_addr = phys_pte_init(pte, address,
-								end, prot);
+				paddr_last = phys_pte_init(pte, paddr,
+							   paddr_end, prot);
 				spin_unlock(&init_mm.page_table_lock);
 				continue;
 			}
@@ -420,7 +435,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
 			if (page_size_mask & (1 << PG_LEVEL_2M)) {
 				if (!after_bootmem)
 					pages++;
-				last_map_addr = next;
+				paddr_last = paddr_next;
 				continue;
 			}
 			new_prot = pte_pgprot(pte_clrhuge(*(pte_t *)pmd));
@@ -430,42 +445,49 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
 			pages++;
 			spin_lock(&init_mm.page_table_lock);
 			set_pte((pte_t *)pmd,
-				pfn_pte((address & PMD_MASK) >> PAGE_SHIFT,
+				pfn_pte((paddr & PMD_MASK) >> PAGE_SHIFT,
 					__pgprot(pgprot_val(prot) | _PAGE_PSE)));
 			spin_unlock(&init_mm.page_table_lock);
-			last_map_addr = next;
+			paddr_last = paddr_next;
 			continue;
 		}
 
 		pte = alloc_low_page();
-		last_map_addr = phys_pte_init(pte, address, end, new_prot);
+		paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot);
 
 		spin_lock(&init_mm.page_table_lock);
 		pmd_populate_kernel(&init_mm, pmd, pte);
 		spin_unlock(&init_mm.page_table_lock);
 	}
 	update_page_count(PG_LEVEL_2M, pages);
-	return last_map_addr;
+	return paddr_last;
 }
 
+/*
+ * Create PUD level page table mapping for physical addresses. The virtual
+ * and physical address have to be aligned at this level.
+ * It returns the last physical address mapped.
+ */
 static unsigned long __meminit
-phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
-			 unsigned long page_size_mask)
+phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
+	      unsigned long page_size_mask)
 {
-	unsigned long pages = 0, next;
-	unsigned long last_map_addr = end;
-	int i = pud_index(addr);
+	unsigned long pages = 0, paddr_next;
+	unsigned long paddr_last = paddr_end;
+	int i = pud_index(paddr);
 
-	for (; i < PTRS_PER_PUD; i++, addr = next) {
-		pud_t *pud = pud_page + pud_index(addr);
+	for (; i < PTRS_PER_PUD; i++, paddr = paddr_next) {
+		pud_t *pud = pud_page + pud_index(paddr);
 		pmd_t *pmd;
 		pgprot_t prot = PAGE_KERNEL;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (addr >= end) {
+		paddr_next = (paddr & PUD_MASK) + PUD_SIZE;
+		if (paddr >= paddr_end) {
 			if (!after_bootmem &&
-			    !e820_any_mapped(addr & PUD_MASK, next, E820_RAM) &&
-			    !e820_any_mapped(addr & PUD_MASK, next, E820_RESERVED_KERN))
+			    !e820_any_mapped(paddr & PUD_MASK, paddr_next,
+					     E820_RAM) &&
+			    !e820_any_mapped(paddr & PUD_MASK, paddr_next,
+					     E820_RESERVED_KERN))
 				set_pud(pud, __pud(0));
 			continue;
 		}
@@ -473,8 +495,10 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
 		if (pud_val(*pud)) {
 			if (!pud_large(*pud)) {
 				pmd = pmd_offset(pud, 0);
-				last_map_addr = phys_pmd_init(pmd, addr, end,
-							 page_size_mask, prot);
+				paddr_last = phys_pmd_init(pmd, paddr,
+							   paddr_end,
+							   page_size_mask,
+							   prot);
 				__flush_tlb_all();
 				continue;
 			}
@@ -493,7 +517,7 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
 			if (page_size_mask & (1 << PG_LEVEL_1G)) {
 				if (!after_bootmem)
 					pages++;
-				last_map_addr = next;
+				paddr_last = paddr_next;
 				continue;
 			}
 			prot = pte_pgprot(pte_clrhuge(*(pte_t *)pud));
@@ -503,16 +527,16 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
 			pages++;
 			spin_lock(&init_mm.page_table_lock);
 			set_pte((pte_t *)pud,
-				pfn_pte((addr & PUD_MASK) >> PAGE_SHIFT,
+				pfn_pte((paddr & PUD_MASK) >> PAGE_SHIFT,
 					PAGE_KERNEL_LARGE));
 			spin_unlock(&init_mm.page_table_lock);
-			last_map_addr = next;
+			paddr_last = paddr_next;
 			continue;
 		}
 
 		pmd = alloc_low_page();
-		last_map_addr = phys_pmd_init(pmd, addr, end, page_size_mask,
-					      prot);
+		paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
+					   page_size_mask, prot);
 
 		spin_lock(&init_mm.page_table_lock);
 		pud_populate(&init_mm, pud, pmd);
@@ -522,38 +546,44 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
 
 	update_page_count(PG_LEVEL_1G, pages);
 
-	return last_map_addr;
+	return paddr_last;
 }
 
+/*
+ * Create page table mapping for the physical memory for specific physical
+ * addresses. The virtual and physical addresses have to be aligned on PUD level
+ * down. It returns the last physical address mapped.
+ */
 unsigned long __meminit
-kernel_physical_mapping_init(unsigned long start,
-			     unsigned long end,
+kernel_physical_mapping_init(unsigned long paddr_start,
+			     unsigned long paddr_end,
 			     unsigned long page_size_mask)
 {
 	bool pgd_changed = false;
-	unsigned long next, last_map_addr = end;
-	unsigned long addr;
+	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
 
-	start = (unsigned long)__va(start);
-	end = (unsigned long)__va(end);
-	addr = start;
+	paddr_last = paddr_end;
+	vaddr = (unsigned long)__va(paddr_start);
+	vaddr_end = (unsigned long)__va(paddr_end);
+	vaddr_start = vaddr;
 
-	for (; start < end; start = next) {
-		pgd_t *pgd = pgd_offset_k(start);
+	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
+		pgd_t *pgd = pgd_offset_k(vaddr);
 		pud_t *pud;
 
-		next = (start & PGDIR_MASK) + PGDIR_SIZE;
+		vaddr_next = (vaddr & PGDIR_MASK) + PGDIR_SIZE;
 
 		if (pgd_val(*pgd)) {
 			pud = (pud_t *)pgd_page_vaddr(*pgd);
-			last_map_addr = phys_pud_init(pud, __pa(start),
-						 __pa(end), page_size_mask);
+			paddr_last = phys_pud_init(pud, __pa(vaddr),
+						   __pa(vaddr_end),
+						   page_size_mask);
 			continue;
 		}
 
 		pud = alloc_low_page();
-		last_map_addr = phys_pud_init(pud, __pa(start), __pa(end),
-						 page_size_mask);
+		paddr_last = phys_pud_init(pud, __pa(vaddr), __pa(vaddr_end),
+					   page_size_mask);
 
 		spin_lock(&init_mm.page_table_lock);
 		pgd_populate(&init_mm, pgd, pud);
@@ -562,11 +592,11 @@ kernel_physical_mapping_init(unsigned long start,
 	}
 
 	if (pgd_changed)
-		sync_global_pgds(addr, end - 1, 0);
+		sync_global_pgds(vaddr_start, vaddr_end - 1, 0);
 
 	__flush_tlb_all();
 
-	return last_map_addr;
+	return paddr_last;
 }
 
 #ifndef CONFIG_NUMA
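
The renamed loop is structurally identical at each level (PTE/PMD/PUD):
advance by the level's page size via paddr_next, and record paddr_last as
the last physical address actually mapped. Below is a minimal user-space
sketch of that pattern, with toy constants in place of the kernel's
(TOY_* and toy_pte_init are illustrative names, not kernel symbols; the
e820 checks and the init_mm locking are omitted):

#include <stdio.h>

#define TOY_PAGE_SHIFT   12UL
#define TOY_PAGE_SIZE    (1UL << TOY_PAGE_SHIFT)
#define TOY_PAGE_MASK    (~(TOY_PAGE_SIZE - 1))
#define TOY_PTRS_PER_PTE 8 /* tiny table; the real PTRS_PER_PTE is 512 */

/* Mirrors the shape of phys_pte_init(): map [paddr, paddr_end) into one
 * toy PTE page and return the last physical address mapped. */
static unsigned long
toy_pte_init(unsigned long *pte_page, unsigned long paddr,
             unsigned long paddr_end)
{
	unsigned long paddr_next;
	unsigned long paddr_last = paddr_end;
	int i = (int)((paddr >> TOY_PAGE_SHIFT) % TOY_PTRS_PER_PTE);

	for (; i < TOY_PTRS_PER_PTE; i++, paddr = paddr_next) {
		/* Next entry covers the following page-aligned address. */
		paddr_next = (paddr & TOY_PAGE_MASK) + TOY_PAGE_SIZE;
		if (paddr >= paddr_end) {
			pte_page[i] = 0;	/* clear entries past the end */
			continue;
		}
		pte_page[i] = paddr & TOY_PAGE_MASK;	/* "map" one page */
		paddr_last = (paddr & TOY_PAGE_MASK) + TOY_PAGE_SIZE;
	}
	return paddr_last;
}

int main(void)
{
	unsigned long ptes[TOY_PTRS_PER_PTE] = { 0 };
	unsigned long last = toy_pte_init(ptes, 0x2000, 0x5000);

	printf("last mapped paddr: %#lx\n", last);	/* prints 0x5000 */
	return 0;
}

The same bookkeeping appears one level up in phys_pmd_init() and
phys_pud_init(), just with PMD_SIZE/PUD_SIZE strides; the return value
lets each level report how far the mapping actually got.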
