* [PATCH v2 1/2] x86/KASLR: Fix physical memory calculation on KASLR memory randomization
@ 2016-08-09 16:35 ` Thomas Garnier
  0 siblings, 0 replies; 8+ messages in thread
From: Thomas Garnier @ 2016-08-09 16:35 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Borislav Petkov,
	Joerg Roedel, Dave Young, Rafael J . Wysocki, Lv Zheng,
	Thomas Garnier, Baoquan He, Dave Hansen, Mark Salter,
	Aleksey Makarov, Kees Cook, Andrew Morton, Christian Borntraeger,
	Fabian Frederick, Toshi Kani, Dan Williams
  Cc: x86, linux-kernel, kernel-hardening

Initialize KASLR memory randomization after max_pfn is initialized. Also
ensure the size is rounded up. The previous code could have created
problems on machines with more than 1 TB of memory at certain random
addresses.

Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-by: Thomas Garnier <thgarnie@google.com>
---
Based on next-20160805
---
 arch/x86/kernel/setup.c | 8 ++++++--
 arch/x86/mm/kaslr.c     | 2 +-
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index bcabb88..dc50644 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -936,8 +936,6 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.oem.arch_setup();
 
-	kernel_randomize_memory();
-
 	iomem_resource.end = (1ULL << boot_cpu_data.x86_phys_bits) - 1;
 	setup_memory_map();
 	parse_setup_data();
@@ -1055,6 +1053,12 @@ void __init setup_arch(char **cmdline_p)
 
 	max_possible_pfn = max_pfn;
 
+	/*
+	 * Define random base addresses for memory sections after max_pfn is
+	 * defined and before each memory section base is used.
+	 */
+	kernel_randomize_memory();
+
 #ifdef CONFIG_X86_32
 	/* max_low_pfn get updated here */
 	find_low_pfn_range();
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 26dccd6..ec8654f 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -97,7 +97,7 @@ void __init kernel_randomize_memory(void)
 	 * add padding if needed (especially for memory hotplug support).
 	 */
 	BUG_ON(kaslr_regions[0].base != &page_offset_base);
-	memory_tb = ((max_pfn << PAGE_SHIFT) >> TB_SHIFT) +
+	memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
 		CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;
 
 	/* Adapt phyiscal memory region size based on available memory */
-- 
2.8.0.rc3.226.g39d4020

^ permalink raw reply related	[flat|nested] 8+ messages in thread


* [PATCH v2 2/2] x86/KASLR: Increase BRK pages for KASLR memory randomization
  2016-08-09 16:35 ` [kernel-hardening] " Thomas Garnier
@ 2016-08-09 16:35   ` Thomas Garnier
  -1 siblings, 0 replies; 8+ messages in thread
From: Thomas Garnier @ 2016-08-09 16:35 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Borislav Petkov,
	Joerg Roedel, Dave Young, Rafael J . Wysocki, Lv Zheng,
	Thomas Garnier, Baoquan He, Dave Hansen, Mark Salter,
	Aleksey Makarov, Kees Cook, Andrew Morton, Christian Borntraeger,
	Fabian Frederick, Toshi Kani, Dan Williams
  Cc: x86, linux-kernel, kernel-hardening

The default implementation expects that at most 6 pages are needed for
low page allocations. If KASLR memory randomization is enabled, the
worst case e820 layout would require 12 pages (with no large pages),
due to the PUD-level randomization and the variable e820 memory layout.

This bug was found while doing extensive testing of KASLR memory
randomization on different types of hardware.

Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-by: Thomas Garnier <thgarnie@google.com>
---
Based on next-20160805
---
 arch/x86/mm/init.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 6209289..796e7af 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -122,8 +122,18 @@ __ref void *alloc_low_pages(unsigned int num)
 	return __va(pfn << PAGE_SHIFT);
 }
 
-/* need 3 4k for initial PMD_SIZE,  3 4k for 0-ISA_END_ADDRESS */
-#define INIT_PGT_BUF_SIZE	(6 * PAGE_SIZE)
+/*
+ * By default need 3 4k for initial PMD_SIZE,  3 4k for 0-ISA_END_ADDRESS.
+ * With KASLR memory randomization, depending on the machine e860 memory layout
+ * and the PUD alignement. We may need twice more pages when KASLR memoy
+ * randomization is enabled.
+ */
+#ifndef CONFIG_RANDOMIZE_MEMORY
+#define INIT_PGD_PAGE_COUNT      6
+#else
+#define INIT_PGD_PAGE_COUNT      12
+#endif
+#define INIT_PGT_BUF_SIZE	(INIT_PGD_PAGE_COUNT * PAGE_SIZE)
 RESERVE_BRK(early_pgt_alloc, INIT_PGT_BUF_SIZE);
 void  __init early_alloc_pgt_buf(void)
 {
-- 
2.8.0.rc3.226.g39d4020

^ permalink raw reply related	[flat|nested] 8+ messages in thread


* Re: [PATCH v2 2/2] x86/KASLR: Increase BRK pages for KASLR memory randomization
  2016-08-09 16:35   ` [kernel-hardening] " Thomas Garnier
@ 2016-08-09 16:54     ` Borislav Petkov
  -1 siblings, 0 replies; 8+ messages in thread
From: Borislav Petkov @ 2016-08-09 16:54 UTC (permalink / raw)
  To: Thomas Garnier
  Cc: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Joerg Roedel,
	Dave Young, Rafael J . Wysocki, Lv Zheng, Baoquan He,
	Dave Hansen, Mark Salter, Aleksey Makarov, Kees Cook,
	Andrew Morton, Christian Borntraeger, Fabian Frederick,
	Toshi Kani, Dan Williams, x86, linux-kernel, kernel-hardening

On Tue, Aug 09, 2016 at 09:35:54AM -0700, Thomas Garnier wrote:
> The default implementation expects that at most 6 pages are needed for
> low page allocations. If KASLR memory randomization is enabled, the
> worst case e820 layout would require 12 pages (with no large pages),
> due to the PUD-level randomization and the variable e820 memory layout.
> 
> This bug was found while doing extensive testing of KASLR memory
> randomization on different types of hardware.
> 
> Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
> Signed-off-by: Thomas Garnier <thgarnie@google.com>
> ---
> Based on next-20160805
> ---
>  arch/x86/mm/init.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 6209289..796e7af 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -122,8 +122,18 @@ __ref void *alloc_low_pages(unsigned int num)
>  	return __va(pfn << PAGE_SHIFT);
>  }
>  
> -/* need 3 4k for initial PMD_SIZE,  3 4k for 0-ISA_END_ADDRESS */
> -#define INIT_PGT_BUF_SIZE	(6 * PAGE_SIZE)
> +/*
> + * By default need 3 4k for initial PMD_SIZE,  3 4k for 0-ISA_END_ADDRESS.
> + * With KASLR memory randomization, depending on the machine e860 memory layout
> + * and the PUD alignement. We may need twice more pages when KASLR memoy
> + * randomization is enabled.

Can you please integrate all review feedback before you send your next
versions?

There's no "e860" thing and
s/memoy/memory/ and
s/alignement/alignment/

IOW, just integrate a spellchecker into your workflow.

Thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
--

^ permalink raw reply	[flat|nested] 8+ messages in thread


* Re: [PATCH v2 2/2] x86/KASLR: Increase BRK pages for KASLR memory randomization
  2016-08-09 16:54     ` [kernel-hardening] " Borislav Petkov
@ 2016-08-09 17:02       ` Thomas Garnier
  -1 siblings, 0 replies; 8+ messages in thread
From: Thomas Garnier @ 2016-08-09 17:02 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Joerg Roedel,
	Dave Young, Rafael J . Wysocki, Lv Zheng, Baoquan He,
	Dave Hansen, Mark Salter, Aleksey Makarov, Kees Cook,
	Andrew Morton, Christian Borntraeger, Fabian Frederick,
	Toshi Kani, Dan Williams, the arch/x86 maintainers, LKML,
	Kernel Hardening

On Tue, Aug 9, 2016 at 9:54 AM, Borislav Petkov <bp@suse.de> wrote:
> On Tue, Aug 09, 2016 at 09:35:54AM -0700, Thomas Garnier wrote:
>> The default implementation expects that at most 6 pages are needed for
>> low page allocations. If KASLR memory randomization is enabled, the
>> worst case e820 layout would require 12 pages (with no large pages),
>> due to the PUD-level randomization and the variable e820 memory layout.
>>
>> This bug was found while doing extensive testing of KASLR memory
>> randomization on different types of hardware.
>>
>> Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
>> Signed-off-by: Thomas Garnier <thgarnie@google.com>
>> ---
>> Based on next-20160805
>> ---
>>  arch/x86/mm/init.c | 14 ++++++++++++--
>>  1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index 6209289..796e7af 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -122,8 +122,18 @@ __ref void *alloc_low_pages(unsigned int num)
>>       return __va(pfn << PAGE_SHIFT);
>>  }
>>
>> -/* need 3 4k for initial PMD_SIZE,  3 4k for 0-ISA_END_ADDRESS */
>> -#define INIT_PGT_BUF_SIZE    (6 * PAGE_SIZE)
>> +/*
>> + * By default need 3 4k for initial PMD_SIZE,  3 4k for 0-ISA_END_ADDRESS.
>> + * With KASLR memory randomization, depending on the machine e860 memory layout
>> + * and the PUD alignement. We may need twice more pages when KASLR memoy
>> + * randomization is enabled.
>
> Can you please integrate all review feedback before you send your next
> versions?
>
> There's no "e860" thing and
> s/memoy/memory/ and
> s/alignement/alignment/
>
> IOW, just integrate a spellchecker into your workflow.

Will do, thanks.

>
> Thanks.
>
> --
> Regards/Gruss,
>     Boris.
>
> ECO tip #101: Trim your mails when you reply.
>
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
> --

^ permalink raw reply	[flat|nested] 8+ messages in thread

