* [PATCH] x86, kaslr: propagate base load address calculation
@ 2015-02-10 13:17 ` Jiri Kosina
  0 siblings, 0 replies; 38+ messages in thread
From: Jiri Kosina @ 2015-02-10 13:17 UTC (permalink / raw)
  To: Kees Cook, H. Peter Anvin; +Cc: linux-kernel, live-patching, linux-mm, x86

Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes 
the base address for modules be unconditionally randomized whenever 
CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't present on 
the command line.

This is not consistent with how choose_kernel_location() decides whether 
it will randomize the kernel load base.

Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option is 
explicitly specified on the kernel command line), which makes the state 
space larger than what the module loader is looking at. IOW, 
CONFIG_HIBERNATION && CONFIG_RANDOMIZE_BASE is a valid configuration; 
kASLR wouldn't be applied by default in that case, but the module loader 
is not aware of that.

Instead of fixing the logic in module.c, this patch takes a more generic 
approach and exposes a __KERNEL_OFFSET macro, which calculates the real 
offset that has been established by choose_kernel_location() during boot. 
This can be used later by other kernel code as well (such as, but not 
limited to, live patching).

The OOPS offset dumper and the module loader are converted so that they 
make use of this macro as well.
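
As an aside, illustration only and not part of the patch itself, a minimal
sketch of how a hypothetical consumer such as live-patching code might use
__KERNEL_OFFSET to translate a link-time text address into its runtime
address (the helper name is invented for the example):

/*
 * Illustrative only: 'link_addr' is a kernel text address as seen in
 * vmlinux/System.map; at runtime it is shifted by the kASLR slide that
 * choose_kernel_location() picked during boot.
 */
static unsigned long example_runtime_address(unsigned long link_addr)
{
	return link_addr + __KERNEL_OFFSET;
}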

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
---
 arch/x86/include/asm/page_types.h |  4 ++++
 arch/x86/kernel/module.c          | 10 +---------
 arch/x86/kernel/setup.c           |  4 ++--
 3 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index f97fbe3..7f18eaf 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -46,6 +46,10 @@
 
 #ifndef __ASSEMBLY__
 
+/* Return kASLR relocation offset */
+extern char _text[];
+#define __KERNEL_OFFSET ((unsigned long)&_text - __START_KERNEL)
+
 extern int devmem_is_allowed(unsigned long pagenr);
 
 extern unsigned long max_low_pfn_mapped;
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..d236bd2 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -46,21 +46,13 @@ do {							\
 
 #ifdef CONFIG_RANDOMIZE_BASE
 static unsigned long module_load_offset;
-static int randomize_modules = 1;
 
 /* Mutex protects the module_load_offset. */
 static DEFINE_MUTEX(module_kaslr_mutex);
 
-static int __init parse_nokaslr(char *p)
-{
-	randomize_modules = 0;
-	return 0;
-}
-early_param("nokaslr", parse_nokaslr);
-
 static unsigned long int get_module_load_offset(void)
 {
-	if (randomize_modules) {
+	if (__KERNEL_OFFSET) {
 		mutex_lock(&module_kaslr_mutex);
 		/*
 		 * Calculate the module_load_offset the first time this
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index c4648ada..08124a1 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -833,8 +833,8 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
 {
 	pr_emerg("Kernel Offset: 0x%lx from 0x%lx "
 		 "(relocation range: 0x%lx-0x%lx)\n",
-		 (unsigned long)&_text - __START_KERNEL, __START_KERNEL,
-		 __START_KERNEL_map, MODULES_VADDR-1);
+		 __KERNEL_OFFSET, __START_KERNEL, __START_KERNEL_map,
+		 MODULES_VADDR-1);
 
 	return 0;
 }

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply related	[flat|nested] 38+ messages in thread


* Re: [PATCH] x86, kaslr: propagate base load address calculation
  2015-02-10 13:17 ` Jiri Kosina
@ 2015-02-10 17:25   ` Kees Cook
  -1 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2015-02-10 17:25 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, Feb 10, 2015 at 5:17 AM, Jiri Kosina <jkosina@suse.cz> wrote:
> Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes
> the base address for modules be unconditionally randomized whenever
> CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't present on
> the command line.
>
> This is not consistent with how choose_kernel_location() decides whether
> it will randomize the kernel load base.
>
> Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option is
> explicitly specified on the kernel command line), which makes the state
> space larger than what the module loader is looking at. IOW,
> CONFIG_HIBERNATION && CONFIG_RANDOMIZE_BASE is a valid configuration;
> kASLR wouldn't be applied by default in that case, but the module loader
> is not aware of that.
>
> Instead of fixing the logic in module.c, this patch takes a more generic
> approach and exposes a __KERNEL_OFFSET macro, which calculates the real
> offset that has been established by choose_kernel_location() during boot.
> This can be used later by other kernel code as well (such as, but not
> limited to, live patching).
>
> The OOPS offset dumper and the module loader are converted so that they
> make use of this macro as well.
>
> Signed-off-by: Jiri Kosina <jkosina@suse.cz>

Ah, yes! This is a good cleanup. Thanks! I do see, however, one
corner case remaining: kASLR randomizing to a 0 offset. This will force
module ASLR off, which I think is a mistake. Perhaps we need to export
the kaslr state as a separate item to be checked directly, instead of
using __KERNEL_OFFSET?
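
To make the corner case concrete, a tiny userspace toy (illustration only,
not kernel code): because the slide value itself is reused as the flag, a
genuine zero slide is indistinguishable from kASLR being disabled, and both
cases below print "not randomized".

#include <stdio.h>

/*
 * Toy model: in the kernel the value would be __KERNEL_OFFSET, i.e.
 * (unsigned long)&_text - __START_KERNEL, and the v1 check uses it
 * directly as the "randomize modules?" decision.
 */
static const char *module_aslr(unsigned long kernel_offset)
{
	return kernel_offset ? "randomized" : "not randomized";
}

int main(void)
{
	/* Case 1: kASLR disabled (e.g. "nokaslr"), so the offset is 0.   */
	/* Case 2: kASLR enabled, but the random slide came out as 0.     */
	printf("kASLR disabled      -> modules %s\n", module_aslr(0));
	printf("kASLR slide of zero -> modules %s\n", module_aslr(0));
	return 0;
}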

-Kees

> ---
>  arch/x86/include/asm/page_types.h |  4 ++++
>  arch/x86/kernel/module.c          | 10 +---------
>  arch/x86/kernel/setup.c           |  4 ++--
>  3 files changed, 7 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
> index f97fbe3..7f18eaf 100644
> --- a/arch/x86/include/asm/page_types.h
> +++ b/arch/x86/include/asm/page_types.h
> @@ -46,6 +46,10 @@
>
>  #ifndef __ASSEMBLY__
>
> +/* Return kASLR relocation offset */
> +extern char _text[];
> +#define __KERNEL_OFFSET ((unsigned long)&_text - __START_KERNEL)
> +
>  extern int devmem_is_allowed(unsigned long pagenr);
>
>  extern unsigned long max_low_pfn_mapped;
> diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
> index e69f988..d236bd2 100644
> --- a/arch/x86/kernel/module.c
> +++ b/arch/x86/kernel/module.c
> @@ -46,21 +46,13 @@ do {                                                        \
>
>  #ifdef CONFIG_RANDOMIZE_BASE
>  static unsigned long module_load_offset;
> -static int randomize_modules = 1;
>
>  /* Mutex protects the module_load_offset. */
>  static DEFINE_MUTEX(module_kaslr_mutex);
>
> -static int __init parse_nokaslr(char *p)
> -{
> -       randomize_modules = 0;
> -       return 0;
> -}
> -early_param("nokaslr", parse_nokaslr);
> -
>  static unsigned long int get_module_load_offset(void)
>  {
> -       if (randomize_modules) {
> +       if (__KERNEL_OFFSET) {
>                 mutex_lock(&module_kaslr_mutex);
>                 /*
>                  * Calculate the module_load_offset the first time this
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index c4648ada..08124a1 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -833,8 +833,8 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
>  {
>         pr_emerg("Kernel Offset: 0x%lx from 0x%lx "
>                  "(relocation range: 0x%lx-0x%lx)\n",
> -                (unsigned long)&_text - __START_KERNEL, __START_KERNEL,
> -                __START_KERNEL_map, MODULES_VADDR-1);
> +                __KERNEL_OFFSET, __START_KERNEL, __START_KERNEL_map,
> +                MODULES_VADDR-1);
>
>         return 0;
>  }
>
> --
> Jiri Kosina
> SUSE Labs



-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 38+ messages in thread


* Re: [PATCH] x86, kaslr: propagate base load address calculation
  2015-02-10 17:25   ` Kees Cook
@ 2015-02-10 23:07     ` Jiri Kosina
  -1 siblings, 0 replies; 38+ messages in thread
From: Jiri Kosina @ 2015-02-10 23:07 UTC (permalink / raw)
  To: Kees Cook; +Cc: H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, 10 Feb 2015, Kees Cook wrote:

> > Instead of fixing the logic in module.c, this patch takes a more generic
> > approach and exposes a __KERNEL_OFFSET macro, which calculates the real
> > offset that has been established by choose_kernel_location() during boot.
> > This can be used later by other kernel code as well (such as, but not
> > limited to, live patching).
> >
> > The OOPS offset dumper and the module loader are converted so that they
> > make use of this macro as well.
> >
> > Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> 
> Ah, yes! This is a good cleanup. Thanks! I do see, however, one
> corner case remaining: kASLR randomizing to a 0 offset. This will force
> module ASLR off, which I think is a mistake. 

Ah, right, good point. I thought that zero-randomization is not possible, 
but looking closely, it is.

> Perhaps we need to export the kaslr state as a separate item to be 
> checked directly, instead of using __KERNEL_OFFSET?

I wanted to avoid sharing variables between the compressed loader and the 
rest of the kernel, but if that's what you prefer, I can do it.

Alternatively, we can forbid zero-sized randomization and always enforce 
at least some minimal offset in case zero would otherwise be chosen.

I think that'd be even more bulletproof against any future changes, as it 
clearly and immediately distinguishes between the 'disabled' and 
'randomized' states, and the loss of entropy is negligible.

Let me know which of the two you'd prefer; I'll then send you a 
corresponding patch, as I don't have a strong opinion either way.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 38+ messages in thread


* Re: [PATCH] x86, kaslr: propagate base load address calculation
  2015-02-10 23:07     ` Jiri Kosina
@ 2015-02-10 23:13       ` Jiri Kosina
  -1 siblings, 0 replies; 38+ messages in thread
From: Jiri Kosina @ 2015-02-10 23:13 UTC (permalink / raw)
  To: Kees Cook; +Cc: H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Wed, 11 Feb 2015, Jiri Kosina wrote:

> Alternatively, we can forbid zero-sized randomization, and always enforce 
> at least some minimal offset to be chosen in case zero would be chosen.

Okay, I see, that might not always be possible, depending on the memory 
map layout.

So I'll just send you a respin of my previous patch tomorrow that will, 
instead of defining __KERNEL_OFFSET as a particular value, introduce a 
simple global flag indicating whether kASLR is in place or not.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 38+ messages in thread


* [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-10 23:13       ` Jiri Kosina
@ 2015-02-13 15:04         ` Jiri Kosina
  -1 siblings, 0 replies; 38+ messages in thread
From: Jiri Kosina @ 2015-02-13 15:04 UTC (permalink / raw)
  To: Kees Cook, H. Peter Anvin; +Cc: LKML, live-patching, Linux-MM, x86

Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes 
the base address for modules be unconditionally randomized whenever 
CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't present on 
the command line.

This is not consistent with how choose_kernel_location() decides whether 
it will randomize the kernel load base.

Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option is 
explicitly specified on the kernel command line), which makes the state 
space larger than what the module loader is looking at. IOW, 
CONFIG_HIBERNATION && CONFIG_RANDOMIZE_BASE is a valid configuration; 
kASLR wouldn't be applied by default in that case, but the module loader 
is not aware of that.

Instead of fixing the logic in module.c, this patch takes a more generic 
approach. It introduces a new bootparam setup_data type, SETUP_KASLR, and 
uses it to pass along whether kASLR has been applied during kernel 
decompression, and sets a global 'kaslr_enabled' variable accordingly, so 
that any kernel code (module loading, live patching, ...) can make 
decisions based on its value.

The x86 module loader is converted to make use of this flag.
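
For illustration, one possible way the decompressed kernel could consume
such a setup_data entry, assuming early_memremap()/early_memunmap() are
usable at parse_setup_data() time as they are for the other setup_data
types (a sketch, not taken from the patch below):

/*
 * Sketch only: map the SETUP_KASLR entry plus its one byte of payload
 * and copy the enable flag written by add_kaslr_setup_data() in the
 * compressed loader.
 */
static void __init parse_kaslr_setup(u64 pa_data, u32 data_len)
{
	struct setup_data *data;

	data = early_memremap(pa_data, sizeof(*data) + 1);
	kaslr_enabled = data->data[0];
	early_memunmap(data, sizeof(*data) + 1);
}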

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
---

v1 -> v2:

Originally I just calculated the fact on the fly from the difference 
between __START_KERNEL and &_text, but Kees correctly pointed out that this 
doesn't properly catch the case when the offset is randomized to zero. I 
don't see a better option for propagating the information from 
choose_kernel_location() to the decompressed kernel than introducing a new 
bootparam setup type. Comments welcome.

 arch/x86/boot/compressed/aslr.c       | 34 +++++++++++++++++++++++++++++++++-
 arch/x86/boot/compressed/misc.c       |  3 ++-
 arch/x86/boot/compressed/misc.h       |  6 ++++--
 arch/x86/include/asm/page_types.h     |  3 +++
 arch/x86/include/uapi/asm/bootparam.h |  1 +
 arch/x86/kernel/module.c              | 11 ++---------
 arch/x86/kernel/setup.c               | 10 ++++++++++
 7 files changed, 55 insertions(+), 13 deletions(-)

diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index bb13763..d9d1da9 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -14,6 +14,13 @@
 static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
 		LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
 
+struct kaslr_setup_data {
+        __u64 next;
+        __u32 type;
+        __u32 len;
+        __u8 data[1];
+} kaslr_setup_data;
+
 #define I8254_PORT_CONTROL	0x43
 #define I8254_PORT_COUNTER0	0x40
 #define I8254_CMD_READBACK	0xC0
@@ -295,7 +302,29 @@ static unsigned long find_random_addr(unsigned long minimum,
 	return slots_fetch_random();
 }
 
-unsigned char *choose_kernel_location(unsigned char *input,
+static void add_kaslr_setup_data(struct boot_params *params, __u8 enabled)
+{
+	struct setup_data *data;
+
+	kaslr_setup_data.type = SETUP_KASLR;
+	kaslr_setup_data.len = 1;
+	kaslr_setup_data.next = 0;
+	kaslr_setup_data.data[0] = enabled;
+
+	data = (struct setup_data *)(unsigned long)params->hdr.setup_data;
+
+	while (data && data->next)
+		data = (struct setup_data *)(unsigned long)data->next;
+
+	if (data)
+		data->next = (unsigned long)&kaslr_setup_data;
+	else
+		params->hdr.setup_data = (unsigned long)&kaslr_setup_data;
+
+}
+
+unsigned char *choose_kernel_location(struct boot_params *params,
+				      unsigned char *input,
 				      unsigned long input_size,
 				      unsigned char *output,
 				      unsigned long output_size)
@@ -306,14 +335,17 @@ unsigned char *choose_kernel_location(unsigned char *input,
 #ifdef CONFIG_HIBERNATION
 	if (!cmdline_find_option_bool("kaslr")) {
 		debug_putstr("KASLR disabled by default...\n");
+		add_kaslr_setup_data(params, 0);
 		goto out;
 	}
 #else
 	if (cmdline_find_option_bool("nokaslr")) {
 		debug_putstr("KASLR disabled by cmdline...\n");
+		add_kaslr_setup_data(params, 0);
 		goto out;
 	}
 #endif
+	add_kaslr_setup_data(params, 1);
 
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init((unsigned long)input, input_size,
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index dcc1c53..5aecf56 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -399,7 +399,8 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
 	 * the entire decompressed kernel plus relocation table, or the
 	 * entire decompressed kernel plus .bss and .brk sections.
 	 */
-	output = choose_kernel_location(input_data, input_len, output,
+	output = choose_kernel_location(real_mode, input_data, input_len,
+					output,
 					output_len > run_size ? output_len
 							      : run_size);
 
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 24e3e56..6d67307 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -56,7 +56,8 @@ int cmdline_find_option_bool(const char *option);
 
 #if CONFIG_RANDOMIZE_BASE
 /* aslr.c */
-unsigned char *choose_kernel_location(unsigned char *input,
+unsigned char *choose_kernel_location(struct boot_params *params,
+				      unsigned char *input,
 				      unsigned long input_size,
 				      unsigned char *output,
 				      unsigned long output_size);
@@ -64,7 +65,8 @@ unsigned char *choose_kernel_location(unsigned char *input,
 bool has_cpuflag(int flag);
 #else
 static inline
-unsigned char *choose_kernel_location(unsigned char *input,
+unsigned char *choose_kernel_location(struct boot_params *params,
+				      unsigned char *input,
 				      unsigned long input_size,
 				      unsigned char *output,
 				      unsigned long output_size)
diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index f97fbe3..3d43ce3 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -3,6 +3,7 @@
 
 #include <linux/const.h>
 #include <linux/types.h>
+#include <asm/bootparam.h>
 
 /* PAGE_SHIFT determines the page size */
 #define PAGE_SHIFT	12
@@ -51,6 +52,8 @@ extern int devmem_is_allowed(unsigned long pagenr);
 extern unsigned long max_low_pfn_mapped;
 extern unsigned long max_pfn_mapped;
 
+extern bool kaslr_enabled;
+
 static inline phys_addr_t get_max_mapped(void)
 {
 	return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h
index 225b098..44e6dd7 100644
--- a/arch/x86/include/uapi/asm/bootparam.h
+++ b/arch/x86/include/uapi/asm/bootparam.h
@@ -7,6 +7,7 @@
 #define SETUP_DTB			2
 #define SETUP_PCI			3
 #define SETUP_EFI			4
+#define SETUP_KASLR			5
 
 /* ram_size flags */
 #define RAMDISK_IMAGE_START_MASK	0x07FF
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index e69f988..c3c59a3 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -32,6 +32,7 @@
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
+#include <asm/page_types.h>
 
 #if 0
 #define DEBUGP(fmt, ...)				\
@@ -46,21 +47,13 @@ do {							\
 
 #ifdef CONFIG_RANDOMIZE_BASE
 static unsigned long module_load_offset;
-static int randomize_modules = 1;
 
 /* Mutex protects the module_load_offset. */
 static DEFINE_MUTEX(module_kaslr_mutex);
 
-static int __init parse_nokaslr(char *p)
-{
-	randomize_modules = 0;
-	return 0;
-}
-early_param("nokaslr", parse_nokaslr);
-
 static unsigned long int get_module_load_offset(void)
 {
-	if (randomize_modules) {
+	if (kaslr_enabled) {
 		mutex_lock(&module_kaslr_mutex);
 		/*
 		 * Calculate the module_load_offset the first time this
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ab4734e..78c91bb 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -121,6 +121,8 @@
 unsigned long max_low_pfn_mapped;
 unsigned long max_pfn_mapped;
 
+bool __read_mostly kaslr_enabled = false;
+
 #ifdef CONFIG_DMI
 RESERVE_BRK(dmi_alloc, 65536);
 #endif
@@ -424,6 +426,11 @@ static void __init reserve_initrd(void)
 }
 #endif /* CONFIG_BLK_DEV_INITRD */
 
+static void __init parse_kaslr_setup(u64 pa_data, u32 data_len)
+{
+	kaslr_enabled = (bool)(pa_data + sizeof(struct setup_data));
+}
+
 static void __init parse_setup_data(void)
 {
 	struct setup_data *data;
@@ -451,6 +458,9 @@ static void __init parse_setup_data(void)
 		case SETUP_EFI:
 			parse_efi_setup(pa_data, data_len);
 			break;
+		case SETUP_KASLR:
+			parse_kaslr_setup(pa_data, data_len);
+			break;
 		default:
 			break;
 		}
-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply related	[flat|nested] 38+ messages in thread


* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-13 15:04         ` Jiri Kosina
@ 2015-02-13 17:49           ` Kees Cook
  -1 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2015-02-13 17:49 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Fri, Feb 13, 2015 at 7:04 AM, Jiri Kosina <jkosina@suse.cz> wrote:
> Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes
> the base address for modules be unconditionally randomized whenever
> CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't present on
> the command line.
>
> This is not consistent with how choose_kernel_location() decides whether
> it will randomize the kernel load base.
>
> Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option is
> explicitly specified on the kernel command line), which makes the state
> space larger than what the module loader is looking at. IOW,
> CONFIG_HIBERNATION && CONFIG_RANDOMIZE_BASE is a valid configuration;
> kASLR wouldn't be applied by default in that case, but the module loader
> is not aware of that.
>
> Instead of fixing the logic in module.c, this patch takes a more generic
> approach. It introduces a new bootparam setup_data type, SETUP_KASLR, and
> uses it to pass along whether kASLR has been applied during kernel
> decompression, and sets a global 'kaslr_enabled' variable accordingly, so
> that any kernel code (module loading, live patching, ...) can make
> decisions based on its value.
>
> The x86 module loader is converted to make use of this flag.
>
> Signed-off-by: Jiri Kosina <jkosina@suse.cz>

Thanks for working on this! If others are happy with the setup_data
approach, I think this is fine. My only concern is possible confusion over
seeing a SETUP_KASLR entry that was added by a boot loader.

Another way to handle it might be to do some kind of relocs-like
poking of a value into the decompressed kernel?

> ---
>
> v1 -> v2:
>
> Originally I just calculated the fact on the fly from the difference
> between __START_KERNEL and &_text, but Kees correctly pointed out that this
> doesn't properly catch the case when the offset is randomized to zero. I
> don't see a better option for propagating the information from
> choose_kernel_location() to the decompressed kernel than introducing a new
> bootparam setup type. Comments welcome.
>
>  arch/x86/boot/compressed/aslr.c       | 34 +++++++++++++++++++++++++++++++++-
>  arch/x86/boot/compressed/misc.c       |  3 ++-
>  arch/x86/boot/compressed/misc.h       |  6 ++++--
>  arch/x86/include/asm/page_types.h     |  3 +++
>  arch/x86/include/uapi/asm/bootparam.h |  1 +
>  arch/x86/kernel/module.c              | 11 ++---------
>  arch/x86/kernel/setup.c               | 10 ++++++++++
>  7 files changed, 55 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
> index bb13763..d9d1da9 100644
> --- a/arch/x86/boot/compressed/aslr.c
> +++ b/arch/x86/boot/compressed/aslr.c
> @@ -14,6 +14,13 @@
>  static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
>                 LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
>
> +struct kaslr_setup_data {

Should this be "static"?

> +        __u64 next;
> +        __u32 type;
> +        __u32 len;
> +        __u8 data[1];
> +} kaslr_setup_data;
> +
>  #define I8254_PORT_CONTROL     0x43
>  #define I8254_PORT_COUNTER0    0x40
>  #define I8254_CMD_READBACK     0xC0
> @@ -295,7 +302,29 @@ static unsigned long find_random_addr(unsigned long minimum,
>         return slots_fetch_random();
>  }
>
> -unsigned char *choose_kernel_location(unsigned char *input,
> +static void add_kaslr_setup_data(struct boot_params *params, __u8 enabled)
> +{
> +       struct setup_data *data;
> +
> +       kaslr_setup_data.type = SETUP_KASLR;
> +       kaslr_setup_data.len = 1;
> +       kaslr_setup_data.next = 0;
> +       kaslr_setup_data.data[0] = enabled;
> +
> +       data = (struct setup_data *)(unsigned long)params->hdr.setup_data;
> +
> +       while (data && data->next)
> +               data = (struct setup_data *)(unsigned long)data->next;
> +
> +       if (data)
> +               data->next = (unsigned long)&kaslr_setup_data;
> +       else
> +               params->hdr.setup_data = (unsigned long)&kaslr_setup_data;
> +
> +}
> +
> +unsigned char *choose_kernel_location(struct boot_params *params,
> +                                     unsigned char *input,
>                                       unsigned long input_size,
>                                       unsigned char *output,
>                                       unsigned long output_size)
> @@ -306,14 +335,17 @@ unsigned char *choose_kernel_location(unsigned char *input,
>  #ifdef CONFIG_HIBERNATION
>         if (!cmdline_find_option_bool("kaslr")) {
>                 debug_putstr("KASLR disabled by default...\n");
> +               add_kaslr_setup_data(params, 0);
>                 goto out;
>         }
>  #else
>         if (cmdline_find_option_bool("nokaslr")) {
>                 debug_putstr("KASLR disabled by cmdline...\n");
> +               add_kaslr_setup_data(params, 0);
>                 goto out;
>         }
>  #endif
> +       add_kaslr_setup_data(params, 1);
>
>         /* Record the various known unsafe memory ranges. */
>         mem_avoid_init((unsigned long)input, input_size,
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index dcc1c53..5aecf56 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -399,7 +399,8 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
>          * the entire decompressed kernel plus relocation table, or the
>          * entire decompressed kernel plus .bss and .brk sections.
>          */
> -       output = choose_kernel_location(input_data, input_len, output,
> +       output = choose_kernel_location(real_mode, input_data, input_len,
> +                                       output,
>                                         output_len > run_size ? output_len
>                                                               : run_size);
>
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index 24e3e56..6d67307 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -56,7 +56,8 @@ int cmdline_find_option_bool(const char *option);
>
>  #if CONFIG_RANDOMIZE_BASE
>  /* aslr.c */
> -unsigned char *choose_kernel_location(unsigned char *input,
> +unsigned char *choose_kernel_location(struct boot_params *params,
> +                                     unsigned char *input,
>                                       unsigned long input_size,
>                                       unsigned char *output,
>                                       unsigned long output_size);
> @@ -64,7 +65,8 @@ unsigned char *choose_kernel_location(unsigned char *input,
>  bool has_cpuflag(int flag);
>  #else
>  static inline
> -unsigned char *choose_kernel_location(unsigned char *input,
> +unsigned char *choose_kernel_location(struct boot_params *params,
> +                                     unsigned char *input,
>                                       unsigned long input_size,
>                                       unsigned char *output,
>                                       unsigned long output_size)
> diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
> index f97fbe3..3d43ce3 100644
> --- a/arch/x86/include/asm/page_types.h
> +++ b/arch/x86/include/asm/page_types.h
> @@ -3,6 +3,7 @@
>
>  #include <linux/const.h>
>  #include <linux/types.h>
> +#include <asm/bootparam.h>
>
>  /* PAGE_SHIFT determines the page size */
>  #define PAGE_SHIFT     12
> @@ -51,6 +52,8 @@ extern int devmem_is_allowed(unsigned long pagenr);
>  extern unsigned long max_low_pfn_mapped;
>  extern unsigned long max_pfn_mapped;
>
> +extern bool kaslr_enabled;
> +
>  static inline phys_addr_t get_max_mapped(void)
>  {
>         return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
> diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h
> index 225b098..44e6dd7 100644
> --- a/arch/x86/include/uapi/asm/bootparam.h
> +++ b/arch/x86/include/uapi/asm/bootparam.h
> @@ -7,6 +7,7 @@
>  #define SETUP_DTB                      2
>  #define SETUP_PCI                      3
>  #define SETUP_EFI                      4
> +#define SETUP_KASLR                    5
>
>  /* ram_size flags */
>  #define RAMDISK_IMAGE_START_MASK       0x07FF
> diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
> index e69f988..c3c59a3 100644
> --- a/arch/x86/kernel/module.c
> +++ b/arch/x86/kernel/module.c
> @@ -32,6 +32,7 @@
>
>  #include <asm/page.h>
>  #include <asm/pgtable.h>
> +#include <asm/page_types.h>
>
>  #if 0
>  #define DEBUGP(fmt, ...)                               \
> @@ -46,21 +47,13 @@ do {                                                        \
>
>  #ifdef CONFIG_RANDOMIZE_BASE
>  static unsigned long module_load_offset;
> -static int randomize_modules = 1;
>
>  /* Mutex protects the module_load_offset. */
>  static DEFINE_MUTEX(module_kaslr_mutex);
>
> -static int __init parse_nokaslr(char *p)
> -{
> -       randomize_modules = 0;
> -       return 0;
> -}
> -early_param("nokaslr", parse_nokaslr);
> -
>  static unsigned long int get_module_load_offset(void)
>  {
> -       if (randomize_modules) {
> +       if (kaslr_enabled) {
>                 mutex_lock(&module_kaslr_mutex);
>                 /*
>                  * Calculate the module_load_offset the first time this
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index ab4734e..78c91bb 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -121,6 +121,8 @@
>  unsigned long max_low_pfn_mapped;
>  unsigned long max_pfn_mapped;
>
> +bool __read_mostly kaslr_enabled = false;
> +
>  #ifdef CONFIG_DMI
>  RESERVE_BRK(dmi_alloc, 65536);
>  #endif
> @@ -424,6 +426,11 @@ static void __init reserve_initrd(void)
>  }
>  #endif /* CONFIG_BLK_DEV_INITRD */
>
> +static void __init parse_kaslr_setup(u64 pa_data, u32 data_len)
> +{
> +       kaslr_enabled = (bool)(pa_data + sizeof(struct setup_data));
> +}
> +
>  static void __init parse_setup_data(void)
>  {
>         struct setup_data *data;
> @@ -451,6 +458,9 @@ static void __init parse_setup_data(void)
>                 case SETUP_EFI:
>                         parse_efi_setup(pa_data, data_len);
>                         break;
> +               case SETUP_KASLR:
> +                       parse_kaslr_setup(pa_data, data_len);
> +                       break;
>                 default:
>                         break;
>                 }
> --
> Jiri Kosina
> SUSE Labs

-Kees

-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 38+ messages in thread

> +       if (kaslr_enabled) {
>                 mutex_lock(&module_kaslr_mutex);
>                 /*
>                  * Calculate the module_load_offset the first time this
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index ab4734e..78c91bb 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -121,6 +121,8 @@
>  unsigned long max_low_pfn_mapped;
>  unsigned long max_pfn_mapped;
>
> +bool __read_mostly kaslr_enabled = false;
> +
>  #ifdef CONFIG_DMI
>  RESERVE_BRK(dmi_alloc, 65536);
>  #endif
> @@ -424,6 +426,11 @@ static void __init reserve_initrd(void)
>  }
>  #endif /* CONFIG_BLK_DEV_INITRD */
>
> +static void __init parse_kaslr_setup(u64 pa_data, u32 data_len)
> +{
> +       kaslr_enabled = (bool)(pa_data + sizeof(struct setup_data));
> +}
> +
>  static void __init parse_setup_data(void)
>  {
>         struct setup_data *data;
> @@ -451,6 +458,9 @@ static void __init parse_setup_data(void)
>                 case SETUP_EFI:
>                         parse_efi_setup(pa_data, data_len);
>                         break;
> +               case SETUP_KASLR:
> +                       parse_kaslr_setup(pa_data, data_len);
> +                       break;
>                 default:
>                         break;
>                 }
> --
> Jiri Kosina
> SUSE Labs

-Kees

-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-13 17:49           ` Kees Cook
@ 2015-02-13 22:20             ` Jiri Kosina
  -1 siblings, 0 replies; 38+ messages in thread
From: Jiri Kosina @ 2015-02-13 22:20 UTC (permalink / raw)
  To: Kees Cook; +Cc: H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Fri, 13 Feb 2015, Kees Cook wrote:

> > Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes
> > the base address for module to be unconditionally randomized in case when
> > CONFIG_RANDOMIZE_BASE is defined and "nokaslr" option isn't present on the
> > commandline.
> >
> > This is not consistent with how choose_kernel_location() decides whether
> > it will randomize kernel load base.
> >
> > Namely, CONFIG_HIBERNATION disables kASLR (unless "kaslr" option is
> > explicitly specified on kernel commandline), which makes the state space
> > larger than what module loader is looking at. IOW CONFIG_HIBERNATION &&
> > CONFIG_RANDOMIZE_BASE is a valid config option, kASLR wouldn't be applied
> > by default in that case, but module loader is not aware of that.
> >
> > Instead of fixing the logic in module.c, this patch takes a more generic
> > approach. It introduces a new bootparam setup_data type SETUP_KASLR and
> > uses that to pass the information whether kaslr has been applied during
> > kernel decompression, and sets a global 'kaslr_enabled' variable
> > accordingly, so that any kernel code (module loading, livepatching, ...)
> > can make decisions based on its value.
> >
> > x86 module loader is converted to make use of this flag.
> >
> > Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> 
> Thanks for working on this! If others are happy with the setup_data
> approach, I think this is fine. 

This is for x86 folks to decide. I hope my original CC covers this, so 
let's wait for their verdict.

> My only concern is confusion over seeing SETUP_KASLR that was added by a 
> boot loader.

Well, so you are concerned about a bootloader that is evil on purpose?

If you have such a bootloader, you are screwed anyway, because it's free to 
set up asynchronous events that will corrupt your kernel (DMA that will 
happen only after the loaded kernel is already active, for example). 
If you want to avoid evil bootloaders, secure boot is currently The 
option, I am afraid.

> Another way to handle it might be to do some kind of relocs-like poking 
> of a value into the decompressed kernel?

This is so hackish that I'd like to avoid it in favor of the boot params 
approach as much as possible :)

[ ... snip ... ]
> > diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
> > index bb13763..d9d1da9 100644
> > --- a/arch/x86/boot/compressed/aslr.c
> > +++ b/arch/x86/boot/compressed/aslr.c
> > @@ -14,6 +14,13 @@
> >  static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
> >                 LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
> >
> > +struct kaslr_setup_data {
> 
> Should this be "static"?

Good catch. So let's wait for what the x86 folks have to say. I'll either 
update it in v3, or hopefully someone will fix this when applying the patch 
for -tip.
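
For illustration, a minimal sketch of what the v3 definition could look 
like, assuming the only change is adding the missing storage-class 
specifier to the file-scope object in arch/x86/boot/compressed/aslr.c:

	/* sketch only: same layout as in the patch, now with internal linkage */
	static struct kaslr_setup_data {
		__u64 next;
		__u32 type;
		__u32 len;
		__u8  data[1];
	} kaslr_setup_data;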

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-13 22:20             ` Jiri Kosina
@ 2015-02-13 23:25               ` Kees Cook
  -1 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2015-02-13 23:25 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Fri, Feb 13, 2015 at 2:20 PM, Jiri Kosina <jkosina@suse.cz> wrote:
> On Fri, 13 Feb 2015, Kees Cook wrote:
>
>> > Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes
>> > the base address for module to be unconditionally randomized in case when
>> > CONFIG_RANDOMIZE_BASE is defined and "nokaslr" option isn't present on the
>> > commandline.
>> >
>> > This is not consistent with how choose_kernel_location() decides whether
>> > it will randomize kernel load base.
>> >
>> > Namely, CONFIG_HIBERNATION disables kASLR (unless "kaslr" option is
>> > explicitly specified on kernel commandline), which makes the state space
>> > larger than what module loader is looking at. IOW CONFIG_HIBERNATION &&
>> > CONFIG_RANDOMIZE_BASE is a valid config option, kASLR wouldn't be applied
>> > by default in that case, but module loader is not aware of that.
>> >
>> > Instead of fixing the logic in module.c, this patch takes a more generic
>> > approach. It introduces a new bootparam setup_data type SETUP_KASLR and
>> > uses that to pass the information whether kaslr has been applied during
>> > kernel decompression, and sets a global 'kaslr_enabled' variable
>> > accordingly, so that any kernel code (module loading, livepatching, ...)
>> > can make decisions based on its value.
>> >
>> > x86 module loader is converted to make use of this flag.
>> >
>> > Signed-off-by: Jiri Kosina <jkosina@suse.cz>
>>
>> Thanks for working on this! If others are happy with the setup_data
>> approach, I think this is fine.
>
> This is for x86 folks to decide. I hope my original CC covers this, so
> let's wait for their verdict.
>
>> My only concern is confusion over seeing SETUP_KASLR that was added by a
>> boot loader.
>
> Well, so you are concerned about a bootloader that is evil on purpose?

No, no; I agree: a malicious boot loader is a lost cause. I mean
mostly from a misbehavior perspective. Like, someone sees "kaslr" in
the setup args and thinks they can set it to 1 and boot a kernel, etc.
Or they set it to 0, but they lack HIBERNATION and "1" gets appended,
but the setup_data parser sees the boot-loader one set to 0, etc. I'm
just curious if we should avoid getting some poor system into a
confusing state.

>
> If you have such a bootloader, you are screwed anyway, because it's free to
> set up asynchronous events that will corrupt your kernel (DMA that will
> happen only after the loaded kernel is already active, for example).
> If you want to avoid evil bootloaders, secure boot is currently The
> option, I am afraid.
>
>> Another way to handle it might be to do some kind of relocs-like poking
>> of a value into the decompressed kernel?
>
> This is so hackish that I'd like to avoid it in favor of the boot params
> approach as much as possible :)

Yeah, I think so too. :)

>
> [ ... snip ... ]
>> > diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
>> > index bb13763..d9d1da9 100644
>> > --- a/arch/x86/boot/compressed/aslr.c
>> > +++ b/arch/x86/boot/compressed/aslr.c
>> > @@ -14,6 +14,13 @@
>> >  static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
>> >                 LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
>> >
>> > +struct kaslr_setup_data {
>>
>> Should this be "static"?
>
> Good catch. So let's wait for what the x86 folks have to say. I'll either
> update it in v3, or hopefully someone will fix this when applying the patch
> for -tip.

Great!

-Kees

>
> Thanks,
>
> --
> Jiri Kosina
> SUSE Labs



-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-13 23:25               ` Kees Cook
@ 2015-02-16 11:55                 ` Borislav Petkov
  -1 siblings, 0 replies; 38+ messages in thread
From: Borislav Petkov @ 2015-02-16 11:55 UTC (permalink / raw)
  To: Kees Cook; +Cc: Jiri Kosina, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Fri, Feb 13, 2015 at 03:25:26PM -0800, Kees Cook wrote:
> No, no; I agree: a malicious boot loader is a lost cause. I mean
> mostly from a misbehavior perspective. Like, someone sees "kaslr" in
> the setup args and thinks they can set it to 1 and boot a kernel, etc.
> Or they set it to 0, but they lack HIBERNATION and "1" gets appended,
> but the setup_data parser sees the boot-loader one set to 0, etc. I'm
> just curious if we should avoid getting some poor system into a
> confusing state.

Well, we can apply the rule that the last setting sticks and since the
kernel is always going to be adding the last setup_data element of
type SETUP_KASLR (the boot loader ones will be somewhere on the list
in-between and we add to the end), we're fine, no?
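
To spell out the last-one-sticks rule, here is a small self-contained 
sketch (this is not the kernel's actual setup_data parser; the struct 
layout, the plain-pointer chain and all names below are simplified 
assumptions purely for illustration):

	#include <stdbool.h>
	#include <stddef.h>

	#define SETUP_KASLR 5	/* setup_data type added by the patch */

	/* simplified stand-in for a setup_data entry with a 1-byte payload */
	struct sd {
		struct sd *next;
		unsigned int type;
		unsigned char enabled;
	};

	static bool kaslr_enabled;

	/* entries are visited front to back; a later SETUP_KASLR entry
	 * overwrites an earlier one, so the element the kernel appends
	 * at the tail is the one whose value sticks */
	static void parse_chain(const struct sd *head)
	{
		const struct sd *d;

		for (d = head; d; d = d->next)
			if (d->type == SETUP_KASLR)
				kaslr_enabled = d->enabled;
	}

	int main(void)
	{
		struct sd from_kernel = { NULL, SETUP_KASLR, 1 };
		struct sd from_loader = { &from_kernel, SETUP_KASLR, 0 };

		parse_chain(&from_loader);	/* loader says 0, kernel says 1 */
		return kaslr_enabled ? 0 : 1;	/* exits 0: the last entry won */
	}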

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-16 11:55                 ` Borislav Petkov
@ 2015-02-16 19:27                   ` Kees Cook
  -1 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2015-02-16 19:27 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Jiri Kosina, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Mon, Feb 16, 2015 at 3:55 AM, Borislav Petkov <bp@alien8.de> wrote:
> On Fri, Feb 13, 2015 at 03:25:26PM -0800, Kees Cook wrote:
>> No, no; I agree: a malicious boot loader is a lost cause. I mean
>> mostly from a misbehavior perspective. Like, someone sees "kaslr" in
>> the setup args and thinks they can set it to 1 and boot a kernel, etc.
>> Or they set it to 0, but they lack HIBERNATION and "1" gets appended,
>> but the setup_data parser sees the boot-loader one set to 0, etc. I'm
>> just curious if we should avoid getting some poor system into a
>> confusing state.
>
> Well, we can apply the rule that the last setting sticks and since the
> kernel is always going to be adding the last setup_data element of
> type SETUP_KASLR (the boot loader ones will be somewhere on the list
> in-between and we add to the end), we're fine, no?

Sounds good to me!

-Kees

>
> --
> Regards/Gruss,
>     Boris.
>
> ECO tip #101: Trim your mails when you reply.
> --



-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-16 19:27                   ` Kees Cook
@ 2015-02-16 19:42                     ` Borislav Petkov
  -1 siblings, 0 replies; 38+ messages in thread
From: Borislav Petkov @ 2015-02-16 19:42 UTC (permalink / raw)
  To: Kees Cook; +Cc: Jiri Kosina, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Mon, Feb 16, 2015 at 11:27:42AM -0800, Kees Cook wrote:
> > Well, we can apply the rule that the last setting sticks and since the
> > kernel is always going to be adding the last setup_data element of
> > type SETUP_KASLR (the boot loader ones will be somewhere on the list
> > in-between and we add to the end), we're fine, no?
> 
> Sounds good to me!

Ok, thanks. I'll pick it up and route it through the proper channels.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-13 15:04         ` Jiri Kosina
@ 2015-02-17 10:44           ` Borislav Petkov
  -1 siblings, 0 replies; 38+ messages in thread
From: Borislav Petkov @ 2015-02-17 10:44 UTC (permalink / raw)
  To: Jiri Kosina, Kees Cook; +Cc: H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Fri, Feb 13, 2015 at 04:04:55PM +0100, Jiri Kosina wrote:
> Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes 
> the base address for module to be unconditionally randomized in case when 
> CONFIG_RANDOMIZE_BASE is defined and "nokaslr" option isn't present on the 
> commandline.
> 
> This is not consistent with how choose_kernel_location() decides whether 
> it will randomize kernel load base.
> 
> Namely, CONFIG_HIBERNATION disables kASLR (unless "kaslr" option is 
> explicitly specified on kernel commandline), which makes the state space 
> larger than what module loader is looking at. IOW CONFIG_HIBERNATION && 
> CONFIG_RANDOMIZE_BASE is a valid config option, kASLR wouldn't be applied 
> by default in that case, but module loader is not aware of that.
> 
> Instead of fixing the logic in module.c, this patch takes a more generic
> approach. It introduces a new bootparam setup_data type SETUP_KASLR and
> uses that to pass the information whether kaslr has been applied during 
> kernel decompression, and sets a global 'kaslr_enabled' variable 
> accordingly, so that any kernel code (module loading, livepatching, ...) 
> can make decisions based on its value.
> 
> x86 module loader is converted to make use of this flag.
> 
> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> ---
> 
> v1 -> v2:
> 
> Originally I just calculated the fact on the fly from difference between 
> __START_KERNEL and &text, but Kees correctly pointed out that this doesn't 
> properly catch the case when the offset is randomized to zero. I don't see 

Yeah, about that. I think we want to do the following in addition, so that
we don't have the misleading "Kernel Offset:..." line in splats in case
kaslr is off.

Right?

---
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ab4734e5411d..a203da9cc445 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1275,6 +1275,9 @@ static struct notifier_block kernel_offset_notifier = {
 
 static int __init register_kernel_offset_dumper(void)
 {
+	if (!kaslr_enabled)
+		return 0;
+
 	atomic_notifier_chain_register(&panic_notifier_list,
 					&kernel_offset_notifier);
 	return 0;

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-17 10:44           ` Borislav Petkov
@ 2015-02-17 12:21             ` Jiri Kosina
  -1 siblings, 0 replies; 38+ messages in thread
From: Jiri Kosina @ 2015-02-17 12:21 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Kees Cook, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, 17 Feb 2015, Borislav Petkov wrote:

> > Commit e2b32e678 ("x86, kaslr: randomize module base load address") makes 
> > the base address for module to be unconditionally randomized in case when 
> > CONFIG_RANDOMIZE_BASE is defined and "nokaslr" option isn't present on the 
> > commandline.
> > 
> > This is not consistent with how choose_kernel_location() decides whether 
> > it will randomize kernel load base.
> > 
> > Namely, CONFIG_HIBERNATION disables kASLR (unless "kaslr" option is 
> > explicitly specified on kernel commandline), which makes the state space 
> > larger than what module loader is looking at. IOW CONFIG_HIBERNATION && 
> > CONFIG_RANDOMIZE_BASE is a valid config option, kASLR wouldn't be applied 
> > by default in that case, but module loader is not aware of that.
> > 
> > Instead of fixing the logic in module.c, this patch takes a more generic
> > approach. It introduces a new bootparam setup_data type SETUP_KASLR and
> > uses that to pass the information whether kaslr has been applied during 
> > kernel decompression, and sets a global 'kaslr_enabled' variable 
> > accordingly, so that any kernel code (module loading, livepatching, ...) 
> > can make decisions based on its value.
> > 
> > x86 module loader is converted to make use of this flag.
> > 
> > Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> > ---
> > 
> > v1 -> v2:
> > 
> > Originally I just calculated the fact on the fly from difference between 
> > __START_KERNEL and &text, but Kees correctly pointed out that this doesn't 
> > properly catch the case when the offset is randomized to zero. I don't see 
> 
> Yeah, about that. I think we want to do the following in addition, so that
> we don't have the misleading "Kernel Offset:..." line in splats in case
> kaslr is off.
> 
> Right?

I don't have strong feelings either way. It seems slightly nicer to have a 
predictable oops output format no matter the CONFIG_ options and 
command-line contents, but if you feel like seeing the 'Kernel offset: 0' 
in 'nokaslr' and !CONFIG_RANDOMIZE_BASE cases is unnecessary noise, feel 
free to make this change to my patch.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-17 12:21             ` Jiri Kosina
@ 2015-02-17 12:39               ` Borislav Petkov
  -1 siblings, 0 replies; 38+ messages in thread
From: Borislav Petkov @ 2015-02-17 12:39 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: Kees Cook, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, Feb 17, 2015 at 01:21:20PM +0100, Jiri Kosina wrote:
> I don't have strong feelings either way. It seems slightly nicer
> to have a predictable oops output format no matter the CONFIG_
> options and command-line contents, but if you feel like seeing the
> 'Kernel offset: 0' in 'nokaslr' and !CONFIG_RANDOMIZE_BASE cases is
> unnecessary noise, feel free to make this change to my patch.

Well, wouldn't it be wrong to print this line if kaslr is disabled?
Because of the ambiguity in that case: that line could mean either we
randomized to 0 or kaslr is disabled but you can't know that from the
"0" in there, right?

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-17 12:39               ` Borislav Petkov
@ 2015-02-17 16:45                 ` Kees Cook
  -1 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2015-02-17 16:45 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Jiri Kosina, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, Feb 17, 2015 at 4:39 AM, Borislav Petkov <bp@alien8.de> wrote:
> On Tue, Feb 17, 2015 at 01:21:20PM +0100, Jiri Kosina wrote:
>> I don't have strong feelings either way. It seems slightly nicer
>> to have a predictable oops output format no matter the CONFIG_
>> options and command-line contents, but if you feel like seeing the
>> 'Kernel offset: 0' in 'nokaslr' and !CONFIG_RANDOMIZE_BASE cases is
>> unnecessary noise, feel free to make this change to my patch.
>
> Well, wouldn't it be wrong to print this line if kaslr is disabled?
> Because of the ambiguity in that case: that line could mean either we
> randomized to 0 or kaslr is disabled but you can't know that from the
> "0" in there, right?

Maybe it should say:

Kernel offset: disabled

for maximum clarity?

-Kees

>
> --
> Regards/Gruss,
>     Boris.
>
> ECO tip #101: Trim your mails when you reply.
> --



-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-17 16:45                 ` Kees Cook
@ 2015-02-17 22:31                   ` Borislav Petkov
  -1 siblings, 0 replies; 38+ messages in thread
From: Borislav Petkov @ 2015-02-17 22:31 UTC (permalink / raw)
  To: Kees Cook; +Cc: Jiri Kosina, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, Feb 17, 2015 at 08:45:53AM -0800, Kees Cook wrote:
> Maybe it should say:
> 
> Kernel offset: disabled
> 
> for maximum clarity?

I.e.:

---
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 78c91bbf50e2..16b6043cb073 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -843,10 +843,14 @@ static void __init trim_low_memory_range(void)
 static int
 dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
 {
-	pr_emerg("Kernel Offset: 0x%lx from 0x%lx "
-		 "(relocation range: 0x%lx-0x%lx)\n",
-		 (unsigned long)&_text - __START_KERNEL, __START_KERNEL,
-		 __START_KERNEL_map, MODULES_VADDR-1);
+	if (kaslr_enabled)
+		pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
+			 (unsigned long)&_text - __START_KERNEL,
+			 __START_KERNEL,
+			 __START_KERNEL_map,
+			 MODULES_VADDR-1);
+	else
+		pr_emerg("Kernel Offset: disabled\n");
 
 	return 0;
 }
---

?

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-17 22:31                   ` Borislav Petkov
@ 2015-02-18  3:33                     ` Kees Cook
  -1 siblings, 0 replies; 38+ messages in thread
From: Kees Cook @ 2015-02-18  3:33 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Jiri Kosina, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, Feb 17, 2015 at 2:31 PM, Borislav Petkov <bp@alien8.de> wrote:
> On Tue, Feb 17, 2015 at 08:45:53AM -0800, Kees Cook wrote:
>> Maybe it should say:
>>
>> Kernel offset: disabled
>>
>> for maximum clarity?
>
> I.e.:
>
> ---
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 78c91bbf50e2..16b6043cb073 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -843,10 +843,14 @@ static void __init trim_low_memory_range(void)
>  static int
>  dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
>  {
> -       pr_emerg("Kernel Offset: 0x%lx from 0x%lx "
> -                "(relocation range: 0x%lx-0x%lx)\n",
> -                (unsigned long)&_text - __START_KERNEL, __START_KERNEL,
> -                __START_KERNEL_map, MODULES_VADDR-1);
> +       if (kaslr_enabled)
> +               pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
> +                        (unsigned long)&_text - __START_KERNEL,
> +                        __START_KERNEL,
> +                        __START_KERNEL_map,
> +                        MODULES_VADDR-1);
> +       else
> +               pr_emerg("Kernel Offset: disabled\n");
>
>         return 0;
>  }
> ---
>
> ?

You are the best. :)

Acked-by: Kees Cook <keescook@chromium.org>

-Kees

>
> --
> Regards/Gruss,
>     Boris.
>
> ECO tip #101: Trim your mails when you reply.
> --



-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-18  3:33                     ` Kees Cook
@ 2015-02-18  8:32                       ` Borislav Petkov
  -1 siblings, 0 replies; 38+ messages in thread
From: Borislav Petkov @ 2015-02-18  8:32 UTC (permalink / raw)
  To: Kees Cook; +Cc: Jiri Kosina, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Tue, Feb 17, 2015 at 07:33:40PM -0800, Kees Cook wrote:
> You are the best. :)

Of course, the bestest! :-P

> Acked-by: Kees Cook <keescook@chromium.org>

Thanks Kees, I'll fold it into Jiri's patch and forward.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2] x86, kaslr: propagate base load address calculation
  2015-02-18  8:32                       ` Borislav Petkov
@ 2015-02-18 10:46                         ` Jiri Kosina
  -1 siblings, 0 replies; 38+ messages in thread
From: Jiri Kosina @ 2015-02-18 10:46 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Kees Cook, H. Peter Anvin, LKML, live-patching, Linux-MM, x86

On Wed, 18 Feb 2015, Borislav Petkov wrote:

> > Acked-by: Kees Cook <keescook@chromium.org>
> 
> Thanks Kees, I'll fold it into Jiri's patch and forward.

Fine by me, thanks.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 38+ messages in thread

end of thread, other threads:[~2015-02-18 10:46 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-02-10 13:17 [PATCH] x86, kaslr: propagate base load address calculation Jiri Kosina
2015-02-10 13:17 ` Jiri Kosina
2015-02-10 17:25 ` Kees Cook
2015-02-10 17:25   ` Kees Cook
2015-02-10 23:07   ` Jiri Kosina
2015-02-10 23:07     ` Jiri Kosina
2015-02-10 23:13     ` Jiri Kosina
2015-02-10 23:13       ` Jiri Kosina
2015-02-13 15:04       ` [PATCH v2] " Jiri Kosina
2015-02-13 15:04         ` Jiri Kosina
2015-02-13 17:49         ` Kees Cook
2015-02-13 17:49           ` Kees Cook
2015-02-13 22:20           ` Jiri Kosina
2015-02-13 22:20             ` Jiri Kosina
2015-02-13 23:25             ` Kees Cook
2015-02-13 23:25               ` Kees Cook
2015-02-16 11:55               ` Borislav Petkov
2015-02-16 11:55                 ` Borislav Petkov
2015-02-16 19:27                 ` Kees Cook
2015-02-16 19:27                   ` Kees Cook
2015-02-16 19:42                   ` Borislav Petkov
2015-02-16 19:42                     ` Borislav Petkov
2015-02-17 10:44         ` Borislav Petkov
2015-02-17 10:44           ` Borislav Petkov
2015-02-17 12:21           ` Jiri Kosina
2015-02-17 12:21             ` Jiri Kosina
2015-02-17 12:39             ` Borislav Petkov
2015-02-17 12:39               ` Borislav Petkov
2015-02-17 16:45               ` Kees Cook
2015-02-17 16:45                 ` Kees Cook
2015-02-17 22:31                 ` Borislav Petkov
2015-02-17 22:31                   ` Borislav Petkov
2015-02-18  3:33                   ` Kees Cook
2015-02-18  3:33                     ` Kees Cook
2015-02-18  8:32                     ` Borislav Petkov
2015-02-18  8:32                       ` Borislav Petkov
2015-02-18 10:46                       ` Jiri Kosina
2015-02-18 10:46                         ` Jiri Kosina
