* [PATCH 0/2] introduce kaslr_offset() and its users @ 2015-04-27 14:27 Jiri Kosina 2015-04-27 14:28 ` [PATCH 1/2] x86: introduce kaslr_offset() Jiri Kosina 2015-04-27 14:28 ` [PATCH 2/2] livepatch: x86: make kASLR logic more accurate Jiri Kosina 0 siblings, 2 replies; 14+ messages in thread From: Jiri Kosina @ 2015-04-27 14:27 UTC (permalink / raw) To: x86, Borislav Petkov, Kees Cook, Josh Poimboeuf, Seth Jennings, Vojtech Pavlik Cc: linux-kernel, live-patching There is already in-kernel code which computes the offset that has been used for kASLR -- that's dump_kernel_offset() notifier. As there is now a potential second user coming, it seems reasonable to provide a common helper that will compute the offset. - Patch 1/2 introduces kaslr_offset() which computes the kASLR offset, and converts dump_kernel_offset() to make use of it - Patch 2/2 extends the - currently limited - functionality of livepatching when kASLR has been enabled and is active ---------------------------------------------------------------- Jiri Kosina (2): x86: introduce kaslr_offset() livepatch: x86: make kASLR logic more accurate arch/x86/include/asm/livepatch.h | 1 + arch/x86/include/asm/setup.h | 6 ++++++ arch/x86/kernel/setup.c | 2 +- kernel/livepatch/core.c | 5 +++-- 4 files changed, 11 insertions(+), 3 deletions(-) -- Jiri Kosina SUSE Labs ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/2] x86: introduce kaslr_offset() 2015-04-27 14:27 [PATCH 0/2] introduce kaslr_offset() and its users Jiri Kosina @ 2015-04-27 14:28 ` Jiri Kosina 2015-04-28 12:08 ` Josh Poimboeuf 2015-04-27 14:28 ` [PATCH 2/2] livepatch: x86: make kASLR logic more accurate Jiri Kosina 1 sibling, 1 reply; 14+ messages in thread From: Jiri Kosina @ 2015-04-27 14:28 UTC (permalink / raw) To: x86, Borislav Petkov, Kees Cook, Josh Poimboeuf, Seth Jennings, Vojtech Pavlik Cc: linux-kernel, live-patching Offset that has been chosen for kaslr during kernel decompression can be easily computed as a difference between _text and __START_KERNEL. We are already making use of this in dump_kernel_offset() notifier. Introduce kaslr_offset() that makes this computation instead of hard-coding it, so that other kernel code (such as live patching) can make use of it. Signed-off-by: Jiri Kosina <jkosina@suse.cz> --- arch/x86/include/asm/setup.h | 6 ++++++ arch/x86/kernel/setup.c | 2 +- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h index f69e06b..785ac2f 100644 --- a/arch/x86/include/asm/setup.h +++ b/arch/x86/include/asm/setup.h @@ -65,12 +65,18 @@ static inline void x86_ce4100_early_setup(void) { } * This is set up by the setup-routine at boot-time */ extern struct boot_params boot_params; +extern char _text[]; static inline bool kaslr_enabled(void) { return !!(boot_params.hdr.loadflags & KASLR_FLAG); } +static inline unsigned long kaslr_offset(void) +{ + return (unsigned long)&_text - __START_KERNEL; +} + /* * Do NOT EVER look at the BIOS memory size location. * It does not work on many machines. 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index d74ac33..5056d3c 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -834,7 +834,7 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p) { if (kaslr_enabled()) { pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n", - (unsigned long)&_text - __START_KERNEL, + kaslr_offset(), __START_KERNEL, __START_KERNEL_map, MODULES_VADDR-1); -- Jiri Kosina SUSE Labs ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] x86: introduce kaslr_offset() 2015-04-27 14:28 ` [PATCH 1/2] x86: introduce kaslr_offset() Jiri Kosina @ 2015-04-28 12:08 ` Josh Poimboeuf 2015-04-28 15:15 ` [PATCH v2 " Jiri Kosina 0 siblings, 1 reply; 14+ messages in thread From: Josh Poimboeuf @ 2015-04-28 12:08 UTC (permalink / raw) To: Jiri Kosina Cc: x86, Borislav Petkov, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Mon, Apr 27, 2015 at 04:28:07PM +0200, Jiri Kosina wrote: > Offset that has been chosen for kaslr during kernel decompression can be > easily computed as a difference between _text and __START_KERNEL. We are > already making use of this in dump_kernel_offset() notifier. > > Introduce kaslr_offset() that makes this computation instead of > hard-coding it, so that other kernel code (such as live patching) can make > use of it. > > Signed-off-by: Jiri Kosina <jkosina@suse.cz> > --- > arch/x86/include/asm/setup.h | 6 ++++++ > arch/x86/kernel/setup.c | 2 +- > 2 files changed, 7 insertions(+), 1 deletion(-) > > diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h > index f69e06b..785ac2f 100644 > --- a/arch/x86/include/asm/setup.h > +++ b/arch/x86/include/asm/setup.h > @@ -65,12 +65,18 @@ static inline void x86_ce4100_early_setup(void) { } > * This is set up by the setup-routine at boot-time > */ > extern struct boot_params boot_params; > +extern char _text[]; > > static inline bool kaslr_enabled(void) > { > return !!(boot_params.hdr.loadflags & KASLR_FLAG); > } > > +static inline unsigned long kaslr_offset(void) > +{ > + return (unsigned long)&_text - __START_KERNEL; > +} > + > /* > * Do NOT EVER look at the BIOS memory size location. > * It does not work on many machines. 
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c > index d74ac33..5056d3c 100644 > --- a/arch/x86/kernel/setup.c > +++ b/arch/x86/kernel/setup.c > @@ -834,7 +834,7 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p) > { > if (kaslr_enabled()) { > pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n", > - (unsigned long)&_text - __START_KERNEL, > + kaslr_offset(), > __START_KERNEL, > __START_KERNEL_map, > MODULES_VADDR-1); It looks like kaslr_offset() can also be used by arch_crash_save_vmcoreinfo() in machine_kexec_64.c. -- Josh ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v2 1/2] x86: introduce kaslr_offset() 2015-04-28 12:08 ` Josh Poimboeuf @ 2015-04-28 15:15 ` Jiri Kosina 2015-04-28 15:57 ` Jiri Kosina 0 siblings, 1 reply; 14+ messages in thread From: Jiri Kosina @ 2015-04-28 15:15 UTC (permalink / raw) To: x86, Borislav Petkov Cc: Josh Poimboeuf, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching Offset that has been chosen for kaslr during kernel decompression can be easily computed as a difference between _text and __START_KERNEL. We are already making use of this in dump_kernel_offset() notifier and in arch_crash_save_vmcoreinfo(). Introduce kaslr_offset() that makes this computation instead of hard-coding it, so that other kernel code (such as live patching) can make use of it. Also convert existing users to make use of it. Signed-off-by: Jiri Kosina <jkosina@suse.cz> --- It'd be great to potentially have Ack from x86 guys for this patch so that I could take it through livepatching.git with the depending 2/2 patch. Thanks. v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh Poimboeuf. arch/x86/include/asm/setup.h | 6 ++++++ arch/x86/kernel/machine_kexec_64.c | 3 ++- arch/x86/kernel/setup.c | 2 +- 3 files changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h index f69e06b..785ac2f 100644 --- a/arch/x86/include/asm/setup.h +++ b/arch/x86/include/asm/setup.h @@ -65,12 +65,18 @@ static inline void x86_ce4100_early_setup(void) { } * This is set up by the setup-routine at boot-time */ extern struct boot_params boot_params; +extern char _text[]; static inline bool kaslr_enabled(void) { return !!(boot_params.hdr.loadflags & KASLR_FLAG); } +static inline unsigned long kaslr_offset(void) +{ + return (unsigned long)&_text - __START_KERNEL; +} + /* * Do NOT EVER look at the BIOS memory size location. * It does not work on many machines. 
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 415480d..e102963 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -25,6 +25,7 @@ #include <asm/io_apic.h> #include <asm/debugreg.h> #include <asm/kexec-bzimage64.h> +#include <asm/setup.h> #ifdef CONFIG_KEXEC_FILE static struct kexec_file_ops *kexec_file_loaders[] = { @@ -334,7 +335,7 @@ void arch_crash_save_vmcoreinfo(void) VMCOREINFO_LENGTH(node_data, MAX_NUMNODES); #endif vmcoreinfo_append_str("KERNELOFFSET=%lx\n", - (unsigned long)&_text - __START_KERNEL); + kaslr_offset()); } /* arch-dependent functionality related to kexec file-based syscall */ diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index d74ac33..5056d3c 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -834,7 +834,7 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p) { if (kaslr_enabled()) { pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n", - (unsigned long)&_text - __START_KERNEL, + kaslr_offset(), __START_KERNEL, __START_KERNEL_map, MODULES_VADDR-1); -- Jiri Kosina SUSE Labs ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/2] x86: introduce kaslr_offset() 2015-04-28 15:15 ` [PATCH v2 " Jiri Kosina @ 2015-04-28 15:57 ` Jiri Kosina 2015-04-28 15:59 ` Borislav Petkov 0 siblings, 1 reply; 14+ messages in thread From: Jiri Kosina @ 2015-04-28 15:57 UTC (permalink / raw) To: x86, Borislav Petkov Cc: Josh Poimboeuf, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Tue, 28 Apr 2015, Jiri Kosina wrote: > Offset that has been chosen for kaslr during kernel decompression can be > easily computed as a difference between _text and __START_KERNEL. We are > already making use of this in dump_kernel_offset() notifier and in > arch_crash_save_vmcoreinfo(). > > Introduce kaslr_offset() that makes this computation instead of > hard-coding it, so that other kernel code (such as live patching) can make > use of it. Also convert existing users to make use of it. > > Signed-off-by: Jiri Kosina <jkosina@suse.cz> > --- > > It'd be great to potentially have Ack from x86 guys for this patch so that > I could take it through livepatching.git with the depending 2/2 patch. > Thanks. > > v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh > Poimboeuf. FWIW this patch is an equivalent transformation without any effect on the resulting code: $ diff -u vmlinux.old.asm vmlinux.new.asm --- vmlinux.old.asm 2015-04-28 17:55:19.520983368 +0200 +++ vmlinux.new.asm 2015-04-28 17:55:24.141206072 +0200 @@ -1,5 +1,5 @@ -vmlinux.old: file format elf64-x86-64 +vmlinux.new: file format elf64-x86-64 Disassembly of section .text: $ -- Jiri Kosina SUSE Labs ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/2] x86: introduce kaslr_offset() 2015-04-28 15:57 ` Jiri Kosina @ 2015-04-28 15:59 ` Borislav Petkov 2015-04-29 14:56 ` Jiri Kosina 0 siblings, 1 reply; 14+ messages in thread From: Borislav Petkov @ 2015-04-28 15:59 UTC (permalink / raw) To: Jiri Kosina Cc: x86, Josh Poimboeuf, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Tue, Apr 28, 2015 at 05:57:14PM +0200, Jiri Kosina wrote: > On Tue, 28 Apr 2015, Jiri Kosina wrote: > > > Offset that has been chosen for kaslr during kernel decompression can be > > > easily computed as a difference between _text and __START_KERNEL. We are > > > already making use of this in dump_kernel_offset() notifier and in > > > arch_crash_save_vmcoreinfo(). > > > > > > Introduce kaslr_offset() that makes this computation instead of > > > hard-coding it, so that other kernel code (such as live patching) can make > > > use of it. Also convert existing users to make use of it. > > > > > > Signed-off-by: Jiri Kosina <jkosina@suse.cz> > > > --- > > > > > > It'd be great to potentially have Ack from x86 guys for this patch so that > > > I could take it through livepatching.git with the depending 2/2 patch. > > > Thanks. > > > > > > v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh > > > Poimboeuf. > > > > FWIW this patch is an equivalent transformation without any effect on the > > resulting code: > > > > $ diff -u vmlinux.old.asm vmlinux.new.asm > > --- vmlinux.old.asm 2015-04-28 17:55:19.520983368 +0200 > > +++ vmlinux.new.asm 2015-04-28 17:55:24.141206072 +0200 > > @@ -1,5 +1,5 @@ > > > > -vmlinux.old: file format elf64-x86-64 > > +vmlinux.new: file format elf64-x86-64 > > > > > > Disassembly of section .text: > > $ Then those are easy. Please add that piece of information to the commit message. With that: Acked-by: Borislav Petkov <bp@suse.de> -- Regards/Gruss, Boris. ECO tip #101: Trim your mails when you reply. -- ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/2] x86: introduce kaslr_offset() 2015-04-28 15:59 ` Borislav Petkov @ 2015-04-29 14:56 ` Jiri Kosina 2015-04-29 16:16 ` Jiri Kosina 0 siblings, 1 reply; 14+ messages in thread From: Jiri Kosina @ 2015-04-29 14:56 UTC (permalink / raw) To: Borislav Petkov Cc: x86, Josh Poimboeuf, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Tue, 28 Apr 2015, Borislav Petkov wrote: > > > Offset that has been chosen for kaslr during kernel decompression can be > > > easily computed as a difference between _text and __START_KERNEL. We are > > > already making use of this in dump_kernel_offset() notifier and in > > > arch_crash_save_vmcoreinfo(). > > > > > > Introduce kaslr_offset() that makes this computation instead of > > > hard-coding it, so that other kernel code (such as live patching) can make > > > use of it. Also convert existing users to make use of it. > > > > > > Signed-off-by: Jiri Kosina <jkosina@suse.cz> > > > --- > > > > > > It'd be great to potentially have Ack from x86 guys for this patch so that > > > I could take it through livepatching.git with the depending 2/2 patch. > > > Thanks. > > > > > > v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh > > > Poimboeuf. > > > > FWIW this patch is equivalent transofrmation without any effects on the > > resulting code: > > > > $ diff -u vmlinux.old.asm vmlinux.new.asm > > --- vmlinux.old.asm 2015-04-28 17:55:19.520983368 +0200 > > +++ vmlinux.new.asm 2015-04-28 17:55:24.141206072 +0200 > > @@ -1,5 +1,5 @@ > > > > -vmlinux.old: file format elf64-x86-64 > > +vmlinux.new: file format elf64-x86-64 > > > > > > Disassembly of section .text: > > $ > > Then those are easy. Please add that piece of infomation to the commit > message. > > With that: > > Acked-by: Borislav Petkov <bp@suse.de> Applied to livepatching.git#for-4.2/kaslr. Thanks, -- Jiri Kosina SUSE Labs ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/2] x86: introduce kaslr_offset() 2015-04-29 14:56 ` Jiri Kosina @ 2015-04-29 16:16 ` Jiri Kosina 0 siblings, 0 replies; 14+ messages in thread From: Jiri Kosina @ 2015-04-29 16:16 UTC (permalink / raw) To: Borislav Petkov Cc: x86, Josh Poimboeuf, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Wed, 29 Apr 2015, Jiri Kosina wrote: > > Acked-by: Borislav Petkov <bp@suse.de> > > Applied to livepatching.git#for-4.2/kaslr. Thanks, Fengguang's buildbot reported a randconfig build breakage caused by this patch. The fix below is necessary on top. From: Jiri Kosina <jkosina@suse.cz> Subject: [PATCH] x86: kaslr: fix build due to missing ALIGN definition Fengguang's bot reported that 4545c898 ("x86: introduce kaslr_offset()") broke randconfig build In file included from arch/x86/xen/vga.c:5:0: arch/x86/include/asm/setup.h: In function 'kaslr_offset': >> arch/x86/include/asm/setup.h:77:2: error: implicit declaration of function 'ALIGN' [-Werror=implicit-function-declaration] return (unsigned long)&_text - __START_KERNEL; ^ Fix that by making setup.h self-sufficient by explicitly including linux/kernel.h, which is needed for ALIGN() (which is what __START_KERNEL contains in its expansion). Reported-by: fengguang.wu@intel.com Signed-off-by: Jiri Kosina <jkosina@suse.cz> --- arch/x86/include/asm/setup.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h index 785ac2f..11af24e 100644 --- a/arch/x86/include/asm/setup.h +++ b/arch/x86/include/asm/setup.h @@ -60,6 +60,7 @@ static inline void x86_ce4100_early_setup(void) { } #ifndef _SETUP #include <asm/espfix.h> +#include <linux/kernel.h> /* * This is set up by the setup-routine at boot-time -- Jiri Kosina SUSE Labs ^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 2/2] livepatch: x86: make kASLR logic more accurate 2015-04-27 14:27 [PATCH 0/2] introduce kaslr_offset() and its users Jiri Kosina 2015-04-27 14:28 ` [PATCH 1/2] x86: introduce kaslr_offset() Jiri Kosina @ 2015-04-27 14:28 ` Jiri Kosina 2015-04-27 14:41 ` Minfei Huang 2015-04-28 12:09 ` Josh Poimboeuf 1 sibling, 2 replies; 14+ messages in thread From: Jiri Kosina @ 2015-04-27 14:28 UTC (permalink / raw) To: x86, Borislav Petkov, Kees Cook, Josh Poimboeuf, Seth Jennings, Vojtech Pavlik Cc: linux-kernel, live-patching We give up old_addr hint from the coming patch module in cases when kernel load base has been randomized (as in such case, the coming module has no idea about the exact randomization offset). We are currently too pessimistic, and give up immediately as soon as CONFIG_RANDOMIZE_BASE is set; this doesn't however directly imply that the load base has actually been randomized. There are config options that disable kASLR (such as hibernation), user could have disabled kaslr on kernel command-line, etc. The loader propagates the information whether kernel has been randomized through bootparams. This allows us to make the condition more accurate. On top of that, it seems unnecessary to give up old_addr hints even if randomization is active. The relocation offset can be computed using kaslr_offset(), and therefore old_addr can be adjusted accordingly. 
Signed-off-by: Jiri Kosina <jkosina@suse.cz> --- arch/x86/include/asm/livepatch.h | 1 + kernel/livepatch/core.c | 5 +++-- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h index 2d29197..19c099a 100644 --- a/arch/x86/include/asm/livepatch.h +++ b/arch/x86/include/asm/livepatch.h @@ -21,6 +21,7 @@ #ifndef _ASM_X86_LIVEPATCH_H #define _ASM_X86_LIVEPATCH_H +#include <asm/setup.h> #include <linux/module.h> #include <linux/ftrace.h> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c index 284e269..0e7c23c 100644 --- a/kernel/livepatch/core.c +++ b/kernel/livepatch/core.c @@ -234,8 +234,9 @@ static int klp_find_verify_func_addr(struct klp_object *obj, int ret; #if defined(CONFIG_RANDOMIZE_BASE) - /* KASLR is enabled, disregard old_addr from user */ - func->old_addr = 0; + /* If KASLR has been enabled, adjust old_addr accordingly */ + if (kaslr_enabled() && func->old_addr) + func->old_addr += kaslr_offset(); #endif if (!func->old_addr || klp_is_module(obj)) -- Jiri Kosina SUSE Labs ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate 2015-04-27 14:28 ` [PATCH 2/2] livepatch: x86: make kASLR logic more accurate Jiri Kosina @ 2015-04-27 14:41 ` Minfei Huang 2015-04-27 23:29 ` Jiri Kosina 2015-04-28 12:09 ` Josh Poimboeuf 1 sibling, 1 reply; 14+ messages in thread From: Minfei Huang @ 2015-04-27 14:41 UTC (permalink / raw) To: Jiri Kosina Cc: x86, Borislav Petkov, Kees Cook, Josh Poimboeuf, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On 04/27/15 at 04:28P, Jiri Kosina wrote: > We give up old_addr hint from the coming patch module in cases when kernel > load base has been randomized (as in such case, the coming module has no > idea about the exact randomization offset). > > We are currently too pessimistic, and give up immediately as soon as > CONFIG_RANDOMIZE_BASE is set; this doesn't however directly imply that the > load base has actually been randomized. There are config options that > disable kASLR (such as hibernation), user could have disabled kaslr on > kernel command-line, etc. > > The loader propagates the information whether kernel has been randomized > through bootparams. This allows us to make the condition more accurate. > > On top of that, it seems unnecessary to give up old_addr hints even if > randomization is active. The relocation offset can be computed using > kaslr_offset(), and therefore old_addr can be adjusted accordingly. 
> > Signed-off-by: Jiri Kosina <jkosina@suse.cz> > --- > arch/x86/include/asm/livepatch.h | 1 + > kernel/livepatch/core.c | 5 +++-- > 2 files changed, 4 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h > index 2d29197..19c099a 100644 > --- a/arch/x86/include/asm/livepatch.h > +++ b/arch/x86/include/asm/livepatch.h > @@ -21,6 +21,7 @@ > #ifndef _ASM_X86_LIVEPATCH_H > #define _ASM_X86_LIVEPATCH_H > > +#include <asm/setup.h> > #include <linux/module.h> > #include <linux/ftrace.h> > > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c > index 284e269..0e7c23c 100644 > --- a/kernel/livepatch/core.c > +++ b/kernel/livepatch/core.c > @@ -234,8 +234,9 @@ static int klp_find_verify_func_addr(struct klp_object *obj, > int ret; > > #if defined(CONFIG_RANDOMIZE_BASE) > - /* KASLR is enabled, disregard old_addr from user */ > - func->old_addr = 0; > + /* If KASLR has been enabled, adjust old_addr accordingly */ > + if (kaslr_enabled() && func->old_addr) > + func->old_addr += kaslr_offset(); Hi. Removing the "CONFIG_RANDOMIZE_BASE" check would also be fine: if kaslr is disabled, the offset will be 0. I found that kaslr_enabled() only exists for x86. Maybe you can define a weak function klp_adjustment_function_addr in general; then each arch can override the function to implement it specially. Thanks Minfei > #endif > > if (!func->old_addr || klp_is_module(obj)) > -- > Jiri Kosina > SUSE Labs > -- > To unsubscribe from this list: send the line "unsubscribe live-patching" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate 2015-04-27 14:41 ` Minfei Huang @ 2015-04-27 23:29 ` Jiri Kosina 2015-04-28 0:08 ` Minfei Huang 0 siblings, 1 reply; 14+ messages in thread From: Jiri Kosina @ 2015-04-27 23:29 UTC (permalink / raw) To: Minfei Huang Cc: x86, Borislav Petkov, Kees Cook, Josh Poimboeuf, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Mon, 27 Apr 2015, Minfei Huang wrote: > I found that kaslr_enabled() only exists for x86. Maybe you can define a > weak function klp_adjustment_function_addr in general; then each arch > can override the function to implement it specially. It might start to make sense once there is at least one additional arch that supports kaslr. Currently, I don't see a benefit. Why are you so obstinate about this? I personally don't find that important at all; it's something that can always be sorted out once more archs start supporting kaslr. Thanks, -- Jiri Kosina SUSE Labs ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate 2015-04-27 23:29 ` Jiri Kosina @ 2015-04-28 0:08 ` Minfei Huang 0 siblings, 0 replies; 14+ messages in thread From: Minfei Huang @ 2015-04-28 0:08 UTC (permalink / raw) To: Jiri Kosina Cc: x86, Borislav Petkov, Kees Cook, Josh Poimboeuf, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On 04/28/15 at 01:29P, Jiri Kosina wrote: > On Mon, 27 Apr 2015, Minfei Huang wrote: > > > I found that kaslr_enabled() only exists for x86. Maybe you can define a > > weak function klp_adjustment_function_addr in general; then each arch > > can override the function to implement it specially. > > It might start to make sense once there is at least one additional arch > that supports kaslr. Currently, I don't see a benefit. > > Why are you so obstinate about this? I personally don't find that > important at all; it's something that can always be sorted out once more > archs start supporting kaslr. > ohhh... Previously, IMO, putting the relevant function address adjustment into the arch-specific code seemed clearer to review and understand. Now that I know what you actually want from the commit message above, I am fine with it. Thanks Minfei > Thanks, > > -- > Jiri Kosina > SUSE Labs > -- > To unsubscribe from this list: send the line "unsubscribe live-patching" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate 2015-04-27 14:28 ` [PATCH 2/2] livepatch: x86: make kASLR logic more accurate Jiri Kosina 2015-04-27 14:41 ` Minfei Huang @ 2015-04-28 12:09 ` Josh Poimboeuf 2015-04-29 14:56 ` Jiri Kosina 1 sibling, 1 reply; 14+ messages in thread From: Josh Poimboeuf @ 2015-04-28 12:09 UTC (permalink / raw) To: Jiri Kosina Cc: x86, Borislav Petkov, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Mon, Apr 27, 2015 at 04:28:58PM +0200, Jiri Kosina wrote: > We give up old_addr hint from the coming patch module in cases when kernel > load base has been randomized (as in such case, the coming module has no > idea about the exact randomization offset). > > We are currently too pessimistic, and give up immediately as soon as > CONFIG_RANDOMIZE_BASE is set; this doesn't however directly imply that the > load base has actually been randomized. There are config options that > disable kASLR (such as hibernation), user could have disabled kaslr on > kernel command-line, etc. > > The loader propagates the information whether kernel has been randomized > through bootparams. This allows us to make the condition more accurate. > > On top of that, it seems unnecessary to give up old_addr hints even if > randomization is active. The relocation offset can be computed using > kaslr_offset(), and therefore old_addr can be adjusted accordingly. 
> > Signed-off-by: Jiri Kosina <jkosina@suse.cz> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> > --- > arch/x86/include/asm/livepatch.h | 1 + > kernel/livepatch/core.c | 5 +++-- > 2 files changed, 4 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h > index 2d29197..19c099a 100644 > --- a/arch/x86/include/asm/livepatch.h > +++ b/arch/x86/include/asm/livepatch.h > @@ -21,6 +21,7 @@ > #ifndef _ASM_X86_LIVEPATCH_H > #define _ASM_X86_LIVEPATCH_H > > +#include <asm/setup.h> > #include <linux/module.h> > #include <linux/ftrace.h> > > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c > index 284e269..0e7c23c 100644 > --- a/kernel/livepatch/core.c > +++ b/kernel/livepatch/core.c > @@ -234,8 +234,9 @@ static int klp_find_verify_func_addr(struct klp_object *obj, > int ret; > > #if defined(CONFIG_RANDOMIZE_BASE) > - /* KASLR is enabled, disregard old_addr from user */ > - func->old_addr = 0; > + /* If KASLR has been enabled, adjust old_addr accordingly */ > + if (kaslr_enabled() && func->old_addr) > + func->old_addr += kaslr_offset(); > #endif > > if (!func->old_addr || klp_is_module(obj)) > -- > Jiri Kosina > SUSE Labs -- Josh ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate 2015-04-28 12:09 ` Josh Poimboeuf @ 2015-04-29 14:56 ` Jiri Kosina 0 siblings, 0 replies; 14+ messages in thread From: Jiri Kosina @ 2015-04-29 14:56 UTC (permalink / raw) To: Josh Poimboeuf Cc: x86, Borislav Petkov, Kees Cook, Seth Jennings, Vojtech Pavlik, linux-kernel, live-patching On Tue, 28 Apr 2015, Josh Poimboeuf wrote: > On Mon, Apr 27, 2015 at 04:28:58PM +0200, Jiri Kosina wrote: > > We give up old_addr hint from the coming patch module in cases when kernel > > load base has been randomized (as in such case, the coming module has no > > idea about the exact randomization offset). > > > > We are currently too pessimistic, and give up immediately as soon as > > CONFIG_RANDOMIZE_BASE is set; this doesn't however directly imply that the > > load base has actually been randomized. There are config options that > > disable kASLR (such as hibernation), user could have disabled kaslr on > > kernel command-line, etc. > > > > The loader propagates the information whether kernel has been randomized > > through bootparams. This allows us to have the condition more accurate. > > > > On top of that, it seems unnecessary to give up old_addr hints even if > > randomization is active. The relocation offset can be computed using > > kaslr_ofsset(), and therefore old_addr can be adjusted accordingly. > > > > Signed-off-by: Jiri Kosina <jkosina@suse.cz> > > Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Applied to for-4.2/kaslr. Thanks, -- Jiri Kosina SUSE Labs ^ permalink raw reply [flat|nested] 14+ messages in thread