linux-kernel.vger.kernel.org archive mirror
* [RFC] x86/vdso: Align vdso after searching for free area
@ 2018-06-12 20:49 Dmitry Safonov
From: Dmitry Safonov @ 2018-06-12 20:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Safonov, Andy Lutomirski, Borislav Petkov, Dmitry Safonov,
	H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov, Thomas Gleixner,
	Vasiliy Khoruzhick, x86

There is an erratum for AMD family 15h CPUs [1], and since
commit dfb09f9b7ab03 ("x86, amd: Avoid cache aliasing penalties on AMD
family 15h") bits [14:12] are cleared for shared libraries.
Per-boot ASLR is also applied to the upper bits by ORing a random
value directly into the address.

Since we need special alignment and specific lower-bit values, it makes
little sense to call get_unmapped_area() after the address has already
been calculated. It can also lead to random crashes if
get_unmapped_area() actually changes/aligns the address, which we
observed on a 15h CPU. Usually this is not a problem, as there are not
many mappings (except possibly ld.so and uprobes?) and the resulting
address is the same before and after get_unmapped_area().

Move align_vdso_addr() after get_unmapped_area() to make sure that the
erratum workaround for AMD 15h is always applied.

[1]: https://developer.amd.com/wordpress/media/2012/10/SharedL1InstructionCacheonAMD15hCPU.pdf
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasiliy Khoruzhick <vasilykh@arista.com>
Cc: x86@kernel.org
Signed-off-by: Dmitry Safonov <dima@arista.com>
---
 arch/x86/entry/vdso/vma.c    | 18 ++++++++++--------
 arch/x86/include/asm/elf.h   |  1 +
 arch/x86/kernel/sys_x86_64.c |  2 +-
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 5b8b556dbb12..862f0cce3ee6 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -154,18 +154,26 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long text_start;
+	unsigned long area_size = image->size - image->sym_vvar_start;
 	int ret = 0;
 
 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;
 
-	addr = get_unmapped_area(NULL, addr,
-				 image->size - image->sym_vvar_start, 0, 0);
+	/* Find a bigger area for the vma than needed - to align the vdso later. */
+	area_size += get_align_mask();
+	addr = get_unmapped_area(NULL, addr, area_size, 0, 0);
 	if (IS_ERR_VALUE(addr)) {
 		ret = addr;
 		goto up_fail;
 	}
 
+	/*
+	 * Forcibly align the final address in case we have a hardware
+	 * issue that requires alignment for performance reasons.
+	 */
+	addr = align_vdso_addr(addr);
+
 	text_start = addr - image->sym_vvar_start;
 
 	/*
@@ -239,12 +247,6 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
 		addr = start;
 	}
 
-	/*
-	 * Forcibly align the final address in case we have a hardware
-	 * issue that requires alignment for performance reasons.
-	 */
-	addr = align_vdso_addr(addr);
-
 	return addr;
 }
 
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 0d157d2a1e2a..88aa49294c9c 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -382,4 +382,5 @@ struct va_alignment {
 
 extern struct va_alignment va_align;
 extern unsigned long align_vdso_addr(unsigned long);
+extern unsigned long get_align_mask(void);
 #endif /* _ASM_X86_ELF_H */
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 6a78d4b36a79..4dd74c6f236d 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -27,7 +27,7 @@
 /*
  * Align a virtual address to avoid aliasing in the I$ on AMD F15h.
  */
-static unsigned long get_align_mask(void)
+unsigned long get_align_mask(void)
 {
 	/* handle 32- and 64-bit case with a single conditional */
 	if (va_align.flags < 0 || !(va_align.flags & (2 - mmap_is_ia32())))
-- 
2.13.6



* Re: [RFC] x86/vdso: Align vdso after searching for free area
From: Dmitry Safonov @ 2018-06-12 21:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andy Lutomirski, Borislav Petkov, Dmitry Safonov, H. Peter Anvin,
	Ingo Molnar, Kirill A. Shutemov, Thomas Gleixner,
	Vasiliy Khoruzhick, x86

On Tue, 2018-06-12 at 21:49 +0100, Dmitry Safonov wrote:
> There is an erratum for AMD family 15h CPUs [1], and since
> commit dfb09f9b7ab03 ("x86, amd: Avoid cache aliasing penalties on
> AMD family 15h") bits [14:12] are cleared for shared libraries.
> Per-boot ASLR is also applied to the upper bits by ORing a random
> value directly into the address.
> 
> Since we need special alignment and specific lower-bit values, it
> makes little sense to call get_unmapped_area() after the address has
> already been calculated. It can also lead to random crashes if
> get_unmapped_area() actually changes/aligns the address, which we
> observed on a 15h CPU. Usually this is not a problem, as there are
> not many mappings (except possibly ld.so and uprobes?) and the
> resulting address is the same before and after get_unmapped_area().
> 
> Move align_vdso_addr() after get_unmapped_area() to make sure that
> the erratum workaround for AMD 15h is always applied.

An alternative, dirty-hack idea:
pass some (struct file *) to get_unmapped_area() for the vdso vma, so
the mapping would be aligned automatically. Dirty as hell, since it
relies on get_unmapped_area() implementation details.

-- 
            Dima


* Re: [RFC] x86/vdso: Align vdso after searching for free area
From: H. Peter Anvin @ 2018-06-12 21:39 UTC (permalink / raw)
  To: Dmitry Safonov, linux-kernel
  Cc: Andy Lutomirski, Borislav Petkov, Dmitry Safonov, Ingo Molnar,
	Kirill A. Shutemov, Thomas Gleixner, Vasiliy Khoruzhick, x86

On 06/12/18 14:24, Dmitry Safonov wrote:
>>
>> Move align_vdso_addr() after get_unmapped_area() to make sure that
>> the erratum workaround for AMD 15h is always applied.
> 
> An alternative, dirty-hack idea:
> pass some (struct file *) to get_unmapped_area() for the vdso vma, so
> the mapping would be aligned automatically. Dirty as hell, since it
> relies on get_unmapped_area() implementation details.
> 
> 


I have mentioned several times that I would like to see the vdso be an
actual file in a filesystem, which the kernel *or* user space can map
(it needs to be MAP_SHARED, of course.)

The vdso data page needs to be moved after the ELF object itself for
this to work. Ideally it should be given an actual ELF segment (and
ideally an ELF section as well.)  The easy way to do this is to give the
linker a dummy vvar page as a properly aligned section at compile time,
into which the kernel can map the real vvar page. The only downside is
that the linker likes to put the section headers after the actual data,
so it may end up taking an extra page over the current arrangement.
However, I think the gains outweigh the losses.

	-hpa


* Re: [RFC] x86/vdso: Align vdso after searching for free area
From: Dmitry Safonov @ 2018-06-12 21:48 UTC (permalink / raw)
  To: H. Peter Anvin, linux-kernel
  Cc: Andy Lutomirski, Borislav Petkov, Dmitry Safonov, Ingo Molnar,
	Kirill A. Shutemov, Thomas Gleixner, Vasiliy Khoruzhick, x86

On Tue, 2018-06-12 at 14:39 -0700, H. Peter Anvin wrote:
> On 06/12/18 14:24, Dmitry Safonov wrote:
> > > 
> > > Move align_vdso_addr() after get_unmapped_area() to make sure
> > > that the erratum workaround for AMD 15h is always applied.
> > 
> > An alternative, dirty-hack idea:
> > pass some (struct file *) to get_unmapped_area() for the vdso vma,
> > so the mapping would be aligned automatically. Dirty as hell, since
> > it relies on get_unmapped_area() implementation details.
> > 
> 
> 
> I have mentioned several times that I would like to see the vdso
> actually be an actual file in a filesystem, that the kernel *or* user
> space can map (needs to be MAP_SHARED, of course.)

Yeah, I remember I did that previously:
https://lwn.net/Articles/698854/

But I haven't had time to address the review feedback.
It's probably worth resurrecting those patches and dusting them off.

> The vdso data page needs to be moved after the ELF object itself for
> this to work. Ideally it should be given an actual ELF segment (and
> ideally an ELF section as well.)  The easy way to do this is to give
> the linker a dummy vvar page as a properly aligned section at compile
> time, into which the kernel can map the real vvar page. The only
> downside is that the linker likes to put the section headers after
> the actual data, so it may end up taking an extra page over the
> current arrangement. However, I think the gains outweigh the losses.

-- 
Thanks,
             Dmitry


