From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen, Andy Lutomirski, Thomas Gleixner, "H. Peter Anvin",
	Peter Zijlstra, Borislav Petkov, Linus Torvalds
Subject: [PATCH 24/43] x86/mm/kaiser: Mark per-cpu data structures required for entry/exit
Date: Fri, 24 Nov 2017 10:14:29 +0100
Message-Id: <20171124091448.7649-25-mingo@kernel.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171124091448.7649-1-mingo@kernel.org>
References: <20171124091448.7649-1-mingo@kernel.org>
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

From: Dave Hansen

These patches are based on work from a team at Graz University of
Technology posted here: https://github.com/IAIK/KAISER

The KAISER approach keeps two copies of the page tables: one for running
in the kernel and one for running in userspace. But there are a few
structures that are needed for switching in and out of the kernel, and a
good subset of *those* are per-cpu data.

Here's a short summary of the things mapped to userspace:

 * The gdt_page's virtual address is what the LGDT instruction loads.
   It is needed to define the segments and is deeply required by the
   CPU to run.

 * cpu_tss tells the CPU, among other things, where the new stacks are
   after user<->kernel transitions. It is needed by the CPU to make
   ring transitions.

 * exception_stacks are needed at interrupt and exception entry so that
   there is storage for, among other things, temporary space that
   permits clobbering a register in order to load the kernel CR3.
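
All three are switched over to new *_USER_MAPPED per-cpu variants in the
diff below. As a rough, illustrative sketch (not copied from this series),
those variants are assumed to differ from the stock per-cpu macros only in
the linker subsection they select, so the entry/exit data ends up in one
contiguous region that can later be mapped into the user page tables; the
"..user_mapped" section name and the exact macro bodies are assumptions:

/*
 * Sketch only: route the variable into an assumed "..user_mapped"
 * per-cpu subsection, built on the existing DECLARE/DEFINE_PER_CPU_SECTION
 * helpers, so that everything needed at entry/exit can be located and
 * mapped into the user copy of the page tables in one go.
 */
#define USER_MAPPED_SECTION "..user_mapped"

#define DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name)		\
	DECLARE_PER_CPU_SECTION(type, name,				\
				USER_MAPPED_SECTION"..page_aligned")	\
	__aligned(PAGE_SIZE)

#define DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name)		\
	DEFINE_PER_CPU_SECTION(type, name,				\
				USER_MAPPED_SECTION"..page_aligned")	\
	__aligned(PAGE_SIZE)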

Signed-off-by: Dave Hansen
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Daniel Gruss
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Hugh Dickins
Cc: Josh Poimboeuf
Cc: Kees Cook
Cc: Linus Torvalds
Cc: Michael Schwarz
Cc: Moritz Lipp
Cc: Peter Zijlstra
Cc: Richard Fellner
Cc: Thomas Gleixner
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20171123003445.DF9EA351@viggo.jf.intel.com
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/desc.h      | 2 +-
 arch/x86/include/asm/processor.h | 2 +-
 arch/x86/kernel/cpu/common.c     | 4 ++--
 arch/x86/kernel/process.c        | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index aab4fe9f49f8..300090d1c209 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -46,7 +46,7 @@ struct gdt_page {
 	struct desc_struct gdt[GDT_ENTRIES];
 } __attribute__((aligned(PAGE_SIZE)));
 
-DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);
+DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page);
 
 /* Provide the original GDT */
 static inline struct desc_struct *get_cpu_gdt_rw(unsigned int cpu)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 54f3ee3bc8a0..83dd7c97ba5d 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -359,7 +359,7 @@ struct tss_struct {
 	unsigned long		io_bitmap[IO_BITMAP_LONGS + 1];
 } __aligned(PAGE_SIZE);
 
-DECLARE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss);
+DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss);
 
 /*
  * sizeof(unsigned long) coming from an extra "long" at the end
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index f9c7e6852874..3b6920c9fef7 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -98,7 +98,7 @@ static const struct cpu_dev default_cpu = {
 
 static const struct cpu_dev *this_cpu = &default_cpu;
 
-DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page) = { .gdt = {
 #ifdef CONFIG_X86_64
 	/*
 	 * We need valid kernel segments for data and code in long mode too
@@ -515,7 +515,7 @@ static const unsigned int exception_stack_sizes[N_EXCEPTION_STACKS] = {
 	  [DEBUG_STACK - 1]			= DEBUG_STKSZ
 };
 
-static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(char, exception_stacks
 	[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
 #endif
 
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 6a04287f222b..9365b4f965e0 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -47,7 +47,7 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
+__visible DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss) = {
 	.x86_tss = {
 		/*
 		 * .sp0 is only used when entering ring 0 from a lower
-- 
2.14.1
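
For context only, not part of this patch: one way the user-mapped per-cpu
subsection could then be wired into the user page tables at boot. The
linker symbols (__per_cpu_user_mapped_start/_end), the helper name, and
the kaiser_add_mapping() signature below are assumptions about the wider
KAISER series, not taken from this mail:

/*
 * Hypothetical illustration: for each possible CPU, mirror that CPU's
 * copy of the user-mapped per-cpu subsection into the user page tables,
 * so the entry/exit data above is reachable before CR3 is switched.
 */
extern char __per_cpu_user_mapped_start[], __per_cpu_user_mapped_end[];

static void __init kaiser_map_user_percpu(void)
{
	unsigned long size = __per_cpu_user_mapped_end -
			     __per_cpu_user_mapped_start;
	int cpu;

	for_each_possible_cpu(cpu) {
		unsigned long start =
			(unsigned long)__per_cpu_user_mapped_start +
			per_cpu_offset(cpu);

		/* Assumed helper from the series: add a kernel-RW mapping
		 * to the user copy of the page tables. */
		kaiser_add_mapping(start, size, __PAGE_KERNEL);
	}
}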