From: Nadav Amit
To: Ingo Molnar
Cc: "H. Peter Anvin", Thomas Gleixner, Borislav Petkov, Dave Hansen,
    Andy Lutomirski, Kees Cook, Peter Zijlstra, Dave Hansen, Nadav Amit
Subject: [PATCH v3 3/7] x86/mm: temporary mm struct
Date: Fri, 2 Nov 2018 16:29:42 -0700
Message-ID: <20181102232946.98461-4-namit@vmware.com>
In-Reply-To: <20181102232946.98461-1-namit@vmware.com>
References: <20181102232946.98461-1-namit@vmware.com>

From: Andy Lutomirski

Sometimes we want to set temporary page-table entries (PTEs) on one of
the cores, without allowing other cores to use these mappings - even
speculatively. There are two benefits of doing so:

(1) Security: if sensitive PTEs are set, a temporary mm prevents their
    use by other cores. This hardens security, as it prevents
    exploiting a dangling pointer to overwrite sensitive data using the
    sensitive PTE.

(2) Avoiding TLB shootdowns: the PTEs do not need to be flushed from
    remote page-tables.

To do so, a temporary mm_struct can be used. Mappings that are private
to this mm are set in the userspace part of the address-space. During
the whole time in which the temporary mm is loaded, interrupts must be
disabled.

The first use-case for temporary PTEs, which will follow, is poking the
kernel text.
[ Commit message was written by Nadav ]

Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Reviewed-by: Masami Hiramatsu
Tested-by: Masami Hiramatsu
Signed-off-by: Andy Lutomirski
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/mmu_context.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 0ca50611e8ce..7cc8e5c50bf6 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -338,4 +338,24 @@ static inline unsigned long __get_current_cr3_fast(void)
 	return cr3;
 }
 
+typedef struct {
+	struct mm_struct *prev;
+} temporary_mm_state_t;
+
+static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temporary_mm_state_t state;
+
+	lockdep_assert_irqs_disabled();
+	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+	return state;
+}
+
+static inline void unuse_temporary_mm(temporary_mm_state_t prev)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev.prev, current);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */
-- 
2.17.1
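
[ Editor's note: for illustration only, here is a minimal sketch of how a
caller might use the helpers added above. It is not part of the patch;
the names patching_mm and patching_addr are hypothetical and assume an
mm_struct in which the target page was mapped writable in the userspace
part of the address-space. The real kernel-text poking user follows in a
later patch of this series. ]

#include <linux/irqflags.h>
#include <linux/string.h>
#include <asm/mmu_context.h>

static void write_through_temporary_mm(struct mm_struct *patching_mm,
				       unsigned long patching_addr,
				       const void *src, size_t len)
{
	temporary_mm_state_t prev;
	unsigned long flags;

	/* The temporary mm may only be loaded with interrupts disabled. */
	local_irq_save(flags);

	/* Switch this CPU - and only this CPU - to the temporary mm. */
	prev = use_temporary_mm(patching_mm);

	/* The private mapping is usable here, but not on other cores. */
	memcpy((void *)patching_addr, src, len);

	/* Restore the previously loaded mm. */
	unuse_temporary_mm(prev);

	local_irq_restore(flags);
}

Because only the current CPU's page tables are switched, the temporary
mapping never has to be propagated to, or flushed from, other cores,
which is exactly the TLB-shootdown benefit described above.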