From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753500AbcDZW53 (ORCPT );
	Tue, 26 Apr 2016 18:57:29 -0400
Received: from mail-bl2on0092.outbound.protection.outlook.com
	([65.55.169.92]:63516 "EHLO na01-bl2-obe.outbound.protection.outlook.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1753381AbcDZW5O (ORCPT );
	Tue, 26 Apr 2016 18:57:14 -0400
From: Tom Lendacky
Subject: [RFC PATCH v1 06/18] x86: Provide general kernel support for memory
 encryption
To: , , , , , , , ,
CC: Radim Krčmář , Arnd Bergmann , Jonathan Corbet ,
 Matt Fleming , Joerg Roedel , "Konrad Rzeszutek Wilk" ,
 Paolo Bonzini , "Ingo Molnar" , Borislav Petkov ,
 "H. Peter Anvin" , Andrey Ryabinin , "Alexander Potapenko" ,
 Thomas Gleixner , "Dmitry Vyukov"
Date: Tue, 26 Apr 2016 17:56:57 -0500
Message-ID: <20160426225657.13567.43518.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20160426225553.13567.19459.stgit@tlendack-t1.amdoffice.net>
References: <20160426225553.13567.19459.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-OriginatorOrg: amd.com
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Adding general kernel support for memory encryption includes:
- Modify and
  create some page table macros to include the Secure Memory Encryption
  (SME) memory encryption mask
- Update kernel boot support to call an SME routine that checks for and
  sets the SME capability (the SME routine will grow later and for now
  is just a stub routine)
- Update kernel boot support to call an SME routine that encrypts the
  kernel (the SME routine will grow later and for now is just a stub
  routine)
- Provide an SME initialization routine to update the protection map
  with the memory encryption mask so that it is used by default

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/fixmap.h        |    7 ++++++
 arch/x86/include/asm/mem_encrypt.h   |   18 +++++++++++++++
 arch/x86/include/asm/pgtable_types.h |   41 ++++++++++++++++++++++-----------
 arch/x86/include/asm/processor.h     |    3 ++
 arch/x86/kernel/espfix_64.c          |    2 +-
 arch/x86/kernel/head64.c             |   10 ++++++--
 arch/x86/kernel/head_64.S            |   42 ++++++++++++++++++++++++++--------
 arch/x86/kernel/machine_kexec_64.c   |    2 +-
 arch/x86/kernel/mem_encrypt.S        |    8 ++++++
 arch/x86/mm/Makefile                 |    1 +
 arch/x86/mm/fault.c                  |    5 ++--
 arch/x86/mm/ioremap.c                |    3 ++
 arch/x86/mm/kasan_init_64.c          |    4 ++-
 arch/x86/mm/mem_encrypt.c            |   30 ++++++++++++++++++++++++
 arch/x86/mm/pageattr.c               |    3 ++
 15 files changed, 145 insertions(+), 34 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt.c

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 8554f96..83e91f0 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -153,6 +153,13 @@ static inline void __set_fixmap(enum fixed_addresses idx,
 }
 #endif
 
+/*
+ * Fixmap settings used with memory encryption
+ *   - FIXMAP_PAGE_NOCACHE is used for MMIO so make sure the memory
+ *     encryption mask is not part of the page attributes
+ */
+#define FIXMAP_PAGE_NOCACHE	PAGE_KERNEL_IO_NOCACHE
+
 #include 
 
 #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 747fc52..9f3e762 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,12 +15,21 @@
 
 #ifndef __ASSEMBLY__
 
+#include 
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
 
 u8 sme_get_me_loss(void);
 
+void __init sme_early_init(void);
+
+#define __sme_pa(x)		(__pa((x)) | sme_me_mask)
+#define __sme_pa_nodebug(x)	(__pa_nodebug((x)) | sme_me_mask)
+
+#define __sme_va(x)		(__va((x) & ~sme_me_mask))
+
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask	0UL
@@ -30,6 +39,15 @@ static inline u8 sme_get_me_loss(void)
 	return 0;
 }
 
+static inline void __init sme_early_init(void)
+{
+}
+
+#define __sme_pa		__pa
+#define __sme_pa_nodebug	__pa_nodebug
+
+#define __sme_va		__va
+
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
 #endif	/* __ASSEMBLY__ */

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 7b5efe2..fda7877 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -3,6 +3,7 @@
 
 #include 
 #include 
+#include 
 
 #define FIRST_USER_ADDRESS	0UL
@@ -115,9 +116,9 @@
 #define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
 
-#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+#define __PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
 			 _PAGE_ACCESSED | _PAGE_DIRTY)
-#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+#define __KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
 			 _PAGE_DIRTY)
 
 /*
@@ -185,18 +186,30 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
 
-#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
-#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
-#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
-#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-
-#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
-#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+#ifndef __ASSEMBLY__
+
+#define _PAGE_ENC	sme_me_mask
+
+/* Redefine macros to include the memory encryption mask */
+#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_ENC)
+#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+			 _PAGE_DIRTY | _PAGE_ENC)
+
+#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
+#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC)
+#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+
+#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
+#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+
+#endif	/* __ASSEMBLY__ */
 
 /*		 xwr */
 #define __P000	PAGE_NONE

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 8d326e8..1fae737 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -22,6 +22,7 @@ struct vm86;
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -207,7 +208,7 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
 
 static inline void load_cr3(pgd_t *pgdir)
 {
-	write_cr3(__pa(pgdir));
+	write_cr3(__sme_pa(pgdir));
 }
 
 #ifdef CONFIG_X86_32

diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 4d38416..3385377 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -193,7 +193,7 @@ void init_espfix_ap(int cpu)
 
 	pte_p = pte_offset_kernel(&pmd, addr);
 	stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0));
-	pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
+	pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask));
 	for (n = 0; n < ESPFIX_PTE_CLONES; n++)
 		set_pte(&pte_p[n*PTE_STRIDE], pte);

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index b72fb0b..3516f9f 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * Manage page tables very early on.
@@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void)
 {
 	memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
 	next_early_pgt = 0;
-	write_cr3(__pa_nodebug(early_level4_pgt));
+	write_cr3(__sme_pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -157,6 +158,11 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 
 	clear_page(init_level4_pgt);
 
+	/* Update the early_pmd_flags with the memory encryption mask */
+	early_pmd_flags |= _PAGE_ENC;
+
+	sme_early_init();
+
 	kasan_early_init();
 
 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 5df831e..0f3ad72 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -95,6 +95,13 @@ startup_64:
	jnz	bad_address

	/*
+	 * Enable memory encryption (if available). Add the memory encryption
+	 * mask to %rbp to include it in the page table fixup.
+	 */
+	call	sme_enable
+	addq	sme_me_mask(%rip), %rbp
+
+	/*
	 * Fixup the physical addresses in the page table
	 */
	addq	%rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip)
@@ -116,7 +123,8 @@ startup_64:
	movq	%rdi, %rax
	shrq	$PGDIR_SHIFT, %rax

-	leaq	(4096 + _KERNPG_TABLE)(%rbx), %rdx
+	leaq	(4096 + __KERNPG_TABLE)(%rbx), %rdx
+	addq	sme_me_mask(%rip), %rdx		/* Apply mem encryption mask */
	movq	%rdx, 0(%rbx,%rax,8)
	movq	%rdx, 8(%rbx,%rax,8)
@@ -133,6 +141,7 @@ startup_64:
	movq	%rdi, %rax
	shrq	$PMD_SHIFT, %rdi
	addq	$(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax
+	addq	sme_me_mask(%rip), %rax		/* Apply mem encryption mask */
	leaq	(_end - 1)(%rip), %rcx
	shrq	$PMD_SHIFT, %rcx
	subq	%rdi, %rcx
@@ -163,9 +172,19 @@ startup_64:
	cmp	%r8, %rdi
	jne	1b

-	/* Fixup phys_base */
+	/*
+	 * Fixup phys_base, remove the memory encryption mask from %rbp
+	 * to obtain the true physical address.
+	 */
+	subq	sme_me_mask(%rip), %rbp
	addq	%rbp, phys_base(%rip)

+	/*
+	 * The page tables have been updated with the memory encryption mask,
+	 * so encrypt the kernel if memory encryption is active
+	 */
+	call	sme_encrypt_kernel
+
	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
	jmp 1f
ENTRY(secondary_startup_64)
@@ -189,6 +208,9 @@ ENTRY(secondary_startup_64)
	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
1:

+	/* Add the memory encryption mask to RAX */
+	addq	sme_me_mask(%rip), %rax
+
	/* Enable PAE mode and PGE */
	movl	$(X86_CR4_PAE | X86_CR4_PGE), %ecx
	movq	%rcx, %cr4
@@ -416,7 +438,7 @@ GLOBAL(name)
	__INITDATA
NEXT_PAGE(early_level4_pgt)
	.fill	511,8,0
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE

NEXT_PAGE(early_dynamic_pgts)
	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -428,15 +450,15 @@ NEXT_PAGE(init_level4_pgt)
	.fill	512,8,0
#else
NEXT_PAGE(init_level4_pgt)
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
	.org	init_level4_pgt + L4_PAGE_OFFSET*8, 0
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
	.org	init_level4_pgt + L4_START_KERNEL*8, 0
	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE

NEXT_PAGE(level3_ident_pgt)
-	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level2_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
	.fill	511, 8, 0
NEXT_PAGE(level2_ident_pgt)
	/* Since I easily can, map the first 1G.
@@ -448,8 +470,8 @@ NEXT_PAGE(level2_ident_pgt)
NEXT_PAGE(level3_kernel_pgt)
	.fill	L3_START_KERNEL,8,0
	/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
-	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level2_kernel_pgt - __START_KERNEL_map + __KERNPG_TABLE
+	.quad	level2_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE

NEXT_PAGE(level2_kernel_pgt)
	/*
@@ -467,7 +489,7 @@ NEXT_PAGE(level2_kernel_pgt)

NEXT_PAGE(level2_fixmap_pgt)
	.fill	506,8,0
-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level1_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE
	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
	.fill	5,8,0

diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index ba7fbba..eb2faee 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -103,7 +103,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
	struct x86_mapping_info info = {
		.alloc_pgt_page	= alloc_pgt_page,
		.context	= image,
-		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
+		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC,
	};
	unsigned long mstart, mend;
	pgd_t *level4p;

diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index ef7f325..f2e0536 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -14,6 +14,14 @@
	.text
	.code64

+ENTRY(sme_enable)
+	ret
+ENDPROC(sme_enable)
+
+ENTRY(sme_encrypt_kernel)
+	ret
+ENDPROC(sme_encrypt_kernel)
+
ENTRY(sme_get_me_loss)
	xor	%rax, %rax
	mov	sme_me_loss(%rip), %al

diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index f989132..21f8ea3 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -39,3 +39,4 @@ obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
obj-$(CONFIG_X86_INTEL_MPX)	+= mpx.o
obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
+obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 5ce1ed0..9e93a0d 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -23,6 +23,7 @@
#include 		/* emulate_vsyscall	*/
#include 		/* struct vm86		*/
#include 		/* vma_pkey()		*/
+#include 		/* __sme_va()		*/

#define CREATE_TRACE_POINTS
#include 
@@ -523,7 +524,7 @@ static int bad_address(void *p)

static void dump_pagetable(unsigned long address)
{
-	pgd_t *base = __va(read_cr3() & PHYSICAL_PAGE_MASK);
+	pgd_t *base = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK);
	pgd_t *pgd = base + pgd_index(address);
	pud_t *pud;
	pmd_t *pmd;
@@ -659,7 +660,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code,
	pgd_t *pgd;
	pte_t *pte;

-	pgd = __va(read_cr3() & PHYSICAL_PAGE_MASK);
+	pgd = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK);
	pgd += pgd_index(address);

	pte = lookup_address_in_pgd(pgd, address, &level);

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index f089491..77dadf5 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -21,6 +21,7 @@
#include 
#include 
#include 
+#include 

#include "physaddr.h"

@@ -424,7 +425,7 @@ static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss;
static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
{
	/* Don't assume we're using swapper_pg_dir at this point */
-	pgd_t *base = __va(read_cr3());
+	pgd_t *base = __sme_va(read_cr3());
	pgd_t *pgd = &base[pgd_index(addr)];
	pud_t *pud = pud_offset(pgd, addr);
	pmd_t *pmd = pmd_offset(pud, addr);

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 1b1110f..7102408 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -68,7 +68,7 @@ static struct notifier_block kasan_die_notifier = {
void __init kasan_early_init(void)
{
	int i;
-	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC;
	pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
	pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
@@ -130,7 +130,7 @@ void __init kasan_init(void)
	 */
	memset(kasan_zero_page, 0, PAGE_SIZE);
	for (i = 0; i < PTRS_PER_PTE; i++) {
-		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
+		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC);
		set_pte(&kasan_zero_pte[i], pte);
	}
	/* Flush TLBs again to be sure that write protection applied. */

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
new file mode 100644
index 0000000..00eb705
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt.c
@@ -0,0 +1,30 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+
+#include 
+
+void __init sme_early_init(void)
+{
+	unsigned int i;
+
+	if (!sme_me_mask)
+		return;
+
+	__supported_pte_mask |= sme_me_mask;
+
+	/* Update the protection map with memory encryption mask */
+	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
+		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+}

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index bbf462f..c055302 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1976,6 +1976,9 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
	if (!(page_flags & _PAGE_RW))
		cpa.mask_clr = __pgprot(_PAGE_RW);

+	if (!(page_flags & _PAGE_ENC))
+		cpa.mask_clr = __pgprot(pgprot_val(cpa.mask_clr) | _PAGE_ENC);
+
	cpa.mask_set = __pgprot(_PAGE_PRESENT | page_flags);

	retval = __change_page_attr_set_clr(&cpa, 0);