From mboxrd@z Thu Jan  1 00:00:00 1970
From: Tom Lendacky
Subject: [RFC PATCH v1 06/18] x86: Provide general kernel support for memory encryption
CC: Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming,
 Joerg Roedel, Konrad Rzeszutek Wilk, Paolo Bonzini, Ingo Molnar,
 Borislav Petkov, "H. Peter Anvin", Andrey Ryabinin, Alexander Potapenko,
 Thomas Gleixner, Dmitry Vyukov
Date: Tue, 26 Apr 2016 17:46:14 -0500
Message-ID: <20160426224614.13079.60665.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20160426224508.13079.90373.stgit@tlendack-t1.amdoffice.net>
References: <20160426224508.13079.90373.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-OriginatorOrg: amd.com
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Adding general kernel support for memory encryption includes:

- Modify and
  create some page table macros to include the Secure Memory
  Encryption (SME) memory encryption mask
- Update kernel boot support to call an SME routine that checks for and
  sets the SME capability (the SME routine will grow later and for now
  is just a stub routine)
- Update kernel boot support to call an SME routine that encrypts the
  kernel (the SME routine will grow later and for now is just a stub
  routine)
- Provide an SME initialization routine to update the protection map
  with the memory encryption mask so that it is used by default

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/fixmap.h        |    7 ++++++
 arch/x86/include/asm/mem_encrypt.h   |   18 +++++++++++++++
 arch/x86/include/asm/pgtable_types.h |   41 ++++++++++++++++++++++-----------
 arch/x86/include/asm/processor.h     |    3 ++
 arch/x86/kernel/espfix_64.c          |    2 +-
 arch/x86/kernel/head64.c             |   10 ++++++--
 arch/x86/kernel/head_64.S            |   42 ++++++++++++++++++++++++++--------
 arch/x86/kernel/machine_kexec_64.c   |    2 +-
 arch/x86/kernel/mem_encrypt.S        |    8 ++++++
 arch/x86/mm/Makefile                 |    1 +
 arch/x86/mm/fault.c                  |    5 ++--
 arch/x86/mm/ioremap.c                |    3 ++
 arch/x86/mm/kasan_init_64.c          |    4 ++-
 arch/x86/mm/mem_encrypt.c            |   30 ++++++++++++++++++++++++
 arch/x86/mm/pageattr.c               |    3 ++
 15 files changed, 145 insertions(+), 34 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt.c

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 8554f96..83e91f0 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -153,6 +153,13 @@ static inline void __set_fixmap(enum fixed_addresses idx,
 }
 #endif
 
+/*
+ * Fixmap settings used with memory encryption
+ *   - FIXMAP_PAGE_NOCACHE is used for MMIO so make sure the memory
+ *     encryption mask is not part of the page attributes
+ */
+#define FIXMAP_PAGE_NOCACHE	PAGE_KERNEL_IO_NOCACHE
+
 #include
 
 #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 747fc52..9f3e762 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,12 +15,21 @@
 
 #ifndef __ASSEMBLY__
 
+#include
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
 
 u8 sme_get_me_loss(void);
 
+void __init sme_early_init(void);
+
+#define __sme_pa(x)		(__pa((x)) | sme_me_mask)
+#define __sme_pa_nodebug(x)	(__pa_nodebug((x)) | sme_me_mask)
+
+#define __sme_va(x)		(__va((x) & ~sme_me_mask))
+
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask		0UL
 
@@ -30,6 +39,15 @@ static inline u8 sme_get_me_loss(void)
 	return 0;
 }
 
+static inline void __init sme_early_init(void)
+{
+}
+
+#define __sme_pa		__pa
+#define __sme_pa_nodebug	__pa_nodebug
+
+#define __sme_va		__va
+
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
 #endif	/* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 7b5efe2..fda7877 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -3,6 +3,7 @@
 
 #include
 #include
+#include
 
 #define FIRST_USER_ADDRESS	0UL
 
@@ -115,9 +116,9 @@
 
 #define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
 
-#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+#define __PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
 			 _PAGE_ACCESSED | _PAGE_DIRTY)
-#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+#define __KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
 			 _PAGE_DIRTY)
 
 /*
@@ -185,18 +186,30 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
 
-#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
-#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
-#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
-#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-
-#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
-#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+#ifndef __ASSEMBLY__
+
+#define _PAGE_ENC	sme_me_mask
+
+/* Redefine macros to include the memory encryption mask */
+#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_ENC)
+#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+			 _PAGE_DIRTY | _PAGE_ENC)
+
+#define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_RX		__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
+#define PAGE_KERNEL_NOCACHE	__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE	__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE_EXEC	__pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_VSYSCALL	__pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC)
+#define PAGE_KERNEL_VVAR	__pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+
+#define PAGE_KERNEL_IO		__pgprot(__PAGE_KERNEL_IO)
+#define PAGE_KERNEL_IO_NOCACHE	__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+
+#endif	/* __ASSEMBLY__ */
 
 /*         xwr */
 #define __P000	PAGE_NONE
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 8d326e8..1fae737 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -22,6 +22,7 @@ struct vm86;
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -207,7 +208,7 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
 
 static inline void load_cr3(pgd_t *pgdir)
 {
-	write_cr3(__pa(pgdir));
+	write_cr3(__sme_pa(pgdir));
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 4d38416..3385377 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -193,7 +193,7 @@ void init_espfix_ap(int cpu)
 	pte_p = pte_offset_kernel(&pmd, addr);
 	stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0));
-	pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
+	pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask));
 	for (n = 0; n < ESPFIX_PTE_CLONES; n++)
 		set_pte(&pte_p[n*PTE_STRIDE], pte);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index b72fb0b..3516f9f 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Manage page tables very early on.
@@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void)
 {
 	memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
 	next_early_pgt = 0;
-	write_cr3(__pa_nodebug(early_level4_pgt));
+	write_cr3(__sme_pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -157,6 +158,11 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 
 	clear_page(init_level4_pgt);
 
+	/* Update the early_pmd_flags with the memory encryption mask */
+	early_pmd_flags |= _PAGE_ENC;
+
+	sme_early_init();
+
 	kasan_early_init();
 
 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 5df831e..0f3ad72 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -95,6 +95,13 @@ startup_64:
 	jnz	bad_address
 
 	/*
+	 * Enable memory encryption (if available). Add the memory encryption
+	 * mask to %rbp to include it in the page table fixup.
+	 */
+	call	sme_enable
+	addq	sme_me_mask(%rip), %rbp
+
+	/*
 	 * Fixup the physical addresses in the page table
 	 */
 	addq	%rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip)
 
@@ -116,7 +123,8 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PGDIR_SHIFT, %rax
 
-	leaq	(4096 + _KERNPG_TABLE)(%rbx), %rdx
+	leaq	(4096 + __KERNPG_TABLE)(%rbx), %rdx
+	addq	sme_me_mask(%rip), %rdx		/* Apply mem encryption mask */
 	movq	%rdx, 0(%rbx,%rax,8)
 	movq	%rdx, 8(%rbx,%rax,8)
 
@@ -133,6 +141,7 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PMD_SHIFT, %rdi
 	addq	$(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax
+	addq	sme_me_mask(%rip), %rax		/* Apply mem encryption mask */
 	leaq	(_end - 1)(%rip), %rcx
 	shrq	$PMD_SHIFT, %rcx
 	subq	%rdi, %rcx
 
@@ -163,9 +172,19 @@ startup_64:
 	cmp	%r8, %rdi
 	jne	1b
 
-	/* Fixup phys_base */
+	/*
+	 * Fixup phys_base, remove the memory encryption mask from %rbp
+	 * to obtain the true physical address.
+	 */
+	subq	sme_me_mask(%rip), %rbp
 	addq	%rbp, phys_base(%rip)
 
+	/*
+	 * The page tables have been updated with the memory encryption mask,
+	 * so encrypt the kernel if memory encryption is active
+	 */
+	call	sme_encrypt_kernel
+
 	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
 	jmp 1f
 ENTRY(secondary_startup_64)
@@ -189,6 +208,9 @@ ENTRY(secondary_startup_64)
 	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
 1:
 
+	/* Add the memory encryption mask to RAX */
+	addq	sme_me_mask(%rip), %rax
+
 	/* Enable PAE mode and PGE */
 	movl	$(X86_CR4_PAE | X86_CR4_PGE), %ecx
 	movq	%rcx, %cr4
 
@@ -416,7 +438,7 @@ GLOBAL(name)
 	__INITDATA
 NEXT_PAGE(early_level4_pgt)
 	.fill	511,8,0
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE
 
 NEXT_PAGE(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
 
@@ -428,15 +450,15 @@ NEXT_PAGE(init_level4_pgt)
 	.fill	512,8,0
 #else
 NEXT_PAGE(init_level4_pgt)
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
 	.org	init_level4_pgt + L4_PAGE_OFFSET*8, 0
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
 	.org	init_level4_pgt + L4_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE
 
 NEXT_PAGE(level3_ident_pgt)
-	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level2_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
 	.fill	511, 8, 0
 NEXT_PAGE(level2_ident_pgt)
 	/* Since I easily can, map the first 1G.
@@ -448,8 +470,8 @@ NEXT_PAGE(level2_ident_pgt)
 NEXT_PAGE(level3_kernel_pgt)
 	.fill	L3_START_KERNEL,8,0
 	/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
-	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level2_kernel_pgt - __START_KERNEL_map + __KERNPG_TABLE
+	.quad	level2_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE
 
 NEXT_PAGE(level2_kernel_pgt)
 	/*
@@ -467,7 +489,7 @@ NEXT_PAGE(level2_kernel_pgt)
 
 NEXT_PAGE(level2_fixmap_pgt)
 	.fill	506,8,0
-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level1_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE
 	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
 	.fill	5,8,0
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index ba7fbba..eb2faee 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -103,7 +103,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
 	struct x86_mapping_info info = {
 		.alloc_pgt_page	= alloc_pgt_page,
 		.context	= image,
-		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
+		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC,
 	};
 	unsigned long mstart, mend;
 	pgd_t *level4p;
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index ef7f325..f2e0536 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -14,6 +14,14 @@
 
 	.text
 	.code64
+ENTRY(sme_enable)
+	ret
+ENDPROC(sme_enable)
+
+ENTRY(sme_encrypt_kernel)
+	ret
+ENDPROC(sme_encrypt_kernel)
+
 ENTRY(sme_get_me_loss)
 	xor	%rax, %rax
 	mov	sme_me_loss(%rip), %al
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index f989132..21f8ea3 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -39,3 +39,4 @@ obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
 obj-$(CONFIG_X86_INTEL_MPX)	+= mpx.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
+obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 5ce1ed0..9e93a0d 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -23,6 +23,7 @@
 #include		/* emulate_vsyscall	*/
 #include		/* struct vm86		*/
 #include		/* vma_pkey()		*/
+#include		/* __sme_va()		*/
 
 #define CREATE_TRACE_POINTS
 #include
 
@@ -523,7 +524,7 @@ static int bad_address(void *p)
 
 static void dump_pagetable(unsigned long address)
 {
-	pgd_t *base = __va(read_cr3() & PHYSICAL_PAGE_MASK);
+	pgd_t *base = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK);
 	pgd_t *pgd = base + pgd_index(address);
 	pud_t *pud;
 	pmd_t *pmd;
 
@@ -659,7 +660,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code,
 	pgd_t *pgd;
 	pte_t *pte;
 
-	pgd = __va(read_cr3() & PHYSICAL_PAGE_MASK);
+	pgd = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK);
 	pgd += pgd_index(address);
 
 	pte = lookup_address_in_pgd(pgd, address, &level);
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index f089491..77dadf5 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 
 #include "physaddr.h"
 
@@ -424,7 +425,7 @@ static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss;
 static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
 {
 	/* Don't assume we're using swapper_pg_dir at this point */
-	pgd_t *base = __va(read_cr3());
+	pgd_t *base = __sme_va(read_cr3());
 	pgd_t *pgd = &base[pgd_index(addr)];
 	pud_t *pud = pud_offset(pgd, addr);
 	pmd_t *pmd = pmd_offset(pud, addr);
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 1b1110f..7102408 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -68,7 +68,7 @@ static struct notifier_block kasan_die_notifier = {
 void __init kasan_early_init(void)
 {
 	int i;
-	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC;
 	pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
 	pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
 
@@ -130,7 +130,7 @@ void __init kasan_init(void)
 	 */
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
-		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
+		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC);
 		set_pte(&kasan_zero_pte[i], pte);
 	}
 	/* Flush TLBs again to be sure that write protection applied. */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
new file mode 100644
index 0000000..00eb705
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt.c
@@ -0,0 +1,30 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include
+#include
+
+#include
+
+void __init sme_early_init(void)
+{
+	unsigned int i;
+
+	if (!sme_me_mask)
+		return;
+
+	__supported_pte_mask |= sme_me_mask;
+
+	/* Update the protection map with memory encryption mask */
+	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
+		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+}
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index bbf462f..c055302 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1976,6 +1976,9 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 	if (!(page_flags & _PAGE_RW))
 		cpa.mask_clr = __pgprot(_PAGE_RW);
 
+	if (!(page_flags & _PAGE_ENC))
+		cpa.mask_clr = __pgprot(pgprot_val(cpa.mask_clr) | _PAGE_ENC);
+
 	cpa.mask_set = __pgprot(_PAGE_PRESENT | page_flags);
 
 	retval = __change_page_attr_set_clr(&cpa, 0);