From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky <Thomas.Lendacky@amd.com>
Subject: [RFC PATCH v3 19/20] x86: Add support to make use of Secure Memory Encryption
CC: Rik van Riel, Radim Krčmář, Arnd Bergmann, Jonathan Corbet,
 Matt Fleming, Joerg Roedel, Konrad Rzeszutek Wilk, Paolo Bonzini,
 Larry Woodman, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
 "H. Peter Anvin", Andrey Ryabinin, Alexander Potapenko,
 Thomas Gleixner, Dmitry Vyukov
Date: Wed, 9 Nov 2016 18:38:26 -0600
Message-ID: <20161110003826.3280.5546.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
References: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds support to check whether SME has been enabled and whether the
mem_encrypt=on command-line option is set. If both conditions are true, the
encryption mask is set and the kernel is encrypted "in place."
Signed-off-by: Tom Lendacky
---
 arch/x86/kernel/Makefile           |    1
 arch/x86/kernel/mem_encrypt_boot.S |  156 +++++++++++++++++++++++++++++
 arch/x86/kernel/mem_encrypt_init.c |  196 ++++++++++++++++++++++++++++++++++++
 3 files changed, 353 insertions(+)
 create mode 100644 arch/x86/kernel/mem_encrypt_boot.S

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 27e22f4..020759f 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -143,4 +143,5 @@ ifeq ($(CONFIG_X86_64),y)
 	obj-y		+= vsmp_64.o

 	obj-y		+= mem_encrypt_init.o
+	obj-y		+= mem_encrypt_boot.o
 endif
diff --git a/arch/x86/kernel/mem_encrypt_boot.S b/arch/x86/kernel/mem_encrypt_boot.S
new file mode 100644
index 0000000..d4917ba
--- /dev/null
+++ b/arch/x86/kernel/mem_encrypt_boot.S
@@ -0,0 +1,156 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+	.text
+	.code64
+ENTRY(sme_encrypt_execute)
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/*
+	 * Entry parameters:
+	 *   RDI - virtual address for the encrypted kernel mapping
+	 *   RSI - virtual address for the un-encrypted kernel mapping
+	 *   RDX - length of kernel
+	 *   RCX - address of the encryption workarea
+	 *         - stack page (PAGE_SIZE)
+	 *         - encryption routine page (PAGE_SIZE)
+	 *         - intermediate copy buffer (PMD_PAGE_SIZE)
+	 *   R8  - address of the pagetables to use for encryption
+	 */
+
+	/* Set up a one page stack in the non-encrypted memory area */
+	movq	%rcx, %rax
+	addq	$PAGE_SIZE, %rax
+	movq	%rsp, %rbp
+	movq	%rax, %rsp
+	push	%rbp
+
+	push	%r12
+	push	%r13
+
+	movq	%rdi, %r10
+	movq	%rsi, %r11
+	movq	%rdx, %r12
+	movq	%rcx, %r13
+
+	/* Copy encryption routine into the workarea */
+	movq	%rax, %rdi
+	leaq	.Lencrypt_start(%rip), %rsi
+	movq	$(.Lencrypt_stop - .Lencrypt_start), %rcx
+	rep	movsb
+
+	/* Setup registers for call */
+	movq	%r10, %rdi
+	movq	%r11, %rsi
+	movq	%r8, %rdx
+	movq	%r12, %rcx
+	movq	%rax, %r8
+	addq	$PAGE_SIZE, %r8
+
+	/* Call the encryption routine */
+	call	*%rax
+
+	pop	%r13
+	pop	%r12
+
+	pop	%rsp			/* Restore original stack pointer */
+.Lencrypt_exit:
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sme_encrypt_execute)
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+/*
+ * Routine used to encrypt kernel.
+ *   This routine must be run outside of the kernel proper since
+ *   the kernel will be encrypted during the process. So this
+ *   routine is defined here and then copied to an area outside
+ *   of the kernel where it will remain and run un-encrypted
+ *   during execution.
+ *
+ *   On entry the registers must be:
+ *     RDI - virtual address for the encrypted kernel mapping
+ *     RSI - virtual address for the un-encrypted kernel mapping
+ *     RDX - address of the pagetables to use for encryption
+ *     RCX - length of kernel
+ *      R8 - intermediate copy buffer
+ *
+ *     RAX - points to this routine
+ *
+ * The kernel will be encrypted by copying from the non-encrypted
+ * kernel space to an intermediate buffer and then copying from the
+ * intermediate buffer back to the encrypted kernel space. The physical
+ * addresses of the two kernel space mappings are the same which
+ * results in the kernel being encrypted "in place".
+ */
+.Lencrypt_start:
+	/* Enable the new page tables */
+	mov	%rdx, %cr3
+
+	/* Flush any global TLBs */
+	mov	%cr4, %rdx
+	andq	$~X86_CR4_PGE, %rdx
+	mov	%rdx, %cr4
+	orq	$X86_CR4_PGE, %rdx
+	mov	%rdx, %cr4
+
+	/* Set the PAT register PA5 entry to write-protect */
+	push	%rcx
+	movl	$MSR_IA32_CR_PAT, %ecx
+	rdmsr
+	push	%rdx			/* Save original PAT value */
+	andl	$0xffff00ff, %edx	/* Clear PA5 */
+	orl	$0x00000500, %edx	/* Set PA5 to WP */
+	wrmsr
+	pop	%rdx			/* RDX contains original PAT value */
+	pop	%rcx
+
+	movq	%rcx, %r9		/* Save length */
+	movq	%rdi, %r10		/* Save destination address */
+	movq	%rsi, %r11		/* Save source address */
+
+	wbinvd				/* Invalidate any cache entries */
+
+	/* Copy/encrypt 2MB at a time */
+1:
+	movq	%r11, %rsi
+	movq	%r8, %rdi
+	movq	$PMD_PAGE_SIZE, %rcx
+	rep	movsb
+
+	movq	%r8, %rsi
+	movq	%r10, %rdi
+	movq	$PMD_PAGE_SIZE, %rcx
+	rep	movsb
+
+	addq	$PMD_PAGE_SIZE, %r11
+	addq	$PMD_PAGE_SIZE, %r10
+	subq	$PMD_PAGE_SIZE, %r9
+	jnz	1b
+
+	/* Restore PAT register */
+	push	%rdx
+	movl	$MSR_IA32_CR_PAT, %ecx
+	rdmsr
+	pop	%rdx
+	wrmsr
+
+	ret
+.Lencrypt_stop:
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
diff --git a/arch/x86/kernel/mem_encrypt_init.c b/arch/x86/kernel/mem_encrypt_init.c
index 388d6fb..7bdd159 100644
--- a/arch/x86/kernel/mem_encrypt_init.c
+++ b/arch/x86/kernel/mem_encrypt_init.c
@@ -13,9 +13,205 @@
 #include
 #include
 #include
+#include
+
+#include
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
+extern void sme_encrypt_execute(unsigned long, unsigned long, unsigned long,
+				void *, pgd_t *);
+
+#define PGD_FLAGS	_KERNPG_TABLE_NO_ENC
+#define PUD_FLAGS	_KERNPG_TABLE_NO_ENC
+#define PMD_FLAGS	__PAGE_KERNEL_LARGE_EXEC
+
+static void __init *sme_pgtable_entry(pgd_t *pgd, void *next_page,
+				      void *vaddr, pmdval_t pmd_val)
+{
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd += pgd_index((unsigned long)vaddr);
+	if (pgd_none(*pgd)) {
+		pud = next_page;
+		memset(pud, 0, sizeof(*pud) * PTRS_PER_PUD);
+		native_set_pgd(pgd,
+			       native_make_pgd((unsigned long)pud + PGD_FLAGS));
+		next_page += sizeof(*pud) * PTRS_PER_PUD;
+	} else {
+		pud = (pud_t *)(native_pgd_val(*pgd) & ~PTE_FLAGS_MASK);
+	}
+
+	pud += pud_index((unsigned long)vaddr);
+	if (pud_none(*pud)) {
+		pmd = next_page;
+		memset(pmd, 0, sizeof(*pmd) * PTRS_PER_PMD);
+		native_set_pud(pud,
+			       native_make_pud((unsigned long)pmd + PUD_FLAGS));
+		next_page += sizeof(*pmd) * PTRS_PER_PMD;
+	} else {
+		pmd = (pmd_t *)(native_pud_val(*pud) & ~PTE_FLAGS_MASK);
+	}
+
+	pmd += pmd_index((unsigned long)vaddr);
+	if (pmd_none(*pmd) || !pmd_large(*pmd))
+		native_set_pmd(pmd, native_make_pmd(pmd_val));
+
+	return next_page;
+}
+
+static unsigned long __init sme_pgtable_calc(unsigned long start,
+					     unsigned long end)
+{
+	unsigned long addr, total;
+
+	total = 0;
+	addr = start;
+	while (addr < end) {
+		unsigned long pgd_end;
+
+		pgd_end = (addr & PGDIR_MASK) + PGDIR_SIZE;
+		if (pgd_end > end)
+			pgd_end = end;
+
+		total += sizeof(pud_t) * PTRS_PER_PUD * 2;
+
+		while (addr < pgd_end) {
+			unsigned long pud_end;
+
+			pud_end = (addr & PUD_MASK) + PUD_SIZE;
+			if (pud_end > end)
+				pud_end = end;
+
+			total += sizeof(pmd_t) * PTRS_PER_PMD * 2;
+
+			addr = pud_end;
+		}
+
+		addr = pgd_end;
+	}
+
+	total += sizeof(pgd_t) * PTRS_PER_PGD;
+
+	return total;
+}
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */

 void __init sme_encrypt_kernel(void)
 {
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	pgd_t *pgd;
+	void *workarea, *next_page, *vaddr;
+	unsigned long kern_start, kern_end, kern_len;
+	unsigned long index, paddr, pmd_flags;
+	unsigned long exec_size, full_size;
+
+	/* If SME is not active then no need to prepare */
+	if (!sme_me_mask)
+		return;
+
+	/* Set the workarea to be after the kernel */
+	workarea = (void *)ALIGN(__pa_symbol(_end), PMD_PAGE_SIZE);
+
+	/*
+	 * Prepare for encrypting the kernel by building new pagetables with
+	 * the necessary attributes needed to encrypt the kernel in place.
+	 *
+	 *   One range of virtual addresses will map the memory occupied
+	 *   by the kernel as encrypted.
+	 *
+	 *   Another range of virtual addresses will map the memory occupied
+	 *   by the kernel as un-encrypted and write-protected.
+	 *
+	 *     The use of write-protect attribute will prevent any of the
+	 *     memory from being cached.
+	 */
+
+	/* Physical address gives us the identity mapped virtual address */
+	kern_start = __pa_symbol(_text);
+	kern_end = ALIGN(__pa_symbol(_end), PMD_PAGE_SIZE) - 1;
+	kern_len = kern_end - kern_start + 1;
+
+	/*
+	 * Calculate required number of workarea bytes needed:
+	 *   executable encryption area size:
+	 *     stack page (PAGE_SIZE)
+	 *     encryption routine page (PAGE_SIZE)
+	 *     intermediate copy buffer (PMD_PAGE_SIZE)
+	 *   pagetable structures for workarea (in case not currently mapped)
+	 *   pagetable structures for the encryption of the kernel
+	 */
+	exec_size = (PAGE_SIZE * 2) + PMD_PAGE_SIZE;
+
+	full_size = exec_size;
+	full_size += ALIGN(exec_size, PMD_PAGE_SIZE) / PMD_PAGE_SIZE *
+		     sizeof(pmd_t) * PTRS_PER_PMD;
+	full_size += sme_pgtable_calc(kern_start, kern_end + exec_size);
+
+	next_page = workarea + exec_size;
+
+	/* Make sure the current pagetables have entries for the workarea */
+	pgd = (pgd_t *)native_read_cr3();
+	paddr = (unsigned long)workarea;
+	while (paddr < (unsigned long)workarea + full_size) {
+		vaddr = (void *)paddr;
+		next_page = sme_pgtable_entry(pgd, next_page, vaddr,
+					      paddr + PMD_FLAGS);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+	native_write_cr3(native_read_cr3());
+
+	/* Calculate a PGD index to be used for the un-encrypted mapping */
+	index = (pgd_index(kern_end + full_size) + 1) & (PTRS_PER_PGD - 1);
+	index <<= PGDIR_SHIFT;
+
+	/* Set and clear the PGD */
+	pgd = next_page;
+	memset(pgd, 0, sizeof(*pgd) * PTRS_PER_PGD);
+	next_page += sizeof(*pgd) * PTRS_PER_PGD;
+
+	/* Add encrypted (identity) mappings for the kernel */
+	pmd_flags = PMD_FLAGS | _PAGE_ENC;
+	paddr = kern_start;
+	while (paddr < kern_end) {
+		vaddr = (void *)paddr;
+		next_page = sme_pgtable_entry(pgd, next_page, vaddr,
+					      paddr + pmd_flags);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+
+	/* Add un-encrypted (non-identity) mappings for the kernel */
+	pmd_flags = (PMD_FLAGS & ~_PAGE_CACHE_MASK) | (_PAGE_PAT | _PAGE_PWT);
+	paddr = kern_start;
+	while (paddr < kern_end) {
+		vaddr = (void *)(paddr + index);
+		next_page = sme_pgtable_entry(pgd, next_page, vaddr,
+					      paddr + pmd_flags);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+
+	/* Add the workarea to both mappings */
+	paddr = kern_end + 1;
+	while (paddr < (kern_end + exec_size)) {
+		vaddr = (void *)paddr;
+		next_page = sme_pgtable_entry(pgd, next_page, vaddr,
+					      paddr + PMD_FLAGS);
+
+		vaddr = (void *)(paddr + index);
+		next_page = sme_pgtable_entry(pgd, next_page, vaddr,
+					      paddr + PMD_FLAGS);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+
+	/* Perform the encryption */
+	sme_encrypt_execute(kern_start, kern_start + index, kern_len,
+			    workarea, pgd);
+
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
 }

 unsigned long __init sme_get_me_mask(void)