From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [RFC PATCH v1 18/18] x86: Add support to turn on Secure Memory Encryption
To: , , , , , , , ,
CC: Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming,
 Joerg Roedel, "Konrad Rzeszutek Wilk", Paolo Bonzini, "Ingo Molnar",
 Borislav Petkov, "H. Peter Anvin", Andrey Ryabinin, "Alexander Potapenko",
 Thomas Gleixner, "Dmitry Vyukov"
Date: Tue, 26 Apr 2016 17:59:04 -0500
Message-ID: <20160426225904.13567.538.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20160426225553.13567.19459.stgit@tlendack-t1.amdoffice.net>
References: <20160426225553.13567.19459.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds support to check for and enable SME when it is available
on the processor and when the mem_encrypt=on command line option is set.
This consists of setting the encryption mask, calculating the number of
physical address bits lost, and encrypting the kernel "in place."

Signed-off-by: Tom Lendacky
---
 Documentation/kernel-parameters.txt |    3 
 arch/x86/kernel/asm-offsets.c       |    2 
 arch/x86/kernel/mem_encrypt.S       |  306 +++++++++++++++++++++++++++++++++++
 3 files changed, 311 insertions(+)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 8ba7f82..0a2678a 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2210,6 +2210,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			memory contents and reserves bad memory regions that
 			are detected.
 
+	mem_encrypt=on	[X86_64] Enable memory encryption on processors
+			that support this feature.
+
 	meye.*=		[HW] Set MotionEye Camera parameters
 			See Documentation/video4linux/meye.txt.
 
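As a rough illustration of the CPUID probe that the sme_enable() code below
performs in assembly, the same checks can be sketched in C. This is a
user-space sketch assuming a GCC/Clang toolchain that provides __get_cpuid()
from <cpuid.h>; it is not part of the patch:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	unsigned int me_bit, me_loss;

	/* CPUID Fn8000_001F must exist and EAX bit 0 must report SME. */
	if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx) || !(eax & 1)) {
		puts("SME not supported");
		return 1;
	}

	me_bit = ebx & 0x3f;		/* EBX[5:0]: encryption mask bit position */
	me_loss = (ebx >> 6) & 0x3f;	/* EBX[11:6]: physical address bits lost */

	printf("sme_me_mask bit %u (0x%llx), %u physical address bits lost\n",
	       me_bit, 1ULL << me_bit, me_loss);
	return 0;
}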
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 5c04246..a0f76de 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -82,6 +82,8 @@ void common(void) {
 	OFFSET(BP_kernel_alignment, boot_params, hdr.kernel_alignment);
 	OFFSET(BP_pref_address, boot_params, hdr.pref_address);
 	OFFSET(BP_code32_start, boot_params, hdr.code32_start);
+	OFFSET(BP_cmd_line_ptr, boot_params, hdr.cmd_line_ptr);
+	OFFSET(BP_ext_cmd_line_ptr, boot_params, ext_cmd_line_ptr);
 
 	BLANK();
 	DEFINE(PTREGS_SIZE, sizeof(struct pt_regs));
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index f2e0536..4d3326d 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -12,13 +12,236 @@
 #include
 
+#include
+#include
+#include
+#include
+#include
+
 	.text
 	.code64
 ENTRY(sme_enable)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* Check for AMD processor */
+	xorl	%eax, %eax
+	cpuid
+	cmpl	$0x68747541, %ebx	# AuthenticAMD
+	jne	.Lno_mem_encrypt
+	cmpl	$0x69746e65, %edx
+	jne	.Lno_mem_encrypt
+	cmpl	$0x444d4163, %ecx
+	jne	.Lno_mem_encrypt
+
+	/* Check for memory encryption leaf */
+	movl	$0x80000000, %eax
+	cpuid
+	cmpl	$0x8000001f, %eax
+	jb	.Lno_mem_encrypt
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_mem_encrypt
+
+	/* Check for the mem_encrypt=on command line option */
+	push	%rsi			/* Save RSI (real_mode_data) */
+	movl	BP_ext_cmd_line_ptr(%rsi), %ecx
+	shlq	$32, %rcx
+	movl	BP_cmd_line_ptr(%rsi), %edi
+	addq	%rcx, %rdi
+	leaq	mem_encrypt_enable_option(%rip), %rsi
+	call	cmdline_find_option_bool
+	pop	%rsi			/* Restore RSI (real_mode_data) */
+	testl	%eax, %eax
+	jz	.Lno_mem_encrypt
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	jz	.Lno_mem_encrypt
+	bts	%ecx, sme_me_mask(%rip)
+	shrl	$6, %ebx
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 11:6
+	 *     Reduction in physical address space (in bits) when enabled
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	movb	%cl, sme_me_loss(%rip)
+
+	/*
+	 * Enable memory encryption through the SYSCFG MSR
+	 */
+	movl	$MSR_K8_SYSCFG, %ecx
+	rdmsr
+	bt	$MSR_K8_SYSCFG_MEM_ENCRYPT_BIT, %eax
+	jc	.Lmem_encrypt_exit
+	bts	$MSR_K8_SYSCFG_MEM_ENCRYPT_BIT, %eax
+	wrmsr
+	jmp	.Lmem_encrypt_exit
+
+.Lno_mem_encrypt:
+	/* Did not get enabled, clear settings */
+	movq	$0, sme_me_mask(%rip)
+	movb	$0, sme_me_loss(%rip)
+
+.Lmem_encrypt_exit:
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
 	ret
 ENDPROC(sme_enable)
 
 ENTRY(sme_encrypt_kernel)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	cmpq	$0, sme_me_mask(%rip)
+	jz	.Lencrypt_exit
+
+	/*
+	 * Encrypt the kernel.
+	 * Pagetables for performing kernel encryption:
+	 *   0x0000000000 - 0x00FFFFFFFF will map just the memory occupied by
+	 *				  the kernel as encrypted memory
+	 *   0x8000000000 - 0x80FFFFFFFF will map all memory as
+	 *				  write-protected, non-encrypted
+	 *
+	 * The use of write-protected memory will prevent any of the
+	 * non-encrypted memory from being cached.
+	 *
+	 * 0x00... and 0x80... represent the first and second PGD entries.
+	 *
+	 * This collection of entries will be created in an area outside
+	 * of the area that is being encrypted (outside the kernel) and
+	 * requires 11 4K pages:
+	 *   1 - PGD
+	 *   2 - PUDs (1 for each mapping)
+	 *   8 - PMDs (4 for each mapping)
+	 */
+	leaq	_end(%rip), %rdi
+	addq	$~PMD_PAGE_MASK, %rdi
+	andq	$PMD_PAGE_MASK, %rdi	/* RDI points to the new PGD */
+
+	/* Clear the pagetable memory */
+	movq	%rdi, %rbx		/* Save pointer to PGD */
+	movl	$(4096 * 11), %ecx
+	xorl	%eax, %eax
+	rep	stosb
+	movq	%rbx, %rdi		/* Restore pointer to PGD */
+
+	/* Set up PGD entries for the two mappings */
+	leaq	(0x1000 + 0x03)(%rdi), %rbx	/* PUD for encrypted kernel */
+	movq	%rbx, (%rdi)
+	leaq	(0x2000 + 0x03)(%rdi), %rbx	/* PUD for unencrypted kernel */
+	movq	%rbx, 8(%rdi)
+
+	/* Set up PUD entries (4 per mapping) for the two mappings */
+	leaq	(0x3000 + 0x03)(%rdi), %rbx	/* PMD for encrypted kernel */
+	leaq	(0x7000 + 0x03)(%rdi), %rdx	/* PMD for unencrypted kernel */
+	xorq	%rcx, %rcx
+1:
+	/* Populate the PUD entries in each mapping */
+	movq	%rbx, 0x1000(%rdi, %rcx, 8)
+	movq	%rdx, 0x2000(%rdi, %rcx, 8)
+	addq	$0x1000, %rbx
+	addq	$0x1000, %rdx
+	incq	%rcx
+	cmpq	$4, %rcx
+	jb	1b
+
+	/*
+	 * Set up PMD entries (4GB worth) for the two mappings.
+	 * For the encrypted kernel mapping, when R11 is above RDX
+	 * and below RDI then we know we are in the kernel and we
+	 * set the encryption mask for that PMD entry.
+	 *
+	 * The use of _PAGE_PAT and _PAGE_PWT will provide for the
+	 * write-protected mapping.
+	 */
+	movq	sme_me_mask(%rip), %r10
+	movq	$__PAGE_KERNEL_LARGE_EXEC, %r11
+	andq	$~_PAGE_GLOBAL, %r11
+	movq	%r11, %r12
+	andq	$~_PAGE_CACHE_MASK, %r12
+	orq	$(_PAGE_PAT | _PAGE_PWT), %r12	/* PA5 index */
+	xorq	%rcx, %rcx
+	leaq	_text(%rip), %rdx	/* RDX points to start of kernel */
+1:
+	/* Populate the PMD entries in each mapping */
+	movq	%r11, 0x3000(%rdi, %rcx, 8)
+	movq	%r12, 0x7000(%rdi, %rcx, 8)
+
+	/*
+	 * Check if we are in the kernel range, and if so, set the
+	 * memory encryption mask.
+	 */
+	cmpq	%r11, %rdx
+	jae	2f
+	cmpq	%r11, %rdi
+	jbe	2f
+	orq	%r10, 0x3000(%rdi, %rcx, 8)
+2:
+	addq	$PMD_SIZE, %r11
+	addq	$PMD_SIZE, %r12
+	incq	%rcx
+	cmpq	$2048, %rcx
+	jb	1b
+
+	/*
+	 * Set up a one page stack in the non-encrypted memory area.
+	 * Set RAX to point to the next page in memory after all the
+	 * page tables. The stack grows from the bottom so point to
+	 * the end of the page.
+	 */
+	leaq	(4096 * 11)(%rdi), %rax
+	addq	$PAGE_SIZE, %rax
+	movq	%rsp, %rbp
+	movq	%rax, %rsp
+	push	%rbp			/* Save original stack pointer */
+
+	push	%rsi			/* Save RSI (real mode data) */
+
+	/*
+	 * Copy encryption routine into safe memory
+	 *   - RAX points to the page after all the page tables and stack
+	 *     where the routine will be copied
+	 *   - RDI points to the PGD table
+	 *   - Setup registers for call
+	 * and then call it
+	 */
+	movq	%rdi, %rbx
+
+	leaq	.Lencrypt_start(%rip), %rsi
+	movq	%rax, %rdi
+	movq	$(.Lencrypt_stop - .Lencrypt_start), %rcx
+	rep	movsb
+
+	leaq	_text(%rip), %rsi	/* Kernel start */
+	movq	%rbx, %rcx		/* New PGD start */
+	subq	%rsi, %rcx		/* Size of area to encrypt */
+
+	movq	%rsi, %rdi		/* Encrypted kernel space start */
+	movq	$1, %rsi
+	shlq	$PGDIR_SHIFT, %rsi
+	addq	%rdi, %rsi		/* Non-encrypted kernel start */
+
+	/* Call the encryption routine */
+	call	*%rax
+
+	pop	%rsi			/* Restore RSI (real mode data) */
+
+	pop	%rsp			/* Restore original stack pointer */
+.Lencrypt_exit:
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
 	ret
 ENDPROC(sme_encrypt_kernel)
 
@@ -28,6 +251,87 @@ ENTRY(sme_get_me_loss)
 	ret
 ENDPROC(sme_get_me_loss)
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+/*
+ * Routine used to encrypt kernel.
+ *   This routine must be run outside of the kernel proper since
+ *   the kernel will be encrypted during the process. So this
+ *   routine is defined here and then copied to an area outside
+ *   of the kernel where it will remain and run un-encrypted
+ *   during execution.
+ *
+ *   On entry the registers must be:
+ *     - RAX points to this routine
+ *     - RBX points to new PGD to use
+ *     - RCX contains the kernel length
+ *     - RSI points to the non-encrypted kernel space
+ *     - RDI points to the encrypted kernel space
+ *
+ *   The kernel will be encrypted by copying from the non-encrypted
+ *   kernel space to a temporary buffer and then copying from the
+ *   temporary buffer back to the encrypted kernel space. The physical
+ *   addresses of the two kernel space mappings are the same, which
+ *   results in the kernel being encrypted "in place".
+ */
+.Lencrypt_start:
+	/* Enable the new page tables */
+	mov	%rbx, %cr3
+
+	/* Flush any global TLBs */
+	mov	%cr4, %rbx
+	andq	$~X86_CR4_PGE, %rbx
+	mov	%rbx, %cr4
+	orq	$X86_CR4_PGE, %rbx
+	mov	%rbx, %cr4
+
+	/* Set the PAT register PA5 entry to write-protect */
+	push	%rax
+	push	%rcx
+	movl	$MSR_IA32_CR_PAT, %ecx
+	rdmsr
+	push	%rdx			/* Save original PAT value */
+	andl	$0xffff00ff, %edx	/* Clear PA5 */
+	orl	$0x00000500, %edx	/* Set PA5 to WP */
+	wrmsr
+	pop	%rdx			/* RDX contains original PAT value */
+	pop	%rcx
+	pop	%rax
+
+	movq	%rsi, %r10		/* Save source address */
+	movq	%rdi, %r11		/* Save destination address */
+	movq	%rcx, %r12		/* Save length */
+	addq	$PAGE_SIZE, %rax	/* RAX now points to temp copy page */
+
+	wbinvd				/* Invalidate any cache entries */
+
+	/* Copy/encrypt 2MB at a time */
+1:
+	movq	%r10, %rsi
+	movq	%rax, %rdi
+	movq	$PMD_PAGE_SIZE, %rcx
+	rep	movsb
+
+	movq	%rax, %rsi
+	movq	%r11, %rdi
+	movq	$PMD_PAGE_SIZE, %rcx
+	rep	movsb
+
+	addq	$PMD_PAGE_SIZE, %r10
+	addq	$PMD_PAGE_SIZE, %r11
+	subq	$PMD_PAGE_SIZE, %r12
+	jnz	1b
+
+	/* Restore PAT register */
+	push	%rdx
+	movl	$MSR_IA32_CR_PAT, %ecx
+	rdmsr
+	pop	%rdx
+	wrmsr
+
+	ret
+.Lencrypt_stop:
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
 	.data
 	.align 16
 ENTRY(sme_me_mask)
@@ -35,3 +339,5 @@ ENTRY(sme_me_mask)
 sme_me_loss:
 	.byte	0x00
 	.align	8
+mem_encrypt_enable_option:
+	.asciz	"mem_encrypt=on"
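
As a rough C-level sketch of what the .Lencrypt_start routine above does
(illustrative only, not part of the patch; PMD_PAGE_SIZE, pat_set_pa5_wp()
and encrypt_in_place() below are names local to the sketch): the PAT MSR's
PA5 entry is temporarily switched to write-protect so the decrypted mapping
is never cached, and each 2MB chunk of the kernel is bounced through a
scratch buffer between the decrypted and encrypted mappings of the same
physical pages.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PMD_PAGE_SIZE	(2ULL * 1024 * 1024)	/* 2MB, as in the asm loop */

/*
 * IA32_PAT bits 47:40 hold the PA5 entry; 0x05 selects the write-protect
 * memory type, which the decrypted mapping's page table entries reference
 * via PAT index 5.
 */
static inline uint64_t pat_set_pa5_wp(uint64_t pat)
{
	pat &= ~(0xffULL << 40);	/* clear PA5 */
	return pat | (0x05ULL << 40);	/* set PA5 to WP */
}

/*
 * dec_kernel and enc_kernel stand for the two virtual mappings of the same
 * physical kernel image (without and with the encryption bit).  Bouncing
 * each 2MB chunk through a scratch buffer means every byte is read through
 * the decrypted mapping and written back through the encrypted one, so the
 * kernel ends up encrypted "in place".
 */
static void encrypt_in_place(char *enc_kernel, const char *dec_kernel,
			     char *scratch, size_t len)
{
	size_t off;

	for (off = 0; off < len; off += PMD_PAGE_SIZE) {
		memcpy(scratch, dec_kernel + off, PMD_PAGE_SIZE);
		memcpy(enc_kernel + off, scratch, PMD_PAGE_SIZE);
	}
}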