From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky <thomas.lendacky@amd.com>
Subject: [RFC PATCH v3 06/20] x86: Add support to enable SME during early boot processing
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Rik van Riel, Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming, Joerg Roedel, Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin", Andrey Ryabinin, Alexander Potapenko, Thomas Gleixner, Dmitry Vyukov
Date: Wed, 9 Nov 2016 18:35:43 -0600
Message-ID: <20161110003543.3280.99623.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
References: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds support to the early boot code to use Secure Memory Encryption (SME). Support is added to update the early pagetables with the memory encryption mask and to encrypt the kernel in place.
The routines to set the encryption mask and perform the encryption are stub routines for now, with the full implementation to be added in a later patch.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kernel/Makefile           |  2 ++
 arch/x86/kernel/head_64.S          | 35 ++++++++++++++++++++++++++++++++++-
 arch/x86/kernel/mem_encrypt_init.c | 29 +++++++++++++++++++++++++++++
 3 files changed, 65 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/mem_encrypt_init.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 45257cf..27e22f4 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -141,4 +141,6 @@ ifeq ($(CONFIG_X86_64),y)
 	obj-$(CONFIG_PCI_MMCONFIG)	+= mmconf-fam10h_64.o
 	obj-y				+= vsmp_64.o
+
+	obj-y				+= mem_encrypt_init.o
 endif
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index c98a559..9a28aad 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -95,6 +95,17 @@ startup_64:
 	jnz	bad_address
 
 	/*
+	 * Enable Secure Memory Encryption (if available). Save the mask
+	 * in %r12 for later use and add the memory encryption mask to %rbp
+	 * to include it in the page table fixups.
+	 */
+	push	%rsi
+	call	sme_enable
+	pop	%rsi
+	movq	%rax, %r12
+	addq	%r12, %rbp
+
+	/*
 	 * Fixup the physical addresses in the page table
 	 */
 	addq	%rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip)
@@ -117,6 +128,7 @@ startup_64:
 	shrq	$PGDIR_SHIFT, %rax
 
 	leaq	(4096 + _KERNPG_TABLE)(%rbx), %rdx
+	addq	%r12, %rdx
 	movq	%rdx, 0(%rbx,%rax,8)
 	movq	%rdx, 8(%rbx,%rax,8)
@@ -133,6 +145,7 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PMD_SHIFT, %rdi
 	addq	$(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax
+	addq	%r12, %rax
 	leaq	(_end - 1)(%rip), %rcx
 	shrq	$PMD_SHIFT, %rcx
 	subq	%rdi, %rcx
@@ -163,9 +176,21 @@ startup_64:
 	cmp	%r8, %rdi
 	jne	1b
 
-	/* Fixup phys_base */
+	/*
+	 * Fixup phys_base, remove the memory encryption mask from %rbp
+	 * to obtain the true physical address.
+	 */
+	subq	%r12, %rbp
 	addq	%rbp, phys_base(%rip)
 
+	/*
+	 * The page tables have been updated with the memory encryption mask,
+	 * so encrypt the kernel if memory encryption is active
+	 */
+	push	%rsi
+	call	sme_encrypt_kernel
+	pop	%rsi
+
 	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
 	jmp	1f
 ENTRY(secondary_startup_64)
@@ -186,9 +211,17 @@ ENTRY(secondary_startup_64)
 	/* Sanitize CPU configuration */
 	call	verify_cpu
 
+	push	%rsi
+	call	sme_get_me_mask
+	pop	%rsi
+	movq	%rax, %r12
+
 	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
 1:
 
+	/* Add the memory encryption mask to RAX */
+	addq	%r12, %rax
+
 	/* Enable PAE mode and PGE */
 	movl	$(X86_CR4_PAE | X86_CR4_PGE), %ecx
 	movq	%rcx, %cr4
diff --git a/arch/x86/kernel/mem_encrypt_init.c b/arch/x86/kernel/mem_encrypt_init.c
new file mode 100644
index 0000000..388d6fb
--- /dev/null
+++ b/arch/x86/kernel/mem_encrypt_init.c
@@ -0,0 +1,29 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include
+#include
+#include
+
+void __init sme_encrypt_kernel(void)
+{
+}
+
+unsigned long __init sme_get_me_mask(void)
+{
+	return sme_me_mask;
+}
+
+unsigned long __init sme_enable(void)
+{
+	return sme_me_mask;
+}