From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [RFC PATCH v3 06/20] x86: Add support to enable SME during early boot processing
To: Borislav Petkov
References: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
 <20161110003543.3280.99623.stgit@tlendack-t1.amdoffice.net>
 <20161114172930.27z7p2kytmhtcbsb@pd.tnic>
CC: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-mm@kvack.org,
 iommu@lists.linux-foundation.org, Rik van Riel, Radim Krčmář,
 Arnd Bergmann, Jonathan Corbet, Matt Fleming, Joerg Roedel,
 Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Ingo Molnar,
 Andy Lutomirski, "H. Peter Anvin", Andrey Ryabinin, Alexander Potapenko,
 Thomas Gleixner, Dmitry Vyukov
From: Tom Lendacky
Message-ID: <178d7d21-ffbd-1083-9c64-f05378147e27@amd.com>
Date: Mon, 14 Nov 2016 12:18:44 -0600
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0
MIME-Version: 1.0
In-Reply-To: <20161114172930.27z7p2kytmhtcbsb@pd.tnic>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On 11/14/2016 11:29 AM, Borislav Petkov wrote:
> On Wed, Nov 09, 2016 at 06:35:43PM -0600, Tom Lendacky wrote:
>> This patch adds support to the early boot code to use Secure Memory
>> Encryption (SME). Support is added to update the early pagetables with
>> the memory encryption mask and to encrypt the kernel in place.
>>
>> The routines to set the encryption mask and perform the encryption are
>> stub routines for now with full function to be added in a later patch.
>>
>> Signed-off-by: Tom Lendacky
>> ---
>>  arch/x86/kernel/Makefile           |    2 ++
>>  arch/x86/kernel/head_64.S          |   35 ++++++++++++++++++++++++++++++++++-
>>  arch/x86/kernel/mem_encrypt_init.c |   29 +++++++++++++++++++++++++++++
>>  3 files changed, 65 insertions(+), 1 deletion(-)
>>  create mode 100644 arch/x86/kernel/mem_encrypt_init.c
>>
>> diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
>> index 45257cf..27e22f4 100644
>> --- a/arch/x86/kernel/Makefile
>> +++ b/arch/x86/kernel/Makefile
>> @@ -141,4 +141,6 @@ ifeq ($(CONFIG_X86_64),y)
>>
>>  	obj-$(CONFIG_PCI_MMCONFIG)	+= mmconf-fam10h_64.o
>>  	obj-y				+= vsmp_64.o
>> +
>> +	obj-y				+= mem_encrypt_init.o
>>  endif
>> diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
>> index c98a559..9a28aad 100644
>> --- a/arch/x86/kernel/head_64.S
>> +++ b/arch/x86/kernel/head_64.S
>> @@ -95,6 +95,17 @@ startup_64:
>>  	jnz	bad_address
>>
>>  	/*
>> +	 * Enable Secure Memory Encryption (if available). Save the mask
>> +	 * in %r12 for later use and add the memory encryption mask to %rbp
>> +	 * to include it in the page table fixups.
>> +	 */
>> +	push	%rsi
>> +	call	sme_enable
>> +	pop	%rsi
>
> Why %rsi?
>
> sme_enable() is void so no args in registers and returns in %rax.
>
> /me is confused.

The %rsi register can be clobbered by the called function so I'm saving
it since it points to the real mode data. I might be able to look into
saving it earlier and restoring it before needed, but I thought this
might be clearer.
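
For example, the "save it earlier" alternative would be something like
the sketch below (untested, and the use of %r14 here is just for
illustration, assuming it is otherwise unused this early in startup_64):

	movq	%rsi, %r14	/* stash the real mode data pointer once */
	call	sme_enable	/* C code is free to clobber %rsi */
	movq	%r14, %rsi	/* restore it where it is needed again */

Either way, the point is that nothing called from here should be allowed
to lose the real mode data pointer.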
>
>> +	movq	%rax, %r12
>> +	addq	%r12, %rbp
>> +
>> +	/*
>>  	 * Fixup the physical addresses in the page table
>>  	 */
>>  	addq	%rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip)
>> @@ -117,6 +128,7 @@ startup_64:
>>  	shrq	$PGDIR_SHIFT, %rax
>>
>>  	leaq	(4096 + _KERNPG_TABLE)(%rbx), %rdx
>> +	addq	%r12, %rdx
>>  	movq	%rdx, 0(%rbx,%rax,8)
>>  	movq	%rdx, 8(%rbx,%rax,8)
>>
>> @@ -133,6 +145,7 @@ startup_64:
>>  	movq	%rdi, %rax
>>  	shrq	$PMD_SHIFT, %rdi
>>  	addq	$(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax
>> +	addq	%r12, %rax
>>  	leaq	(_end - 1)(%rip), %rcx
>>  	shrq	$PMD_SHIFT, %rcx
>>  	subq	%rdi, %rcx
>> @@ -163,9 +176,21 @@ startup_64:
>>  	cmp	%r8, %rdi
>>  	jne	1b
>>
>> -	/* Fixup phys_base */
>> +	/*
>> +	 * Fixup phys_base, remove the memory encryption mask from %rbp
>> +	 * to obtain the true physical address.
>> +	 */
>> +	subq	%r12, %rbp
>>  	addq	%rbp, phys_base(%rip)
>>
>> +	/*
>> +	 * The page tables have been updated with the memory encryption mask,
>> +	 * so encrypt the kernel if memory encryption is active
>> +	 */
>> +	push	%rsi
>> +	call	sme_encrypt_kernel
>> +	pop	%rsi
>
> Ditto.
>
>> +
>>  	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
>>  	jmp 1f
>> ENTRY(secondary_startup_64)
>> @@ -186,9 +211,17 @@ ENTRY(secondary_startup_64)
>>  	/* Sanitize CPU configuration */
>>  	call verify_cpu
>>
>> +	push	%rsi
>> +	call	sme_get_me_mask
>> +	pop	%rsi
>
> Ditto.
>
>> +	movq	%rax, %r12
>> +
>>  	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
>> 1:
>>
>> +	/* Add the memory encryption mask to RAX */
>
> I think that should say something like:
>
> /*
>  * Add the memory encryption mask to init_level4_pgt's physical address
>  */
>
> or so...

Yup, I'll expand on the comment for this.
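Maybe something like the following (wording not final):

	/*
	 * Add the memory encryption mask to the page table physical
	 * address in %rax before it is loaded into %cr3.
	 */
	addq	%r12, %rax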
>
>> +	addq	%r12, %rax
>> +
>>  	/* Enable PAE mode and PGE */
>>  	movl	$(X86_CR4_PAE | X86_CR4_PGE), %ecx
>>  	movq	%rcx, %cr4
>> diff --git a/arch/x86/kernel/mem_encrypt_init.c b/arch/x86/kernel/mem_encrypt_init.c
>> new file mode 100644
>> index 0000000..388d6fb
>> --- /dev/null
>> +++ b/arch/x86/kernel/mem_encrypt_init.c
>
> So nothing in the commit message explains why we need a separate
> mem_encrypt_init.c file when we already have arch/x86/mm/mem_encrypt.c
> for all memory encryption code...

I can expand on the commit message about that. I was trying to keep the
early boot-related code separate from the main code in the arch/x86/mm
dir.

Thanks,
Tom

>
>> @@ -0,0 +1,29 @@
>> +/*
>> + * AMD Memory Encryption Support
>> + *
>> + * Copyright (C) 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Author: Tom Lendacky
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + */
>> +
>> +#include
>> +#include
>> +#include
>> +
>> +void __init sme_encrypt_kernel(void)
>> +{
>> +}
>> +
>> +unsigned long __init sme_get_me_mask(void)
>> +{
>> +	return sme_me_mask;
>> +}
>> +
>> +unsigned long __init sme_enable(void)
>> +{
>> +	return sme_me_mask;
>> +}
>