From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [PATCH v5 07/32] x86/mm: Add support to enable SME in early boot processing
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Rik van Riel, Radim Krčmář, Toshimitsu Kani, Arnd Bergmann,
	Jonathan Corbet, Matt Fleming, "Michael S. Tsirkin", Joerg Roedel,
	Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Brijesh Singh,
	Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin",
	Andrey Ryabinin, Alexander Potapenko, Dave Young, Thomas Gleixner,
	Dmitry Vyukov
Date: Tue, 18 Apr 2017 16:17:35 -0500
Message-ID: <20170418211735.10190.29562.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net>
References: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Add support to the early boot code to use Secure Memory Encryption (SME).
Since the kernel has been loaded into memory in a decrypted state, support
is added to encrypt the kernel in place and to update the early pagetables
with the memory encryption mask so that new pagetable entries will use
memory encryption.

The routines to set the encryption mask and perform the encryption are
stub routines for now, with the functionality to be added in a later
patch.

Because the routines need to be available to head_64.S, mem_encrypt.c is
always built, and #ifdefs in mem_encrypt.c provide either the real
functionality or stub routines depending on CONFIG_AMD_MEM_ENCRYPT.

Signed-off-by: Tom Lendacky
---
 arch/x86/kernel/head_64.S |   61 ++++++++++++++++++++++++++++++++++++++++++++-
 arch/x86/mm/Makefile      |    4 +--
 arch/x86/mm/mem_encrypt.c |   26 +++++++++++++++++++
 3 files changed, 86 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index ac9d327..3115e21 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -91,6 +91,23 @@ startup_64:
 	jnz	bad_address
 
 	/*
+	 * Enable Secure Memory Encryption (SME), if supported and enabled.
+	 * The real_mode_data address is in %rsi and that register can be
+	 * clobbered by the called function so be sure to save it.
+	 * Save the returned mask in %r12 for later use.
+	 */
+	push	%rsi
+	call	sme_enable
+	pop	%rsi
+	movq	%rax, %r12
+
+	/*
+	 * Add the memory encryption mask to %rbp to include it in the page
+	 * table fixups.
+	 */
+	addq	%r12, %rbp
+
+	/*
 	 * Fixup the physical addresses in the page table
 	 */
 	addq	%rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip)
@@ -113,6 +130,7 @@ startup_64:
 	shrq	$PGDIR_SHIFT, %rax
 
 	leaq	(PAGE_SIZE + _KERNPG_TABLE)(%rbx), %rdx
+	addq	%r12, %rdx
 	movq	%rdx, 0(%rbx,%rax,8)
 	movq	%rdx, 8(%rbx,%rax,8)
@@ -129,6 +147,7 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PMD_SHIFT, %rdi
 	addq	$(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax
+	addq	%r12, %rax
 	leaq	(_end - 1)(%rip), %rcx
 	shrq	$PMD_SHIFT, %rcx
 	subq	%rdi, %rcx
@@ -142,6 +161,12 @@ startup_64:
 	decl	%ecx
 	jnz	1b
 
+	/*
+	 * Determine if any fixups are required. This includes fixups
+	 * based on where the kernel was loaded and whether SME is
+	 * active. If %rbp is zero, then we can skip both the fixups
+	 * and the call to encrypt the kernel.
+	 */
 	test	%rbp, %rbp
 	jz	.Lskip_fixup
@@ -162,11 +187,30 @@ startup_64:
 	cmp	%r8, %rdi
 	jne	1b
 
-	/* Fixup phys_base */
+	/*
+	 * Fixup phys_base - remove the memory encryption mask from %rbp
+	 * to obtain the true physical address.
+	 */
+	subq	%r12, %rbp
 	addq	%rbp, phys_base(%rip)
 
+	/*
+	 * Encrypt the kernel if SME is active.
+	 * The real_mode_data address is in %rsi and that register can be
+	 * clobbered by the called function so be sure to save it.
+	 */
+	push	%rsi
+	call	sme_encrypt_kernel
+	pop	%rsi
+
 .Lskip_fixup:
+	/*
+	 * The encryption mask is in %r12. We ADD this to %rax to be sure
+	 * that the encryption mask is part of the value that will be
+	 * stored in %cr3.
+	 */
 	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
+	addq	%r12, %rax
 	jmp 1f
 
 ENTRY(secondary_startup_64)
 	/*
@@ -186,7 +230,20 @@ ENTRY(secondary_startup_64)
 	/* Sanitize CPU configuration */
 	call verify_cpu
 
-	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
+	/*
+	 * Get the SME encryption mask.
+	 * The encryption mask will be returned in %rax so we do an ADD
+	 * below to be sure that the encryption mask is part of the
+	 * value that will be stored in %cr3.
+	 *
+	 * The real_mode_data address is in %rsi and that register can be
+	 * clobbered by the called function so be sure to save it.
+	 */
+	push	%rsi
+	call	sme_get_me_mask
+	pop	%rsi
+
+	addq	$(init_level4_pgt - __START_KERNEL_map), %rax
 1:
 
 	/* Enable PAE mode and PGE */
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index a94a7b6..9e13841 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -2,7 +2,7 @@ KCOV_INSTRUMENT_tlb.o := n
 
 obj-y	:= init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
-	   pat.o pgtable.o physaddr.o setup_nx.o tlb.o
+	   pat.o pgtable.o physaddr.o setup_nx.o tlb.o mem_encrypt.o
 
 # Make sure __phys_addr has no stackprotector
 nostackp := $(call cc-option, -fno-stack-protector)
@@ -38,5 +38,3 @@ obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 obj-$(CONFIG_X86_INTEL_MPX)	+= mpx.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)	+= pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)	+= kaslr.o
-
-obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index b99d469..cc00d8b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -11,6 +11,9 @@
  */
 
 #include 
+#include 
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 
 /*
  * Since SME related variables are set early in the boot process they must
@@ -19,3 +22,26 @@
  */
 unsigned long sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
+
+void __init sme_encrypt_kernel(void)
+{
+}
+
+unsigned long __init sme_enable(void)
+{
+	return sme_me_mask;
+}
+
+unsigned long sme_get_me_mask(void)
+{
+	return sme_me_mask;
+}
+
+#else	/* !CONFIG_AMD_MEM_ENCRYPT */
+
+void __init sme_encrypt_kernel(void)	{ }
+unsigned long __init sme_enable(void)	{ return 0; }
+
+unsigned long sme_get_me_mask(void)	{ return 0; }
+
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
<20170418211735.10190.29562.stgit@tlendack-t1.amdoffice.net> References: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net> Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Return-path: In-Reply-To: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net> Sender: owner-linux-mm@kvack.org To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Rik van Riel , Radim =?utf-8?b?S3LEjW3DocWZ?= , Toshimitsu Kani , Arnd Bergmann , Jonathan Corbet , Matt Fleming , "Michael S. Tsirkin" , Joerg Roedel , Konrad Rzeszutek Wilk , Paolo Bonzini , Larry Woodman , Brijesh Singh , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , Andrey Ryabinin , Alexander Potapenko , Dave Young , Thomas Gleixner , Dmitry Vyukov List-Id: linux-efi@vger.kernel.org Add support to the early boot code to use Secure Memory Encryption (SME). Since the kernel has been loaded into memory in a decrypted state, support is added to encrypt the kernel in place and update the early pagetables with the memory encryption mask so that new pagetable entries will use memory encryption. The routines to set the encryption mask and perform the encryption are stub routines for now with functionality to be added in a later patch. Because of the need to have the routines available to head_64.S, the mem_encrypt.c is always built and #ifdefs in mem_encrypt.c will provide functionality or stub routines depending on CONFIG_AMD_MEM_ENCRYPT. 
Signed-off-by: Tom Lendacky --- arch/x86/kernel/head_64.S | 61 ++++++++++++++++++++++++++++++++++++++++++++- arch/x86/mm/Makefile | 4 +-- arch/x86/mm/mem_encrypt.c | 26 +++++++++++++++++++ 3 files changed, 86 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index ac9d327..3115e21 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -91,6 +91,23 @@ startup_64: jnz bad_address /* + * Enable Secure Memory Encryption (SME), if supported and enabled. + * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + * Save the returned mask in %r12 for later use. + */ + push %rsi + call sme_enable + pop %rsi + movq %rax, %r12 + + /* + * Add the memory encryption mask to %rbp to include it in the page + * table fixups. + */ + addq %r12, %rbp + + /* * Fixup the physical addresses in the page table */ addq %rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip) @@ -113,6 +130,7 @@ startup_64: shrq $PGDIR_SHIFT, %rax leaq (PAGE_SIZE + _KERNPG_TABLE)(%rbx), %rdx + addq %r12, %rdx movq %rdx, 0(%rbx,%rax,8) movq %rdx, 8(%rbx,%rax,8) @@ -129,6 +147,7 @@ startup_64: movq %rdi, %rax shrq $PMD_SHIFT, %rdi addq $(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax + addq %r12, %rax leaq (_end - 1)(%rip), %rcx shrq $PMD_SHIFT, %rcx subq %rdi, %rcx @@ -142,6 +161,12 @@ startup_64: decl %ecx jnz 1b + /* + * Determine if any fixups are required. This includes fixups + * based on where the kernel was loaded and whether SME is + * active. If %rbp is zero, then we can skip both the fixups + * and the call to encrypt the kernel. + */ test %rbp, %rbp jz .Lskip_fixup @@ -162,11 +187,30 @@ startup_64: cmp %r8, %rdi jne 1b - /* Fixup phys_base */ + /* + * Fixup phys_base - remove the memory encryption mask from %rbp + * to obtain the true physical address. + */ + subq %r12, %rbp addq %rbp, phys_base(%rip) + /* + * Encrypt the kernel if SME is active. 
+ * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + */ + push %rsi + call sme_encrypt_kernel + pop %rsi + .Lskip_fixup: + /* + * The encryption mask is in %r12. We ADD this to %rax to be sure + * that the encryption mask is part of the value that will be + * stored in %cr3. + */ movq $(early_level4_pgt - __START_KERNEL_map), %rax + addq %r12, %rax jmp 1f ENTRY(secondary_startup_64) /* @@ -186,7 +230,20 @@ ENTRY(secondary_startup_64) /* Sanitize CPU configuration */ call verify_cpu - movq $(init_level4_pgt - __START_KERNEL_map), %rax + /* + * Get the SME encryption mask. + * The encryption mask will be returned in %rax so we do an ADD + * below to be sure that the encryption mask is part of the + * value that will stored in %cr3. + * + * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + */ + push %rsi + call sme_get_me_mask + pop %rsi + + addq $(init_level4_pgt - __START_KERNEL_map), %rax 1: /* Enable PAE mode and PGE */ diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index a94a7b6..9e13841 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -2,7 +2,7 @@ KCOV_INSTRUMENT_tlb.o := n obj-y := init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \ - pat.o pgtable.o physaddr.o setup_nx.o tlb.o + pat.o pgtable.o physaddr.o setup_nx.o tlb.o mem_encrypt.o # Make sure __phys_addr has no stackprotector nostackp := $(call cc-option, -fno-stack-protector) @@ -38,5 +38,3 @@ obj-$(CONFIG_NUMA_EMU) += numa_emulation.o obj-$(CONFIG_X86_INTEL_MPX) += mpx.o obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o - -obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt.o diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index b99d469..cc00d8b 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -11,6 +11,9 @@ */ #include 
+#include + +#ifdef CONFIG_AMD_MEM_ENCRYPT /* * Since SME related variables are set early in the boot process they must @@ -19,3 +22,26 @@ */ unsigned long sme_me_mask __section(.data) = 0; EXPORT_SYMBOL_GPL(sme_me_mask); + +void __init sme_encrypt_kernel(void) +{ +} + +unsigned long __init sme_enable(void) +{ + return sme_me_mask; +} + +unsigned long sme_get_me_mask(void) +{ + return sme_me_mask; +} + +#else /* !CONFIG_AMD_MEM_ENCRYPT */ + +void __init sme_encrypt_kernel(void) { } +unsigned long __init sme_enable(void) { return 0; } + +unsigned long sme_get_me_mask(void) { return 0; } + +#endif /* CONFIG_AMD_MEM_ENCRYPT */ -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-by2nam01on0083.outbound.protection.outlook.com ([104.47.34.83]:19648 "EHLO NAM01-BY2-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1757947AbdDRVRv (ORCPT ); Tue, 18 Apr 2017 17:17:51 -0400 From: Tom Lendacky Subject: [PATCH v5 07/32] x86/mm: Add support to enable SME in early boot processing Date: Tue, 18 Apr 2017 16:17:35 -0500 Message-ID: <20170418211735.10190.29562.stgit@tlendack-t1.amdoffice.net> In-Reply-To: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net> References: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Sender: linux-arch-owner@vger.kernel.org List-ID: To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Rik van Riel , Radim =?utf-8?b?S3LEjW3DocWZ?= , Toshimitsu Kani , Arnd Bergmann , Jonathan Corbet , 
Matt Fleming , "Michael S. Tsirkin" , Joerg Roedel , Konrad Rzeszutek Wilk , Paolo Bonzini , Larry Woodman , Brijesh Singh , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , Andrey Ryabinin , Alexander Potapenko , Dave Young , Thomas Gleixner , Dmitry Vyukov Message-ID: <20170418211735.YvR83tWJnbGfxgbPC0ffuC05jY2ZjYd6GkyzjJRlknI@z> Add support to the early boot code to use Secure Memory Encryption (SME). Since the kernel has been loaded into memory in a decrypted state, support is added to encrypt the kernel in place and update the early pagetables with the memory encryption mask so that new pagetable entries will use memory encryption. The routines to set the encryption mask and perform the encryption are stub routines for now with functionality to be added in a later patch. Because of the need to have the routines available to head_64.S, the mem_encrypt.c is always built and #ifdefs in mem_encrypt.c will provide functionality or stub routines depending on CONFIG_AMD_MEM_ENCRYPT. Signed-off-by: Tom Lendacky --- arch/x86/kernel/head_64.S | 61 ++++++++++++++++++++++++++++++++++++++++++++- arch/x86/mm/Makefile | 4 +-- arch/x86/mm/mem_encrypt.c | 26 +++++++++++++++++++ 3 files changed, 86 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index ac9d327..3115e21 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -91,6 +91,23 @@ startup_64: jnz bad_address /* + * Enable Secure Memory Encryption (SME), if supported and enabled. + * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + * Save the returned mask in %r12 for later use. + */ + push %rsi + call sme_enable + pop %rsi + movq %rax, %r12 + + /* + * Add the memory encryption mask to %rbp to include it in the page + * table fixups. 
+ */ + addq %r12, %rbp + + /* * Fixup the physical addresses in the page table */ addq %rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip) @@ -113,6 +130,7 @@ startup_64: shrq $PGDIR_SHIFT, %rax leaq (PAGE_SIZE + _KERNPG_TABLE)(%rbx), %rdx + addq %r12, %rdx movq %rdx, 0(%rbx,%rax,8) movq %rdx, 8(%rbx,%rax,8) @@ -129,6 +147,7 @@ startup_64: movq %rdi, %rax shrq $PMD_SHIFT, %rdi addq $(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax + addq %r12, %rax leaq (_end - 1)(%rip), %rcx shrq $PMD_SHIFT, %rcx subq %rdi, %rcx @@ -142,6 +161,12 @@ startup_64: decl %ecx jnz 1b + /* + * Determine if any fixups are required. This includes fixups + * based on where the kernel was loaded and whether SME is + * active. If %rbp is zero, then we can skip both the fixups + * and the call to encrypt the kernel. + */ test %rbp, %rbp jz .Lskip_fixup @@ -162,11 +187,30 @@ startup_64: cmp %r8, %rdi jne 1b - /* Fixup phys_base */ + /* + * Fixup phys_base - remove the memory encryption mask from %rbp + * to obtain the true physical address. + */ + subq %r12, %rbp addq %rbp, phys_base(%rip) + /* + * Encrypt the kernel if SME is active. + * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + */ + push %rsi + call sme_encrypt_kernel + pop %rsi + .Lskip_fixup: + /* + * The encryption mask is in %r12. We ADD this to %rax to be sure + * that the encryption mask is part of the value that will be + * stored in %cr3. + */ movq $(early_level4_pgt - __START_KERNEL_map), %rax + addq %r12, %rax jmp 1f ENTRY(secondary_startup_64) /* @@ -186,7 +230,20 @@ ENTRY(secondary_startup_64) /* Sanitize CPU configuration */ call verify_cpu - movq $(init_level4_pgt - __START_KERNEL_map), %rax + /* + * Get the SME encryption mask. + * The encryption mask will be returned in %rax so we do an ADD + * below to be sure that the encryption mask is part of the + * value that will stored in %cr3. 
+ * + * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + */ + push %rsi + call sme_get_me_mask + pop %rsi + + addq $(init_level4_pgt - __START_KERNEL_map), %rax 1: /* Enable PAE mode and PGE */ diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index a94a7b6..9e13841 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -2,7 +2,7 @@ KCOV_INSTRUMENT_tlb.o := n obj-y := init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \ - pat.o pgtable.o physaddr.o setup_nx.o tlb.o + pat.o pgtable.o physaddr.o setup_nx.o tlb.o mem_encrypt.o # Make sure __phys_addr has no stackprotector nostackp := $(call cc-option, -fno-stack-protector) @@ -38,5 +38,3 @@ obj-$(CONFIG_NUMA_EMU) += numa_emulation.o obj-$(CONFIG_X86_INTEL_MPX) += mpx.o obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o - -obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt.o diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index b99d469..cc00d8b 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -11,6 +11,9 @@ */ #include +#include + +#ifdef CONFIG_AMD_MEM_ENCRYPT /* * Since SME related variables are set early in the boot process they must @@ -19,3 +22,26 @@ */ unsigned long sme_me_mask __section(.data) = 0; EXPORT_SYMBOL_GPL(sme_me_mask); + +void __init sme_encrypt_kernel(void) +{ +} + +unsigned long __init sme_enable(void) +{ + return sme_me_mask; +} + +unsigned long sme_get_me_mask(void) +{ + return sme_me_mask; +} + +#else /* !CONFIG_AMD_MEM_ENCRYPT */ + +void __init sme_encrypt_kernel(void) { } +unsigned long __init sme_enable(void) { return 0; } + +unsigned long sme_get_me_mask(void) { return 0; } + +#endif /* CONFIG_AMD_MEM_ENCRYPT */ From mboxrd@z Thu Jan 1 00:00:00 1970 From: Tom Lendacky Subject: [PATCH v5 07/32] x86/mm: Add support to enable SME in early boot processing Date: Tue, 18 Apr 2017 16:17:35 -0500 Message-ID: 
<20170418211735.10190.29562.stgit@tlendack-t1.amdoffice.net> References: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net> Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Cc: Rik van Riel , Radim =?utf-8?b?S3LEjW3DocWZ?= , Toshimitsu Kani , Arnd Bergmann , Jonathan Corbet , Matt Fleming , "Michael S. Tsirkin" , Joerg Roedel , Konrad Rzeszutek Wilk , Paolo Bonzini , Larry Woodman , Brijesh Singh , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , Andrey Ryabinin , Alexander Potapenko , Dave Young , Thomas Gleixner , Dmitry Vyukov To: , , , , , , , , , Return-path: In-Reply-To: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net> Sender: owner-linux-mm@kvack.org List-Id: kvm.vger.kernel.org Add support to the early boot code to use Secure Memory Encryption (SME). Since the kernel has been loaded into memory in a decrypted state, support is added to encrypt the kernel in place and update the early pagetables with the memory encryption mask so that new pagetable entries will use memory encryption. The routines to set the encryption mask and perform the encryption are stub routines for now with functionality to be added in a later patch. Because of the need to have the routines available to head_64.S, the mem_encrypt.c is always built and #ifdefs in mem_encrypt.c will provide functionality or stub routines depending on CONFIG_AMD_MEM_ENCRYPT. Signed-off-by: Tom Lendacky --- arch/x86/kernel/head_64.S | 61 ++++++++++++++++++++++++++++++++++++++++++++- arch/x86/mm/Makefile | 4 +-- arch/x86/mm/mem_encrypt.c | 26 +++++++++++++++++++ 3 files changed, 86 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index ac9d327..3115e21 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -91,6 +91,23 @@ startup_64: jnz bad_address /* + * Enable Secure Memory Encryption (SME), if supported and enabled. 
+ * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + * Save the returned mask in %r12 for later use. + */ + push %rsi + call sme_enable + pop %rsi + movq %rax, %r12 + + /* + * Add the memory encryption mask to %rbp to include it in the page + * table fixups. + */ + addq %r12, %rbp + + /* * Fixup the physical addresses in the page table */ addq %rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip) @@ -113,6 +130,7 @@ startup_64: shrq $PGDIR_SHIFT, %rax leaq (PAGE_SIZE + _KERNPG_TABLE)(%rbx), %rdx + addq %r12, %rdx movq %rdx, 0(%rbx,%rax,8) movq %rdx, 8(%rbx,%rax,8) @@ -129,6 +147,7 @@ startup_64: movq %rdi, %rax shrq $PMD_SHIFT, %rdi addq $(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax + addq %r12, %rax leaq (_end - 1)(%rip), %rcx shrq $PMD_SHIFT, %rcx subq %rdi, %rcx @@ -142,6 +161,12 @@ startup_64: decl %ecx jnz 1b + /* + * Determine if any fixups are required. This includes fixups + * based on where the kernel was loaded and whether SME is + * active. If %rbp is zero, then we can skip both the fixups + * and the call to encrypt the kernel. + */ test %rbp, %rbp jz .Lskip_fixup @@ -162,11 +187,30 @@ startup_64: cmp %r8, %rdi jne 1b - /* Fixup phys_base */ + /* + * Fixup phys_base - remove the memory encryption mask from %rbp + * to obtain the true physical address. + */ + subq %r12, %rbp addq %rbp, phys_base(%rip) + /* + * Encrypt the kernel if SME is active. + * The real_mode_data address is in %rsi and that register can be + * clobbered by the called function so be sure to save it. + */ + push %rsi + call sme_encrypt_kernel + pop %rsi + .Lskip_fixup: + /* + * The encryption mask is in %r12. We ADD this to %rax to be sure + * that the encryption mask is part of the value that will be + * stored in %cr3. 
+	 */
 	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
+	addq	%r12, %rax
 	jmp	1f

 ENTRY(secondary_startup_64)
 	/*
@@ -186,7 +230,20 @@ ENTRY(secondary_startup_64)
 	/* Sanitize CPU configuration */
 	call verify_cpu

-	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
+	/*
+	 * Get the SME encryption mask.
+	 * The encryption mask will be returned in %rax so we do an ADD
+	 * below to be sure that the encryption mask is part of the
+	 * value that will be stored in %cr3.
+	 *
+	 * The real_mode_data address is in %rsi and that register can be
+	 * clobbered by the called function so be sure to save it.
+	 */
+	push	%rsi
+	call	sme_get_me_mask
+	pop	%rsi
+
+	addq	$(init_level4_pgt - __START_KERNEL_map), %rax
 1:

 	/* Enable PAE mode and PGE */
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index a94a7b6..9e13841 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -2,7 +2,7 @@ KCOV_INSTRUMENT_tlb.o := n

 obj-y	:=	init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
-		pat.o pgtable.o physaddr.o setup_nx.o tlb.o
+		pat.o pgtable.o physaddr.o setup_nx.o tlb.o mem_encrypt.o

 # Make sure __phys_addr has no stackprotector
 nostackp := $(call cc-option, -fno-stack-protector)
@@ -38,5 +38,3 @@ obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 obj-$(CONFIG_X86_INTEL_MPX)			+= mpx.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)	+= pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)			+= kaslr.o
-
-obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index b99d469..cc00d8b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -11,6 +11,9 @@
  */

 #include
+#include
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT

 /*
  * Since SME related variables are set early in the boot process they must
@@ -19,3 +22,26 @@
  */
 unsigned long sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
+
+void __init sme_encrypt_kernel(void)
+{
+}
+
+unsigned long __init sme_enable(void)
+{
+	return sme_me_mask;
+}
+
+unsigned long sme_get_me_mask(void)
+{
+	return sme_me_mask;
+}
+
+#else	/* !CONFIG_AMD_MEM_ENCRYPT */
+
+void __init sme_encrypt_kernel(void)	{ }
+unsigned long __init sme_enable(void)	{ return 0; }
+
+unsigned long sme_get_me_mask(void)	{ return 0; }
+
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org.  For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org