From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [RFC PATCH v2 07/20] x86: Provide general kernel support for memory encryption
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org
CC: Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming, Joerg Roedel,
 Konrad Rzeszutek Wilk, Andrey Ryabinin, Ingo Molnar, Borislav Petkov,
 Andy Lutomirski, "H. Peter Anvin", Paolo Bonzini, Alexander Potapenko,
 Thomas Gleixner, Dmitry Vyukov
Date: Mon, 22 Aug 2016 17:36:46 -0500
Message-ID: <20160822223646.29880.28794.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Adding general kernel support for memory encryption includes:

- Modify and create some page table macros to include the Secure Memory
  Encryption (SME) memory encryption mask
- Update kernel boot support to call an SME routine that checks for and
  sets the SME capability (the SME routine will grow later and for now is
  just a stub routine)
- Update kernel boot support to call an SME routine that encrypts the
  kernel (the SME routine will grow later and for now is just a stub
  routine)
- Provide an SME initialization routine to update the protection map with
  the memory encryption mask so that it is used by default

Signed-off-by: Tom Lendacky
---
 arch/x86/boot/compressed/pagetable.c | 7 ++++++ arch/x86/include/asm/fixmap.h | 7 ++++++ arch/x86/include/asm/mem_encrypt.h | 18 +++++++++++++++ arch/x86/include/asm/pgtable.h | 
26 ++++++++++++--------- arch/x86/include/asm/pgtable_types.h | 41 ++++++++++++++++++++++----------- arch/x86/include/asm/processor.h | 3 ++ arch/x86/kernel/espfix_64.c | 2 +- arch/x86/kernel/head64.c | 10 ++++++-- arch/x86/kernel/head_64.S | 42 ++++++++++++++++++++++++++-------- arch/x86/kernel/machine_kexec_64.c | 2 +- arch/x86/kernel/mem_encrypt.S | 8 ++++++ arch/x86/mm/Makefile | 1 + arch/x86/mm/fault.c | 5 ++-- arch/x86/mm/ioremap.c | 3 ++ arch/x86/mm/kasan_init_64.c | 4 ++- arch/x86/mm/mem_encrypt.c | 30 ++++++++++++++++++++++++ arch/x86/mm/pageattr.c | 3 ++ 17 files changed, 167 insertions(+), 45 deletions(-) create mode 100644 arch/x86/mm/mem_encrypt.c diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c index 56589d0..411c443 100644 --- a/arch/x86/boot/compressed/pagetable.c +++ b/arch/x86/boot/compressed/pagetable.c @@ -15,6 +15,13 @@ #define __pa(x) ((unsigned long)(x)) #define __va(x) ((void *)((unsigned long)(x))) +/* + * The pgtable.h and mm/ident_map.c includes make use of the SME related + * information which is not used in the compressed image support. Un-define + * the SME support to avoid any compile and link errors. + */ +#undef CONFIG_AMD_MEM_ENCRYPT + #include "misc.h" /* These actually do the work of building the kernel identity maps. */ diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h index 8554f96..83e91f0 100644 --- a/arch/x86/include/asm/fixmap.h +++ b/arch/x86/include/asm/fixmap.h @@ -153,6 +153,13 @@ static inline void __set_fixmap(enum fixed_addresses idx, } #endif +/* + * Fixmap settings used with memory encryption + * - FIXMAP_PAGE_NOCACHE is used for MMIO so make sure the memory + * encryption mask is not part of the page attributes + */ +#define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_IO_NOCACHE + #include #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags) diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 747fc52..9f3e762 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -15,12 +15,21 @@ #ifndef __ASSEMBLY__ +#include + #ifdef CONFIG_AMD_MEM_ENCRYPT extern unsigned long sme_me_mask; u8 sme_get_me_loss(void); +void __init sme_early_init(void); + +#define __sme_pa(x) (__pa((x)) | sme_me_mask) +#define __sme_pa_nodebug(x) (__pa_nodebug((x)) | sme_me_mask) + +#define __sme_va(x) (__va((x) & ~sme_me_mask)) + #else /* !CONFIG_AMD_MEM_ENCRYPT */ #define sme_me_mask 0UL @@ -30,6 +39,15 @@ static inline u8 sme_get_me_loss(void) return 0; } +static inline void __init sme_early_init(void) +{ +} + +#define __sme_pa __pa +#define __sme_pa_nodebug __pa_nodebug + +#define __sme_va __va + #endif /* CONFIG_AMD_MEM_ENCRYPT */ #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 437feb4..e3f2b3e 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -5,6 +5,7 @@ #include #include +#include /* * Macro to mark a page protection value as UC- @@ -155,17 +156,22 @@ static inline int pte_special(pte_t pte) static inline unsigned long pte_pfn(pte_t pte) { - return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT; + return (pte_val(pte) & ~sme_me_mask & PTE_PFN_MASK) >> PAGE_SHIFT; } static inline unsigned long pmd_pfn(pmd_t pmd) { - return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT; + return (pmd_val(pmd) & ~sme_me_mask & pmd_pfn_mask(pmd)) >> PAGE_SHIFT; } static inline unsigned long pud_pfn(pud_t pud) { - return (pud_val(pud) & 
pud_pfn_mask(pud)) >> PAGE_SHIFT; + return (pud_val(pud) & ~sme_me_mask & pud_pfn_mask(pud)) >> PAGE_SHIFT; +} + +static inline unsigned long pgd_pfn(pgd_t pgd) +{ + return (pgd_val(pgd) & ~sme_me_mask) >> PAGE_SHIFT; } #define pte_page(pte) pfn_to_page(pte_pfn(pte)) @@ -558,15 +564,14 @@ static inline int pmd_none(pmd_t pmd) static inline unsigned long pmd_page_vaddr(pmd_t pmd) { - return (unsigned long)__va(pmd_val(pmd) & pmd_pfn_mask(pmd)); + return (unsigned long)__sme_va(pmd_val(pmd) & pmd_pfn_mask(pmd)); } /* * Currently stuck as a macro due to indirect forward reference to * linux/mmzone.h's __section_mem_map_addr() definition: */ -#define pmd_page(pmd) \ - pfn_to_page((pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT) +#define pmd_page(pmd) pfn_to_page(pmd_pfn(pmd)) /* * the pmd page can be thought of an array like this: pmd_t[PTRS_PER_PMD] @@ -627,15 +632,14 @@ static inline int pud_present(pud_t pud) static inline unsigned long pud_page_vaddr(pud_t pud) { - return (unsigned long)__va(pud_val(pud) & pud_pfn_mask(pud)); + return (unsigned long)__sme_va(pud_val(pud) & pud_pfn_mask(pud)); } /* * Currently stuck as a macro due to indirect forward reference to * linux/mmzone.h's __section_mem_map_addr() definition: */ -#define pud_page(pud) \ - pfn_to_page((pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT) +#define pud_page(pud) pfn_to_page(pud_pfn(pud)) /* Find an entry in the second-level page table.. */ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address) @@ -668,14 +672,14 @@ static inline int pgd_present(pgd_t pgd) static inline unsigned long pgd_page_vaddr(pgd_t pgd) { - return (unsigned long)__va((unsigned long)pgd_val(pgd) & PTE_PFN_MASK); + return (unsigned long)__sme_va((unsigned long)pgd_val(pgd) & PTE_PFN_MASK); } /* * Currently stuck as a macro due to indirect forward reference to * linux/mmzone.h's __section_mem_map_addr() definition: */ -#define pgd_page(pgd) pfn_to_page(pgd_val(pgd) >> PAGE_SHIFT) +#define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd)) /* to find an entry in a page-table-directory. 
*/ static inline unsigned long pud_index(unsigned long address) diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index f1218f5..a01f0e1 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -3,6 +3,7 @@ #include #include +#include #define FIRST_USER_ADDRESS 0UL @@ -121,9 +122,9 @@ #define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE) -#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \ +#define __PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \ _PAGE_ACCESSED | _PAGE_DIRTY) -#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \ +#define __KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \ _PAGE_DIRTY) /* @@ -191,18 +192,30 @@ enum page_cache_mode { #define __PAGE_KERNEL_IO (__PAGE_KERNEL) #define __PAGE_KERNEL_IO_NOCACHE (__PAGE_KERNEL_NOCACHE) -#define PAGE_KERNEL __pgprot(__PAGE_KERNEL) -#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO) -#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC) -#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX) -#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE) -#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE) -#define PAGE_KERNEL_LARGE_EXEC __pgprot(__PAGE_KERNEL_LARGE_EXEC) -#define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL) -#define PAGE_KERNEL_VVAR __pgprot(__PAGE_KERNEL_VVAR) - -#define PAGE_KERNEL_IO __pgprot(__PAGE_KERNEL_IO) -#define PAGE_KERNEL_IO_NOCACHE __pgprot(__PAGE_KERNEL_IO_NOCACHE) +#ifndef __ASSEMBLY__ + +#define _PAGE_ENC sme_me_mask + +/* Redefine macros to inclue the memory encryption mask */ +#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \ + _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_ENC) +#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \ + _PAGE_DIRTY | _PAGE_ENC) + +#define PAGE_KERNEL __pgprot(__PAGE_KERNEL | _PAGE_ENC) +#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO | _PAGE_ENC) +#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC) +#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX | _PAGE_ENC) +#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC) +#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC) +#define PAGE_KERNEL_LARGE_EXEC __pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC) +#define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC) +#define PAGE_KERNEL_VVAR __pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC) + +#define PAGE_KERNEL_IO __pgprot(__PAGE_KERNEL_IO) +#define PAGE_KERNEL_IO_NOCACHE __pgprot(__PAGE_KERNEL_IO_NOCACHE) + +#endif /* __ASSEMBLY__ */ /* xwr */ #define __P000 PAGE_NONE diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 63def95..024d58c 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -22,6 +22,7 @@ struct vm86; #include #include #include +#include #include #include @@ -207,7 +208,7 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx, static inline void load_cr3(pgd_t *pgdir) { - write_cr3(__pa(pgdir)); + write_cr3(__sme_pa(pgdir)); } #ifdef CONFIG_X86_32 diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c index 04f89ca..51566d7 100644 --- a/arch/x86/kernel/espfix_64.c +++ b/arch/x86/kernel/espfix_64.c @@ -193,7 +193,7 @@ void init_espfix_ap(int cpu) pte_p = pte_offset_kernel(&pmd, addr); stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0)); - pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask)); + pte = __pte(__pa(stack_page) | 
((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask)); for (n = 0; n < ESPFIX_PTE_CLONES; n++) set_pte(&pte_p[n*PTE_STRIDE], pte); diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index 54a2372..88c7bae 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -28,6 +28,7 @@ #include #include #include +#include /* * Manage page tables very early on. @@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void) { memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1)); next_early_pgt = 0; - write_cr3(__pa_nodebug(early_level4_pgt)); + write_cr3(__sme_pa_nodebug(early_level4_pgt)); } /* Create a new PMD entry */ @@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address) pmdval_t pmd, *pmd_p; /* Invalid address or early pgt is done ? */ - if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt)) + if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt)) return -1; again: @@ -157,6 +158,11 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data) clear_page(init_level4_pgt); + /* Update the early_pmd_flags with the memory encryption mask */ + early_pmd_flags |= _PAGE_ENC; + + sme_early_init(); + kasan_early_init(); for (i = 0; i < NUM_EXCEPTION_VECTORS; i++) diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index c98a559..30f7715 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -95,6 +95,13 @@ startup_64: jnz bad_address /* + * Enable memory encryption (if available). Add the memory encryption + * mask to %rbp to include it in the the page table fixup. + */ + call sme_enable + addq sme_me_mask(%rip), %rbp + + /* * Fixup the physical addresses in the page table */ addq %rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip) @@ -116,7 +123,8 @@ startup_64: movq %rdi, %rax shrq $PGDIR_SHIFT, %rax - leaq (4096 + _KERNPG_TABLE)(%rbx), %rdx + leaq (4096 + __KERNPG_TABLE)(%rbx), %rdx + addq sme_me_mask(%rip), %rdx /* Apply mem encryption mask */ movq %rdx, 0(%rbx,%rax,8) movq %rdx, 8(%rbx,%rax,8) @@ -133,6 +141,7 @@ startup_64: movq %rdi, %rax shrq $PMD_SHIFT, %rdi addq $(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax + addq sme_me_mask(%rip), %rax /* Apply mem encryption mask */ leaq (_end - 1)(%rip), %rcx shrq $PMD_SHIFT, %rcx subq %rdi, %rcx @@ -163,9 +172,19 @@ startup_64: cmp %r8, %rdi jne 1b - /* Fixup phys_base */ + /* + * Fixup phys_base, remove the memory encryption mask from %rbp + * to obtain the true physical address. 
+ */ + subq sme_me_mask(%rip), %rbp addq %rbp, phys_base(%rip) + /* + * The page tables have been updated with the memory encryption mask, + * so encrypt the kernel if memory encryption is active + */ + call sme_encrypt_kernel + movq $(early_level4_pgt - __START_KERNEL_map), %rax jmp 1f ENTRY(secondary_startup_64) @@ -189,6 +208,9 @@ ENTRY(secondary_startup_64) movq $(init_level4_pgt - __START_KERNEL_map), %rax 1: + /* Add the memory encryption mask to RAX */ + addq sme_me_mask(%rip), %rax + /* Enable PAE mode and PGE */ movl $(X86_CR4_PAE | X86_CR4_PGE), %ecx movq %rcx, %cr4 @@ -415,7 +437,7 @@ GLOBAL(name) __INITDATA NEXT_PAGE(early_level4_pgt) .fill 511,8,0 - .quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE NEXT_PAGE(early_dynamic_pgts) .fill 512*EARLY_DYNAMIC_PAGE_TABLES,8,0 @@ -427,15 +449,15 @@ NEXT_PAGE(init_level4_pgt) .fill 512,8,0 #else NEXT_PAGE(init_level4_pgt) - .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE + .quad level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE .org init_level4_pgt + L4_PAGE_OFFSET*8, 0 - .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE + .quad level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE .org init_level4_pgt + L4_START_KERNEL*8, 0 /* (2^48-(2*1024*1024*1024))/(2^39) = 511 */ - .quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE NEXT_PAGE(level3_ident_pgt) - .quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE + .quad level2_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE .fill 511, 8, 0 NEXT_PAGE(level2_ident_pgt) /* Since I easily can, map the first 1G. @@ -447,8 +469,8 @@ NEXT_PAGE(level2_ident_pgt) NEXT_PAGE(level3_kernel_pgt) .fill L3_START_KERNEL,8,0 /* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */ - .quad level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE - .quad level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level2_kernel_pgt - __START_KERNEL_map + __KERNPG_TABLE + .quad level2_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE NEXT_PAGE(level2_kernel_pgt) /* @@ -466,7 +488,7 @@ NEXT_PAGE(level2_kernel_pgt) NEXT_PAGE(level2_fixmap_pgt) .fill 506,8,0 - .quad level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level1_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE /* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */ .fill 5,8,0 diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 5a294e4..f976867 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -103,7 +103,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable) struct x86_mapping_info info = { .alloc_pgt_page = alloc_pgt_page, .context = image, - .pmd_flag = __PAGE_KERNEL_LARGE_EXEC, + .pmd_flag = __PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC, }; unsigned long mstart, mend; pgd_t *level4p; diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S index ef7f325..f2e0536 100644 --- a/arch/x86/kernel/mem_encrypt.S +++ b/arch/x86/kernel/mem_encrypt.S @@ -14,6 +14,14 @@ .text .code64 +ENTRY(sme_enable) + ret +ENDPROC(sme_enable) + +ENTRY(sme_encrypt_kernel) + ret +ENDPROC(sme_encrypt_kernel) + ENTRY(sme_get_me_loss) xor %rax, %rax mov sme_me_loss(%rip), %al diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 96d2b84..44d4d21 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -39,3 +39,4 @@ obj-$(CONFIG_X86_INTEL_MPX) += mpx.o 
obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o +obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt.o diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index dc80230..3ea88ca 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -23,6 +23,7 @@ #include /* emulate_vsyscall */ #include /* struct vm86 */ #include /* vma_pkey() */ +#include /* __sme_va() */ #define CREATE_TRACE_POINTS #include @@ -523,7 +524,7 @@ static int bad_address(void *p) static void dump_pagetable(unsigned long address) { - pgd_t *base = __va(read_cr3() & PHYSICAL_PAGE_MASK); + pgd_t *base = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK); pgd_t *pgd = base + pgd_index(address); pud_t *pud; pmd_t *pmd; @@ -659,7 +660,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, pgd_t *pgd; pte_t *pte; - pgd = __va(read_cr3() & PHYSICAL_PAGE_MASK); + pgd = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK); pgd += pgd_index(address); pte = lookup_address_in_pgd(pgd, address, &level); diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 7aaa263..5cf0489 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -20,6 +20,7 @@ #include #include #include +#include #include "physaddr.h" @@ -423,7 +424,7 @@ static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss; static inline pmd_t * __init early_ioremap_pmd(unsigned long addr) { /* Don't assume we're using swapper_pg_dir at this point */ - pgd_t *base = __va(read_cr3()); + pgd_t *base = __sme_va(read_cr3()); pgd_t *pgd = &base[pgd_index(addr)]; pud_t *pud = pud_offset(pgd, addr); pmd_t *pmd = pmd_offset(pud, addr); diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c index 0493c17..0608dc8 100644 --- a/arch/x86/mm/kasan_init_64.c +++ b/arch/x86/mm/kasan_init_64.c @@ -68,7 +68,7 @@ static struct notifier_block kasan_die_notifier = { void __init kasan_early_init(void) { int i; - pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL; + pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC; pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE; pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE; @@ -130,7 +130,7 @@ void __init kasan_init(void) */ memset(kasan_zero_page, 0, PAGE_SIZE); for (i = 0; i < PTRS_PER_PTE; i++) { - pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO); + pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC); set_pte(&kasan_zero_pte[i], pte); } /* Flush TLBs again to be sure that write protection applied. */ diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c new file mode 100644 index 0000000..00eb705 --- /dev/null +++ b/arch/x86/mm/mem_encrypt.c @@ -0,0 +1,30 @@ +/* + * AMD Memory Encryption Support + * + * Copyright (C) 2016 Advanced Micro Devices, Inc. + * + * Author: Tom Lendacky + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include +#include + +#include + +void __init sme_early_init(void) +{ + unsigned int i; + + if (!sme_me_mask) + return; + + __supported_pte_mask |= sme_me_mask; + + /* Update the protection map with memory encryption mask */ + for (i = 0; i < ARRAY_SIZE(protection_map); i++) + protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask); +} diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 849dc09..72c292d 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -1973,6 +1973,9 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address, if (!(page_flags & _PAGE_RW)) cpa.mask_clr = __pgprot(_PAGE_RW); + if (!(page_flags & _PAGE_ENC)) + cpa.mask_clr = __pgprot(pgprot_val(cpa.mask_clr) | _PAGE_ENC); + cpa.mask_set = __pgprot(_PAGE_PRESENT | page_flags); retval = __change_page_attr_set_clr(&cpa, 0);
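
The recurring pattern in this patch is that the SME encryption mask is OR'd
into the physical address placed in a page table entry (__sme_pa(), _PAGE_ENC)
and masked back off whenever the true physical address or PFN is needed
(__sme_va(), the pte_pfn()/pmd_pfn()/pud_pfn() changes). The stand-alone C
sketch below shows only that idea; the mask value and the demo_* names are
assumptions made for illustration and are not the kernel implementation.

/*
 * Minimal user-space sketch of the mask handling used by this patch.
 * The mask bit and the demo_* helper names are invented for the example.
 */
#include <stdint.h>
#include <stdio.h>

/* Assume one high physical-address bit acts as the encryption mask. */
static uint64_t demo_sme_me_mask = 1ULL << 47;

/* Build a page-table-entry style value: physical address plus the mask. */
static uint64_t demo_sme_pa(uint64_t phys)
{
	return phys | demo_sme_me_mask;
}

/* Recover the true physical address by stripping the mask again. */
static uint64_t demo_sme_strip(uint64_t entry)
{
	return entry & ~demo_sme_me_mask;
}

int main(void)
{
	uint64_t phys = 0x12345000ULL;
	uint64_t entry = demo_sme_pa(phys);

	printf("entry    = %#llx\n", (unsigned long long)entry);
	printf("stripped = %#llx\n", (unsigned long long)demo_sme_strip(entry));
	return 0;
}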
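
sme_early_init() above folds the mask into __supported_pte_mask and into every
entry of protection_map so that new mappings are encrypted by default when SME
is active. A minimal sketch of that protection-map update follows; the
demo_* protection values and table size are invented stand-ins for the real
kernel tables, not the actual definitions.

/*
 * Sketch of the sme_early_init() idea: OR the encryption mask into a table
 * of default protections, but only when the mask is non-zero (SME active).
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PAGE_PRESENT 0x001ULL
#define DEMO_PAGE_RW      0x002ULL
#define DEMO_PAGE_USER    0x004ULL

static uint64_t demo_sme_me_mask;	/* zero means SME is not active */

static uint64_t demo_protection_map[4] = {
	0,
	DEMO_PAGE_PRESENT,
	DEMO_PAGE_PRESENT | DEMO_PAGE_RW,
	DEMO_PAGE_PRESENT | DEMO_PAGE_RW | DEMO_PAGE_USER,
};

static void demo_sme_early_init(void)
{
	size_t i;

	if (!demo_sme_me_mask)
		return;		/* nothing to do when SME is inactive */

	for (i = 0; i < sizeof(demo_protection_map) / sizeof(demo_protection_map[0]); i++)
		demo_protection_map[i] |= demo_sme_me_mask;
}

int main(void)
{
	demo_sme_me_mask = 1ULL << 47;	/* pretend SME is enabled */
	demo_sme_early_init();

	for (size_t i = 0; i < 4; i++)
		printf("prot[%zu] = %#llx\n", i,
		       (unsigned long long)demo_protection_map[i]);
	return 0;
}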
Peter Anvin" , Paolo Bonzini , Alexander Potapenko , Thomas Gleixner , Dmitry Vyukov List-Id: linux-efi@vger.kernel.org Adding general kernel support for memory encryption includes: - Modify and create some page table macros to include the Secure Memory Encryption (SME) memory encryption mask - Update kernel boot support to call an SME routine that checks for and sets the SME capability (the SME routine will grow later and for now is just a stub routine) - Update kernel boot support to call an SME routine that encrypts the kernel (the SME routine will grow later and for now is just a stub routine) - Provide an SME initialization routine to update the protection map with the memory encryption mask so that it is used by default Signed-off-by: Tom Lendacky --- arch/x86/boot/compressed/pagetable.c | 7 ++++++ arch/x86/include/asm/fixmap.h | 7 ++++++ arch/x86/include/asm/mem_encrypt.h | 18 +++++++++++++++ arch/x86/include/asm/pgtable.h | 26 ++++++++++++--------- arch/x86/include/asm/pgtable_types.h | 41 ++++++++++++++++++++++----------- arch/x86/include/asm/processor.h | 3 ++ arch/x86/kernel/espfix_64.c | 2 +- arch/x86/kernel/head64.c | 10 ++++++-- arch/x86/kernel/head_64.S | 42 ++++++++++++++++++++++++++-------- arch/x86/kernel/machine_kexec_64.c | 2 +- arch/x86/kernel/mem_encrypt.S | 8 ++++++ arch/x86/mm/Makefile | 1 + arch/x86/mm/fault.c | 5 ++-- arch/x86/mm/ioremap.c | 3 ++ arch/x86/mm/kasan_init_64.c | 4 ++- arch/x86/mm/mem_encrypt.c | 30 ++++++++++++++++++++++++ arch/x86/mm/pageattr.c | 3 ++ 17 files changed, 167 insertions(+), 45 deletions(-) create mode 100644 arch/x86/mm/mem_encrypt.c diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c index 56589d0..411c443 100644 --- a/arch/x86/boot/compressed/pagetable.c +++ b/arch/x86/boot/compressed/pagetable.c @@ -15,6 +15,13 @@ #define __pa(x) ((unsigned long)(x)) #define __va(x) ((void *)((unsigned long)(x))) +/* + * The pgtable.h and mm/ident_map.c includes make use of the SME related + * information which is not used in the compressed image support. Un-define + * the SME support to avoid any compile and link errors. + */ +#undef CONFIG_AMD_MEM_ENCRYPT + #include "misc.h" /* These actually do the work of building the kernel identity maps. 
*/ diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h index 8554f96..83e91f0 100644 --- a/arch/x86/include/asm/fixmap.h +++ b/arch/x86/include/asm/fixmap.h @@ -153,6 +153,13 @@ static inline void __set_fixmap(enum fixed_addresses idx, } #endif +/* + * Fixmap settings used with memory encryption + * - FIXMAP_PAGE_NOCACHE is used for MMIO so make sure the memory + * encryption mask is not part of the page attributes + */ +#define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_IO_NOCACHE + #include #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags) diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 747fc52..9f3e762 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -15,12 +15,21 @@ #ifndef __ASSEMBLY__ +#include + #ifdef CONFIG_AMD_MEM_ENCRYPT extern unsigned long sme_me_mask; u8 sme_get_me_loss(void); +void __init sme_early_init(void); + +#define __sme_pa(x) (__pa((x)) | sme_me_mask) +#define __sme_pa_nodebug(x) (__pa_nodebug((x)) | sme_me_mask) + +#define __sme_va(x) (__va((x) & ~sme_me_mask)) + #else /* !CONFIG_AMD_MEM_ENCRYPT */ #define sme_me_mask 0UL @@ -30,6 +39,15 @@ static inline u8 sme_get_me_loss(void) return 0; } +static inline void __init sme_early_init(void) +{ +} + +#define __sme_pa __pa +#define __sme_pa_nodebug __pa_nodebug + +#define __sme_va __va + #endif /* CONFIG_AMD_MEM_ENCRYPT */ #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 437feb4..e3f2b3e 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -5,6 +5,7 @@ #include #include +#include /* * Macro to mark a page protection value as UC- @@ -155,17 +156,22 @@ static inline int pte_special(pte_t pte) static inline unsigned long pte_pfn(pte_t pte) { - return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT; + return (pte_val(pte) & ~sme_me_mask & PTE_PFN_MASK) >> PAGE_SHIFT; } static inline unsigned long pmd_pfn(pmd_t pmd) { - return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT; + return (pmd_val(pmd) & ~sme_me_mask & pmd_pfn_mask(pmd)) >> PAGE_SHIFT; } static inline unsigned long pud_pfn(pud_t pud) { - return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT; + return (pud_val(pud) & ~sme_me_mask & pud_pfn_mask(pud)) >> PAGE_SHIFT; +} + +static inline unsigned long pgd_pfn(pgd_t pgd) +{ + return (pgd_val(pgd) & ~sme_me_mask) >> PAGE_SHIFT; } #define pte_page(pte) pfn_to_page(pte_pfn(pte)) @@ -558,15 +564,14 @@ static inline int pmd_none(pmd_t pmd) static inline unsigned long pmd_page_vaddr(pmd_t pmd) { - return (unsigned long)__va(pmd_val(pmd) & pmd_pfn_mask(pmd)); + return (unsigned long)__sme_va(pmd_val(pmd) & pmd_pfn_mask(pmd)); } /* * Currently stuck as a macro due to indirect forward reference to * linux/mmzone.h's __section_mem_map_addr() definition: */ -#define pmd_page(pmd) \ - pfn_to_page((pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT) +#define pmd_page(pmd) pfn_to_page(pmd_pfn(pmd)) /* * the pmd page can be thought of an array like this: pmd_t[PTRS_PER_PMD] @@ -627,15 +632,14 @@ static inline int pud_present(pud_t pud) static inline unsigned long pud_page_vaddr(pud_t pud) { - return (unsigned long)__va(pud_val(pud) & pud_pfn_mask(pud)); + return (unsigned long)__sme_va(pud_val(pud) & pud_pfn_mask(pud)); } /* * Currently stuck as a macro due to indirect forward reference to * linux/mmzone.h's __section_mem_map_addr() definition: */ -#define pud_page(pud) \ - pfn_to_page((pud_val(pud) & pud_pfn_mask(pud)) >> 
PAGE_SHIFT) +#define pud_page(pud) pfn_to_page(pud_pfn(pud)) /* Find an entry in the second-level page table.. */ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address) @@ -668,14 +672,14 @@ static inline int pgd_present(pgd_t pgd) static inline unsigned long pgd_page_vaddr(pgd_t pgd) { - return (unsigned long)__va((unsigned long)pgd_val(pgd) & PTE_PFN_MASK); + return (unsigned long)__sme_va((unsigned long)pgd_val(pgd) & PTE_PFN_MASK); } /* * Currently stuck as a macro due to indirect forward reference to * linux/mmzone.h's __section_mem_map_addr() definition: */ -#define pgd_page(pgd) pfn_to_page(pgd_val(pgd) >> PAGE_SHIFT) +#define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd)) /* to find an entry in a page-table-directory. */ static inline unsigned long pud_index(unsigned long address) diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index f1218f5..a01f0e1 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -3,6 +3,7 @@ #include #include +#include #define FIRST_USER_ADDRESS 0UL @@ -121,9 +122,9 @@ #define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE) -#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \ +#define __PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \ _PAGE_ACCESSED | _PAGE_DIRTY) -#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \ +#define __KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \ _PAGE_DIRTY) /* @@ -191,18 +192,30 @@ enum page_cache_mode { #define __PAGE_KERNEL_IO (__PAGE_KERNEL) #define __PAGE_KERNEL_IO_NOCACHE (__PAGE_KERNEL_NOCACHE) -#define PAGE_KERNEL __pgprot(__PAGE_KERNEL) -#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO) -#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC) -#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX) -#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE) -#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE) -#define PAGE_KERNEL_LARGE_EXEC __pgprot(__PAGE_KERNEL_LARGE_EXEC) -#define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL) -#define PAGE_KERNEL_VVAR __pgprot(__PAGE_KERNEL_VVAR) - -#define PAGE_KERNEL_IO __pgprot(__PAGE_KERNEL_IO) -#define PAGE_KERNEL_IO_NOCACHE __pgprot(__PAGE_KERNEL_IO_NOCACHE) +#ifndef __ASSEMBLY__ + +#define _PAGE_ENC sme_me_mask + +/* Redefine macros to inclue the memory encryption mask */ +#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \ + _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_ENC) +#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \ + _PAGE_DIRTY | _PAGE_ENC) + +#define PAGE_KERNEL __pgprot(__PAGE_KERNEL | _PAGE_ENC) +#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO | _PAGE_ENC) +#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC) +#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX | _PAGE_ENC) +#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC) +#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC) +#define PAGE_KERNEL_LARGE_EXEC __pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC) +#define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC) +#define PAGE_KERNEL_VVAR __pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC) + +#define PAGE_KERNEL_IO __pgprot(__PAGE_KERNEL_IO) +#define PAGE_KERNEL_IO_NOCACHE __pgprot(__PAGE_KERNEL_IO_NOCACHE) + +#endif /* __ASSEMBLY__ */ /* xwr */ #define __P000 PAGE_NONE diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 63def95..024d58c 100644 --- a/arch/x86/include/asm/processor.h +++ 
b/arch/x86/include/asm/processor.h @@ -22,6 +22,7 @@ struct vm86; #include #include #include +#include #include #include @@ -207,7 +208,7 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx, static inline void load_cr3(pgd_t *pgdir) { - write_cr3(__pa(pgdir)); + write_cr3(__sme_pa(pgdir)); } #ifdef CONFIG_X86_32 diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c index 04f89ca..51566d7 100644 --- a/arch/x86/kernel/espfix_64.c +++ b/arch/x86/kernel/espfix_64.c @@ -193,7 +193,7 @@ void init_espfix_ap(int cpu) pte_p = pte_offset_kernel(&pmd, addr); stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0)); - pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask)); + pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask)); for (n = 0; n < ESPFIX_PTE_CLONES; n++) set_pte(&pte_p[n*PTE_STRIDE], pte); diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index 54a2372..88c7bae 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -28,6 +28,7 @@ #include #include #include +#include /* * Manage page tables very early on. @@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void) { memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1)); next_early_pgt = 0; - write_cr3(__pa_nodebug(early_level4_pgt)); + write_cr3(__sme_pa_nodebug(early_level4_pgt)); } /* Create a new PMD entry */ @@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address) pmdval_t pmd, *pmd_p; /* Invalid address or early pgt is done ? */ - if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt)) + if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt)) return -1; again: @@ -157,6 +158,11 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data) clear_page(init_level4_pgt); + /* Update the early_pmd_flags with the memory encryption mask */ + early_pmd_flags |= _PAGE_ENC; + + sme_early_init(); + kasan_early_init(); for (i = 0; i < NUM_EXCEPTION_VECTORS; i++) diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index c98a559..30f7715 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -95,6 +95,13 @@ startup_64: jnz bad_address /* + * Enable memory encryption (if available). Add the memory encryption + * mask to %rbp to include it in the the page table fixup. + */ + call sme_enable + addq sme_me_mask(%rip), %rbp + + /* * Fixup the physical addresses in the page table */ addq %rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip) @@ -116,7 +123,8 @@ startup_64: movq %rdi, %rax shrq $PGDIR_SHIFT, %rax - leaq (4096 + _KERNPG_TABLE)(%rbx), %rdx + leaq (4096 + __KERNPG_TABLE)(%rbx), %rdx + addq sme_me_mask(%rip), %rdx /* Apply mem encryption mask */ movq %rdx, 0(%rbx,%rax,8) movq %rdx, 8(%rbx,%rax,8) @@ -133,6 +141,7 @@ startup_64: movq %rdi, %rax shrq $PMD_SHIFT, %rdi addq $(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax + addq sme_me_mask(%rip), %rax /* Apply mem encryption mask */ leaq (_end - 1)(%rip), %rcx shrq $PMD_SHIFT, %rcx subq %rdi, %rcx @@ -163,9 +172,19 @@ startup_64: cmp %r8, %rdi jne 1b - /* Fixup phys_base */ + /* + * Fixup phys_base, remove the memory encryption mask from %rbp + * to obtain the true physical address. 
+ */ + subq sme_me_mask(%rip), %rbp addq %rbp, phys_base(%rip) + /* + * The page tables have been updated with the memory encryption mask, + * so encrypt the kernel if memory encryption is active + */ + call sme_encrypt_kernel + movq $(early_level4_pgt - __START_KERNEL_map), %rax jmp 1f ENTRY(secondary_startup_64) @@ -189,6 +208,9 @@ ENTRY(secondary_startup_64) movq $(init_level4_pgt - __START_KERNEL_map), %rax 1: + /* Add the memory encryption mask to RAX */ + addq sme_me_mask(%rip), %rax + /* Enable PAE mode and PGE */ movl $(X86_CR4_PAE | X86_CR4_PGE), %ecx movq %rcx, %cr4 @@ -415,7 +437,7 @@ GLOBAL(name) __INITDATA NEXT_PAGE(early_level4_pgt) .fill 511,8,0 - .quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE NEXT_PAGE(early_dynamic_pgts) .fill 512*EARLY_DYNAMIC_PAGE_TABLES,8,0 @@ -427,15 +449,15 @@ NEXT_PAGE(init_level4_pgt) .fill 512,8,0 #else NEXT_PAGE(init_level4_pgt) - .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE + .quad level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE .org init_level4_pgt + L4_PAGE_OFFSET*8, 0 - .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE + .quad level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE .org init_level4_pgt + L4_START_KERNEL*8, 0 /* (2^48-(2*1024*1024*1024))/(2^39) = 511 */ - .quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE NEXT_PAGE(level3_ident_pgt) - .quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE + .quad level2_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE .fill 511, 8, 0 NEXT_PAGE(level2_ident_pgt) /* Since I easily can, map the first 1G. @@ -447,8 +469,8 @@ NEXT_PAGE(level2_ident_pgt) NEXT_PAGE(level3_kernel_pgt) .fill L3_START_KERNEL,8,0 /* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */ - .quad level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE - .quad level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level2_kernel_pgt - __START_KERNEL_map + __KERNPG_TABLE + .quad level2_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE NEXT_PAGE(level2_kernel_pgt) /* @@ -466,7 +488,7 @@ NEXT_PAGE(level2_kernel_pgt) NEXT_PAGE(level2_fixmap_pgt) .fill 506,8,0 - .quad level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE + .quad level1_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE /* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */ .fill 5,8,0 diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 5a294e4..f976867 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -103,7 +103,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable) struct x86_mapping_info info = { .alloc_pgt_page = alloc_pgt_page, .context = image, - .pmd_flag = __PAGE_KERNEL_LARGE_EXEC, + .pmd_flag = __PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC, }; unsigned long mstart, mend; pgd_t *level4p; diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S index ef7f325..f2e0536 100644 --- a/arch/x86/kernel/mem_encrypt.S +++ b/arch/x86/kernel/mem_encrypt.S @@ -14,6 +14,14 @@ .text .code64 +ENTRY(sme_enable) + ret +ENDPROC(sme_enable) + +ENTRY(sme_encrypt_kernel) + ret +ENDPROC(sme_encrypt_kernel) + ENTRY(sme_get_me_loss) xor %rax, %rax mov sme_me_loss(%rip), %al diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 96d2b84..44d4d21 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -39,3 +39,4 @@ obj-$(CONFIG_X86_INTEL_MPX) += mpx.o 
obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o +obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt.o diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index dc80230..3ea88ca 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -23,6 +23,7 @@ #include /* emulate_vsyscall */ #include /* struct vm86 */ #include /* vma_pkey() */ +#include /* __sme_va() */ #define CREATE_TRACE_POINTS #include @@ -523,7 +524,7 @@ static int bad_address(void *p) static void dump_pagetable(unsigned long address) { - pgd_t *base = __va(read_cr3() & PHYSICAL_PAGE_MASK); + pgd_t *base = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK); pgd_t *pgd = base + pgd_index(address); pud_t *pud; pmd_t *pmd; @@ -659,7 +660,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, pgd_t *pgd; pte_t *pte; - pgd = __va(read_cr3() & PHYSICAL_PAGE_MASK); + pgd = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK); pgd += pgd_index(address); pte = lookup_address_in_pgd(pgd, address, &level); diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 7aaa263..5cf0489 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -20,6 +20,7 @@ #include #include #include +#include #include "physaddr.h" @@ -423,7 +424,7 @@ static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss; static inline pmd_t * __init early_ioremap_pmd(unsigned long addr) { /* Don't assume we're using swapper_pg_dir at this point */ - pgd_t *base = __va(read_cr3()); + pgd_t *base = __sme_va(read_cr3()); pgd_t *pgd = &base[pgd_index(addr)]; pud_t *pud = pud_offset(pgd, addr); pmd_t *pmd = pmd_offset(pud, addr); diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c index 0493c17..0608dc8 100644 --- a/arch/x86/mm/kasan_init_64.c +++ b/arch/x86/mm/kasan_init_64.c @@ -68,7 +68,7 @@ static struct notifier_block kasan_die_notifier = { void __init kasan_early_init(void) { int i; - pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL; + pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC; pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE; pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE; @@ -130,7 +130,7 @@ void __init kasan_init(void) */ memset(kasan_zero_page, 0, PAGE_SIZE); for (i = 0; i < PTRS_PER_PTE; i++) { - pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO); + pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC); set_pte(&kasan_zero_pte[i], pte); } /* Flush TLBs again to be sure that write protection applied. */ diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c new file mode 100644 index 0000000..00eb705 --- /dev/null +++ b/arch/x86/mm/mem_encrypt.c @@ -0,0 +1,30 @@ +/* + * AMD Memory Encryption Support + * + * Copyright (C) 2016 Advanced Micro Devices, Inc. + * + * Author: Tom Lendacky + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include +#include + +#include + +void __init sme_early_init(void) +{ + unsigned int i; + + if (!sme_me_mask) + return; + + __supported_pte_mask |= sme_me_mask; + + /* Update the protection map with memory encryption mask */ + for (i = 0; i < ARRAY_SIZE(protection_map); i++) + protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask); +} diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 849dc09..72c292d 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -1973,6 +1973,9 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address, if (!(page_flags & _PAGE_RW)) cpa.mask_clr = __pgprot(_PAGE_RW); + if (!(page_flags & _PAGE_ENC)) + cpa.mask_clr = __pgprot(pgprot_val(cpa.mask_clr) | _PAGE_ENC); + cpa.mask_set = __pgprot(_PAGE_PRESENT | page_flags); retval = __change_page_attr_set_clr(&cpa, 0); -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-sn1nam02on0064.outbound.protection.outlook.com ([104.47.36.64]:10336 "EHLO NAM02-SN1-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1754126AbcHVWx1 (ORCPT ); Mon, 22 Aug 2016 18:53:27 -0400 From: Tom Lendacky Subject: [RFC PATCH v2 07/20] x86: Provide general kernel support for memory encryption Date: Mon, 22 Aug 2016 17:36:46 -0500 Message-ID: <20160822223646.29880.28794.stgit@tlendack-t1.amdoffice.net> In-Reply-To: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net> References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Sender: linux-arch-owner@vger.kernel.org List-ID: To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Radim =?utf-8?b?S3LEjW3DocWZ?= , Arnd Bergmann , Jonathan Corbet , Matt Fleming , Joerg Roedel , Konrad Rzeszutek Wilk , Andrey Ryabinin , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. 
Peter Anvin" , Paolo Bonzini , Alexander Potapenko , Thomas Gleixner , Dmitry Vyukov Message-ID: <20160822223646.3ZTSp1h9JJ0uq9kTGtyJOr7nqLgzYJiWSgaUE2kLx8U@z> Adding general kernel support for memory encryption includes: - Modify and create some page table macros to include the Secure Memory Encryption (SME) memory encryption mask - Update kernel boot support to call an SME routine that checks for and sets the SME capability (the SME routine will grow later and for now is just a stub routine) - Update kernel boot support to call an SME routine that encrypts the kernel (the SME routine will grow later and for now is just a stub routine) - Provide an SME initialization routine to update the protection map with the memory encryption mask so that it is used by default Signed-off-by: Tom Lendacky --- arch/x86/boot/compressed/pagetable.c | 7 ++++++ arch/x86/include/asm/fixmap.h | 7 ++++++ arch/x86/include/asm/mem_encrypt.h | 18 +++++++++++++++ arch/x86/include/asm/pgtable.h | 26 ++++++++++++--------- arch/x86/include/asm/pgtable_types.h | 41 ++++++++++++++++++++++----------- arch/x86/include/asm/processor.h | 3 ++ arch/x86/kernel/espfix_64.c | 2 +- arch/x86/kernel/head64.c | 10 ++++++-- arch/x86/kernel/head_64.S | 42 ++++++++++++++++++++++++++-------- arch/x86/kernel/machine_kexec_64.c | 2 +- arch/x86/kernel/mem_encrypt.S | 8 ++++++ arch/x86/mm/Makefile | 1 + arch/x86/mm/fault.c | 5 ++-- arch/x86/mm/ioremap.c | 3 ++ arch/x86/mm/kasan_init_64.c | 4 ++- arch/x86/mm/mem_encrypt.c | 30 ++++++++++++++++++++++++ arch/x86/mm/pageattr.c | 3 ++ 17 files changed, 167 insertions(+), 45 deletions(-) create mode 100644 arch/x86/mm/mem_encrypt.c diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c index 56589d0..411c443 100644 --- a/arch/x86/boot/compressed/pagetable.c +++ b/arch/x86/boot/compressed/pagetable.c @@ -15,6 +15,13 @@ #define __pa(x) ((unsigned long)(x)) #define __va(x) ((void *)((unsigned long)(x))) +/* + * The pgtable.h and mm/ident_map.c includes make use of the SME related + * information which is not used in the compressed image support. Un-define + * the SME support to avoid any compile and link errors. + */ +#undef CONFIG_AMD_MEM_ENCRYPT + #include "misc.h" /* These actually do the work of building the kernel identity maps. 
 */
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 8554f96..83e91f0 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -153,6 +153,13 @@ static inline void __set_fixmap(enum fixed_addresses idx,
 }
 #endif
 
+/*
+ * Fixmap settings used with memory encryption
+ *   - FIXMAP_PAGE_NOCACHE is used for MMIO so make sure the memory
+ *     encryption mask is not part of the page attributes
+ */
+#define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_IO_NOCACHE
+
 #include
 
 #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 747fc52..9f3e762 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,12 +15,21 @@
 
 #ifndef __ASSEMBLY__
 
+#include
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
 
 u8 sme_get_me_loss(void);
 
+void __init sme_early_init(void);
+
+#define __sme_pa(x)		(__pa((x)) | sme_me_mask)
+#define __sme_pa_nodebug(x)	(__pa_nodebug((x)) | sme_me_mask)
+
+#define __sme_va(x)		(__va((x) & ~sme_me_mask))
+
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask	0UL
 
@@ -30,6 +39,15 @@ static inline u8 sme_get_me_loss(void)
 	return 0;
 }
 
+static inline void __init sme_early_init(void)
+{
+}
+
+#define __sme_pa		__pa
+#define __sme_pa_nodebug	__pa_nodebug
+
+#define __sme_va		__va
+
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
 #endif	/* __ASSEMBLY__ */

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 437feb4..e3f2b3e 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -5,6 +5,7 @@
 
 #include
 #include
+#include
 
 /*
  * Macro to mark a page protection value as UC-
@@ -155,17 +156,22 @@ static inline int pte_special(pte_t pte)
 
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
+	return (pte_val(pte) & ~sme_me_mask & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
-	return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
+	return (pmd_val(pmd) & ~sme_me_mask & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pud_pfn(pud_t pud)
 {
-	return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+	return (pud_val(pud) & ~sme_me_mask & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+}
+
+static inline unsigned long pgd_pfn(pgd_t pgd)
+{
+	return (pgd_val(pgd) & ~sme_me_mask) >> PAGE_SHIFT;
 }
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 
@@ -558,15 +564,14 @@ static inline int pmd_none(pmd_t pmd)
 
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
-	return (unsigned long)__va(pmd_val(pmd) & pmd_pfn_mask(pmd));
+	return (unsigned long)__sme_va(pmd_val(pmd) & pmd_pfn_mask(pmd));
 }
 
 /*
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pmd_page(pmd)	\
-	pfn_to_page((pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT)
+#define pmd_page(pmd)	pfn_to_page(pmd_pfn(pmd))
 
 /*
  * the pmd page can be thought of an array like this: pmd_t[PTRS_PER_PMD]
@@ -627,15 +632,14 @@ static inline int pud_present(pud_t pud)
 
 static inline unsigned long pud_page_vaddr(pud_t pud)
 {
-	return (unsigned long)__va(pud_val(pud) & pud_pfn_mask(pud));
+	return (unsigned long)__sme_va(pud_val(pud) & pud_pfn_mask(pud));
 }
 
 /*
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pud_page(pud)	\
-	pfn_to_page((pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT)
+#define pud_page(pud)	pfn_to_page(pud_pfn(pud))
 
 /* Find an entry in the second-level page table.. */
 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
@@ -668,14 +672,14 @@ static inline int pgd_present(pgd_t pgd)
 
 static inline unsigned long pgd_page_vaddr(pgd_t pgd)
 {
-	return (unsigned long)__va((unsigned long)pgd_val(pgd) & PTE_PFN_MASK);
+	return (unsigned long)__sme_va((unsigned long)pgd_val(pgd) & PTE_PFN_MASK);
 }
 
 /*
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pgd_page(pgd)	pfn_to_page(pgd_val(pgd) >> PAGE_SHIFT)
+#define pgd_page(pgd)	pfn_to_page(pgd_pfn(pgd))
 
 /* to find an entry in a page-table-directory. */
 static inline unsigned long pud_index(unsigned long address)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index f1218f5..a01f0e1 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -3,6 +3,7 @@
 
 #include
 #include
+#include
 
 #define FIRST_USER_ADDRESS	0UL
 
@@ -121,9 +122,9 @@
 #define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
 
-#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+#define __PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
 			 _PAGE_ACCESSED | _PAGE_DIRTY)
-#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+#define __KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
 			 _PAGE_DIRTY)
 
 /*
@@ -191,18 +192,30 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
 
-#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
-#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
-#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
-#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-
-#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
-#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+#ifndef __ASSEMBLY__
+
+#define _PAGE_ENC	sme_me_mask
+
+/* Redefine macros to include the memory encryption mask */
+#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_ENC)
+#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+			 _PAGE_DIRTY | _PAGE_ENC)
+
+#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
+#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC)
+#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+
+#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
+#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+
+#endif	/* __ASSEMBLY__ */
 
 /*         xwr */
 #define __P000	PAGE_NONE

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 63def95..024d58c 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -22,6 +22,7 @@ struct vm86;
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -207,7 +208,7 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
 
 static inline void load_cr3(pgd_t *pgdir)
 {
-	write_cr3(__pa(pgdir));
+	write_cr3(__sme_pa(pgdir));
 }
 
 #ifdef CONFIG_X86_32

diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 04f89ca..51566d7 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -193,7 +193,7 @@ void init_espfix_ap(int cpu)
 
 	pte_p = pte_offset_kernel(&pmd, addr);
 	stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0));
-	pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
+	pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask));
 	for (n = 0; n < ESPFIX_PTE_CLONES; n++)
 		set_pte(&pte_p[n*PTE_STRIDE], pte);
 
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 54a2372..88c7bae 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Manage page tables very early on.
@@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void)
 {
 	memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
 	next_early_pgt = 0;
-	write_cr3(__pa_nodebug(early_level4_pgt));
+	write_cr3(__sme_pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ? */
-	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -157,6 +158,11 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 
 	clear_page(init_level4_pgt);
 
+	/* Update the early_pmd_flags with the memory encryption mask */
+	early_pmd_flags |= _PAGE_ENC;
+
+	sme_early_init();
+
 	kasan_early_init();
 
 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index c98a559..30f7715 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -95,6 +95,13 @@ startup_64:
 	jnz	bad_address
 
 	/*
+	 * Enable memory encryption (if available). Add the memory encryption
+	 * mask to %rbp to include it in the page table fixup.
+	 */
+	call	sme_enable
+	addq	sme_me_mask(%rip), %rbp
+
+	/*
 	 * Fixup the physical addresses in the page table
 	 */
 	addq	%rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip)
 
@@ -116,7 +123,8 @@ startup_64:
 
 	movq	%rdi, %rax
 	shrq	$PGDIR_SHIFT, %rax
 
-	leaq	(4096 + _KERNPG_TABLE)(%rbx), %rdx
+	leaq	(4096 + __KERNPG_TABLE)(%rbx), %rdx
+	addq	sme_me_mask(%rip), %rdx		/* Apply mem encryption mask */
 	movq	%rdx, 0(%rbx,%rax,8)
 	movq	%rdx, 8(%rbx,%rax,8)
 
@@ -133,6 +141,7 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PMD_SHIFT, %rdi
 	addq	$(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax
+	addq	sme_me_mask(%rip), %rax		/* Apply mem encryption mask */
 	leaq	(_end - 1)(%rip), %rcx
 	shrq	$PMD_SHIFT, %rcx
 	subq	%rdi, %rcx
 
@@ -163,9 +172,19 @@ startup_64:
 	cmp	%r8, %rdi
 	jne	1b
 
-	/* Fixup phys_base */
+	/*
+	 * Fixup phys_base, remove the memory encryption mask from %rbp
+	 * to obtain the true physical address.
+	 */
+	subq	sme_me_mask(%rip), %rbp
 	addq	%rbp, phys_base(%rip)
 
+	/*
+	 * The page tables have been updated with the memory encryption mask,
+	 * so encrypt the kernel if memory encryption is active
+	 */
+	call	sme_encrypt_kernel
+
 	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
 	jmp 1f
 ENTRY(secondary_startup_64)
@@ -189,6 +208,9 @@ ENTRY(secondary_startup_64)
 	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
 1:
 
+	/* Add the memory encryption mask to RAX */
+	addq	sme_me_mask(%rip), %rax
+
 	/* Enable PAE mode and PGE */
 	movl	$(X86_CR4_PAE | X86_CR4_PGE), %ecx
 	movq	%rcx, %cr4
 
@@ -415,7 +437,7 @@ GLOBAL(name)
 	__INITDATA
 NEXT_PAGE(early_level4_pgt)
 	.fill	511,8,0
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE
 
 NEXT_PAGE(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
 
@@ -427,15 +449,15 @@ NEXT_PAGE(init_level4_pgt)
 	.fill	512,8,0
 #else
 NEXT_PAGE(init_level4_pgt)
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
 	.org	init_level4_pgt + L4_PAGE_OFFSET*8, 0
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
 	.org	init_level4_pgt + L4_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + __PAGE_TABLE
 
 NEXT_PAGE(level3_ident_pgt)
-	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level2_ident_pgt - __START_KERNEL_map + __KERNPG_TABLE
 	.fill	511, 8, 0
 NEXT_PAGE(level2_ident_pgt)
 	/* Since I easily can, map the first 1G.
@@ -447,8 +469,8 @@ NEXT_PAGE(level2_ident_pgt)
 
 NEXT_PAGE(level3_kernel_pgt)
 	.fill	L3_START_KERNEL,8,0
 	/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
-	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level2_kernel_pgt - __START_KERNEL_map + __KERNPG_TABLE
+	.quad	level2_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE
 
 NEXT_PAGE(level2_kernel_pgt)
 	/*
@@ -466,7 +488,7 @@ NEXT_PAGE(level2_kernel_pgt)
 
 NEXT_PAGE(level2_fixmap_pgt)
 	.fill	506,8,0
-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level1_fixmap_pgt - __START_KERNEL_map + __PAGE_TABLE
 	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
 	.fill	5,8,0
 
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 5a294e4..f976867 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -103,7 +103,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
 	struct x86_mapping_info info = {
 		.alloc_pgt_page	= alloc_pgt_page,
 		.context	= image,
-		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
+		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC,
 	};
 	unsigned long mstart, mend;
 	pgd_t *level4p;
 
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index ef7f325..f2e0536 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -14,6 +14,14 @@
 	.text
 	.code64
 
+ENTRY(sme_enable)
+	ret
+ENDPROC(sme_enable)
+
+ENTRY(sme_encrypt_kernel)
+	ret
+ENDPROC(sme_encrypt_kernel)
+
 ENTRY(sme_get_me_loss)
 	xor	%rax, %rax
 	mov	sme_me_loss(%rip), %al
 
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 96d2b84..44d4d21 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -39,3 +39,4 @@ obj-$(CONFIG_X86_INTEL_MPX) += mpx.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
+obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt.o

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index dc80230..3ea88ca 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -23,6 +23,7 @@
 #include		/* emulate_vsyscall */
 #include		/* struct vm86 */
 #include		/* vma_pkey() */
+#include		/* __sme_va() */
 
 #define CREATE_TRACE_POINTS
 #include
 
@@ -523,7 +524,7 @@ static int bad_address(void *p)
 
 static void dump_pagetable(unsigned long address)
 {
-	pgd_t *base = __va(read_cr3() & PHYSICAL_PAGE_MASK);
+	pgd_t *base = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK);
 	pgd_t *pgd = base + pgd_index(address);
 	pud_t *pud;
 	pmd_t *pmd;
 
@@ -659,7 +660,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code,
 	pgd_t *pgd;
 	pte_t *pte;
 
-	pgd = __va(read_cr3() & PHYSICAL_PAGE_MASK);
+	pgd = __sme_va(read_cr3() & PHYSICAL_PAGE_MASK);
 	pgd += pgd_index(address);
 
 	pte = lookup_address_in_pgd(pgd, address, &level);
 
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 7aaa263..5cf0489 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 
 #include "physaddr.h"
 
@@ -423,7 +424,7 @@ static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss;
 static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
 {
 	/* Don't assume we're using swapper_pg_dir at this point */
-	pgd_t *base = __va(read_cr3());
+	pgd_t *base = __sme_va(read_cr3());
 	pgd_t *pgd = &base[pgd_index(addr)];
 	pud_t *pud = pud_offset(pgd, addr);
 	pmd_t *pmd = pmd_offset(pud, addr);
 
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0493c17..0608dc8 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -68,7 +68,7 @@ static struct notifier_block kasan_die_notifier = {
 void __init kasan_early_init(void)
 {
 	int i;
-	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC;
 	pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
 	pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
 
@@ -130,7 +130,7 @@ void __init kasan_init(void)
 	 */
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
-		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
+		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC);
 		set_pte(&kasan_zero_pte[i], pte);
 	}
 	/* Flush TLBs again to be sure that write protection applied. */
 
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
new file mode 100644
index 0000000..00eb705
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt.c
@@ -0,0 +1,30 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include
+#include
+
+#include
+
+void __init sme_early_init(void)
+{
+	unsigned int i;
+
+	if (!sme_me_mask)
+		return;
+
+	__supported_pte_mask |= sme_me_mask;
+
+	/* Update the protection map with memory encryption mask */
+	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
+		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+}

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 849dc09..72c292d 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1973,6 +1973,9 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 	if (!(page_flags & _PAGE_RW))
 		cpa.mask_clr = __pgprot(_PAGE_RW);
 
+	if (!(page_flags & _PAGE_ENC))
+		cpa.mask_clr = __pgprot(pgprot_val(cpa.mask_clr) | _PAGE_ENC);
+
 	cpa.mask_set = __pgprot(_PAGE_PRESENT | page_flags);
 
 	retval = __change_page_attr_set_clr(&cpa, 0);
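
The following is an editorial illustration, not part of the patch: a minimal
user-space sketch of how the __sme_pa()/__sme_va()-style helpers and the
pte_pfn() change behave. The mask position (bit 47), the page-table-entry
value, and the simplified __sme_pa() (which takes a physical address directly
instead of wrapping __pa()) are invented stand-ins, not kernel values.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PTE_PFN_MASK	0x000ffffffffff000ULL	/* PFN bits of a 64-bit PTE */

static uint64_t sme_me_mask = 1ULL << 47;	/* made-up encryption-bit position */

/* OR the mask in on the way to a "physical" address, strip it when
 * recovering the page frame number, mirroring the patch's helpers. */
#define __sme_pa(pa)	((pa) | sme_me_mask)
#define pte_pfn(pte)	(((pte) & ~sme_me_mask & PTE_PFN_MASK) >> PAGE_SHIFT)

int main(void)
{
	uint64_t pa  = 0x12345000ULL;		/* hypothetical page address */
	uint64_t pte = __sme_pa(pa) | 0x63;	/* present/RW/accessed/dirty */

	printf("pte with encryption bit: 0x%llx\n", (unsigned long long)pte);
	printf("pfn with mask stripped:  0x%llx\n", (unsigned long long)pte_pfn(pte));
	return 0;
}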
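Along the same lines, a small stand-alone sketch of what sme_early_init() in
the new mem_encrypt.c does: OR the (possibly zero) encryption mask into every
entry of a protection table so that later mappings inherit it by default. The
table contents and the mask value below are invented stand-ins, not the
kernel's protection_map.

#include <stdio.h>
#include <stdint.h>

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

static uint64_t sme_me_mask;			/* 0 when SME is not active */

static uint64_t protection_map[4] = {		/* pretend __P000..__P011 */
	0x000, 0x025, 0x027, 0x067,
};

static void sme_early_init(void)
{
	unsigned int i;

	if (!sme_me_mask)
		return;				/* nothing to do without SME */

	/* Fold the encryption mask into every default protection value */
	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
		protection_map[i] |= sme_me_mask;
}

int main(void)
{
	unsigned int i;

	sme_me_mask = 1ULL << 47;		/* pretend SME is active */
	sme_early_init();

	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
		printf("prot[%u] = 0x%llx\n", i,
		       (unsigned long long)protection_map[i]);
	return 0;
}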
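Finally, a compact sketch of the set/clear bookkeeping behind the pageattr.c
hunk: any flag the caller's page_flags does not request (here the encryption
bit) is added to the clear mask so the resulting mapping drops it. The flag
values and the cpa structure below are simplified stand-ins for the kernel's
cpa_data, not the real definitions.

#include <stdio.h>
#include <stdint.h>

#define _PAGE_PRESENT	0x001ULL
#define _PAGE_RW	0x002ULL
#define _PAGE_ENC	(1ULL << 47)	/* stand-in for sme_me_mask */

struct cpa_data {
	uint64_t mask_set;
	uint64_t mask_clr;
};

/* Flags the caller did not ask for end up in mask_clr so they are
 * removed from the PTE; requested flags go into mask_set. */
static void setup_masks(struct cpa_data *cpa, uint64_t page_flags)
{
	cpa->mask_set = 0;
	cpa->mask_clr = 0;

	if (!(page_flags & _PAGE_RW))
		cpa->mask_clr |= _PAGE_RW;
	if (!(page_flags & _PAGE_ENC))
		cpa->mask_clr |= _PAGE_ENC;

	cpa->mask_set = _PAGE_PRESENT | page_flags;
}

int main(void)
{
	struct cpa_data cpa;

	setup_masks(&cpa, _PAGE_RW);	/* caller wants RW, not encrypted */
	printf("set=0x%llx clr=0x%llx\n",
	       (unsigned long long)cpa.mask_set,
	       (unsigned long long)cpa.mask_clr);
	return 0;
}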