From mboxrd@z Thu Jan  1 00:00:00 1970
From: Tom Lendacky <Thomas.Lendacky@amd.com>
Subject: [RFC PATCH v4 07/28] x86: Provide general kernel support for memory encryption
To: , , , , , , , ,
CC: Rik van Riel, Radim Krčmář, Toshimitsu Kani, Arnd Bergmann,
 Jonathan Corbet, Matt Fleming, "Michael S. Tsirkin", Joerg Roedel,
 Konrad Rzeszutek Wilk, Paolo Bonzini, Brijesh Singh, Ingo Molnar,
 Alexander Potapenko, Andy Lutomirski, "H. Peter Anvin", Borislav Petkov,
 Andrey Ryabinin, Thomas Gleixner, Larry Woodman, Dmitry Vyukov
Date: Thu, 16 Feb 2017 09:43:32 -0600
Message-ID: <20170216154332.19244.55451.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170216154158.19244.66630.stgit@tlendack-t1.amdoffice.net>
References: <20170216154158.19244.66630.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Adding general kernel support for memory encryption includes:

- Modify and create some page table macros to include the Secure Memory
  Encryption (SME) memory encryption mask

- Modify and create some macros for calculating physical and virtual
  memory addresses

- Provide an SME initialization routine to update the protection map with
  the memory encryption mask so that it is used by default

- #undef CONFIG_AMD_MEM_ENCRYPT in the compressed boot path

Signed-off-by: Tom Lendacky
---
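Note (illustration, not part of the patch): the address macros below all
reduce to the same pair of operations — OR the SME mask into a physical
address or protection value on the way into the page tables (__sme_pa(),
_PAGE_ENC), and mask it back out when recovering a PFN or virtual address
(__va(), pte_pfn() and friends). A minimal stand-alone sketch of that
round trip; the bit position here is a stand-in, the real mask is derived
from CPUID 0x8000001F earlier in this series:

  /* cc -o sme-demo sme-demo.c */
  #include <stdio.h>

  #define PAGE_OFFSET	0xffff880000000000UL	/* x86-64 direct map base */

  static unsigned long sme_me_mask = 1UL << 47;	/* hypothetical C-bit */

  /* like __sme_pa(): physical address with the encryption bit set */
  static unsigned long demo_sme_pa(unsigned long va)
  {
  	return (va - PAGE_OFFSET) | sme_me_mask;
  }

  /* like __va(): strip the encryption bit before rebuilding the address */
  static unsigned long demo_va(unsigned long pa)
  {
  	return (pa & ~sme_me_mask) + PAGE_OFFSET;
  }

  int main(void)
  {
  	unsigned long va = PAGE_OFFSET + 0x1234000UL;
  	unsigned long pa = demo_sme_pa(va);

  	printf("pa with C-bit: %#lx\n", pa);
  	printf("va round trip: %#lx (matches: %d)\n",
  	       demo_va(pa), demo_va(pa) == va);
  	return 0;
  }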
 arch/x86/boot/compressed/pagetable.c |    7 +++++
 arch/x86/include/asm/fixmap.h        |    7 +++++
 arch/x86/include/asm/mem_encrypt.h   |   14 +++++++++++
 arch/x86/include/asm/page.h          |    4 ++-
 arch/x86/include/asm/pgtable.h       |   26 ++++++++++++++------
 arch/x86/include/asm/pgtable_types.h |   45 ++++++++++++++++++++++------------
 arch/x86/include/asm/processor.h     |    3 ++
 arch/x86/kernel/espfix_64.c          |    2 +-
 arch/x86/kernel/head64.c             |   12 ++++++++-
 arch/x86/kernel/head_64.S            |   18 +++++++-------
 arch/x86/mm/kasan_init_64.c          |    4 ++-
 arch/x86/mm/mem_encrypt.c            |   20 ++++++++++++++++
 arch/x86/mm/pageattr.c               |    3 ++
 include/asm-generic/pgtable.h        |    8 ++++++
 14 files changed, 133 insertions(+), 40 deletions(-)

diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c
index 56589d0..411c443 100644
--- a/arch/x86/boot/compressed/pagetable.c
+++ b/arch/x86/boot/compressed/pagetable.c
@@ -15,6 +15,13 @@
 #define __pa(x) ((unsigned long)(x))
 #define __va(x) ((void *)((unsigned long)(x)))
 
+/*
+ * The pgtable.h and mm/ident_map.c includes make use of the SME related
+ * information which is not used in the compressed image support. Un-define
+ * the SME support to avoid any compile and link errors.
+ */
+#undef CONFIG_AMD_MEM_ENCRYPT
+
 #include "misc.h"
 
 /* These actually do the work of building the kernel identity maps. */
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 8554f96..83e91f0 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -153,6 +153,13 @@ static inline void __set_fixmap(enum fixed_addresses idx,
 }
 #endif
 
+/*
+ * Fixmap settings used with memory encryption
+ *   - FIXMAP_PAGE_NOCACHE is used for MMIO so make sure the memory
+ *     encryption mask is not part of the page attributes
+ */
+#define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_IO_NOCACHE
+
 #include <asm-generic/fixmap.h>
 
 #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index ccc53b0..547989d 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,6 +15,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/init.h>
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
@@ -24,6 +26,11 @@ static inline bool sme_active(void)
 	return (sme_me_mask) ? true : false;
 }
 
+void __init sme_early_init(void);
+
+#define __sme_pa(x)		(__pa((x)) | sme_me_mask)
+#define __sme_pa_nodebug(x)	(__pa_nodebug((x)) | sme_me_mask)
+
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
 #ifndef sme_me_mask
@@ -35,6 +42,13 @@ static inline bool sme_active(void)
 }
 #endif
 
+static inline void __init sme_early_init(void)
+{
+}
+
+#define __sme_pa		__pa
+#define __sme_pa_nodebug	__pa_nodebug
+
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
 #endif	/* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index cf8f619..b1f7bf6 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -15,6 +15,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/mem_encrypt.h>
+
 struct page;
 
 #include <linux/range.h>
@@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
 
 #ifndef __va
-#define __va(x)			((void *)((unsigned long)(x)+PAGE_OFFSET))
+#define __va(x)			((void *)(((unsigned long)(x) & ~sme_me_mask) + PAGE_OFFSET))
 #endif
 
 #define __boot_va(x)		__va(x)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 2d81161..b41caab 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -3,6 +3,7 @@
 
 #include <asm/page.h>
 #include <asm/pgtable_types.h>
+#include <asm/mem_encrypt.h>
 
 /*
  * Macro to mark a page protection value as UC-
@@ -13,6 +14,12 @@
 	 cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS)))	\
 	 : (prot))
 
+/*
+ * Macros to add or remove encryption attribute
+ */
+#define pgprot_encrypted(prot)	__pgprot(pgprot_val(prot) | sme_me_mask)
+#define pgprot_decrypted(prot)	__pgprot(pgprot_val(prot) & ~sme_me_mask)
+
 #ifndef __ASSEMBLY__
 
 #include <asm/x86_init.h>
@@ -153,17 +160,22 @@ static inline int pte_special(pte_t pte)
 
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
+	return (pte_val(pte) & ~sme_me_mask & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
-	return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
+	return (pmd_val(pmd) & ~sme_me_mask & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pud_pfn(pud_t pud)
 {
-	return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+	return (pud_val(pud) & ~sme_me_mask & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+}
+
+static inline unsigned long pgd_pfn(pgd_t pgd)
+{
+	return (pgd_val(pgd) & ~sme_me_mask) >> PAGE_SHIFT;
 }
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
@@ -563,8 +575,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pmd_page(pmd)		\
-	pfn_to_page((pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT)
+#define pmd_page(pmd)	pfn_to_page(pmd_pfn(pmd))
 
 /*
  * the pmd page can be thought of an array like this: pmd_t[PTRS_PER_PMD]
@@ -632,8 +643,7 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pud_page(pud)		\
-	pfn_to_page((pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT)
+#define pud_page(pud)	pfn_to_page(pud_pfn(pud))
 
 /* Find an entry in the second-level page table.. */
 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
@@ -673,7 +683,7 @@ static inline unsigned long pgd_page_vaddr(pgd_t pgd)
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pgd_page(pgd)		pfn_to_page(pgd_val(pgd) >> PAGE_SHIFT)
+#define pgd_page(pgd)	pfn_to_page(pgd_pfn(pgd))
 
 /* to find an entry in a page-table-directory. */
 static inline unsigned long pud_index(unsigned long address)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 8b4de22..500fc60 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -2,7 +2,9 @@
 #define _ASM_X86_PGTABLE_DEFS_H
 
 #include <linux/const.h>
+
 #include <asm/page_types.h>
+#include <asm/mem_encrypt.h>
 
 #define FIRST_USER_ADDRESS	0UL
@@ -121,10 +123,10 @@
 
 #define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
 
-#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
-			 _PAGE_ACCESSED | _PAGE_DIRTY)
-#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
-			 _PAGE_DIRTY)
+#define _PAGE_TABLE_NOENC	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |\
+				 _PAGE_ACCESSED | _PAGE_DIRTY)
+#define _KERNPG_TABLE_NOENC	(_PAGE_PRESENT | _PAGE_RW |		\
+				 _PAGE_ACCESSED | _PAGE_DIRTY)
 
 /*
  * Set of bits not changed in pte_modify.  The pte's
@@ -191,18 +193,29 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
 
-#define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
-#define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
-#define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
-#define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-
-#define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
-#define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+#ifndef __ASSEMBLY__
+
+#define _PAGE_ENC	(_AT(pteval_t, sme_me_mask))
+
+#define _PAGE_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
+			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_ENC)
+#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
+			 _PAGE_DIRTY | _PAGE_ENC)
+
+#define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
+#define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_RX		__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
+#define PAGE_KERNEL_NOCACHE	__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE	__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
+#define PAGE_KERNEL_LARGE_EXEC	__pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_VSYSCALL	__pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC)
+#define PAGE_KERNEL_VVAR	__pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC)
+
+#define PAGE_KERNEL_IO		__pgprot(__PAGE_KERNEL_IO)
+#define PAGE_KERNEL_IO_NOCACHE	__pgprot(__PAGE_KERNEL_IO_NOCACHE)
+
+#endif	/* __ASSEMBLY__ */
 
 /*         xwr */
 #define __P000	PAGE_NONE
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index e6cfe7b..86da9a4 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -22,6 +22,7 @@
 #include <asm/nops.h>
 #include <asm/special_insns.h>
 #include <asm/fpu/types.h>
+#include <asm/mem_encrypt.h>
 
 #include <linux/personality.h>
 #include <linux/cache.h>
@@ -240,7 +241,7 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
 
 static inline void load_cr3(pgd_t *pgdir)
 {
-	write_cr3(__pa(pgdir));
+	write_cr3(__sme_pa(pgdir));
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 04f89ca..51566d7 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -193,7 +193,7 @@ void init_espfix_ap(int cpu)
 
 	pte_p = pte_offset_kernel(&pmd, addr);
 	stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0));
-	pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
+	pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask));
 	for (n = 0; n < ESPFIX_PTE_CLONES; n++)
 		set_pte(&pte_p[n*PTE_STRIDE], pte);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index baa0e7b..182a4c7 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -28,6 +28,7 @@
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
 #include <asm/kasan.h>
+#include <asm/mem_encrypt.h>
 
 /*
  * Manage page tables very early on.
@@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void)
 {
 	memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
 	next_early_pgt = 0;
-	write_cr3(__pa_nodebug(early_level4_pgt));
+	write_cr3(__sme_pa_nodebug(early_level4_pgt));
 }
 
 /* Create a new PMD entry */
@@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt))
 		return -1;
 
 again:
@@ -157,6 +158,13 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 
 	clear_page(init_level4_pgt);
 
+	/*
+	 * SME support may update early_pmd_flags to include the memory
+	 * encryption mask, so it needs to be called before anything
+	 * that may generate a page fault.
+	 */
+	sme_early_init();
+
 	kasan_early_init();
 
 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 4f8201b..edd2f14 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -129,7 +129,7 @@ startup_64:
 	movq	%rdi, %rax
 	shrq	$PGDIR_SHIFT, %rax
 
-	leaq	(PAGE_SIZE + _KERNPG_TABLE)(%rbx), %rdx
+	leaq	(PAGE_SIZE + _KERNPG_TABLE_NOENC)(%rbx), %rdx
 	addq	%r12, %rdx
 	movq	%rdx, 0(%rbx,%rax,8)
 	movq	%rdx, 8(%rbx,%rax,8)
@@ -463,7 +463,7 @@ GLOBAL(name)
 
 	__INITDATA
 NEXT_PAGE(early_level4_pgt)
 	.fill	511,8,0
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
 
 NEXT_PAGE(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -475,15 +475,15 @@ NEXT_PAGE(init_level4_pgt)
 	.fill	512,8,0
 #else
 NEXT_PAGE(init_level4_pgt)
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.org	init_level4_pgt + L4_PAGE_OFFSET*8, 0
-	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.org	init_level4_pgt + L4_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
-	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
 
 NEXT_PAGE(level3_ident_pgt)
-	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
+	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.fill	511, 8, 0
 NEXT_PAGE(level2_ident_pgt)
 	/* Since I easily can, map the first 1G.
@@ -495,8 +495,8 @@ NEXT_PAGE(level2_ident_pgt)
 NEXT_PAGE(level3_kernel_pgt)
 	.fill	L3_START_KERNEL,8,0
 	/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
-	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
+	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
 
 NEXT_PAGE(level2_kernel_pgt)
 	/*
@@ -514,7 +514,7 @@ NEXT_PAGE(level2_kernel_pgt)
 
 NEXT_PAGE(level2_fixmap_pgt)
 	.fill	506,8,0
-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE
+	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
 	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
 	.fill	5,8,0
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 66d2017..072a70a 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -70,7 +70,7 @@ static int kasan_die_handler(struct notifier_block *self,
 void __init kasan_early_init(void)
 {
 	int i;
-	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC;
 	pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
 	pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
 
@@ -132,7 +132,7 @@ void __init kasan_init(void)
 	 */
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
-		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
+		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC);
 		set_pte(&kasan_zero_pte[i], pte);
 	}
 	/* Flush TLBs again to be sure that write protection applied. */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index b99d469..d71df97 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -11,6 +11,10 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+
+extern pmdval_t early_pmd_flags;
 
 /*
  * Since SME related variables are set early in the boot process they must
@@ -19,3 +23,19 @@
  */
 unsigned long sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
+
+void __init sme_early_init(void)
+{
+	unsigned int i;
+
+	if (!sme_me_mask)
+		return;
+
+	early_pmd_flags |= sme_me_mask;
+
+	__supported_pte_mask |= sme_me_mask;
+
+	/* Update the protection map with memory encryption mask */
+	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
+		protection_map[i] = pgprot_encrypted(protection_map[i]);
+}
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index a57e8e0..91c5c63 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1987,6 +1987,9 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 	if (!(page_flags & _PAGE_RW))
 		cpa.mask_clr = __pgprot(_PAGE_RW);
 
+	if (!(page_flags & _PAGE_ENC))
+		cpa.mask_clr = __pgprot(pgprot_val(cpa.mask_clr) | _PAGE_ENC);
+
 	cpa.mask_set = __pgprot(_PAGE_PRESENT | page_flags);
 
 	retval = __change_page_attr_set_clr(&cpa, 0);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 18af2bc..4a24451 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -314,6 +314,14 @@ static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
 #define pgprot_device pgprot_noncached
 #endif
 
+#ifndef pgprot_encrypted
+#define pgprot_encrypted(prot)	(prot)
+#endif
+
+#ifndef pgprot_decrypted
+#define pgprot_decrypted(prot)	(prot)
+#endif
+
 #ifndef pgprot_modify
 #define pgprot_modify pgprot_modify
 static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
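
Note (illustration, not part of the patch): the sme_early_init() loop in
arch/x86/mm/mem_encrypt.c above makes encryption the default because every
vma's page protection is derived by indexing protection_map, so OR-ing the
mask into each entry once is equivalent to, but much cheaper than, adding
it at every call site. The asm-generic hunk supplies the usual
override-or-identity fallback so non-SME architectures need no changes. A
stand-alone sketch of both ideas together; all names here are demo
stand-ins, not kernel API:

  /* cc -o protmap-demo protmap-demo.c */
  #include <stdio.h>

  #define DEMO_ME_MASK	(1UL << 47)	/* stand-in for sme_me_mask */

  /* arch override, as asm/pgtable.h would provide */
  #define pgprot_encrypted(prot)	((prot) | DEMO_ME_MASK)

  /* generic fallback pattern from asm-generic/pgtable.h: identity
   * macro used only when no arch definition exists */
  #ifndef pgprot_encrypted
  #define pgprot_encrypted(prot)	(prot)
  #endif

  /* 8-entry stand-in for the 16-entry protection_map */
  static unsigned long protection_map[8] = {
  	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
  };

  int main(void)
  {
  	unsigned int i;

  	/* the sme_early_init() loop: encrypt-by-default from here on */
  	for (i = 0; i < sizeof(protection_map)/sizeof(protection_map[0]); i++)
  		protection_map[i] = pgprot_encrypted(protection_map[i]);

  	for (i = 0; i < 8; i++)
  		printf("protection_map[%u] = %#lx\n", i, protection_map[i]);
  	return 0;
  }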