From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [PATCH v8 22/38] x86/mm: Add support for changing the memory encryption attribute
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
    linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    xen-devel@lists.xen.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Brijesh Singh, Toshimitsu Kani, Radim Krčmář, Matt Fleming,
    Alexander Potapenko, "H. Peter Anvin", Larry Woodman, Jonathan Corbet,
    Joerg Roedel, "Michael S. Tsirkin", Ingo Molnar, Andrey Ryabinin,
    Dave Young, Rik van Riel, Arnd Bergmann, Konrad Rzeszutek Wilk,
    Borislav Petkov, Andy Lutomirski, Boris Ostrovsky, Dmitry Vyukov,
    Juergen Gross, Thomas Gleixner, Paolo Bonzini
Date: Tue, 27 Jun 2017 10:00:05 -0500
Message-ID: <20170627150005.15908.84160.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170627145607.15908.26571.stgit@tlendack-t1.amdoffice.net>
References: <20170627145607.15908.26571.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Add support for changing the memory encryption attribute for one or more
memory pages. This will be useful when we have to change the AP trampoline
area to not be encrypted, or when we need to change the SWIOTLB area to not
be encrypted in support of devices that can't support the encryption mask
range.

Reviewed-by: Borislav Petkov
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/set_memory.h |    3 ++
 arch/x86/mm/pageattr.c            |   62 +++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index eaec6c3..cd71273 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -11,6 +11,7 @@
  * Executability : eXeutable, NoteXecutable
  * Read/Write    : ReadOnly, ReadWrite
  * Presence      : NotPresent
+ * Encryption    : Encrypted, Decrypted
  *
  * Within a category, the attributes are mutually exclusive.
  *
@@ -42,6 +43,8 @@ int set_memory_wb(unsigned long addr, int numpages);
 int set_memory_np(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray);
 int set_memory_array_wc(unsigned long *addr, int addrinarray);
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index e7d3866..d9e09fb 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1769,6 +1769,68 @@ int set_memory_4k(unsigned long addr, int numpages)
 					__pgprot(0), 1, 0, NULL);
 }
 
+static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+{
+	struct cpa_data cpa;
+	unsigned long start;
+	int ret;
+
+	/* Nothing to do if the SME is not active */
+	if (!sme_active())
+		return 0;
+
+	/* Should not be working on unaligned addresses */
+	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
+		addr &= PAGE_MASK;
+
+	start = addr;
+
+	memset(&cpa, 0, sizeof(cpa));
+	cpa.vaddr = &addr;
+	cpa.numpages = numpages;
+	cpa.mask_set = enc ? __pgprot(_PAGE_ENC) : __pgprot(0);
+	cpa.mask_clr = enc ? __pgprot(0) : __pgprot(_PAGE_ENC);
+	cpa.pgd = init_mm.pgd;
+
+	/* Must avoid aliasing mappings in the highmem code */
+	kmap_flush_unused();
+	vm_unmap_aliases();
+
+	/*
+	 * Before changing the encryption attribute, we need to flush caches.
+	 */
+	if (static_cpu_has(X86_FEATURE_CLFLUSH))
+		cpa_flush_range(start, numpages, 1);
+	else
+		cpa_flush_all(1);
+
+	ret = __change_page_attr_set_clr(&cpa, 1);
+
+	/*
+	 * After changing the encryption attribute, we need to flush TLBs
+	 * again in case any speculative TLB caching occurred (but no need
+	 * to flush caches again). We could just use cpa_flush_all(), but
+	 * in case TLB flushing gets optimized in the cpa_flush_range()
+	 * path use the same logic as above.
+	 */
+	if (static_cpu_has(X86_FEATURE_CLFLUSH))
+		cpa_flush_range(start, numpages, 0);
+	else
+		cpa_flush_all(0);
+
+	return ret;
+}
+
+int set_memory_encrypted(unsigned long addr, int numpages)
+{
+	return __set_memory_enc_dec(addr, numpages, true);
+}
+
+int set_memory_decrypted(unsigned long addr, int numpages)
+{
+	return __set_memory_enc_dec(addr, numpages, false);
+}
+
 int set_pages_uc(struct page *page, int numpages)
 {
 	unsigned long addr = (unsigned long)page_address(page);
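
As a usage illustration only (not part of the patch): a driver that has to
share a buffer with a device that cannot handle the encryption mask could
clear the encryption attribute on the buffer and restore it before freeing,
roughly as sketched below. Everything except set_memory_decrypted() and
set_memory_encrypted() is a made-up name for the example.

#include <linux/gfp.h>
#include <linux/errno.h>
#include <asm/set_memory.h>

static unsigned long example_buf;
static unsigned int example_order;

static int example_alloc_shared(unsigned int order)
{
	unsigned long addr;
	int ret;

	addr = __get_free_pages(GFP_KERNEL, order);
	if (!addr)
		return -ENOMEM;

	/* Clear the encryption attribute so the device sees plaintext */
	ret = set_memory_decrypted(addr, 1 << order);
	if (ret) {
		free_pages(addr, order);
		return ret;
	}

	example_buf = addr;
	example_order = order;
	return 0;
}

static void example_free_shared(void)
{
	/* Restore the encryption attribute before returning the pages */
	set_memory_encrypted(example_buf, 1 << example_order);
	free_pages(example_buf, example_order);
}

The sketch re-applies the encryption attribute before free_pages() so the
kernel mappings for those pages do not remain marked decrypted after the
pages go back to the allocator.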