From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky <Thomas.Lendacky@amd.com>
Subject: [RFC PATCH v3 11/20] x86: Add support for changing memory encryption attribute
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Rik van Riel, Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming,
 Joerg Roedel, Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Ingo Molnar,
 Borislav Petkov, Andy Lutomirski, "H. Peter Anvin", Andrey Ryabinin,
 Alexander Potapenko, Thomas Gleixner, Dmitry Vyukov
Date: Wed, 9 Nov 2016 18:36:55 -0600
Message-ID: <20161110003655.3280.57333.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
References: <20161110003426.3280.2999.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds support to change the memory encryption attribute for
one or more memory pages.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/cacheflush.h  |    3 +
 arch/x86/include/asm/mem_encrypt.h |   13 ++++++
 arch/x86/mm/mem_encrypt.c          |   43 +++++++++++++++++++++
 arch/x86/mm/pageattr.c             |   73 ++++++++++++++++++++++++++++++++++++
 4 files changed, 132 insertions(+)

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 61518cf..bfb08e5 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -13,6 +13,7 @@
  * Executability : eXeutable, NoteXecutable
  * Read/Write    : ReadOnly, ReadWrite
  * Presence      : NotPresent
+ * Encryption    : ENCrypted, DECrypted
  *
  * Within a category, the attributes are mutually exclusive.
 *
@@ -48,6 +49,8 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_np(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
+int set_memory_enc(unsigned long addr, int numpages);
+int set_memory_dec(unsigned long addr, int numpages);
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray);
 int set_memory_array_wc(unsigned long *addr, int addrinarray);
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 0b40f79..d544481 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,6 +21,9 @@
 
 extern unsigned long sme_me_mask;
 
+int sme_set_mem_enc(void *vaddr, unsigned long size);
+int sme_set_mem_unenc(void *vaddr, unsigned long size);
+
 void __init sme_early_mem_enc(resource_size_t paddr,
 			      unsigned long size);
 void __init sme_early_mem_dec(resource_size_t paddr,
@@ -39,6 +42,16 @@ void __init sme_early_init(void);
 
 #define sme_me_mask	0UL
 
+static inline int sme_set_mem_enc(void *vaddr, unsigned long size)
+{
+	return 0;
+}
+
+static inline int sme_set_mem_unenc(void *vaddr, unsigned long size)
+{
+	return 0;
+}
+
 static inline void __init sme_early_mem_enc(resource_size_t paddr,
 					    unsigned long size)
 {
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 411210d..41cfdf9 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 
 extern pmdval_t early_pmd_flags;
 int __init __early_make_pgtable(unsigned long, pmdval_t);
@@ -33,6 +34,48 @@ EXPORT_SYMBOL_GPL(sme_me_mask);
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __aligned(PAGE_SIZE);
 
+int sme_set_mem_enc(void *vaddr, unsigned long size)
+{
+	unsigned long addr, numpages;
+
+	if (!sme_me_mask)
+		return 0;
+
+	addr = (unsigned long)vaddr & PAGE_MASK;
+	numpages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	/*
+	 * The set_memory_xxx functions take an integer for numpages, make
+	 * sure it doesn't exceed that.
+	 */
+	if (numpages > INT_MAX)
+		return -EINVAL;
+
+	return set_memory_enc(addr, numpages);
+}
+EXPORT_SYMBOL_GPL(sme_set_mem_enc);
+
+int sme_set_mem_unenc(void *vaddr, unsigned long size)
+{
+	unsigned long addr, numpages;
+
+	if (!sme_me_mask)
+		return 0;
+
+	addr = (unsigned long)vaddr & PAGE_MASK;
+	numpages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	/*
+	 * The set_memory_xxx functions take an integer for numpages, make
+	 * sure it doesn't exceed that.
+	 */
+	if (numpages > INT_MAX)
+		return -EINVAL;
+
+	return set_memory_dec(addr, numpages);
+}
+EXPORT_SYMBOL_GPL(sme_set_mem_unenc);
+
 /*
  * This routine does not change the underlying encryption setting of the
  * page(s) that map this memory. It assumes that eventually the memory is
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index b8e6bb5..babf3a6 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1729,6 +1729,79 @@ int set_memory_4k(unsigned long addr, int numpages)
 					__pgprot(0), 1, 0, NULL);
 }
 
+static int __set_memory_enc_dec(struct cpa_data *cpa)
+{
+	unsigned long addr;
+	int numpages;
+	int ret;
+
+	/* People should not be passing in unaligned addresses */
+	if (WARN_ONCE(*cpa->vaddr & ~PAGE_MASK,
+		      "misaligned address: %#lx\n", *cpa->vaddr))
+		*cpa->vaddr &= PAGE_MASK;
+
+	addr = *cpa->vaddr;
+	numpages = cpa->numpages;
+
+	/* Must avoid aliasing mappings in the highmem code */
+	kmap_flush_unused();
+	vm_unmap_aliases();
+
+	ret = __change_page_attr_set_clr(cpa, 1);
+
+	/* Check whether we really changed something */
+	if (!(cpa->flags & CPA_FLUSHTLB))
+		goto out;
+
+	/*
+	 * On success we use CLFLUSH, when the CPU supports it to
+	 * avoid the WBINVD.
+	 */
+	if (!ret && static_cpu_has(X86_FEATURE_CLFLUSH))
+		cpa_flush_range(addr, numpages, 1);
+	else
+		cpa_flush_all(1);
+
+out:
+	return ret;
+}
+
+int set_memory_enc(unsigned long addr, int numpages)
+{
+	struct cpa_data cpa;
+
+	if (!sme_me_mask)
+		return 0;
+
+	memset(&cpa, 0, sizeof(cpa));
+	cpa.vaddr = &addr;
+	cpa.numpages = numpages;
+	cpa.mask_set = __pgprot(_PAGE_ENC);
+	cpa.mask_clr = __pgprot(0);
+	cpa.pgd = init_mm.pgd;
+
+	return __set_memory_enc_dec(&cpa);
+}
+EXPORT_SYMBOL(set_memory_enc);
+
+int set_memory_dec(unsigned long addr, int numpages)
+{
+	struct cpa_data cpa;
+
+	if (!sme_me_mask)
+		return 0;
+
+	memset(&cpa, 0, sizeof(cpa));
+	cpa.vaddr = &addr;
+	cpa.numpages = numpages;
+	cpa.mask_set = __pgprot(0);
+	cpa.mask_clr = __pgprot(_PAGE_ENC);
+	cpa.pgd = init_mm.pgd;
+
+	return __set_memory_enc_dec(&cpa);
+}
+EXPORT_SYMBOL(set_memory_dec);
+
 int set_pages_uc(struct page *page, int numpages)
 {
 	unsigned long addr = (unsigned long)page_address(page);
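
For illustration, a hypothetical in-kernel user of the new interface
could look like the sketch below: a driver that shares a buffer with a
device (or hypervisor) that cannot access SME-encrypted memory clears
the encryption attribute while the buffer is shared and restores it
afterwards. This is a minimal example, not part of the patch; the
function and buffer names are made up, and error handling is abbreviated.

	#include <linux/gfp.h>
	#include <asm/mem_encrypt.h>

	#define SHARED_BUF_SIZE	(4 * PAGE_SIZE)

	static void *shared_buf;

	static int example_share_buffer(void)
	{
		int ret;

		shared_buf = (void *)__get_free_pages(GFP_KERNEL,
					get_order(SHARED_BUF_SIZE));
		if (!shared_buf)
			return -ENOMEM;

		/*
		 * Clear the encryption attribute on the buffer's pages.
		 * This is a no-op (returns 0) when SME is not active,
		 * i.e. when sme_me_mask is zero.
		 */
		ret = sme_set_mem_unenc(shared_buf, SHARED_BUF_SIZE);
		if (ret) {
			free_pages((unsigned long)shared_buf,
				   get_order(SHARED_BUF_SIZE));
			shared_buf = NULL;
		}

		return ret;
	}

	static void example_unshare_buffer(void)
	{
		/* Re-encrypt the pages before returning them to the allocator */
		sme_set_mem_enc(shared_buf, SHARED_BUF_SIZE);
		free_pages((unsigned long)shared_buf,
			   get_order(SHARED_BUF_SIZE));
		shared_buf = NULL;
	}

Note that sme_set_mem_enc()/sme_set_mem_unenc() page-align the address
and size themselves (PAGE_ALIGN(size) >> PAGE_SHIFT pages), so passing
a page-aligned buffer of, say, 5000 bytes changes two 4K pages.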
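Under the hood, set_memory_enc()/set_memory_dec() rely on the existing
CPA machinery: __change_page_attr_set_clr() rewrites each affected
mapping with cpa->mask_clr cleared and cpa->mask_set applied, which is
how _PAGE_ENC is toggled. A simplified sketch of that existing behavior
for a single PTE (not code added by this patch; the real code also
handles large pages, splits, and static protections):

	pgprot_t new_prot;

	/* Start from the current page flags ... */
	pgprot_val(new_prot) = pte_val(old_pte);
	/* ... drop the bits in mask_clr (e.g. _PAGE_ENC for set_memory_dec) ... */
	pgprot_val(new_prot) &= ~pgprot_val(cpa->mask_clr);
	/* ... and add the bits in mask_set (e.g. _PAGE_ENC for set_memory_enc). */
	pgprot_val(new_prot) |= pgprot_val(cpa->mask_set);

	new_pte = pfn_pte(pte_pfn(old_pte), canon_pgprot(new_prot));

Because only one of mask_set/mask_clr carries _PAGE_ENC in each caller,
the two entry points are exact inverses of one another.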