From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [RFC PATCH v2 12/20] x86: Add support for changing memory encryption attribute
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming, Joerg Roedel, Konrad Rzeszutek Wilk, Andrey Ryabinin, Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin", Paolo Bonzini, Alexander Potapenko, Thomas Gleixner, Dmitry Vyukov
Date: Mon, 22 Aug 2016 17:37:49 -0500
Message-ID: <20160822223749.29880.10183.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds support for changing the memory encryption attribute of one or more memory pages.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/cacheflush.h  |    3 +
 arch/x86/include/asm/mem_encrypt.h |   13 ++++++
 arch/x86/mm/mem_encrypt.c          |   43 +++++++++++++++++++++
 arch/x86/mm/pageattr.c             |   75 ++++++++++++++++++++++++++++++++++++
 4 files changed, 134 insertions(+)

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 61518cf..bfb08e5 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -13,6 +13,7 @@
  * Executability : eXeutable, NoteXecutable
  * Read/Write    : ReadOnly, ReadWrite
  * Presence      : NotPresent
+ * Encryption    : ENCrypted, DECrypted
  *
  * Within a category, the attributes are mutually exclusive.
  *
@@ -48,6 +49,8 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_np(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
+int set_memory_enc(unsigned long addr, int numpages);
+int set_memory_dec(unsigned long addr, int numpages);
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray);
 int set_memory_array_wc(unsigned long *addr, int addrinarray);
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 2785493..5616ed1 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -23,6 +23,9 @@ extern unsigned long sme_me_mask;
 
 u8 sme_get_me_loss(void);
 
+int sme_set_mem_enc(void *vaddr, unsigned long size);
+int sme_set_mem_dec(void *vaddr, unsigned long size);
+
 void __init sme_early_mem_enc(resource_size_t paddr,
 			      unsigned long size);
 void __init sme_early_mem_dec(resource_size_t paddr,
@@ -44,6 +47,16 @@ static inline u8 sme_get_me_loss(void)
 	return 0;
 }
 
+static inline int sme_set_mem_enc(void *vaddr, unsigned long size)
+{
+	return 0;
+}
+
+static inline int sme_set_mem_dec(void *vaddr, unsigned long size)
+{
+	return 0;
+}
+
 static inline void __init sme_early_mem_enc(resource_size_t paddr,
 					    unsigned long size)
 {
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index f35a646..b0f39c5 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -14,12 +14,55 @@
 #include
 #include
+#include
 #include
 #include
 
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char me_early_buffer[PAGE_SIZE] __aligned(PAGE_SIZE);
 
+int sme_set_mem_enc(void *vaddr, unsigned long size)
+{
+	unsigned long addr, numpages;
+
+	if (!sme_me_mask)
+		return 0;
+
+	addr = (unsigned long)vaddr & PAGE_MASK;
+	numpages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	/*
+	 * The set_memory_xxx functions take an integer for numpages, make
+	 * sure it doesn't exceed that.
+	 */
+	if (numpages > INT_MAX)
+		return -EINVAL;
+
+	return set_memory_enc(addr, numpages);
+}
+EXPORT_SYMBOL_GPL(sme_set_mem_enc);
+
+int sme_set_mem_dec(void *vaddr, unsigned long size)
+{
+	unsigned long addr, numpages;
+
+	if (!sme_me_mask)
+		return 0;
+
+	addr = (unsigned long)vaddr & PAGE_MASK;
+	numpages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	/*
+	 * The set_memory_xxx functions take an integer for numpages, make
+	 * sure it doesn't exceed that.
+	 */
+	if (numpages > INT_MAX)
+		return -EINVAL;
+
+	return set_memory_dec(addr, numpages);
+}
+EXPORT_SYMBOL_GPL(sme_set_mem_dec);
+
 /*
  * This routine does not change the underlying encryption setting of the
  * page(s) that map this memory. It assumes that eventually the memory is
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 72c292d..0ba9382 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1728,6 +1728,81 @@ int set_memory_4k(unsigned long addr, int numpages)
 					__pgprot(0), 1, 0, NULL);
 }
 
+static int __set_memory_enc_dec(struct cpa_data *cpa)
+{
+	unsigned long addr;
+	int numpages;
+	int ret;
+
+	if (*cpa->vaddr & ~PAGE_MASK) {
+		*cpa->vaddr &= PAGE_MASK;
+
+		/* People should not be passing in unaligned addresses */
+		WARN_ON_ONCE(1);
+	}
+
+	addr = *cpa->vaddr;
+	numpages = cpa->numpages;
+
+	/* Must avoid aliasing mappings in the highmem code */
+	kmap_flush_unused();
+	vm_unmap_aliases();
+
+	ret = __change_page_attr_set_clr(cpa, 1);
+
+	/* Check whether we really changed something */
+	if (!(cpa->flags & CPA_FLUSHTLB))
+		goto out;
+
+	/*
+	 * On success we use CLFLUSH, when the CPU supports it to
+	 * avoid the WBINVD.
+	 */
+	if (!ret && static_cpu_has(X86_FEATURE_CLFLUSH))
+		cpa_flush_range(addr, numpages, 1);
+	else
+		cpa_flush_all(1);
+
+out:
+	return ret;
+}
+
+int set_memory_enc(unsigned long addr, int numpages)
+{
+	struct cpa_data cpa;
+
+	if (!sme_me_mask)
+		return 0;
+
+	memset(&cpa, 0, sizeof(cpa));
+	cpa.vaddr = &addr;
+	cpa.numpages = numpages;
+	cpa.mask_set = __pgprot(_PAGE_ENC);
+	cpa.mask_clr = __pgprot(0);
+	cpa.pgd = init_mm.pgd;
+
+	return __set_memory_enc_dec(&cpa);
+}
+EXPORT_SYMBOL(set_memory_enc);
+
+int set_memory_dec(unsigned long addr, int numpages)
+{
+	struct cpa_data cpa;
+
+	if (!sme_me_mask)
+		return 0;
+
+	memset(&cpa, 0, sizeof(cpa));
+	cpa.vaddr = &addr;
+	cpa.numpages = numpages;
+	cpa.mask_set = __pgprot(0);
+	cpa.mask_clr = __pgprot(_PAGE_ENC);
+	cpa.pgd = init_mm.pgd;
+
+	return __set_memory_enc_dec(&cpa);
+}
+EXPORT_SYMBOL(set_memory_dec);
+
 int set_pages_uc(struct page *page, int numpages)
 {
 	unsigned long addr = (unsigned long)page_address(page);