From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky <Thomas.Lendacky@amd.com>
Subject: Re: [RFC PATCH v2 14/20] x86: DMA support for memory encryption
Date: Wed, 14 Sep 2016 08:36:30 -0500
To: Borislav Petkov
Cc: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org,
 Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming, Joerg Roedel,
 Konrad Rzeszutek Wilk, Andrey Ryabinin, Ingo Molnar, Andy Lutomirski,
 "H. Peter Anvin", Paolo Bonzini, Alexander Potapenko, Thomas Gleixner,
 Dmitry Vyukov
References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
 <20160822223807.29880.69294.stgit@tlendack-t1.amdoffice.net>
 <20160912105815.3z5bvzbcfjcj4ku7@pd.tnic>
In-Reply-To: <20160912105815.3z5bvzbcfjcj4ku7@pd.tnic>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/12/2016 05:58 AM, Borislav Petkov wrote:
> On Mon, Aug 22, 2016 at 05:38:07PM -0500, Tom Lendacky wrote:
>> Since DMA addresses will effectively look like 48-bit addresses when the
>> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
>> device performing the DMA does not support 48-bits. SWIOTLB will be
>> initialized to create un-encrypted bounce buffers for use by these devices.
>>
>> Signed-off-by: Tom Lendacky
>> ---
>>  arch/x86/include/asm/dma-mapping.h |    5 ++-
>>  arch/x86/include/asm/mem_encrypt.h |    6 +++
>>  arch/x86/kernel/pci-dma.c          |   11 ++++--
>>  arch/x86/kernel/pci-nommu.c        |    2 +
>>  arch/x86/kernel/pci-swiotlb.c      |    8 +++--
>>  arch/x86/mm/mem_encrypt.c          |   22 ++++++++++++
>>  include/linux/swiotlb.h            |    1 +
>>  init/main.c                        |   13 +++++++
>>  lib/swiotlb.c                      |   64 ++++++++++++++++++++++++++++++++----
>>  9 files changed, 115 insertions(+), 17 deletions(-)
>
> ...
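
(Side note: the effect described in the commit message can be illustrated
with a tiny standalone sketch. The encryption bit position and the helper
names below are assumed purely for illustration; in the real patches the
bit position is discovered at boot. It shows why a device with a small DMA
mask must bounce through SWIOTLB once the mask is ORed into the address:)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Assumed encryption bit position, for illustration only; the real
 * position is discovered at boot and used to build sme_me_mask.
 */
#define SME_ME_MASK	(1ULL << 47)

/* Local stand-in for the kernel's DMA_BIT_MASK() in this standalone sketch */
#define DMA_BIT_MASK(n)	((n) == 64 ? ~0ULL : ((1ULL << (n)) - 1))

/*
 * Illustrative helper: with memory encryption active, the bounce decision
 * comes down to whether the mask-ORed address still fits the DMA mask.
 */
static bool needs_swiotlb_bounce(uint64_t paddr, uint64_t dma_mask)
{
	uint64_t dma_addr = paddr | SME_ME_MASK;	/* encrypted mapping */

	return dma_addr > dma_mask;
}

int main(void)
{
	uint64_t paddr = 0x1000000;	/* 16MB, reachable by any device */

	/* 32-bit DMA mask: the encrypted address no longer fits, so bounce */
	printf("32-bit device bounces: %d\n",
	       needs_swiotlb_bounce(paddr, DMA_BIT_MASK(32)) ? 1 : 0);

	/* 64-bit DMA mask: the encrypted address still fits, no bounce */
	printf("64-bit device bounces: %d\n",
	       needs_swiotlb_bounce(paddr, DMA_BIT_MASK(64)) ? 1 : 0);

	return 0;
}
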
>
>> @@ -172,3 +174,23 @@ void __init sme_early_init(void)
>>  	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
>>  		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
>>  }
>> +
>> +/* Architecture __weak replacement functions */
>> +void __init mem_encrypt_init(void)
>> +{
>> +	if (!sme_me_mask)
>> +		return;
>> +
>> +	/* Make SWIOTLB use an unencrypted DMA area */
>> +	swiotlb_clear_encryption();
>> +}
>> +
>> +unsigned long swiotlb_get_me_mask(void)
>
> This could just as well be named to something more generic:
>
> swiotlb_get_clear_dma_mask() or so which basically means the mask of
> bits which get cleared before returning DMA addresses...

Ok.
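
To make the suggested rename concrete, the pieces already in the patch
would end up looking something like this (a sketch only, using the
proposed name; the function bodies are unchanged from the patch):

/* lib/swiotlb.c: generic default, no address bits to clear */
unsigned long __weak swiotlb_get_clear_dma_mask(void)
{
	return 0;
}

/* arch/x86/mm/mem_encrypt.c: x86 override returns the SME mask */
unsigned long swiotlb_get_clear_dma_mask(void)
{
	return sme_me_mask;
}

/* lib/swiotlb.c: callers then clear those bits from the bus address */
static dma_addr_t swiotlb_phys_to_dma(struct device *hwdev,
				      phys_addr_t address)
{
	return phys_to_dma(hwdev, address) & ~swiotlb_get_clear_dma_mask();
}
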
>
>> +{
>> +	return sme_me_mask;
>> +}
>> +
>> +void swiotlb_set_mem_dec(void *vaddr, unsigned long size)
>> +{
>> +	sme_set_mem_dec(vaddr, size);
>> +}
>> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
>> index 5f81f8a..5c909fc 100644
>> --- a/include/linux/swiotlb.h
>> +++ b/include/linux/swiotlb.h
>> @@ -29,6 +29,7 @@ int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
>>  extern unsigned long swiotlb_nr_tbl(void);
>>  unsigned long swiotlb_size_or_default(void);
>>  extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
>> +extern void __init swiotlb_clear_encryption(void);
>>
>>  /*
>>   * Enumeration for sync targets
>> diff --git a/init/main.c b/init/main.c
>> index a8a58e2..82c7cd9 100644
>> --- a/init/main.c
>> +++ b/init/main.c
>> @@ -458,6 +458,10 @@ void __init __weak thread_stack_cache_init(void)
>>  }
>>  #endif
>>
>> +void __init __weak mem_encrypt_init(void)
>> +{
>> +}
>> +
>>  /*
>>   * Set up kernel memory allocators
>>   */
>> @@ -598,6 +602,15 @@ asmlinkage __visible void __init start_kernel(void)
>>  	 */
>>  	locking_selftest();
>>
>> +	/*
>> +	 * This needs to be called before any devices perform DMA
>> +	 * operations that might use the swiotlb bounce buffers.
>> +	 * This call will mark the bounce buffers as un-encrypted so
>> +	 * that the usage of them will not cause "plain-text" data
>
> ...that their usage will not cause ...

Ok.

>
>> +	 * to be decrypted when accessed.
>> +	 */
>> +	mem_encrypt_init();
>> +
>>  #ifdef CONFIG_BLK_DEV_INITRD
>>  	if (initrd_start && !initrd_below_start_ok &&
>>  	    page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
>> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
>> index 22e13a0..15d5741 100644
>> --- a/lib/swiotlb.c
>> +++ b/lib/swiotlb.c
>> @@ -131,6 +131,26 @@ unsigned long swiotlb_size_or_default(void)
>>  	return size ? size : (IO_TLB_DEFAULT_SIZE);
>>  }
>>
>> +/*
>> + * Support for memory encryption. If memory encryption is supported, then an
>> + * override to these functions will be provided.
>> + */
>
> No need for that comment.

Ok.

>
>> +unsigned long __weak swiotlb_get_me_mask(void)
>> +{
>> +	return 0;
>> +}
>> +
>> +void __weak swiotlb_set_mem_dec(void *vaddr, unsigned long size)
>> +{
>> +}
>> +
>> +/* For swiotlb, clear memory encryption mask from dma addresses */
>> +static dma_addr_t swiotlb_phys_to_dma(struct device *hwdev,
>> +				      phys_addr_t address)
>> +{
>> +	return phys_to_dma(hwdev, address) & ~swiotlb_get_me_mask();
>> +}
>> +
>>  /* Note that this doesn't work with highmem page */
>>  static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
>>  				      volatile void *address)
>> @@ -159,6 +179,30 @@ void swiotlb_print_info(void)
>>  		bytes >> 20, vstart, vend - 1);
>>  }
>>
>> +/*
>> + * If memory encryption is active, the DMA address for an encrypted page may
>> + * be beyond the range of the device. If bounce buffers are required be sure
>> + * that they are not on an encrypted page. This should be called before the
>> + * iotlb area is used.
>> + */
>> +void __init swiotlb_clear_encryption(void)
>> +{
>> +	void *vaddr;
>> +	unsigned long bytes;
>> +
>> +	if (no_iotlb_memory || !io_tlb_start || late_alloc)
>> +		return;
>> +
>> +	vaddr = phys_to_virt(io_tlb_start);
>> +	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
>> +	swiotlb_set_mem_dec(vaddr, bytes);
>> +	memset(vaddr, 0, bytes);
>
> io_tlb_start is cleared...
>
>> +
>> +	vaddr = phys_to_virt(io_tlb_overflow_buffer);
>> +	bytes = PAGE_ALIGN(io_tlb_overflow);
>> +	swiotlb_set_mem_dec(vaddr, bytes);
>
> ... but io_tlb_overflow_buffer isn't? I don't see the difference here.

Yup, I missed that one.  Will memset this as well.

Thanks,
Tom

>
>> +}
>> +
>>  int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>>  {
>>  	void *v_overflow_buffer;
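
For reference, with the missing memset added as agreed above,
swiotlb_clear_encryption() would end up looking roughly like this (a
sketch of the agreed change, not an actual follow-up hunk):

void __init swiotlb_clear_encryption(void)
{
	void *vaddr;
	unsigned long bytes;

	if (no_iotlb_memory || !io_tlb_start || late_alloc)
		return;

	/* Mark the bounce buffer area as unencrypted and clear it */
	vaddr = phys_to_virt(io_tlb_start);
	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
	swiotlb_set_mem_dec(vaddr, bytes);
	memset(vaddr, 0, bytes);

	/* Same treatment for the overflow buffer, including the memset */
	vaddr = phys_to_virt(io_tlb_overflow_buffer);
	bytes = PAGE_ALIGN(io_tlb_overflow);
	swiotlb_set_mem_dec(vaddr, bytes);
	memset(vaddr, 0, bytes);
}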