From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [PATCH v8 RESEND 27/38] iommu/amd: Allow the AMD IOMMU to work with memory encryption
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
	xen-devel@lists.xen.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Brijesh Singh, Toshimitsu Kani, Radim Krčmář, Matt Fleming,
	Alexander Potapenko, "H. Peter Anvin", Larry Woodman, Jonathan Corbet,
	Joerg Roedel, "Michael S. Tsirkin", Ingo Molnar, Andrey Ryabinin,
	Dave Young, Rik van Riel, Arnd Bergmann, Konrad Rzeszutek Wilk,
	Borislav Petkov, Andy Lutomirski, Boris Ostrovsky, Dmitry Vyukov,
	Juergen Gross, Thomas Gleixner, Paolo Bonzini
Date: Tue, 27 Jun 2017 10:12:30 -0500
Message-ID: <20170627151230.17428.75281.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170627150718.17428.81813.stgit@tlendack-t1.amdoffice.net>
References: <20170627150718.17428.81813.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailing-List: linux-kernel@vger.kernel.org

The IOMMU is programmed with physical addresses for the various tables
and buffers that are used to communicate between the device and the
driver. When the driver allocates this memory, it is encrypted. In order
for the IOMMU to access the memory as encrypted, the encryption mask
needs to be included in these physical addresses during configuration.

The PTE entries created by the IOMMU should also include the encryption
mask so that when the device behind the IOMMU performs a DMA, the DMA
will be performed to encrypted memory.
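For readers unfamiliar with SME, the encryption mask mentioned above is a
single physical-address bit (the "C-bit"): ORing it into an address that is
handed to the IOMMU makes the device's accesses target encrypted memory, and
masking it off recovers the plain physical address. Below is a minimal
standalone sketch of that idea; the helper names and the bit position are
illustrative assumptions only, while in the kernel the real operations are
__sme_set()/__sme_clr() built on sme_me_mask, which this patch wraps as
iommu_virt_to_phys()/iommu_phys_to_virt().

/*
 * Illustrative sketch only (not part of the patch): set or clear an
 * assumed SME encryption bit in a physical address.  The real mask,
 * sme_me_mask, is determined from CPUID at boot; bit 47 here is just
 * an example value.
 */
#include <stdint.h>
#include <stdio.h>

static const uint64_t example_sme_mask = 1ULL << 47;	/* assumed C-bit */

static uint64_t example_sme_set(uint64_t paddr)
{
	return paddr | example_sme_mask;	/* like __sme_set() */
}

static uint64_t example_sme_clr(uint64_t paddr)
{
	return paddr & ~example_sme_mask;	/* like __sme_clr() */
}

int main(void)
{
	uint64_t paddr = 0x12345000ULL;		/* hypothetical table address */
	uint64_t enc   = example_sme_set(paddr);

	printf("plain: %#llx  with mask: %#llx  stripped: %#llx\n",
	       (unsigned long long)paddr,
	       (unsigned long long)enc,
	       (unsigned long long)example_sme_clr(enc));
	return 0;
}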
Reviewed-by: Borislav Petkov
Signed-off-by: Tom Lendacky
---
 drivers/iommu/amd_iommu.c       |   30 ++++++++++++++++--------------
 drivers/iommu/amd_iommu_init.c  |   34 ++++++++++++++++++++++++++++------
 drivers/iommu/amd_iommu_proto.h |   10 ++++++++++
 drivers/iommu/amd_iommu_types.h |    2 +-
 4 files changed, 55 insertions(+), 21 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 503849d..16cc54b 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -544,7 +544,7 @@ static void dump_dte_entry(u16 devid)

 static void dump_command(unsigned long phys_addr)
 {
-	struct iommu_cmd *cmd = phys_to_virt(phys_addr);
+	struct iommu_cmd *cmd = iommu_phys_to_virt(phys_addr);
 	int i;

 	for (i = 0; i < 4; ++i)
@@ -865,11 +865,13 @@ static void copy_cmd_to_buffer(struct amd_iommu *iommu,

 static void build_completion_wait(struct iommu_cmd *cmd, u64 address)
 {
+	u64 paddr = iommu_virt_to_phys((void *)address);
+
 	WARN_ON(address & 0x7ULL);

 	memset(cmd, 0, sizeof(*cmd));
-	cmd->data[0] = lower_32_bits(__pa(address)) | CMD_COMPL_WAIT_STORE_MASK;
-	cmd->data[1] = upper_32_bits(__pa(address));
+	cmd->data[0] = lower_32_bits(paddr) | CMD_COMPL_WAIT_STORE_MASK;
+	cmd->data[1] = upper_32_bits(paddr);
 	cmd->data[2] = 1;
 	CMD_SET_TYPE(cmd, CMD_COMPL_WAIT);
 }
@@ -1328,7 +1330,7 @@ static bool increase_address_space(struct protection_domain *domain,
 		return false;

 	*pte             = PM_LEVEL_PDE(domain->mode,
-					virt_to_phys(domain->pt_root));
+					iommu_virt_to_phys(domain->pt_root));
 	domain->pt_root  = pte;
 	domain->mode    += 1;
 	domain->updated  = true;
@@ -1365,7 +1367,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
 			if (!page)
 				return NULL;

-			__npte = PM_LEVEL_PDE(level, virt_to_phys(page));
+			__npte = PM_LEVEL_PDE(level, iommu_virt_to_phys(page));

 			/* pte could have been changed somewhere. */
 			if (cmpxchg64(pte, __pte, __npte) != __pte) {
@@ -1481,10 +1483,10 @@ static int iommu_map_page(struct protection_domain *dom,
 			return -EBUSY;

 	if (count > 1) {
-		__pte = PAGE_SIZE_PTE(phys_addr, page_size);
+		__pte = PAGE_SIZE_PTE(__sme_set(phys_addr), page_size);
 		__pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_P | IOMMU_PTE_FC;
 	} else
-		__pte = phys_addr | IOMMU_PTE_P | IOMMU_PTE_FC;
+		__pte = __sme_set(phys_addr) | IOMMU_PTE_P | IOMMU_PTE_FC;

 	if (prot & IOMMU_PROT_IR)
 		__pte |= IOMMU_PTE_IR;
@@ -1700,7 +1702,7 @@ static void free_gcr3_tbl_level1(u64 *tbl)
 		if (!(tbl[i] & GCR3_VALID))
 			continue;

-		ptr = __va(tbl[i] & PAGE_MASK);
+		ptr = iommu_phys_to_virt(tbl[i] & PAGE_MASK);

 		free_page((unsigned long)ptr);
 	}
@@ -1715,7 +1717,7 @@ static void free_gcr3_tbl_level2(u64 *tbl)
 		if (!(tbl[i] & GCR3_VALID))
 			continue;

-		ptr = __va(tbl[i] & PAGE_MASK);
+		ptr = iommu_phys_to_virt(tbl[i] & PAGE_MASK);

 		free_gcr3_tbl_level1(ptr);
 	}
@@ -1807,7 +1809,7 @@ static void set_dte_entry(u16 devid, struct protection_domain *domain, bool ats)
 	u64 flags = 0;

 	if (domain->mode != PAGE_MODE_NONE)
-		pte_root = virt_to_phys(domain->pt_root);
+		pte_root = iommu_virt_to_phys(domain->pt_root);

 	pte_root |= (domain->mode & DEV_ENTRY_MODE_MASK)
 		    << DEV_ENTRY_MODE_SHIFT;
@@ -1819,7 +1821,7 @@ static void set_dte_entry(u16 devid, struct protection_domain *domain, bool ats)
 		flags |= DTE_FLAG_IOTLB;

 	if (domain->flags & PD_IOMMUV2_MASK) {
-		u64 gcr3 = __pa(domain->gcr3_tbl);
+		u64 gcr3 = iommu_virt_to_phys(domain->gcr3_tbl);
 		u64 glx  = domain->glx;
 		u64 tmp;

@@ -3470,10 +3472,10 @@ static u64 *__get_gcr3_pte(u64 *root, int level, int pasid, bool alloc)
 			if (root == NULL)
 				return NULL;

-			*pte = __pa(root) | GCR3_VALID;
+			*pte = iommu_virt_to_phys(root) | GCR3_VALID;
 		}

-		root = __va(*pte & PAGE_MASK);
+		root = iommu_phys_to_virt(*pte & PAGE_MASK);

 		level -= 1;
 	}
@@ -3652,7 +3654,7 @@ static void set_dte_irq_entry(u16 devid, struct irq_remap_table *table)

 	dte	= amd_iommu_dev_table[devid].data[2];
 	dte	&= ~DTE_IRQ_PHYS_ADDR_MASK;
-	dte	|= virt_to_phys(table->table);
+	dte	|= iommu_virt_to_phys(table->table);
 	dte	|= DTE_IRQ_REMAP_INTCTL;
 	dte	|= DTE_IRQ_TABLE_LEN;
 	dte	|= DTE_IRQ_REMAP_ENABLE;
diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 5a11328..26862f1 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -346,7 +347,7 @@ static void iommu_set_device_table(struct amd_iommu *iommu)

 	BUG_ON(iommu->mmio_base == NULL);

-	entry = virt_to_phys(amd_iommu_dev_table);
+	entry = iommu_virt_to_phys(amd_iommu_dev_table);
 	entry |= (dev_table_size >> 12) - 1;
 	memcpy_toio(iommu->mmio_base + MMIO_DEV_TABLE_OFFSET,
 			&entry, sizeof(entry));
@@ -602,7 +603,7 @@ static void iommu_enable_command_buffer(struct amd_iommu *iommu)

 	BUG_ON(iommu->cmd_buf == NULL);

-	entry = (u64)virt_to_phys(iommu->cmd_buf);
+	entry = iommu_virt_to_phys(iommu->cmd_buf);
 	entry |= MMIO_CMD_SIZE_512;

 	memcpy_toio(iommu->mmio_base + MMIO_CMD_BUF_OFFSET,
@@ -631,7 +632,7 @@ static void iommu_enable_event_buffer(struct amd_iommu *iommu)

 	BUG_ON(iommu->evt_buf == NULL);

-	entry = (u64)virt_to_phys(iommu->evt_buf) | EVT_LEN_MASK;
+	entry = iommu_virt_to_phys(iommu->evt_buf) | EVT_LEN_MASK;

 	memcpy_toio(iommu->mmio_base + MMIO_EVT_BUF_OFFSET,
 		    &entry, sizeof(entry));
@@ -664,7 +665,7 @@ static void iommu_enable_ppr_log(struct amd_iommu *iommu)
 	if (iommu->ppr_log == NULL)
 		return;

-	entry = (u64)virt_to_phys(iommu->ppr_log) | PPR_LOG_SIZE_512;
+	entry = iommu_virt_to_phys(iommu->ppr_log) | PPR_LOG_SIZE_512;

 	memcpy_toio(iommu->mmio_base + MMIO_PPR_LOG_OFFSET,
 		    &entry, sizeof(entry));
@@ -744,10 +745,10 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
 	if (!iommu->ga_log_tail)
 		goto err_out;

-	entry = (u64)virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
+	entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
 	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
 		    &entry, sizeof(entry));
-	entry = ((u64)virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL;
+	entry = (iommu_virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL;
 	memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
 		    &entry, sizeof(entry));
 	writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
@@ -2535,6 +2536,24 @@ static int __init amd_iommu_init(void)
 	return ret;
 }

+static bool amd_iommu_supports_sme(void)
+{
+	if (!sme_active() || (boot_cpu_data.x86 != 0x17))
+		return true;
+
+	/* For Fam17h, a specific level of support is required */
+	if (boot_cpu_data.microcode >= 0x08001205)
+		return true;
+
+	if ((boot_cpu_data.microcode >= 0x08001126) &&
+	    (boot_cpu_data.microcode <= 0x080011ff))
+		return true;
+
+	pr_notice("AMD-Vi: IOMMU not currently supported when SME is active\n");
+
+	return false;
+}
+
 /****************************************************************************
  *
  * Early detect code. This code runs at IOMMU detection time in the DMA
@@ -2552,6 +2571,9 @@ int __init amd_iommu_detect(void)
 	if (amd_iommu_disabled)
 		return -ENODEV;

+	if (!amd_iommu_supports_sme())
+		return -ENODEV;
+
 	ret = iommu_go_to_state(IOMMU_IVRS_DETECTED);
 	if (ret)
 		return ret;
diff --git a/drivers/iommu/amd_iommu_proto.h b/drivers/iommu/amd_iommu_proto.h
index 466260f..3f12fb2 100644
--- a/drivers/iommu/amd_iommu_proto.h
+++ b/drivers/iommu/amd_iommu_proto.h
@@ -87,4 +87,14 @@ static inline bool iommu_feature(struct amd_iommu *iommu, u64 f)
 	return !!(iommu->features & f);
 }

+static inline u64 iommu_virt_to_phys(void *vaddr)
+{
+	return (u64)__sme_set(virt_to_phys(vaddr));
+}
+
+static inline void *iommu_phys_to_virt(unsigned long paddr)
+{
+	return phys_to_virt(__sme_clr(paddr));
+}
+
 #endif /* _ASM_X86_AMD_IOMMU_PROTO_H */
diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index 4de8f41..3ce587d 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -343,7 +343,7 @@
 #define IOMMU_PAGE_MASK (((1ULL << 52) - 1) & ~0xfffULL)

 #define IOMMU_PTE_PRESENT(pte) ((pte) & IOMMU_PTE_P)
-#define IOMMU_PTE_PAGE(pte) (phys_to_virt((pte) & IOMMU_PAGE_MASK))
+#define IOMMU_PTE_PAGE(pte) (iommu_phys_to_virt((pte) & IOMMU_PAGE_MASK))
 #define IOMMU_PTE_MODE(pte) (((pte) >> 9) & 0x07)

 #define IOMMU_PROT_MASK 0x03