From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [PATCH v6 24/34] x86, swiotlb: Add memory encryption support
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Rik van Riel, Radim Krčmář, Toshimitsu Kani, Arnd Bergmann,
	Jonathan Corbet, Matt Fleming, "Michael S. Tsirkin", Joerg Roedel,
	Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Brijesh Singh,
	Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin",
	Andrey Ryabinin, Alexander Potapenko, Dave Young, Thomas Gleixner,
	Dmitry Vyukov
Date: Wed, 07 Jun 2017 14:17:21 -0500
Message-ID: <20170607191721.28645.96519.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170607191309.28645.15241.stgit@tlendack-t1.amdoffice.net>
References: <20170607191309.28645.15241.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Since DMA addresses will effectively look like 48-bit addresses when the
memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
device performing the DMA does not support 48 bits. SWIOTLB will be
initialized to create decrypted bounce buffers for use by these devices.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/dma-mapping.h |    5 ++-
 arch/x86/include/asm/mem_encrypt.h |    5 +++
 arch/x86/kernel/pci-dma.c          |   11 +++++--
 arch/x86/kernel/pci-nommu.c        |    2 +
 arch/x86/kernel/pci-swiotlb.c      |   15 ++++++++--
 arch/x86/mm/mem_encrypt.c          |   22 ++++++++++++++
 include/linux/swiotlb.h            |    1 +
 init/main.c                        |   13 ++++++++
 lib/swiotlb.c                      |   56 +++++++++++++++++++++++++++++++-----
 9 files changed, 113 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 08a0838..d75430a 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_ISA
 # define ISA_DMA_BIT_MASK DMA_BIT_MASK(24)
@@ -62,12 +63,12 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)

 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
-	return paddr;
+	return __sme_set(paddr);
 }

 static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 {
-	return daddr;
+	return __sme_clr(daddr);
 }
 #endif /* CONFIG_X86_DMA_REMAP */
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 61a7049..f1215a4 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -31,6 +31,11 @@ void __init sme_early_decrypt(resource_size_t paddr,

 void __init sme_early_init(void);

+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void);
+
+void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
+
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */

 #define sme_me_mask	0UL
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 3a216ec..72d96d4 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -93,9 +93,12 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,

 	if (gfpflags_allow_blocking(flag)) {
 		page = dma_alloc_from_contiguous(dev, count, get_order(size),
						 flag);
-		if (page && page_to_phys(page) + size > dma_mask) {
-			dma_release_from_contiguous(dev, page, count);
-			page = NULL;
+		if (page) {
+			addr = phys_to_dma(dev, page_to_phys(page));
+			if (addr + size > dma_mask) {
+				dma_release_from_contiguous(dev, page, count);
+				page = NULL;
+			}
 		}
 	}
 	/* fallback */
@@ -104,7 +107,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
 	if (!page)
 		return NULL;

-	addr = page_to_phys(page);
+	addr = phys_to_dma(dev, page_to_phys(page));
 	if (addr + size > dma_mask) {
 		__free_pages(page, get_order(size));
diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index a88952e..98b576a 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -30,7 +30,7 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
				 enum dma_data_direction dir,
				 unsigned long attrs)
 {
-	dma_addr_t bus = page_to_phys(page) + offset;
+	dma_addr_t bus = phys_to_dma(dev, page_to_phys(page)) + offset;
 	WARN_ON(size == 0);
 	if (!check_addr("map_single", dev, bus, size))
 		return DMA_ERROR_CODE;
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index 1e23577..cc1e106 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -12,6 +12,8 @@
 #include
 #include
 #include
+#include
+
 int swiotlb __read_mostly;

 void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
@@ -79,8 +81,8 @@ int __init pci_swiotlb_detect_override(void)
		  pci_swiotlb_late_init);

 /*
- * if 4GB or more detected (and iommu=off not set) return 1
- * and set swiotlb to 1.
+ * If 4GB or more detected (and iommu=off not set) or if SME is active
+ * then set swiotlb to 1 and return 1.
  */
 int __init pci_swiotlb_detect_4gb(void)
 {
@@ -89,6 +91,15 @@ int __init pci_swiotlb_detect_4gb(void)
 	if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
 		swiotlb = 1;
 #endif
+
+	/*
+	 * If SME is active then swiotlb will be set to 1 so that bounce
+	 * buffers are allocated and used for devices that do not support
+	 * the addressing range required for the encryption mask.
+	 */
+	if (sme_active())
+		swiotlb = 1;
+
 	return swiotlb;
 }
 IOMMU_INIT(pci_swiotlb_detect_4gb,
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 2321f05..5d7c51d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -16,11 +16,14 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT

 #include
+#include
+#include

 #include
 #include
 #include
 #include
+#include

 /*
  * Since SME related variables are set early in the boot process they must
@@ -194,6 +197,25 @@ void __init sme_early_init(void)
 		protection_map[i] = pgprot_encrypted(protection_map[i]);
 }

+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void)
+{
+	if (!sme_me_mask)
+		return;
+
+	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
+	swiotlb_update_mem_attributes();
+}
+
+void swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
+{
+	WARN(PAGE_ALIGN(size) != size,
+	     "size is not page-aligned (%#lx)\n", size);
+
+	/* Make the SWIOTLB buffer area decrypted */
+	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
+}
+
 void __init sme_encrypt_kernel(void)
 {
 }
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 4ee479f..15e7160 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -35,6 +35,7 @@ enum swiotlb_force {
 extern unsigned long swiotlb_nr_tbl(void);
 unsigned long swiotlb_size_or_default(void);
 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
+extern void __init swiotlb_update_mem_attributes(void);

 /*
  * Enumeration for sync targets
diff --git a/init/main.c b/init/main.c
index df58a41..7125b5f 100644
--- a/init/main.c
+++ b/init/main.c
@@ -488,6 +488,10 @@ void __init __weak thread_stack_cache_init(void)
 }
 #endif

+void __init __weak mem_encrypt_init(void)
+{
+}
+
 /*
  * Set up kernel memory allocators
  */
@@ -640,6 +644,15 @@ asmlinkage __visible void __init start_kernel(void)
	 */
	locking_selftest();

+	/*
+	 * This needs to be called before any devices perform DMA
+	 * operations that might use the SWIOTLB bounce buffers.
+	 * This call will mark the bounce buffers as decrypted so
+	 * that their usage will not cause "plain-text" data to be
+	 * decrypted when accessed.
+	 */
+	mem_encrypt_init();
+
 #ifdef CONFIG_BLK_DEV_INITRD
	if (initrd_start && !initrd_below_start_ok &&
	    page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index a8d74a7..74d6557 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -30,6 +30,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

@@ -155,6 +156,17 @@ unsigned long swiotlb_size_or_default(void)
	return size ? size : (IO_TLB_DEFAULT_SIZE);
 }

+void __weak swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
+{
+}
+
+/* For swiotlb, clear memory encryption mask from dma addresses */
+static dma_addr_t swiotlb_phys_to_dma(struct device *hwdev,
+				      phys_addr_t address)
+{
+	return __sme_clr(phys_to_dma(hwdev, address));
+}
+
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
				      volatile void *address)
@@ -183,6 +195,31 @@ void swiotlb_print_info(void)
	       bytes >> 20, vstart, vend - 1);
 }

+/*
+ * Early SWIOTLB allocation may be too early to allow an architecture to
+ * perform the desired operations. This function allows the architecture to
+ * call SWIOTLB when the operations are possible. It needs to be called
+ * before the SWIOTLB memory is used.
+ */
+void __init swiotlb_update_mem_attributes(void)
+{
+	void *vaddr;
+	unsigned long bytes;
+
+	if (no_iotlb_memory || late_alloc)
+		return;
+
+	vaddr = phys_to_virt(io_tlb_start);
+	bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
+	swiotlb_set_mem_attributes(vaddr, bytes);
+	memset(vaddr, 0, bytes);
+
+	vaddr = phys_to_virt(io_tlb_overflow_buffer);
+	bytes = PAGE_ALIGN(io_tlb_overflow);
+	swiotlb_set_mem_attributes(vaddr, bytes);
+	memset(vaddr, 0, bytes);
+}
+
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
	void *v_overflow_buffer;
@@ -320,6 +357,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
	io_tlb_start = virt_to_phys(tlb);
	io_tlb_end = io_tlb_start + bytes;

+	swiotlb_set_mem_attributes(tlb, bytes);
	memset(tlb, 0, bytes);

	/*
@@ -330,6 +368,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
	if (!v_overflow_buffer)
		goto cleanup2;

+	swiotlb_set_mem_attributes(v_overflow_buffer, io_tlb_overflow);
+	memset(v_overflow_buffer, 0, io_tlb_overflow);
	io_tlb_overflow_buffer = virt_to_phys(v_overflow_buffer);

	/*
@@ -581,7 +621,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
		return SWIOTLB_MAP_ERROR;
	}

-	start_dma_addr = phys_to_dma(hwdev, io_tlb_start);
+	start_dma_addr = swiotlb_phys_to_dma(hwdev, io_tlb_start);
	return swiotlb_tbl_map_single(hwdev, start_dma_addr, phys, size,
				      dir, attrs);
 }
@@ -702,7 +742,7 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
		goto err_warn;

	ret = phys_to_virt(paddr);
-	dev_addr = phys_to_dma(hwdev, paddr);
+	dev_addr = swiotlb_phys_to_dma(hwdev, paddr);

	/* Confirm address can be DMA'd by device */
	if (dev_addr + size - 1 > dma_mask) {
@@ -812,10 +852,10 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
	map = map_single(dev, phys, size, dir, attrs);
	if (map == SWIOTLB_MAP_ERROR) {
		swiotlb_full(dev, size, dir, 1);
-		return phys_to_dma(dev, io_tlb_overflow_buffer);
+		return swiotlb_phys_to_dma(dev, io_tlb_overflow_buffer);
	}

-	dev_addr = phys_to_dma(dev, map);
+	dev_addr = swiotlb_phys_to_dma(dev, map);

	/* Ensure that the address returned is DMA'ble */
	if (dma_capable(dev, dev_addr, size))
@@ -824,7 +864,7 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
	attrs |= DMA_ATTR_SKIP_CPU_SYNC;
	swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);

-	return phys_to_dma(dev, io_tlb_overflow_buffer);
+	return swiotlb_phys_to_dma(dev, io_tlb_overflow_buffer);
 }
 EXPORT_SYMBOL_GPL(swiotlb_map_page);
@@ -958,7 +998,7 @@ void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
			sg_dma_len(sgl) = 0;
			return 0;
		}
-		sg->dma_address = phys_to_dma(hwdev, map);
+		sg->dma_address = swiotlb_phys_to_dma(hwdev, map);
		} else
			sg->dma_address = dev_addr;
		sg_dma_len(sg) = sg->length;
@@ -1026,7 +1066,7 @@ void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 int swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t dma_addr)
 {
-	return (dma_addr == phys_to_dma(hwdev, io_tlb_overflow_buffer));
+	return (dma_addr == swiotlb_phys_to_dma(hwdev, io_tlb_overflow_buffer));
 }
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);
@@ -1039,6 +1079,6 @@ void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 int swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
+	return swiotlb_phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
 }
 EXPORT_SYMBOL(swiotlb_dma_supported);