From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [PATCH v6 12/34] x86/mm: Extend early_memremap() support with additional attrs
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Rik van Riel, Radim Krčmář, Toshimitsu Kani, Arnd Bergmann,
	Jonathan Corbet, Matt Fleming, "Michael S. Tsirkin", Joerg Roedel,
	Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Brijesh Singh,
	Ingo Molnar, Borislav Petkov, Andy Lutomirski, "H. Peter Anvin",
	Andrey Ryabinin, Alexander Potapenko, Dave Young, Thomas Gleixner,
	Dmitry Vyukov
Date: Wed, 07 Jun 2017 14:15:16 -0500
Message-ID: <20170607191516.28645.12130.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170607191309.28645.15241.stgit@tlendack-t1.amdoffice.net>
References: <20170607191309.28645.15241.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Add early_memremap() support to be able to specify encrypted and
decrypted mappings with and without write-protection. The use of
write-protection is necessary when encrypting data "in place". The
write-protect attribute is considered cacheable for loads, but not
stores. This implies that the hardware will never give the core a
dirty line with this memtype.

Signed-off-by: Tom Lendacky
---
 arch/x86/Kconfig                     |  4 +++
 arch/x86/include/asm/fixmap.h        | 13 ++++++++++
 arch/x86/include/asm/pgtable_types.h |  8 ++++++
 arch/x86/mm/ioremap.c                | 44 ++++++++++++++++++++++++++++++++++
 include/asm-generic/early_ioremap.h  |  2 ++
 mm/early_ioremap.c                   | 10 ++++++++
 6 files changed, 81 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 11f2fdb..8002530 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1429,6 +1429,10 @@ config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
 	  If set to N, then the encryption of system memory can be activated
 	  with the mem_encrypt=on command line option.
 
+config ARCH_USE_MEMREMAP_PROT
+	def_bool y
+	depends on AMD_MEM_ENCRYPT
+
 # Common NUMA Features
 config NUMA
 	bool "Numa Memory Allocation and Scheduler Support"
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index d9ff226..dcd9fb5 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -164,6 +164,19 @@ static inline void __set_fixmap(enum fixed_addresses idx,
  */
 #define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_IO_NOCACHE
 
+/*
+ * Early memremap routines used for in-place encryption. The mappings created
+ * by these routines are intended to be used as temporary mappings.
+ */
+void __init *early_memremap_encrypted(resource_size_t phys_addr,
+				      unsigned long size);
+void __init *early_memremap_encrypted_wp(resource_size_t phys_addr,
+					 unsigned long size);
+void __init *early_memremap_decrypted(resource_size_t phys_addr,
+				      unsigned long size);
+void __init *early_memremap_decrypted_wp(resource_size_t phys_addr,
+					 unsigned long size);
+
 #include <asm-generic/fixmap.h>
 
 #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index d3ae99c..ce8cb1c 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -161,6 +161,7 @@ enum page_cache_mode {
 
 #define _PAGE_CACHE_MASK	(_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
 #define _PAGE_NOCACHE		(cachemode2protval(_PAGE_CACHE_MODE_UC))
+#define _PAGE_CACHE_WP		(cachemode2protval(_PAGE_CACHE_MODE_WP))
 
 #define PAGE_NONE	__pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
 #define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |	\
@@ -189,6 +190,7 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_VVAR		(__PAGE_KERNEL_RO | _PAGE_USER)
 #define __PAGE_KERNEL_LARGE		(__PAGE_KERNEL | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
+#define __PAGE_KERNEL_WP		(__PAGE_KERNEL | _PAGE_CACHE_WP)
 
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
@@ -202,6 +204,12 @@ enum page_cache_mode {
 #define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
 			 _PAGE_DIRTY | _PAGE_ENC)
 
+#define __PAGE_KERNEL_ENC	(__PAGE_KERNEL | _PAGE_ENC)
+#define __PAGE_KERNEL_ENC_WP	(__PAGE_KERNEL_WP | _PAGE_ENC)
+
+#define __PAGE_KERNEL_NOENC	(__PAGE_KERNEL)
+#define __PAGE_KERNEL_NOENC_WP	(__PAGE_KERNEL_WP)
+
 #define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
 #define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index e6305dd..792db75 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -422,6 +422,50 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
 	iounmap((void __iomem *)((unsigned long)addr & PAGE_MASK));
 }
 
+#ifdef CONFIG_ARCH_USE_MEMREMAP_PROT
+/* Remap memory with encryption */
+void __init *early_memremap_encrypted(resource_size_t phys_addr,
+				      unsigned long size)
+{
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_ENC);
+}
+
+/*
+ * Remap memory with encryption and write-protected - cannot be called
+ * before pat_init() is called
+ */
+void __init *early_memremap_encrypted_wp(resource_size_t phys_addr,
+					 unsigned long size)
+{
+	/* Be sure the write-protect PAT entry is set for write-protect */
+	if (__pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] != _PAGE_CACHE_MODE_WP)
+		return NULL;
+
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_ENC_WP);
+}
+
+/* Remap memory without encryption */
+void __init *early_memremap_decrypted(resource_size_t phys_addr,
+				      unsigned long size)
+{
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_NOENC);
+}
+
+/*
+ * Remap memory without encryption and write-protected - cannot be called
+ * before pat_init() is called
+ */
+void __init *early_memremap_decrypted_wp(resource_size_t phys_addr,
+					 unsigned long size)
+{
+	/* Be sure the write-protect PAT entry is set for write-protect */
+	if (__pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] != _PAGE_CACHE_MODE_WP)
+		return NULL;
+
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_NOENC_WP);
+}
+#endif	/* CONFIG_ARCH_USE_MEMREMAP_PROT */
+
 static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss;
 
 static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
diff --git a/include/asm-generic/early_ioremap.h b/include/asm-generic/early_ioremap.h
index 734ad4d..2edef8d 100644
--- a/include/asm-generic/early_ioremap.h
+++ b/include/asm-generic/early_ioremap.h
@@ -13,6 +13,8 @@ extern void *early_memremap(resource_size_t phys_addr,
 			    unsigned long size);
 extern void *early_memremap_ro(resource_size_t phys_addr,
 			       unsigned long size);
+extern void *early_memremap_prot(resource_size_t phys_addr,
+				 unsigned long size, unsigned long prot_val);
 extern void early_iounmap(void __iomem *addr, unsigned long size);
 extern void early_memunmap(void *addr, unsigned long size);
diff --git a/mm/early_ioremap.c b/mm/early_ioremap.c
index 6d5717b..d7d30da 100644
--- a/mm/early_ioremap.c
+++ b/mm/early_ioremap.c
@@ -226,6 +226,16 @@ void __init early_iounmap(void __iomem *addr, unsigned long size)
 }
 #endif
 
+#ifdef CONFIG_ARCH_USE_MEMREMAP_PROT
+void __init *
+early_memremap_prot(resource_size_t phys_addr, unsigned long size,
+		    unsigned long prot_val)
+{
+	return (__force void *)__early_ioremap(phys_addr, size,
+					       __pgprot(prot_val));
+}
+#endif
+
 #define MAX_MAP_CHUNK	(NR_FIX_BTMAPS << PAGE_SHIFT)
 
 void __init copy_from_early_mem(void *dest, phys_addr_t src, unsigned long size)
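
As a minimal usage sketch (not part of the patch itself), the helpers added
above could be combined to encrypt a physical range "in place": read each
chunk through a decrypted, write-protected mapping and write it back through
an encrypted mapping of the same range, staging the data in a buffer so the
two views never hand the core a dirty line. The function name, bounce-buffer
size and chunking below are illustrative assumptions; only
early_memremap_decrypted_wp(), early_memremap_encrypted() and
early_memunmap() are interfaces the kernel actually provides.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/mm.h>
#include <asm/fixmap.h>		/* early_memremap_*() declarations from this patch */

/* Illustrative only: encrypt [paddr, paddr + size) in place, chunk by chunk. */
static void __init example_encrypt_in_place(resource_size_t paddr,
					    unsigned long size)
{
	/* Bounce buffer; a real caller would choose size/alignment carefully */
	static char buf[PAGE_SIZE] __aligned(PAGE_SIZE);
	unsigned long len;
	void *src, *dst;

	while (size) {
		len = min_t(unsigned long, size, sizeof(buf));

		/* Decrypted + write-protected view: read-only, never dirtied */
		src = early_memremap_decrypted_wp(paddr, len);
		if (!src)
			break;

		/* Encrypted view of the same range: writes land encrypted */
		dst = early_memremap_encrypted(paddr, len);
		if (!dst) {
			early_memunmap(src, len);
			break;
		}

		memcpy(buf, src, len);
		memcpy(dst, buf, len);

		early_memunmap(dst, len);
		early_memunmap(src, len);

		paddr += len;
		size  -= len;
	}
}

Decrypting in place would reverse the roles: read through an encrypted,
write-protected mapping and write back through a decrypted one.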