From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tom Lendacky
Subject: [RFC PATCH v2 08/20] x86: Extend the early_memmap support with additional attrs
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org
CC: Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming, Joerg Roedel,
	Konrad Rzeszutek Wilk, Andrey Ryabinin, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, "H. Peter Anvin", Paolo Bonzini, Alexander Potapenko,
	Thomas Gleixner, Dmitry Vyukov
Date: Mon, 22 Aug 2016 17:37:00 -0500
Message-ID: <20160822223700.29880.21700.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Extend the early_memremap() support so that callers can specify encrypted and
un-encrypted mappings, with and without write-protection. Write-protection is
needed when encrypting data "in place".

The write-protect attribute is considered cacheable for loads, but not stores.
This implies that the hardware will never give the core a dirty cache line for
memory mapped with this memtype.
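For illustration only (not part of this patch), below is a minimal sketch of
how these helpers are intended to be combined to encrypt a range in place.
The caller name sme_early_encrypt_in_place() and the bounce buffer are made up
for the example, and error handling is omitted; early_memremap_dec_wp(),
early_memremap_enc() and early_memunmap() are the interfaces used here. The
current data is read through an un-encrypted, write-protected mapping (loads
are cached, but the core never holds a dirty line for it), and is written back
through an encrypted mapping of the same physical range:

	/* Illustrative sketch, not part of this patch */
	static char sme_early_buffer[PAGE_SIZE] __initdata;	/* hypothetical bounce buffer */

	static void __init sme_early_encrypt_in_place(resource_size_t paddr,
						      unsigned long size)
	{
		void *src, *dst;
		unsigned long len;

		while (size) {
			len = min_t(unsigned long, sizeof(sme_early_buffer), size);

			/* Current, un-encrypted view of the data (write-protected) */
			src = early_memremap_dec_wp(paddr, len);

			/* Desired, encrypted view of the same physical range */
			dst = early_memremap_enc(paddr, len);

			/* Bounce through the buffer so the two views never alias live data */
			memcpy(sme_early_buffer, src, len);
			memcpy(dst, sme_early_buffer, len);

			early_memunmap(dst, len);
			early_memunmap(src, len);

			paddr += len;
			size -= len;
		}
	}

The real consumer of these mappings is introduced later in this series; the
sketch only shows why both the write-protected and the normal variants are
needed.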
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/fixmap.h        |    9 +++++++++
 arch/x86/include/asm/pgtable_types.h |    8 ++++++++
 arch/x86/mm/ioremap.c                |   28 ++++++++++++++++++++++++++++
 include/asm-generic/early_ioremap.h  |    2 ++
 mm/early_ioremap.c                   |   15 +++++++++++++++
 5 files changed, 62 insertions(+)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 83e91f0..4d41878 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -160,6 +160,15 @@ static inline void __set_fixmap(enum fixed_addresses idx,
  */
 #define FIXMAP_PAGE_NOCACHE	PAGE_KERNEL_IO_NOCACHE
 
+void __init *early_memremap_enc(resource_size_t phys_addr,
+				unsigned long size);
+void __init *early_memremap_enc_wp(resource_size_t phys_addr,
+				   unsigned long size);
+void __init *early_memremap_dec(resource_size_t phys_addr,
+				unsigned long size);
+void __init *early_memremap_dec_wp(resource_size_t phys_addr,
+				   unsigned long size);
+
 #include <asm-generic/fixmap.h>
 
 #define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index a01f0e1..50faef8 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -160,6 +160,7 @@ enum page_cache_mode {
 #define _PAGE_CACHE_MASK	(_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
 #define _PAGE_NOCACHE		(cachemode2protval(_PAGE_CACHE_MODE_UC))
+#define _PAGE_CACHE_WP		(cachemode2protval(_PAGE_CACHE_MODE_WP))
 
 #define PAGE_NONE	__pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
 #define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
@@ -188,6 +189,7 @@ enum page_cache_mode {
 #define __PAGE_KERNEL_VVAR		(__PAGE_KERNEL_RO | _PAGE_USER)
 #define __PAGE_KERNEL_LARGE		(__PAGE_KERNEL | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
+#define __PAGE_KERNEL_WP		(__PAGE_KERNEL | _PAGE_CACHE_WP)
 
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
@@ -202,6 +204,12 @@ enum page_cache_mode {
 #define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \
			 _PAGE_DIRTY | _PAGE_ENC)
 
+#define __PAGE_KERNEL_ENC	(__PAGE_KERNEL | _PAGE_ENC)
+#define __PAGE_KERNEL_ENC_WP	(__PAGE_KERNEL_WP | _PAGE_ENC)
+
+#define __PAGE_KERNEL_DEC	(__PAGE_KERNEL)
+#define __PAGE_KERNEL_DEC_WP	(__PAGE_KERNEL_WP)
+
 #define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
 #define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 5cf0489..031db21 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -419,6 +419,34 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
 	iounmap((void __iomem *)((unsigned long)addr & PAGE_MASK));
 }
 
+/* Remap memory with encryption */
+void __init *early_memremap_enc(resource_size_t phys_addr,
+				unsigned long size)
+{
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_ENC);
+}
+
+/* Remap memory with encryption and write-protected */
+void __init *early_memremap_enc_wp(resource_size_t phys_addr,
+				   unsigned long size)
+{
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_ENC_WP);
+}
+
+/* Remap memory without encryption */
+void __init *early_memremap_dec(resource_size_t phys_addr,
+				unsigned long size)
+{
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_DEC);
+}
+
+/* Remap memory without encryption and write-protected */
+void __init *early_memremap_dec_wp(resource_size_t phys_addr,
+				   unsigned long size)
+{
+	return early_memremap_prot(phys_addr, size, __PAGE_KERNEL_DEC_WP);
+}
+
 static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss;
 
 static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
diff --git a/include/asm-generic/early_ioremap.h b/include/asm-generic/early_ioremap.h
index 734ad4d..2edef8d 100644
--- a/include/asm-generic/early_ioremap.h
+++ b/include/asm-generic/early_ioremap.h
@@ -13,6 +13,8 @@
 extern void *early_memremap(resource_size_t phys_addr,
			    unsigned long size);
 extern void *early_memremap_ro(resource_size_t phys_addr,
			       unsigned long size);
+extern void *early_memremap_prot(resource_size_t phys_addr,
+				 unsigned long size, unsigned long prot_val);
 extern void early_iounmap(void __iomem *addr, unsigned long size);
 extern void early_memunmap(void *addr, unsigned long size);
diff --git a/mm/early_ioremap.c b/mm/early_ioremap.c
index 6d5717b..d71b98b 100644
--- a/mm/early_ioremap.c
+++ b/mm/early_ioremap.c
@@ -226,6 +226,14 @@ early_memremap_ro(resource_size_t phys_addr, unsigned long size)
 }
 #endif
 
+void __init *
+early_memremap_prot(resource_size_t phys_addr, unsigned long size,
+		    unsigned long prot_val)
+{
+	return (__force void *)__early_ioremap(phys_addr, size,
+					       __pgprot(prot_val));
+}
+
 #define MAX_MAP_CHUNK	(NR_FIX_BTMAPS << PAGE_SHIFT)
 
 void __init copy_from_early_mem(void *dest, phys_addr_t src, unsigned long size)
@@ -267,6 +275,13 @@ early_memremap_ro(resource_size_t phys_addr, unsigned long size)
 	return (void *)phys_addr;
 }
 
+void __init *
+early_memremap_prot(resource_size_t phys_addr, unsigned long size,
+		    unsigned long prot_val)
+{
+	return (void *)phys_addr;
+}
+
 void __init early_iounmap(void __iomem *addr, unsigned long size)
 {
 }