From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [RFC PATCH v2 07/20] x86: Provide general kernel support for memory encryption
From: Tom Lendacky <Thomas.Lendacky@amd.com>
To: Borislav Petkov
Cc: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux-foundation.org,
 Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming, Joerg Roedel,
 Konrad Rzeszutek Wilk, Andrey Ryabinin, Ingo Molnar, Andy Lutomirski,
 "H. Peter Anvin", Paolo Bonzini, Alexander Potapenko, Thomas Gleixner,
 Dmitry Vyukov
Date: Wed, 7 Sep 2016 09:30:54 -0500
References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
 <20160822223646.29880.28794.stgit@tlendack-t1.amdoffice.net>
 <20160906093113.GA18319@pd.tnic>
In-Reply-To: <20160906093113.GA18319@pd.tnic>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/06/2016 04:31 AM, Borislav Petkov wrote:
> On Mon, Aug 22, 2016 at 05:36:46PM -0500, Tom Lendacky wrote:
>> Adding general kernel support for memory encryption includes:
>> - Modify and create some page table macros to include the Secure Memory
>>   Encryption (SME) memory encryption mask
>> - Update kernel boot support to call an SME routine that checks for and
>>   sets the SME capability (the SME routine will grow later and for now
>>   is just a stub routine)
>> - Update kernel boot support to call an SME routine that encrypts the
>>   kernel (the SME routine will grow later and for now is just a stub
>>   routine)
>> - Provide an SME initialization routine to update the protection map with
>>   the memory encryption mask so that it is used by default
>>
>> Signed-off-by: Tom Lendacky
>
> ...
>
>> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
>> index 54a2372..88c7bae 100644
>> --- a/arch/x86/kernel/head64.c
>> +++ b/arch/x86/kernel/head64.c
>> @@ -28,6 +28,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>
>>  /*
>>   * Manage page tables very early on.
>> @@ -42,7 +43,7 @@ static void __init reset_early_page_tables(void)
>>  {
>>  	memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
>>  	next_early_pgt = 0;
>> -	write_cr3(__pa_nodebug(early_level4_pgt));
>> +	write_cr3(__sme_pa_nodebug(early_level4_pgt));
>>  }
>>
>>  /* Create a new PMD entry */
>> @@ -54,7 +55,7 @@ int __init early_make_pgtable(unsigned long address)
>>  	pmdval_t pmd, *pmd_p;
>>
>>  	/* Invalid address or early pgt is done ? */
>> -	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
>> +	if (physaddr >= MAXMEM || read_cr3() != __sme_pa_nodebug(early_level4_pgt))
>>  		return -1;
>>
>>  again:
>> @@ -157,6 +158,11 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
>>
>>  	clear_page(init_level4_pgt);
>>
>> +	/* Update the early_pmd_flags with the memory encryption mask */
>> +	early_pmd_flags |= _PAGE_ENC;
>> +
>> +	sme_early_init();
>> +
>
> So maybe this comes later but you're setting _PAGE_ENC unconditionally
> *before* sme_early_init().
>
> I think you should set it in sme_early_init() and iff SME is enabled.

_PAGE_ENC is #defined as sme_me_mask, and sme_me_mask has already been
set (or left at zero) at this point - so _PAGE_ENC will be the
encryption mask if SME is active, or 0 if SME is not active.
sme_early_init() is merely propagating the mask to other structures.

Since early_pmd_flags is used mainly in this file (one line in
head_64.S is the other place), I felt it best to modify it here. But
it can always be moved if you feel that is best.
Thanks,
Tom